Sample records for nonconservative earthquake model

  1. Nonconservative force model parameter estimation strategy for TOPEX/Poseidon precision orbit determination

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Marshall, J. A.

    1992-01-01

    The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it collects, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model is evaluated.
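
    The box-wing approach described above lends itself to a simple per-plate summation. Below is a minimal, illustrative Python sketch of that bookkeeping using a standard flat-plate radiation-pressure formula; the plate areas, optical coefficients, spacecraft mass, and Sun direction are placeholder assumptions, not the actual TOPEX/Poseidon model values.

```python
import numpy as np

# Minimal sketch of a box-wing radiation-pressure sum (illustrative values, not the
# actual TOPEX/Poseidon model parameters).
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU (nominal solar constant)
C_LIGHT = 2.998e8        # speed of light, m/s

def plate_acceleration(area, normal, specular, diffuse, sun_dir, mass):
    """Radiation-pressure acceleration on one flat plate (standard flat-plate model)."""
    cos_theta = np.dot(normal, sun_dir)
    if cos_theta <= 0.0:                 # plate not illuminated
        return np.zeros(3)
    p = SOLAR_FLUX / C_LIGHT             # radiation pressure
    along_sun = (1.0 - specular) * sun_dir                              # absorbed + diffuse part
    along_normal = 2.0 * (specular * cos_theta + diffuse / 3.0) * normal  # reflected part
    return -(p * area * cos_theta / mass) * (along_sun + along_normal)

# Eight plates: six box faces plus two solar-array faces (normals and optical
# properties below are placeholders chosen only to show the bookkeeping).
plates = [
    dict(area=3.0, normal=np.array([1.0, 0.0, 0.0]), specular=0.2, diffuse=0.3),
    dict(area=3.0, normal=np.array([-1.0, 0.0, 0.0]), specular=0.2, diffuse=0.3),
    # ... remaining box faces and the solar-array front/back would follow
]
sun_dir = np.array([1.0, 0.0, 0.0])      # unit vector toward the Sun (body frame)
total_accel = sum(plate_acceleration(mass=2400.0, sun_dir=sun_dir, **p) for p in plates)
print(total_accel)
```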

  2. Nonconservative dynamics in long atomic wires

    NASA Astrophysics Data System (ADS)

    Cunningham, Brian; Todorov, Tchavdar N.; Dundas, Daniel

    2014-09-01

    The effect of nonconservative current-induced forces on the ions in a defect-free metallic nanowire is investigated using both steady-state calculations and dynamical simulations. Nonconservative forces were found to have a major influence on the ion dynamics in these systems, but their role in increasing the kinetic energy of the ions decreases with increasing system length. The results illustrate the importance of nonconservative effects in short nanowires and the scaling of these effects with system size. The dependence on bias and ion mass can be understood with the help of a simple pen-and-paper model. This material highlights the benefit of simple preliminary steady-state calculations in anticipating aspects of brute-force dynamical simulations, and provides rule-of-thumb criteria for the design of stable quantum wires.

  3. Gravity and Nonconservative Force Model Tuning for the GEOSAT Follow-On Spacecraft

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.; Chinn, Douglas S.; Marr, Gregory C.; Smith, David E. (Technical Monitor)

    2000-01-01

    The US Navy's GEOSAT Follow-On spacecraft was launched on February 10, 1998, with the primary objective of mapping the oceans using a radar altimeter. Three radar altimeter calibration campaigns were conducted in 1999 and 2000. The spacecraft is tracked by satellite laser ranging (SLR) and Doppler beacons, and a limited amount of data has been obtained from the Global Positioning System (GPS) receiver on board the satellite. Even with EGM96, the predicted radial orbit error due to gravity field mismodelling (to 70x70) remains high at 2.61 cm (compared to 0.88 cm for TOPEX). We report on the preliminary gravity model tuning for GFO using SLR and altimeter crossover data. Preliminary solutions using SLR and GFO/GFO crossover data from CalVal campaigns I and II in June-August 1999 and January-February 2000 have reduced the predicted radial orbit error to 1.9 cm, and further reduction will be possible when additional data are added to the solutions. The gravity model tuning has principally improved the low-order m-daily terms and has significantly reduced the geographically correlated error present in this satellite orbit. In addition to gravity field mismodelling, the largest contributor to the orbit error is nonconservative force mismodelling. We report on further nonconservative force model tuning results using available data from over one cycle in beta prime.

  4. Constructive Memory in Conserving and Nonconserving First Graders

    ERIC Educational Resources Information Center

    Prawat, Richard S.; Cancelli, Anthony

    1976-01-01

    This study assessed the recognition, by conserving and nonconserving first graders, of true and false premise and inference sentences following story presentations. Conservers performed slightly better than nonconservers on sentences other than true inference sentences, thus indicating that concrete mental operations are related to the process of…

  5. “Slimplectic” Integrators: Variational Integrators for General Nonconservative Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsang, David; Turner, Alec; Galley, Chad R.

    2015-08-10

    Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
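
    As a point of reference for the kind of system slimplectic integrators target, the sketch below integrates a damped harmonic oscillator with a plain semi-implicit Euler step and tracks the decay of mechanical energy. This is a baseline illustration only, not the slimplectic algorithm, and all parameter values are assumed.

```python
import numpy as np

# Baseline sketch (not the slimplectic algorithm itself): semi-implicit Euler on a
# damped harmonic oscillator, tracking how mechanical energy decays. A nonconservative
# variational integrator aims to follow this decay accurately over long runs.
m, k, c = 1.0, 1.0, 0.1          # mass, stiffness, damping (illustrative values)
dt, n_steps = 0.01, 100_000
q, p = 1.0, 0.0                  # initial position and momentum

energies = []
for _ in range(n_steps):
    p += dt * (-k * q - (c / m) * p)   # momentum update includes the dissipative force
    q += dt * p / m
    energies.append(0.5 * p**2 / m + 0.5 * k * q**2)

print(f"E(0) ~ {energies[0]:.4f}, E(end) ~ {energies[-1]:.2e}")
```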

  6. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events and the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

  7. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
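
    A hedged sketch of the smoothed-seismicity idea summarized above: each past earthquake contributes to the rate density in proportion to its magnitude and roughly as the reciprocal of epicentral distance out to a cutoff. The kernel form, cutoff distances, and toy catalog below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Relative smoothed-seismicity rate density at a point (normalization omitted).
def rate_density(eval_point, quake_xy, quake_mag, r_min_km=5.0, r_max_km=300.0):
    d = np.hypot(quake_xy[:, 0] - eval_point[0], quake_xy[:, 1] - eval_point[1])
    d = np.clip(d, r_min_km, None)          # avoid the 1/r singularity at the epicenter
    weights = np.where(d <= r_max_km, quake_mag / d, 0.0)   # magnitude-weighted 1/r kernel
    return weights.sum()

# Toy catalog: columns are x, y (km); separate array of magnitudes
quakes = np.array([[0.0, 0.0], [50.0, 20.0], [120.0, -40.0]])
mags = np.array([6.1, 5.6, 5.4])
print(rate_density((10.0, 10.0), quakes, mags))
```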

  8. Comprehensive modeling of microRNA targets predicts functional non-conserved and non-canonical sites.

    PubMed

    Betel, Doron; Koppal, Anjali; Agius, Phaedra; Sander, Chris; Leslie, Christina

    2010-01-01

    mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.

  9. Nonconservative Lagrangian Mechanics: Purely Causal Equations of Motion

    NASA Astrophysics Data System (ADS)

    Dreisigmeyer, David W.; Young, Peter M.

    2015-06-01

    This work builds on the Volterra series formalism presented in Dreisigmeyer and Young (J Phys A 36: 8297, 2003) to model nonconservative systems. Here we treat Lagrangians and actions as `time dependent' Volterra series. We present a new family of kernels to be used in these Volterra series that allow us to derive a single retarded equation of motion using a variational principle.

  10. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
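
    The binned-forecast likelihood testing described above is commonly built on independent Poisson counts per bin. The sketch below computes that joint log-likelihood for a toy forecast; the Poisson-per-bin assumption and the example numbers are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

# Joint log-likelihood of a binned forecast: each space-magnitude bin i has a forecast
# rate lambda_i and an observed count n_i, assumed to be an independent Poisson count.
def poisson_joint_log_likelihood(forecast_rates, observed_counts):
    lam = np.asarray(forecast_rates, dtype=float)
    n = np.asarray(observed_counts, dtype=float)
    return np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0))

# Toy example: five bins with forecast rates and observed counts
print(poisson_joint_log_likelihood([0.1, 0.5, 2.0, 0.05, 1.0], [0, 1, 3, 0, 1]))
```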

  11. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2. Laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.

    2012-02-01

    The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.
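
    To make the contrast concrete, the sketch below writes out the next-event predictions implied by the fixed-recurrence, time-predictable, and slip-predictable hypotheses for a toy stick-slip sequence. The loading rate and the event times and slips are invented for illustration only.

```python
import numpy as np

# Toy stick-slip sequence: event times (s) and slips (mm); values are illustrative.
times = np.array([0.0, 10.2, 19.8, 31.0, 40.5])
slips = np.array([1.0, 0.95, 1.10, 0.92, 1.05])
loading_rate = 0.1                      # mm of slip deficit accumulated per second (assumed)

intervals = np.diff(times)

# Fixed-recurrence model: next interval is simply the mean of past intervals.
next_dt_fixed = intervals.mean()

# Time-predictable model: the interval after event i is proportional to the slip in event i.
next_dt_time_pred = slips[-1] / loading_rate

# Slip-predictable model: the next slip grows with the elapsed interval, so it predicts
# slip rather than timing; shown here for the last observed interval.
predicted_last_slip = loading_rate * intervals[-1]

print(next_dt_fixed, next_dt_time_pred, predicted_last_slip)
```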

  12. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had not been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  13. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  14. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10¹⁹ Pa s and ηM > 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
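
    A minimal sketch of the kind of two-sample Kolmogorov-Smirnov comparison described above: observed values versus values predicted by a candidate viscoelastic cycle model, with rejection at α = 0.05. The synthetic arrays below are placeholders, not the study's fault data.

```python
import numpy as np
from scipy.stats import ks_2samp

# Two-sample KS test: observed quantities versus the values a candidate model predicts,
# rejecting the candidate when p < 0.05. Synthetic data stand in for real observations.
rng = np.random.default_rng(0)
observed = rng.normal(loc=10.0, scale=2.0, size=15)        # e.g., normalized observations
model_predicted = rng.normal(loc=12.0, scale=2.0, size=15) # candidate-model predictions

stat, p_value = ks_2samp(observed, model_predicted)
rejected = p_value < 0.05
print(f"KS statistic={stat:.3f}, p={p_value:.3f}, reject at alpha=0.05: {rejected}")
```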

  15. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli events. The calculated return periods of the losses for the client market portfolio align with the
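
    The damage-grade cost factors quoted above translate directly into a mean damage ratio once a damage-state distribution is assumed. The sketch below shows that combination; only the cost factors come from the abstract, and the example damage-state probabilities are hypothetical.

```python
# Rebuilding-cost factors per EMS-98 damage grade, taken from the abstract above.
COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}   # fraction of rebuild cost

def mean_damage_ratio(grade_probabilities):
    """grade_probabilities: dict mapping damage grade (1-5) to its probability."""
    return sum(p * COST_FACTOR[g] for g, p in grade_probabilities.items())

# Hypothetical damage-state distribution for one building class at one intensity;
# the remaining 0.10 probability corresponds to no damage.
example = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}
print(f"Mean damage ratio: {mean_damage_ratio(example):.3f}")
```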

  16. Statistical tests of simple earthquake cycle models

    USGS Publications Warehouse

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.0 × 10¹⁹ Pa s and ηM ≳ 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.

  17. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and the viscosity of the middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that, following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would most likely occur on the Haiyuan fault.

  18. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that was used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS) Petersen et al., 1996; Frankel et al., 1996]. On average the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the
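
    The discrepancy discussed above hinges on converting geologic fault data into earthquake rates through a seismic moment-rate budget. The sketch below shows the standard conversion from fault area and slip rate to the rate of a characteristic earthquake; the fault geometry, slip rate, and magnitude are illustrative, while the shear modulus and the Hanks-Kanamori relation are the usual textbook values.

```python
# Standard moment-rate budget: moment rate = mu * fault area * slip rate, divided by the
# moment of the characteristic event to give an event rate. Values are illustrative.
MU = 3.0e10                      # shear modulus, Pa

def moment_magnitude_to_moment(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.05)

def characteristic_event_rate(fault_length_km, locking_depth_km, slip_rate_mm_yr, mw_char):
    area = (fault_length_km * 1e3) * (locking_depth_km * 1e3)      # m^2
    moment_rate = MU * area * (slip_rate_mm_yr * 1e-3)             # N*m per year
    return moment_rate / moment_magnitude_to_moment(mw_char)       # events per year

# Example: 100 km fault, 12 km locking depth, 5 mm/yr slip rate, characteristic M 7.0
rate = characteristic_event_rate(100.0, 12.0, 5.0, 7.0)
print(f"~{rate:.4f} events/yr, i.e. one every ~{1.0 / rate:.0f} yr")
```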

  19. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  20. Modeling fast and slow earthquakes at various scales.

    PubMed

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  1. The Common Forces: Conservative or Nonconservative?

    ERIC Educational Resources Information Center

    Keeports, David

    2006-01-01

    Of the forces commonly encountered when solving problems in Newtonian mechanics, introductory texts usually limit illustrations of the definitions of conservative and nonconservative forces to gravity, spring forces, kinetic friction and fluid resistance. However, at the expense of very little class time, the question of whether each of the common…

  2. Noise induced aperiodic rotations of particles trapped by a non-conservative force

    NASA Astrophysics Data System (ADS)

    Ortega-Piwonka, Ignacio; Angstmann, Christopher N.; Henry, Bruce I.; Reece, Peter J.

    2018-04-01

    We describe a mechanism whereby random noise can play a constructive role in the manifestation of a pattern, aperiodic rotations, that would otherwise be damped by internal dynamics. The mechanism is described physically in a theoretical model of overdamped particle motion in two dimensions with symmetric damping and a non-conservative force field driven by noise. Cyclic motion only occurs as a result of stochastic noise in this system. However, the persistence of the cyclic motion is quantified by parameters associated with the non-conservative forcing. Unlike stochastic resonance or coherence resonance, where noise can play a constructive role in amplifying a signal that is otherwise below the threshold for detection, in the mechanism considered here, the signal that is detected does not exist without the noise. Moreover, the system described here is a linear system.
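
    A hedged numerical sketch of the setup described above: an overdamped particle in two dimensions, a linear trap with an added non-conservative (rotational) component, and Gaussian noise, integrated with the Euler-Maruyama scheme. The parameter values are illustrative, not those of the paper.

```python
import numpy as np

# Overdamped 2D Langevin dynamics with a non-conservative force component,
# integrated with Euler-Maruyama. All parameters are illustrative.
rng = np.random.default_rng(1)
k, eps = 1.0, 0.4          # trap stiffness and non-conservative coupling
gamma, D = 1.0, 0.05       # drag coefficient and diffusion constant
dt, n_steps = 1e-3, 200_000

pos = np.zeros(2)
traj = np.empty((n_steps, 2))
for i in range(n_steps):
    x, y = pos
    force = np.array([-k * x + eps * y,      # curl of this field is nonzero,
                      -k * y - eps * x])     # so the force is non-conservative
    pos = pos + (force / gamma) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(2)
    traj[i] = pos

# A simple signature of noise-driven rotation: mean of the cross term x*dy - y*dx per unit time
omega = np.mean(traj[:-1, 0] * np.diff(traj[:, 1]) - traj[:-1, 1] * np.diff(traj[:, 0])) / dt
print(f"mean cross term (rotation indicator): {omega:.4f}")
```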

  3. An analytical model for non-conservative pollutants mixing in the surf zone.

    PubMed

    Ki, Seo Jin; Hwang, Jin Hwan; Kang, Joo-Hyon; Kim, Joon Ha

    2009-01-01

    Accurate simulation of the surf zone is a prerequisite to improve beach management as well as to understand the fundamentals of the fate and transport of contaminants. In the present study, a diagnostic model modified from a classic solute model is provided to illuminate the behavior of non-conservative pollutants in the surf zone. To readily understand the controlling processes in the surf zone, a new dimensionless quantity, the kappa number (K, the ratio of the inactivation rate to the transport rate of a microbial pollutant in the surf zone), is employed and evaluated under different environmental conditions over a week-long simulation period. The sensitivity analysis showed that hydrodynamics and concentration gradients in the surf zone mostly depend on n (the number of rip currents), indicating that n should be carefully adjusted in the model. The simulation results further reveal that large deviations typically occur in the daytime, signifying that inactivation of fecal indicator bacteria is the main process controlling surf zone water quality during the day. Overall, the analytical model shows good agreement between predicted and synthetic data (R² = 0.51 and 0.67 for FC and ENT, respectively) for the simulated period, supporting its potential use in surf zone modelling. It is recommended that when the dimensionless index is much larger than 0.5, the present modified model can predict better than the conventional model, but if the index is smaller than 0.5, the conventional model is more efficient with respect to time and cost.
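
    The kappa number itself is a simple ratio, and the recommendation at the end of the abstract turns on whether it exceeds roughly 0.5. A small sketch, with placeholder rates, is shown below.

```python
# Kappa number K: ratio of a first-order inactivation rate to a transport (flushing)
# rate in the surf zone. The specific rates below are placeholders for illustration.
def kappa_number(inactivation_rate_per_day, transport_rate_per_day):
    return inactivation_rate_per_day / transport_rate_per_day

k_daytime = kappa_number(inactivation_rate_per_day=2.0, transport_rate_per_day=1.5)
k_night = kappa_number(inactivation_rate_per_day=0.3, transport_rate_per_day=1.5)

for label, K in [("day", k_daytime), ("night", k_night)]:
    # Per the abstract, the modified model is recommended when K is well above 0.5.
    print(f"{label}: K = {K:.2f} -> use {'modified' if K > 0.5 else 'conventional'} model")
```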

  4. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.

  5. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    USGS Publications Warehouse

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yields a substantial change in b-values.
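
    Forecasting an M ≥ 6 rate from microseismicity b-values, as described above, amounts to extrapolating the Gutenberg-Richter relation. A minimal sketch with invented catalog numbers follows; the a- and b-values are assumptions, not the Parkfield values.

```python
import numpy as np

# Gutenberg-Richter extrapolation: log10 N(>=M) = a - b*M. Catalog numbers are invented.
def gr_rate(a_value, b_value, magnitude):
    """Annual rate of events with magnitude >= `magnitude`."""
    return 10.0 ** (a_value - b_value * magnitude)

# Suppose the microseismicity yields b = 0.9 and a reference rate of 50 M>=2 events/yr.
b = 0.9
a = np.log10(50.0) + b * 2.0          # back out the a-value from the M>=2 rate
rate_m6 = gr_rate(a, b, 6.0)
print(f"Forecast: {rate_m6:.4f} M>=6 events/yr (~1 per {1 / rate_m6:.0f} yr)")
```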

  6. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.

  7. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.

  8. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
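
    The BPT density and its hazard can be written down directly from the mean recurrence μ and aperiodicity α. The sketch below evaluates both numerically and checks the asymptotic hazard 1/(2α²μ), which equals 2/μ for α = 0.5 as noted above; the 25-year mean recurrence is an illustrative choice, not a value from the paper.

```python
import numpy as np

# Brownian passage time (inverse Gaussian) density with mean recurrence mu and
# aperiodicity alpha; the hazard is evaluated numerically from a crude CDF.
def bpt_pdf(t, mu, alpha):
    t = np.asarray(t, dtype=float)
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
        np.exp(-((t - mu) ** 2) / (2.0 * alpha**2 * mu * t))

mu, alpha = 25.0, 0.5                      # illustrative 25-yr mean with alpha = 0.5
t = np.linspace(1e-3, 200.0, 20000)
pdf = bpt_pdf(t, mu, alpha)
dt = t[1] - t[0]
survival = 1.0 - np.cumsum(pdf) * dt       # crude numerical survival function
hazard = pdf / np.clip(survival, 1e-12, None)

# The asymptotic hazard of the BPT model is 1/(2*alpha^2*mu); for alpha = 0.5 that is 2/mu.
print(f"hazard at t=5*mu: {hazard[np.searchsorted(t, 5 * mu)]:.4f}, 2/mu = {2 / mu:.4f}")
```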

  9. Nonconservative Forces via Quantum Reservoir Engineering

    NASA Astrophysics Data System (ADS)

    Vuglar, Shanon L.; Zhdanov, Dmitry V.; Cabrera, Renan; Seideman, Tamar; Jarzynski, Christopher; Bondar, Denys I.

    2018-06-01

    A systematic approach is given for engineering dissipative environments that steer quantum wave packets along desired trajectories. The methodology is demonstrated with several illustrative examples: environment-assisted tunneling, trapping, effective mass assignment, and pseudorelativistic behavior. Nonconservative stochastic forces do not inevitably lead to decoherence—we show that purity can be well preserved. These findings highlight the flexibility offered by nonequilibrium open quantum dynamics.

  10. The Global Earthquake Model - Past, Present, Future

    NASA Astrophysics Data System (ADS)

    Smolka, Anselm; Schneider, John; Stein, Ross

    2014-05-01

    The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe are key to assessing risk more effectively. Through consortium-driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related, but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models:
    • ISC-GEM Instrumental Earthquake Catalogue (released January 2013)
    • Global Earthquake History Catalogue [1000-1903]
    • Global Geodetic Strain Rate Database and Model
    • Global Active Fault Database
    • Tectonic Regionalisation Model
    • Global Exposure Database
    • Buildings and Population Database
    • Earthquake Consequences Database
    • Physical Vulnerabilities Database
    • Socio-Economic Vulnerability and Resilience Indicators
    • Seismic

  11. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious natural disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
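
    A hedged sketch of a conditional intensity of the form described above: a background rate plus a self-exciting sum over past earthquakes plus an external-excitation sum over past non-seismic observations, here with exponential kernels. The kernel choice and all parameter values are assumptions for illustration, not the model of Ogata and Utsu.

```python
import numpy as np

# Conditional intensity lambda(t) = background + self-exciting term + external term,
# with exponential kernels; parameters and kernel form are illustrative assumptions.
def conditional_intensity(t, quake_times, anomaly_times,
                          mu=0.05, a_self=0.8, beta_self=0.3,
                          a_ext=0.4, beta_ext=0.1):
    quake_times = np.asarray(quake_times, dtype=float)
    anomaly_times = np.asarray(anomaly_times, dtype=float)
    past_q = quake_times[quake_times < t]
    past_a = anomaly_times[anomaly_times < t]
    self_term = a_self * beta_self * np.sum(np.exp(-beta_self * (t - past_q)))
    ext_term = a_ext * beta_ext * np.sum(np.exp(-beta_ext * (t - past_a)))
    return mu + self_term + ext_term

quakes = [3.0, 7.5, 12.0]        # past earthquake times (days)
anomalies = [6.0, 11.5]          # past non-seismic observation times (days)
print(conditional_intensity(13.0, quakes, anomalies))
```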

  12. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.

  13. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355

  14. Justification of a "Crucial" Experiment: Parity Nonconservation.

    ERIC Educational Resources Information Center

    Franklin, Allan; Smokler, Howard

    1981-01-01

    Presents history, nature of evidence evaluated, and philosophical questions to justify the view that experiments on parity nonconservation were "crucial" experiments in the sense that they decided unambiguously and within a short period of time for the appropriate scientific community, between two or more competing theories or classes of theories.…

  15. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Cockett, R.

    2012-12-01

    InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median

  16. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment-a truly prospective earthquake prediction effort-is underway within the U. S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary-the forecasts were meant for an application of 5 years-we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).

  17. First Results of the Regional Earthquake Likelihood Models Experiment

    NASA Astrophysics Data System (ADS)

    Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.

    2010-08-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.

  18. Braneworld gravity within non-conservative gravitational theory

    NASA Astrophysics Data System (ADS)

    Fabris, J. C.; Caramês, Thiago R. P.; da Silva, J. M. Hoff

    2018-05-01

    We investigate the braneworld gravity starting from the non-conservative gravitational field equations in a five-dimensional bulk. The approach is based on the Gauss-Codazzi formalism along with the study of the braneworld consistency conditions. The effective gravitational equations on the brane are obtained and the constraint leading to a brane energy-momentum conservation is analyzed.

  19. Tracking channel-floodplain sediment exchange with conservative and non-conservative geochemical tracers

    NASA Astrophysics Data System (ADS)

    Belmont, Patrick; Stout, Justin

    2013-04-01

    Fine sediment is routed through landscapes and channel networks in a highly unsteady and non-uniform manner, potentially experiencing deposition and re-suspension many times during transport from source to sink. Developing a better understanding of sediment routing at the landscape scale is an intriguing challenge from a modeling perspective because it requires consideration of a multitude of processes that interact and vary in space and time. From an applied perspective, an improved understanding of sediment routing is essential for predicting how conservation and restoration practices within a watershed will influence water quality, to support land and water management decisions. Two key uncertainties in predicting sediment routing at the landscape scale are 1) determining the proportion of suspended sediment that is derived from terrestrial (soil) erosion versus channel (bank) erosion, and 2) constraining the proportion of sediment that is temporarily stored and re-suspended within the channel-floodplain complex. Sediment fingerprinting that utilizes a suite of conservative and non-conservative geochemical tracers associated with suspended sediment can provide insight regarding both of these key uncertainties. Here we present a model that tracks suspended sediment with associated conservative and non-conservative geochemical tracers. The model assumes that particle residence times are described by a bimodal distribution wherein some fraction of sediment is transported through the system in a relatively short time (< 1 year) and the remainder experiences temporary storage (of variable duration) within the channel-floodplain complex. We use the model to explore the downstream evolution of non-conservative tracers under equilibrium conditions (i.e., exchange between the channel and floodplain is allowed, but no net change in channel-floodplain storage can occur) to illustrate how the process of channel-floodplain storage and re-suspension can potentially bias
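
    As an illustration of the equilibrium-exchange idea described above, the short sketch below (not the authors' code; names such as f_exchange and tau_storage are hypothetical) propagates a decaying, non-conservative tracer downstream when a fixed fraction of the suspended load is swapped with floodplain storage in every reach:

        # Hypothetical sketch: downstream evolution of a decaying (non-conservative)
        # tracer under equilibrium channel-floodplain exchange.
        import numpy as np

        def downstream_tracer(c0, n_reaches, f_exchange, tau_storage, half_life_yr):
            """Tracer activity of the suspended load after each reach.

            c0           -- activity of freshly eroded (terrestrial) sediment
            f_exchange   -- fraction of the load swapped with floodplain storage per reach
            tau_storage  -- assumed mean residence time (years) of stored sediment
            half_life_yr -- tracer half-life; decay is what makes the tracer non-conservative
            """
            lam = np.log(2.0) / half_life_yr             # decay constant
            c_stored = c0 * np.exp(-lam * tau_storage)   # activity of re-suspended sediment
            c = np.empty(n_reaches)
            c[0] = c0
            for i in range(1, n_reaches):
                # equilibrium exchange: deposited mass is replaced by re-suspended mass
                c[i] = (1.0 - f_exchange) * c[i - 1] + f_exchange * c_stored
            return c

        print(downstream_tracer(c0=1.0, n_reaches=20, f_exchange=0.1,
                                tau_storage=50.0, half_life_yr=22.3))  # e.g. excess 210Pb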

  20. Toward a comprehensive areal model of earthquake-induced landslides

    USGS Publications Warehouse

    Miles, S.B.; Keefer, D.K.

    2009-01-01

    This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitate comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. © 2009 ASCE.

  1. FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.

    USGS Publications Warehouse

    Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.

    1985-01-01

    The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.

  2. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  3. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw7.7), the 2001 El Salvador earthquake (Mw7.7), the 2004 El Astillero earthquake (Mw7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, for tsunami early warning purposes, our method should work to estimate a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.

  4. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
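
    A minimal sketch of one way such a combination could be formed, assuming each algorithm supplies an independent Gaussian estimate of log ground motion at a site; this is an illustration of precision weighting, not the authors' published algorithm:

        # Combine several independent Gaussian predictions of ln(PGA) at one site
        # into a single posterior estimate by precision (inverse-variance) weighting.
        import numpy as np

        def combine_predictions(means, sigmas):
            """means, sigmas: per-algorithm predictions of ln(PGA) and their 1-sigma errors."""
            w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # precision of each prediction
            post_var = 1.0 / w.sum()
            post_mean = post_var * (w * np.asarray(means, dtype=float)).sum()
            return post_mean, np.sqrt(post_var)

        # e.g. a point-source estimate, a finite-fault estimate, and a direct ground-motion estimate
        mu, sd = combine_predictions(means=[-1.2, -0.9, -1.0], sigmas=[0.5, 0.3, 0.4])
        print(mu, sd)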

  5. Seismic Moment, Seismic Energy, and Source Duration of Slow Earthquakes: Application of Brownian slow earthquake model to three major subduction zones

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Maury, Julie

    2018-04-01

    Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at the seismological frequencies, the seismic moment rates are significantly different in the geodetic observation. This difference is ascribed to the difference in the characteristic times of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explains recent findings of a near-constant source duration for low-frequency earthquake families.

  6. Modelling the elements of country vulnerability to earthquake disasters.

    PubMed

    Asef, M R

    2008-09-01

    Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often causes an earthquake of a certain characteristic to have different impacts on the affected region. This research focuses on the appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for identification and relative quantification of severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of concerned countries, and successfully explained which countries are more vulnerable to major disasters.

  7. Future WGCEP Models and the Need for Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2008-12-01

    The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).

  8. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) for the period 1897 to 2007 is analyzed for Northeast (NE) India. It is one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong Plateau earthquake (Mw 8.7), the 1934 Bihar-Nepal earthquake (Mw 8.3), and the 1950 Upper Assam earthquake (Mw 8.7), signifies the possibility of great earthquakes in this region in the future. The regional seismicity map for the study region is prepared by plotting the earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the Indian Meteorological Department (IMD). Based on geology, tectonics, and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone developed at the junction of the Indian Plate and the Eurasian Plate. It shows a dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone encompasses the subduction zone and the Assam syntaxis. This zone has suffered great earthquakes such as the 1950 Assam, 1934 Bihar, and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults like the Dauki fault and exhibits its own style of prominent tectonic features. The seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, forecasts of major earthquakes for each source zone are estimated. As per the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of blocked energy and can be utilized for the forecasting of major earthquakes
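
    A minimal sketch of the energy-blocked bookkeeping described above, using the standard Gutenberg-Richter energy-magnitude relation log10 E(J) = 1.5 M + 4.8; the small catalog here is invented, whereas the actual analysis uses the 1897-2007 NE India data:

        # Energy supply is assumed uniform in time; the gap between supply and
        # cumulative release is the "blocked" energy used as a forecasting index.
        import numpy as np

        years = np.array([1905.0, 1918.0, 1930.0, 1947.0, 1951.0, 1975.0, 1988.0, 2003.0])
        mags  = np.array([6.1,    7.0,    6.4,    7.3,    8.0,    6.6,    7.1,    6.8])

        energy = 10.0 ** (1.5 * mags + 4.8)      # joules released per event
        cum_release = np.cumsum(energy)

        # uniform supply rate estimated over the whole catalog period
        rate = cum_release[-1] / (years[-1] - years[0])

        def blocked_energy(t):
            """Energy supplied but not yet released at time t (crude forecasting index)."""
            supplied = rate * (t - years[0])
            released = cum_release[years <= t][-1] if np.any(years <= t) else 0.0
            return supplied - released

        print(blocked_energy(2007.0))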

  9. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation
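
    A toy sketch of the inverse-DFT construction described in this thesis abstract; the amplitude spectrum and the phase-difference distribution below are placeholders (in the actual models the phase differences are simulated conditional on the Fourier amplitudes):

        # Prescribe a Fourier amplitude spectrum, draw phase *differences* between
        # adjacent frequencies, accumulate them into phase angles, and invert to the
        # time domain to obtain a synthetic acceleration time history.
        import numpy as np

        n, dt = 2048, 0.01
        freqs = np.fft.rfftfreq(n, dt)

        # placeholder amplitude spectrum, band-limited around a few Hz
        amp = np.exp(-0.5 * ((freqs - 3.0) / 1.5) ** 2)

        rng = np.random.default_rng(0)
        dphi = rng.uniform(-np.pi, np.pi, size=freqs.size)   # phase differences
        phase = np.cumsum(dphi)                              # phase angles

        spectrum = amp * np.exp(1j * phase)
        accel = np.fft.irfft(spectrum, n=n)                  # synthetic acceleration
        print(accel.shape)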

  10. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

    We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994) that combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) and time-decaying occurrence rate (Utsu et al., 1995) is conventionally employed for assessing the earthquake hazard after a large shock. However, it is found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock, and then gradually increased. The observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. The Receiver Operating Characteristics (ROC) curves (Swets, 1988) finally demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
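
    For reference, the Reasenberg-Jones rate combines the Gutenberg-Richter magnitude distribution with Omori-Utsu temporal decay; the modification discussed above amounts to letting the b-value vary with time, here written schematically (the exact MRJ parameterization is given in the paper):

        % Rate of aftershocks with magnitude >= M at time t after a mainshock of
        % magnitude M_m; a, b, c, p are the usual Reasenberg-Jones parameters.
        \lambda(t, M) = 10^{\,a + b(t)\,(M_m - M)} \, (t + c)^{-p}, \qquad t > 0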

  11. An improved GRACE monthly gravity field solution by modeling the non-conservative acceleration and attitude observation errors

    NASA Astrophysics Data System (ADS)

    Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze

    2016-06-01

    The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, and gravity field parameters, as well as accelerometer biases are estimated by means of weighted least squares adjustment. Then we compute a new time series of monthly gravity field models complete to degree and order 60 covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21 % over ocean when only using 300 km Gaussian smoothing, and 60 % or more over deserts (Australia, Kalahari, Karakum and Thar) without using Gaussian smoothing and decorrelation filtering; and (3) for all examples, the noise reductions are more significant than signal reductions, no matter whether smoothing and filtering are applied or not. The comparison with GLDAS data supports that the signals of Tongji-GRACE02 over St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.
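
    The adjustment referred to above is, at its core, a weighted least-squares solve; the generic form is sketched below with an invented design matrix (in the actual processing the parameters are spherical harmonic coefficients and accelerometer biases, and the weights come from the modeled observation errors):

        # Generic weighted least-squares adjustment: x_hat = (A^T W A)^-1 A^T W y.
        import numpy as np

        rng = np.random.default_rng(1)
        m, n = 200, 12                        # observations, parameters
        A = rng.standard_normal((m, n))       # design matrix (partials w.r.t. parameters)
        x_true = rng.standard_normal(n)
        sigma = 0.1 + 0.4 * rng.random(m)     # heteroscedastic observation errors
        y = A @ x_true + sigma * rng.standard_normal(m)

        W = np.diag(1.0 / sigma**2)           # weight matrix = inverse error covariance
        N = A.T @ W @ A                       # normal matrix
        x_hat = np.linalg.solve(N, A.T @ W @ y)
        cov_x = np.linalg.inv(N)              # formal covariance of the estimates
        print(np.abs(x_hat - x_true).max())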

  12. Prospective Evaluation of the Global Earthquake Activity Rate Model (GEAR1) Earthquake Forecast: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schorlemmer, Danijel; Beutin, Thomas

    2017-04-01

    The Global Earthquake Activity Rate Model (GEAR1) is a hybrid seismicity model, constructed from a loglinear combination of smoothed seismicity from the Global Centroid Moment Tensor (CMT) earthquake catalog and geodetic strain rates (Global Strain Rate Map, version 2.1). For the 2005-2012 retrospective evaluation period, GEAR1 outperformed both parent strain rate and smoothed seismicity forecasts. Since 1. October 2015, GEAR1 has been prospectively evaluated by the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center. Here, we present initial one-year test results of the GEAR1, GSRM and GSRM2.1, as well as localized evaluation of GEAR1 performance. The models were evaluated on the consistency in number (N-test), spatial (S-test) and magnitude (M-test) distribution of forecasted and observed earthquakes, as well as overall data consistency (CL-, L-tests). Performance at target earthquake locations was compared between models using the classical paired T-test and its non-parametric equivalent, the W-test, to determine if one model could be rejected in favor of another at the 0.05 significance level. For the evaluation period from 1. October 2015 to 1. October 2016, the GEAR1, GSRM and GSRM2.1 forecasts pass all CSEP likelihood tests. Comparative test results show statistically significant improvement of GEAR1 performance over both strain rate-based forecasts, both of which can be rejected in favor of GEAR1. Using point process residual analysis, we investigate the spatial distribution of differences in GEAR1, GSRM and GSRM2 model performance, to identify regions where the GEAR1 model should be adjusted, that could not be inferred from CSEP test results. Furthermore, we investigate whether the optimal combination of smoothed seismicity and strain rates remains stable over space and time.
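
    As an illustration of the simplest of the consistency tests named above, the CSEP number (N-) test compares the observed count of target earthquakes with the Poisson distribution implied by the total forecast rate; a minimal sketch with invented numbers:

        # N-test: quantile scores of the observed event count under the forecast.
        from scipy.stats import poisson

        def n_test(total_forecast_rate, n_observed):
            """Return (delta1, delta2): probabilities of observing at least / at most
            n_observed events if the forecast rate is correct."""
            delta1 = 1.0 - poisson.cdf(n_observed - 1, total_forecast_rate)
            delta2 = poisson.cdf(n_observed, total_forecast_rate)
            return delta1, delta2

        # e.g. the forecast expects 12.4 events in the test period, 9 were observed
        print(n_test(12.4, 9))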

  13. Salient Features of the 2015 Gorkha, Nepal Earthquake in Relation to Earthquake Cycle and Dynamic Rupture Models

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.

    2015-12-01

    Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined in a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration, rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed by deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, was deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity

  14. GEM - The Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Smolka, A.

    2009-04-01

    Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at- large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a

  15. Characteristics of broadband slow earthquakes explained by a Brownian model

    NASA Astrophysics Data System (ADS)

    Ide, S.; Takeo, A.

    2017-12-01

    The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon including tectonic tremors, low frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of broadband slow earthquakes may not have been widely accepted, most recent observations are consistent with this concept. Here, we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For a shorter duration, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic
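
    The two regimes described above can be summarized compactly (a sketch following the moment-duration scaling of Ide et al., 2007, with M_0 the seismic moment and T the source duration):

        % Long-time regime of the BSE model: a constant moment rate implies linear
        % moment-duration scaling, in contrast with the cubic scaling of regular earthquakes.
        \dot{M}_0 \approx \mathrm{const.} \;\Longrightarrow\; M_0 \propto T \quad \text{(slow earthquakes)},
        \qquad M_0 \propto T^{3} \quad \text{(regular earthquakes)}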

  16. Crossover from ballistic to normal heat transport in the ϕ4 lattice: If nonconservation of momentum is the reason, what is the mechanism?

    NASA Astrophysics Data System (ADS)

    Xiong, Daxing; Saadatmand, Danial; Dmitriev, Sergey V.

    2017-10-01

    Anomalous (non-Fourier) heat transport is no longer just a theoretical issue since it has been observed experimentally in a number of low-dimensional nanomaterials, such as SiGe nanowires, carbon nanotubes, and others. To understand these anomalous behaviors, exploring the microscopic origin of normal (Fourier) heat transport is a fascinating theoretical topic. However, this issue has not yet been fully understood even for one-dimensional (1D) model chains, in spite of a great amount of thorough studies done to date. From those studies, it has been widely accepted that the conservation of momentum is a key ingredient to induce anomalous heat transport, while momentum-nonconserving systems usually support normal heat transport where Fourier's law is valid. But if the nonconservation of momentum is the reason, what is the underlying microscopic mechanism for the observed normal heat transport? Here we carefully revisit a typical 1D momentum-nonconserving ϕ4 model, and we present evidence that the mobile discrete breathers, or, in other words, the moving intrinsic localized modes with frequency components above the linear phonon band, can be responsible for that.
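
    For context, a commonly used form of the 1D ϕ4 lattice Hamiltonian is sketched below (coefficients differ between studies); the on-site quartic term pins each particle to the substrate, which is what breaks momentum conservation:

        % Harmonic nearest-neighbor coupling plus an on-site quartic potential.
        H = \sum_n \left[ \frac{p_n^2}{2} + \frac{1}{2}\,(u_{n+1} - u_n)^2 + \frac{\beta}{4}\,u_n^4 \right]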

  17. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a given shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
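
    A minimal sketch of the loss arithmetic described above; the lognormal parameters (theta, beta) and the exposure table are invented, not PAGER values for any particular country:

        # Fatality rate at intensity S is a lognormal CDF with parameters (theta, beta);
        # expected fatalities are summed over the population exposed at each intensity.
        import numpy as np
        from scipy.stats import norm

        def fatality_rate(intensity, theta, beta):
            """Two-parameter lognormal CDF of shaking intensity."""
            return norm.cdf(np.log(intensity / theta) / beta)

        def expected_fatalities(exposure_by_mmi, theta, beta):
            """exposure_by_mmi: dict mapping MMI level -> population exposed."""
            return sum(pop * fatality_rate(mmi, theta, beta)
                       for mmi, pop in exposure_by_mmi.items())

        exposure = {6: 500_000, 7: 120_000, 8: 30_000, 9: 4_000}
        print(expected_fatalities(exposure, theta=13.0, beta=0.25))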

  18. The failure of earthquake failure models

    USGS Publications Warehouse

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  19. Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Kubo, H.

    2013-12-01

    Constructing the source model of huge subduction earthquakes is quite an important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large slip areas of heterogeneous slip models and of total SMGA sizes with seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA area to large slip area with seismic moment. They concluded this tendency would be caused by the difference in the period range of the source modeling analyses. In this paper, we try to construct a methodology for building source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype source model for huge subduction earthquakes and validate the source model by strong ground motion modeling.

  20. Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization

    NASA Astrophysics Data System (ADS)

    Lee, Kyungbook; Song, Seok Goo

    2017-09-01

    Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events ( M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.

  1. Human casualties in earthquakes: Modelling and mitigation

    USGS Publications Warehouse

    Spence, R.J.S.; So, E.K.M.

    2011-01-01

    Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.

  2. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  3. Modeling the behavior of an earthquake base-isolated building.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coveney, V. A.; Jamil, S.; Johnson, D. E.

    1997-11-26

    Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.

  4. Evolution of wealth in a non-conservative economy driven by local Nash equilibria.

    PubMed

    Degond, Pierre; Liu, Jian-Guo; Ringhofer, Christian

    2014-11-13

    We develop a model for the evolution of wealth in a non-conservative economic environment, extending a theory developed in Degond et al. (2014 J. Stat. Phys. 154, 751-780 (doi:10.1007/s10955-013-0888-4)). The model considers a system of rational agents interacting in a game-theoretical framework. This evolution drives the dynamics of the agents in both wealth and economic configuration variables. The cost function is chosen to represent a risk-averse strategy of each agent. That is, the agent is more likely to interact with the market, the more predictable the market, and therefore the smaller its individual risk. This yields a kinetic equation for an effective single particle agent density with a Nash equilibrium serving as the local thermodynamic equilibrium. We consider a regime of scale separation where the large-scale dynamics is given by a hydrodynamic closure with this local equilibrium. A class of generalized collision invariants is developed to overcome the difficulty of the non-conservative property in the hydrodynamic closure derivation of the large-scale dynamics for the evolution of wealth distribution. The result is a system of gas dynamics-type equations for the density and average wealth of the agents on large scales. We recover the inverse Gamma distribution, which has been previously considered in the literature, as a local equilibrium for particular choices of the cost function. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. Interevent times in a new alarm-based earthquake forecasting model

    NASA Astrophysics Data System (ADS)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing forecasting error defined by miss and alarm rate addition. This testing indicates that the MR forecasting technique performs well at long-, intermediate- and short-term. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
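
    The alarm index itself is straightforward to compute; a sketch is given below using the inverse-Fano-factor reading stated in the abstract (the occurrence times are invented, and the actual analysis samples interevent times with the ERS algorithm rather than from a single cell):

        # Moment ratio (MR) of interevent times: the inverse of the index of
        # dispersion (Fano factor), i.e. mean / variance of the interevent times.
        import numpy as np

        def moment_ratio(event_times):
            dt = np.diff(np.sort(np.asarray(event_times, dtype=float)))
            return dt.mean() / dt.var()

        # toy occurrence times (years) of M >= 6 events in one spatial cell
        times = [1901.2, 1907.8, 1915.1, 1915.6, 1933.0, 1951.4, 1952.0, 1978.9, 2003.3]
        print(moment_ratio(times))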

  6. Nonconservative and reverse spectral transfer in Hasegawa-Mima turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, P.W.; Newman, D.E.

    1993-01-01

    The dual cascade is generally represented as a conservative cascade of enstrophy to short wavelengths through an enstrophy similarity range and an inverse cascade of energy to long wavelengths through an energy similarity range. This picture, based on a proof due to Kraichnan [Phys. Fluids 10, 1417 (1967)], is found to be significantly modified for spectra of finite extent. Dimensional arguments and direct measurement of spectral flow in Hasegawa-Mima turbulence indicate that for both the energy and enstrophy cascades, transfer of the conserved quantity is accompanied by a nonconservative transfer of the other quantity. The decrease of a given invariant (energy or enstrophy) in the nonconservative transfer in one similarity range is balanced by the increase of that quantity in the other similarity range, thus maintaining net invariance. The increase or decrease of a given invariant quantity in one similarity range depends on the injection scale and is consistent with that quantity being carried in a self-similar transfer of the other invariant quantity. This leads, in an inertial range of finite size, to some energy being carried to small scales and some enstrophy being carried to large scales.

  7. Nonconservative and reverse spectral transfer in Hasegawa-Mima turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, P.W.; Newman, D.E.

    1993-07-01

    The dual cascade is generally represented as a conservative cascade of enstrophy to short wavelengths through an enstrophy similarity range and an inverse cascade of energy to long wavelengths through an energy similarity range. This picture, based on a proof due to Kraichnan [Phys. Fluids 10, 1417 (1967)], is found to be significantly modified for spectra of finite extent. Dimensional arguments and direct measurement of spectral flow in Hasegawa-Mima turbulence indicate that for both the energy and enstrophy cascades, transfer of the conserved quantity is accompanied by a nonconservative transfer of the other quantity. The decrease of a given invariant (energy or enstrophy) in the nonconservative transfer in one similarity range is balanced by the increase of that quantity in the other similarity range, thus maintaining net invariance. The increase or decrease of a given invariant quantity in one similarity range depends on the injection scale and is consistent with that quantity being carried in a self-similar transfer of the other invariant quantity. This leads, in an inertial range of finite size, to some energy being carried to small scales and some enstrophy being carried to large scales.

  8. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be considered a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal

  9. Modeling the Fluid Withdraw and Injection Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Meng, C.

    2016-12-01

    We present an open source numerical code, Defmod, that allows one to model induced seismicity in an efficient and standalone manner. Earthquakes induced by fluid withdrawal and injection have been a great concern to industries including oil/gas, wastewater disposal, and CO2 sequestration. Being able to numerically model the induced seismicity has long been desired. To do that, one has to consider at least two processes: a steady process that describes the inducing and aseismic stages before and in between the seismic events, and an abrupt process that describes the dynamic fault rupture accompanied by seismic energy radiation during the events. The steady process can be adequately modeled by a quasi-static model, while the abrupt process has to be modeled by a dynamic model. In most of the published modeling works, only one of these processes is considered. Geomechanicists and reservoir engineers are focused more on the quasi-static modeling, whereas geophysicists and seismologists are focused more on the dynamic modeling. The finite element code Defmod combines these two models into a hybrid model that uses the failure criterion and frictional laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loading, e.g. due to reservoir fluid withdrawal and/or injection, and by dynamic loading, e.g. due to preceding earthquakes. We demonstrate a case study for the 2013 Azle earthquake.

  10. Earthquake models using rate and state friction and fast multipoles

    NASA Astrophysics Data System (ADS)

    Tullis, T.

    2003-04-01

    The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior
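
    For reference, the laboratory rate-and-state law referred to above is commonly written in the Dieterich (aging-law) form, where the a term is the direct (viscous-like) effect and the b term the evolution effect with state variable θ:

        % Rate-and-state friction with the aging form of the state evolution law.
        \mu = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c},
        \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}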

  11. Foreshock and aftershocks in simple earthquake models.

    PubMed

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
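
    A toy sketch in the spirit of the model described above: an OFC-style lattice with a small fraction of higher-threshold asperity cells and random long-range stress redistribution; all parameter values are illustrative, not those of the published model:

        # Olami-Feder-Christensen-style cellular automaton with asperity cells.
        import numpy as np

        rng = np.random.default_rng(42)
        n_cells = 400
        alpha = 0.2                      # fraction of released stress passed to each receiver
        n_receivers = 4                  # cells receiving stress from a failing site
        thresh = np.ones(n_cells)
        asperities = rng.random(n_cells) < 0.05
        thresh[asperities] = 3.0         # asperity cells fail at a higher threshold stress
        stress = rng.random(n_cells) * thresh

        def run_event():
            """Load the lattice until the weakest cell fails; return the avalanche size."""
            global stress
            stress += (thresh - stress).min()          # uniform loading to the next failure
            size = 0
            failing = np.flatnonzero(stress >= thresh)
            while failing.size:
                for i in failing:
                    release = stress[i]
                    stress[i] = 0.0
                    # "long-range" redistribution to randomly chosen cells
                    targets = rng.choice(n_cells, size=n_receivers, replace=False)
                    stress[targets] += alpha * release
                    size += 1
                failing = np.flatnonzero(stress >= thresh)
            return size

        sizes = [run_event() for _ in range(2000)]
        print(max(sizes), float(np.mean(sizes)))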

  12. Precise Measurement of Parity Nonconserving Optical Rotation in Atomic Thallium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, N.H.; Phipp, S.J.; Baird, P.E.G.

    1995-04-03

    We report a new measurement of parity nonconserving (PNC) optical rotation on the 6p1/2-6p3/2 transition in atomic thallium near 1283 nm. The result, expressed in terms of the quantity R = Im(E1_PNC/M1), is -(15.68 ± 0.45) × 10^-8, and is consistent with current calculations based on the standard model. In addition, limits have been set on the much smaller nuclear spin-dependent rotation amplitude at R_S = (0.04 ± 0.20) × 10^-8; this is consistent with theoretical estimates which include a nuclear anapole contribution.

  13. Source models of M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake by near-field tsunami modeling

    NASA Astrophysics Data System (ADS)

    Kubota, T.; Hino, R.; Inazu, D.; Saito, T.; Iinuma, T.; Suzuki, S.; Ito, Y.; Ohta, Y.; Suzuki, K.

    2012-12-01

    We estimated source models of small-amplitude tsunamis associated with M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake using near-field tsunami records from ocean bottom pressure gauges (OBPs). The largest (Mw = 7.3) foreshock of the Tohoku-Oki earthquake occurred on 9 March, two days before the mainshock. The tsunami associated with the foreshock was clearly recorded by seven OBPs, as was the coseismic vertical deformation of the seafloor. Assuming a planar fault along the plate boundary as a source, the OBP records were inverted for slip distribution. As a result, most of the coseismic slip was found to be concentrated in an area of about 40 x 40 km located to the northwest of the epicenter, suggesting downdip rupture propagation. The seismic moment from our tsunami waveform inversion is 1.4 x 10^20 Nm, equivalent to Mw 7.3. On 10 July 2011, an earthquake of Mw 7.0 occurred near the hypocenter of the mainshock. Its relatively deep focus and strike-slip focal mechanism indicate that this earthquake was an intraslab earthquake. The earthquake was associated with a small-amplitude tsunami. Using the OBP records, we estimated a model of the initial sea-surface height distribution. Our tsunami inversion showed that a pair of uplift and subsidence lobes was required to explain the observed tsunami waveform. The spatial pattern of the seafloor deformation is consistent with the oblique strike-slip solution obtained by the seismic data analyses. The location and strike of the hinge line separating the uplift and subsidence zones correspond well to the linear distribution of aftershocks determined using local OBS data (Obana et al., 2012).
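
    A generic sketch of the linear waveform inversion described above: pre-computed Green's function waveforms (one column per subfault, stacked over all gauges and time samples) are fit to the observed pressure records with a non-negativity constraint on slip; all arrays here are synthetic placeholders:

        # Linear slip inversion d = G m solved with non-negative least squares.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(3)
        n_samples, n_gauges, n_subfaults = 300, 7, 16
        G = rng.standard_normal((n_samples * n_gauges, n_subfaults))   # synthetic Green's functions
        slip_true = np.clip(rng.standard_normal(n_subfaults), 0.0, None)
        d = G @ slip_true + 0.05 * rng.standard_normal(n_samples * n_gauges)

        slip_hat, misfit = nnls(G, d)     # non-negative least-squares slip estimate
        print(np.round(slip_hat, 2))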

  14. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard in Taiwan with temporal change, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model is adopted to describe the rupture recurrence intervals of specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantages of incorporating both long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than previously published models, and thus offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios such as victim relocation and sheltering.
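
    For reference, the BPT renewal model named above uses the inverse-Gaussian probability density for the recurrence time T, with mean recurrence interval μ and aperiodicity α; the conditional rupture probability over the next ΔT years, given elapsed time t_e since the last rupture, follows from its cumulative distribution F. This is the standard formulation, written here generically rather than as the TEM implementation:

      f(t) = \sqrt{\frac{\mu}{2\pi\alpha^{2}t^{3}}}\,
             \exp\!\left[-\frac{(t-\mu)^{2}}{2\mu\alpha^{2}t}\right],
      \qquad
      P(t_e < T \le t_e+\Delta T \mid T > t_e) = \frac{F(t_e+\Delta T)-F(t_e)}{1-F(t_e)} .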

  15. Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.

    2012-12-01

    Constructing source models of huge subduction earthquakes is a very important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of multiple strong ground motion generation area (SMGA, Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that the SMGA size has an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be estimated from that empirical relation, given the seismic moment of an anticipated earthquake. Concerning the setting of the SMGA positions, information on fault segments is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained, and the Nojima, Suma, and Suwayama segments each have one SMGA in the SMGA modeling (e.g. Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which shows the applicability of the empirical scaling relationship for the SMGA. Two of the SMGAs are in the Miyagi-Oki segment, and the other two are in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all SMGAs correspond to the historical source areas of the 1930s. These SMGAs do not overlap the huge-slip area in the shallower part of the source fault that was estimated from teleseismic data, long-period strong motion data, and/or geodetic data for the 2011 mainshock. This fact shows that the huge-slip area does not contribute to strong ground motion generation (10-0.1 s). The information on fault segments in the subduction zone, or

  16. Assessing a 3D smoothed seismicity model of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan

    2016-04-01

    As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
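
    The seismogenic-index formulation mentioned above is commonly written as follows; this is a generic statement of the model class (the post-shut-in behaviour is often taken as an Omori-like rate decay), not a reproduction of the specific parameterization tested here:

      \log_{10} N_{\ge M}(t) = \log_{10} V(t) + \Sigma - bM ,

    where N_{≥M}(t) is the expected number of induced events above magnitude M up to time t, V(t) the cumulative injected fluid volume, Σ the seismogenic index of the reservoir, and b the Gutenberg-Richter b-value.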

  17. Redefining Earthquakes and the Earthquake Machine

    ERIC Educational Resources Information Center

    Hubenthal, Michael; Braile, Larry; Taber, John

    2008-01-01

    The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…

  18. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

    Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
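
    A minimal modern sketch of the workflow described above (identification, estimation, residual checking, and time-domain simulation), assuming a digitized acceleration record and using standard statsmodels calls rather than the report's original 1981 software:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.stats.diagnostic import acorr_ljungbox

      acc = np.loadtxt("accelerogram.txt")        # hypothetical file of equally spaced acceleration samples
      model = ARIMA(acc, order=(2, 0, 1))         # ARMA(2,1): two AR terms, one MA term, no differencing
      result = model.fit()                        # maximum-likelihood parameter estimates

      print(result.summary())                                # inspect estimated coefficients
      print(acorr_ljungbox(result.resid, lags=[10]))         # test residuals for remaining correlation
      synthetic = result.simulate(nsimulations=len(acc))     # simulate a ground-motion realization in the time domain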

  19. Modelling earthquake ruptures with dynamic off-fault damage

    NASA Astrophysics Data System (ADS)

    Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban

    2017-04-01

    Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as fault geometry, the constitutive law of the medium, and the friction law operating on the fault. It is necessary to consider all of the above complexities of fault systems to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared to a model with a simple planar fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State-of-the-art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry such as fault branches and kinks and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric, corresponding to the off-fault damage and the complex fault geometry respectively. We used the FDEM-based software tool HOSSedu (Hybrid Optimization Software Suite - Educational Version), developed by Los Alamos National Laboratory, for the earthquake rupture modelling. We first conducted a cross-validation of this new methodology against other conventional numerical schemes such as the finite difference method (FDM), the spectral element method (SEM) and the boundary integral equation method (BIEM), to evaluate its accuracy with various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for

  20. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

    The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
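
    For comparison with the numbers quoted above, the time-independent (Poisson) probability of one or more events in an exposure time T follows the standard relation below; for example, a 30-yr Poisson probability of 0.60 corresponds to an annual rate of roughly 0.03 M ≥ 6.7 earthquakes per year:

      P(\ge 1\ \text{event in } T) = 1 - e^{-\lambda T},
      \qquad
      \lambda = -\frac{\ln(1-P)}{T} = -\frac{\ln(0.40)}{30\ \mathrm{yr}} \approx 0.031\ \mathrm{yr^{-1}} .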

  1. The Dynamics of Finite-Dimensional Systems Under Nonconservative Position Forces

    NASA Astrophysics Data System (ADS)

    Lobas, L. G.

    2001-01-01

    General theorems on the stability of stationary states of mechanical systems subjected to nonconservative position forces are presented. Specific mechanical problems on gyroscopic systems, a double-link pendulum with a follower force and an elastically fixed upper tip, multilink pneumowheel vehicles, a monorail car, and rail-guided vehicles are analyzed. Methods for the investigation of divergent bifurcations and catastrophes of stationary states are described.

  2. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
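
    One common parameterization of the space-time ETAS conditional intensity cited above (Ogata, 1988, 1998; Zhuang et al., 2002) is shown below for reference; the background rate μ(x, y) is the component interpreted in this work as reflecting external driving such as fluid injection, while the sum describes aftershock triggering:

      \lambda(t,x,y \mid \mathcal{H}_t) = \mu(x,y)
        + \sum_{i:\,t_i<t} K\, e^{\alpha (M_i - M_c)}\,
          \frac{1}{(t - t_i + c)^{p}}\; f(x - x_i,\, y - y_i;\, M_i) ,

    where f is a normalized spatial kernel and M_c the catalog completeness magnitude.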

  3. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior—an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  4. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  5. Earthquake Hazard and Risk in Sub-Saharan Africa: current status of the Global Earthquake model (GEM) initiative in the region

    NASA Astrophysics Data System (ADS)

    Ayele, Atalay; Midzi, Vunganai; Ateba, Bekoa; Mulabisana, Thifhelimbilu; Marimira, Kwangwari; Hlatywayo, Dumisani J.; Akpan, Ofonime; Amponsah, Paulina; Georges, Tuluka M.; Durrheim, Ray

    2013-04-01

    Large magnitude earthquakes have been observed in Sub-Saharan Africa in the recent past, such as the Machaze event of 2006 (Mw 7.0) in Mozambique and the 2009 Karonga earthquake (Mw 6.2) in Malawi. The December 13, 1910 earthquake (Ms = 7.3) in the Rukwa rift (Tanzania) is the largest of all instrumentally recorded events known to have occurred in East Africa. The overall earthquake hazard in the region is on the lower side compared to other earthquake-prone areas of the globe. However, the risk level is high enough for it to receive the attention of African governments and the donor community. The latest earthquake hazard map for sub-Saharan Africa was produced in 1999, and an update is long overdue as development activities in the construction industry are booming all over sub-Saharan Africa. To this effect, regional seismologists are working together under the GEM (Global Earthquake Model) framework to improve incomplete, inhomogeneous and uncertain catalogues. The working group is also contributing to the UNESCO-IGCP (SIDA) 601 project and assessing all possible sources of data for the catalogue as well as for the seismotectonic characteristics that will help to develop a reasonable hazard model for the region. Current progress indicates that the region is more seismically active than previously thought. This demands a coordinated effort by the regional experts to systematically compile all available information for a better output so as to mitigate earthquake risk in sub-Saharan Africa.

  6. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and amplitude and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, under predict shaking intensities by about one-half modified Mercalli intensity (MMI) units (25%-35% in peak velocity), while our broadband simulations, on average, under predict the shaking intensities by one-fourth MMI units (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.

  7. Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie

    2014-05-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions, and find significant variation in the relative performance of the models depending upon the input data.
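
    The Coulomb stress metric underlying the hybrid model discussed above is the standard change in Coulomb failure stress resolved on a receiver fault plane:

      \Delta \mathrm{CFS} = \Delta\tau + \mu' \,\Delta\sigma_n ,

    where Δτ is the change in shear stress in the slip direction, Δσ_n the change in normal stress (positive for unclamping), and μ′ the effective friction coefficient; positive ΔCFS moves the receiver fault closer to failure.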

  8. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  9. ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1986-01-01

    A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
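
    A minimal sketch of the "filtered white noise multiplied by an envelope function" construction discussed above; the corner frequencies, envelope shape, and duration below are assumptions for illustration, and a filtered shot-noise variant would replace the Gaussian noise with randomly timed impulses:

      import numpy as np
      from scipy.signal import butter, lfilter

      dt, duration = 0.01, 20.0
      t = np.arange(0.0, duration, dt)
      noise = np.random.randn(t.size)                         # stationary Gaussian white noise
      b, a = butter(4, [0.5, 10.0], btype="band", fs=1.0/dt)  # assumed pass band of 0.5-10 Hz
      filtered = lfilter(b, a, noise)                         # filtered white noise (still stationary)
      envelope = (t / 2.0) ** 2 * np.exp(-t / 2.0)            # assumed rise-and-decay envelope function
      envelope /= envelope.max()
      acc = envelope * filtered                               # nonstationary synthetic acceleration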

  10. Forecast model for great earthquakes at the Nankai Trough subduction zone

    USGS Publications Warehouse

    Stuart, W.D.

    1988-01-01

    An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement- and velocity-dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.

  11. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    NASA Astrophysics Data System (ADS)

    McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.

    2017-09-01

    The Gorkha earthquake on April 25th, 2015 was a long-anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.

  12. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    USGS Publications Warehouse

    McNamara, Daniel E.; Yeck, William; Barnhart, William D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, Amod; Hough, S.E.; Benz, Harley M.; Earle, Paul

    2017-01-01

    The Gorkha earthquake on April 25th, 2015 was a long-anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a ~150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10–15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.

  13. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.

    2008-12-01

    Empirically explosions have been discriminated from natural earthquakes using regional amplitude ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.

  14. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    NASA Astrophysics Data System (ADS)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Improving our understanding of what contributes to casualties in earthquakes requires coordinated data-gathering efforts across disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering these complex physical pathways, loss models purely based on historic casualty data, or even worse, rates derived from other countries, will be of very limited value. What's more, as the world's population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities and cities which most need them. Coupling the theories and findings from

  15. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but the results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by failure processes that allow a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
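
    For orientation, the standard Weibull hazard that serves as the baseline of the generalized family mentioned above is monotone (decreasing for k < 1, increasing for k > 1); generalized and extended Weibull members are attractive here precisely because suitable combinations of such components yield the bathtub shape needed to capture early aftershocks and late foreshocks within an inter-event interval:

      h(t) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1},
      \qquad k>0,\ \lambda>0 .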

  16. Application of a long-range forecasting model to earthquakes in the Japan mainland testing region

    NASA Astrophysics Data System (ADS)

    Rhoades, David A.

    2011-03-01

    The Every Earthquake a Precursor According to Scale (EEPAS) model is a long-range forecasting method which has been previously applied to a number of regions, including Japan. The Collaboratory for the Study of Earthquake Predictability (CSEP) forecasting experiment in Japan provides an opportunity to test the model at lower magnitudes than previously and to compare it with other competing models. The model sums contributions to the rate density from past earthquakes based on predictive scaling relations derived from the precursory scale increase phenomenon. Two features of the earthquake catalogue in the Japan mainland region create difficulties in applying the model, namely magnitude-dependence in the proportion of aftershocks and in the Gutenberg-Richter b-value. To accommodate these features, the model was fitted separately to earthquakes in three different target magnitude classes over the period 2000-2009. There are some substantial unexplained differences in parameters between classes, but the time and magnitude distributions of the individual earthquake contributions are such that the model is suitable for three-month testing at M ≥ 4 and for one-year testing at M ≥ 5. In retrospective analyses, the mean probability gain of the EEPAS model over a spatially smoothed seismicity model increases with magnitude. The same trend is expected in prospective testing. The Proximity to Past Earthquakes (PPE) model has been submitted to the same testing classes as the EEPAS model. Its role is that of a spatially-smoothed reference model, against which the performance of time-varying models can be compared.

  17. Short- and Long-Term Earthquake Forecasts Based on Statistical Models

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner

    2017-04-01

    Epidemic-type aftershock sequence (ETAS) models have been used experimentally to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and for the 2012 Emilia earthquake sequence. These forecasts represented the first two pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy, which was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the Seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. Communication to the public is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy before opening the information to the public. With regard to long-term time-dependent earthquake forecasting, the application of a newly developed simulation algorithm to the Calabria region provided typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.

  18. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467

  19. Modeling Seismic Cycles of Great Megathrust Earthquakes Across the Scales With Focus at Postseismic Phase

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan V.; Muldashev, Iskander A.

    2017-12-01

    Subduction is a substantially multiscale process in which stresses are built up by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by multiple subsequent relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process from a 1-minute to a million-year time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time step algorithm, recreates the deformation process as observed naturally during the seismic cycle and over multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or a day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench and not by afterslip localized at the fault, as is currently believed. Our model replicates the centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, like the Great Chile 1960 Earthquake.
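
    The rate-and-state friction referred to above is conventionally written in the Dieterich-Ruina form, shown here for reference with the aging-law state evolution (the paper's specific parameter choices are not reproduced):

      \mu(V,\theta) = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c},
      \qquad
      \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c} ,

    where V is slip velocity, θ the state variable, D_c the characteristic slip distance, and a, b the direct-effect and evolution-effect coefficients.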

  20. PAGER-CAT: A composite earthquake catalog for calibrating global fatality models

    USGS Publications Warehouse

    Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.

    2009-01-01

    We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy to use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of Shake Maps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are

  1. Modelling low-frequency volcanic earthquakes in a viscoelastic medium with topography

    NASA Astrophysics Data System (ADS)

    Jousset, Philippe; Neuberg, Jürgen; Jolly, Arthur

    2004-11-01

    Magma properties are fundamental to explain the volcanic eruption style as well as the generation and propagation of seismic waves. This study focusses on magma properties and rheology and their impact on low-frequency volcanic earthquakes. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency earthquakes. Using a 2-D finite-difference scheme, we model the propagation of seismic energy initiated in a fluid-filled conduit embedded in a homogeneous viscoelastic medium with topography. We model intrinsic attenuation by linear viscoelastic theory and we show that volcanic media can be approximated by a standard linear solid (SLS) for seismic frequencies above 2 Hz. Results demonstrate that attenuation modifies both amplitudes and dispersive characteristics of low-frequency earthquakes. Low frequency volcanic earthquakes are dispersive by nature; however, if attenuation is introduced, their dispersion characteristics will be altered. The topography modifies the amplitudes, depending on the position of the seismographs at the surface. This study shows that we need to take into account attenuation and topography to interpret correctly observed low-frequency volcanic earthquakes. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.
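
    For the standard linear solid (SLS) approximation adopted above, the stress-strain relation and the resulting frequency-dependent attenuation take the classical form (written generically; the relaxation times used in the study are not restated here):

      \sigma + \tau_\sigma \dot{\sigma} = M_R \left(\varepsilon + \tau_\varepsilon \dot{\varepsilon}\right),
      \qquad
      Q^{-1}(\omega) = \frac{\omega\,(\tau_\varepsilon - \tau_\sigma)}{1 + \omega^{2}\tau_\varepsilon\tau_\sigma} ,

    with relaxed modulus M_R and relaxation times τ_ε ≥ τ_σ; Q⁻¹ peaks at ω = 1/√(τ_ε τ_σ), which is why a single SLS approximates roughly constant attenuation only over a limited frequency band.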

  2. Laboratory constraints on models of earthquake recurrence

    NASA Astrophysics Data System (ADS)

    Beeler, N. M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian; Goldsby, David

    2014-12-01

    In this study, rock friction "stick-slip" experiments are used to develop constraints on models of earthquake recurrence. Constant rate loading of bare rock surfaces in high-quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip-rate-dependent process that also determines the size of the stress drop and, as a consequence, stress drop varies weakly but systematically with loading rate. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. The experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a nonlinear slip predictable model. The fault's rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence covary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability, and successive stress drops are strongly correlated indicating a "memory" of prior slip history that extends over at least one recurrence cycle.

  3. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
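
    A hedged sketch of the adaptive smoothed-seismicity idea described above (the kernel choice, neighbor number, minimum bandwidth, and grid are illustrative assumptions, not the paper's implementation): each epicenter is smoothed with a Gaussian kernel whose width is its distance to the k-th nearest neighboring epicenter, so the smoothing distance shrinks in active clusters and grows in quiet regions.

      import numpy as np
      from scipy.spatial import cKDTree

      epicenters = np.random.rand(500, 2) * 10.0     # hypothetical epicenters in a 10 x 10 (degree) box
      k = 5
      tree = cKDTree(epicenters)
      dist, _ = tree.query(epicenters, k=k + 1)      # column 0 is the event itself (distance 0)
      bandwidth = np.maximum(dist[:, -1], 0.05)      # adaptive smoothing distance with an assumed floor

      gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
      rate = np.zeros_like(gx)
      for (x, y), h in zip(epicenters, bandwidth):   # sum of 2-D Gaussian kernels, one per earthquake
          rate += np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * h ** 2)) / (2 * np.pi * h ** 2)
      rate /= rate.sum()                             # normalize to a spatial probability map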

  4. Numerical Modeling and Forecasting of Strong Sumatra Earthquakes

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Yin, C.

    2007-12-01

    ESyS-Crustal, a finite-element-based computational model and software package, has been developed and applied to simulate complex nonlinear interacting fault systems with the goal of accurately predicting earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, simulation results using the developed software clearly indicate that the shallow part of the subduction zone in the Sumatra region between latitudes 6S and 2N had been locked for a long time, and remained locked even after the northern part of the zone underwent a major slip event resulting in the infamous Boxing Day tsunami. Two strong earthquakes that occurred in the distant past in this region (between 6S and 1S), in 1797 (M8.2) and 1833 (M9.0), are indicative of the high potential for very large destructive earthquakes to occur in this region, with relatively long periods of quiescence in between. The results were presented at the 5th ACES International Workshop in 2006, before the 2007 Sumatra earthquakes occurred, which fell exactly within the predicted zone (see the ACES2006 web site and the detailed presentation file through the workshop agenda). The preliminary simulation results obtained so far show that there seem to be a few obvious events around the previously locked zone before it is totally ruptured, but apparently no indication of a giant earthquake similar to the 2004 M9 event in the near future, which several earthquake scientists believe will happen. Further detailed simulations will be carried out and presented at the meeting.

  5. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and a fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  6. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years, impressive achievements have been made in improving inferences about earthquake sources through the use of InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors now in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inference are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. InSAR-derived surface displacements and seismological waveforms are also combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling are shrinking towards common source-medium descriptions and a near-field/far-field view of the data. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join the community efforts with the particular goal of improving crustal earthquake source inferences in areas that are generally not well instrumented, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, more recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near field and far field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged for simple planar finite

  7. Finite element models of earthquake cycles in mature strike-slip fault zones

    NASA Astrophysics Data System (ADS)

    Lynch, John Charles

    The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, two main questions are posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely-slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a
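
    To make the tracking of principal-stress plunge angles concrete, the following minimal sketch (generic, not the dissertation's code; the example stress tensor and sign convention are assumptions) extracts principal stresses and their plunges by eigendecomposition:

```python
# Sketch: principal stresses and their plunge angles from a 3-D stress tensor.
# Coordinates assumed x=east, y=north, z=up; compression-positive convention is
# an illustrative assumption, not necessarily the dissertation's convention.
import numpy as np

def principal_plunges(sigma):
    """sigma: symmetric 3x3 stress tensor. Returns (values, plunges in degrees)."""
    vals, vecs = np.linalg.eigh(sigma)          # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]              # reorder so sigma1 >= sigma2 >= sigma3
    vals, vecs = vals[order], vecs[:, order]
    # Plunge = angle between each principal axis and the horizontal plane.
    plunges = np.degrees(np.arcsin(np.abs(vecs[2, :])))
    return vals, plunges

# Example: strike-slip-like stress state rotated slightly out of an Andersonian
# orientation (purely illustrative numbers, in MPa).
sigma = np.array([[30.0,  5.0,  2.0],
                  [ 5.0, 10.0,  1.0],
                  [ 2.0,  1.0, 20.0]])
vals, plunges = principal_plunges(sigma)
for i, (v, p) in enumerate(zip(vals, plunges), start=1):
    print(f"sigma{i} = {v:6.2f} MPa, plunge = {p:5.1f} deg")
```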

  8. Modeling temporal changes of low-frequency earthquake bursts near Parkfield, CA

    NASA Astrophysics Data System (ADS)

    Wu, C.; Daub, E. G.

    2016-12-01

    Tectonic tremor and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments over the last decade. LFEs are presumed to be caused by failure of deep fault patches during a slow slip event, and the long-term variation in LFE recurrence could provide crucial insight into the deep fault zone processes that may lead to future large earthquakes. However, the physical mechanisms causing the temporal changes of LFE recurrence are still under debate. In this study, we combine observations of long-term changes in LFE burst activity near Parkfield, CA with a brittle and ductile friction (BDF) model, and use the model to constrain the possible physical mechanisms causing the observed long-term changes in LFE burst activity after the 2004 M6 Parkfield earthquake. The BDF model mimics the slipping of deep fault patches by a spring-driven block slider with both brittle and ductile friction components. We use the BDF model to test possible mechanisms, including static stress imposed by the Parkfield earthquake, changes in pore pressure, tectonic force, afterslip, brittle friction strength, and brittle contact failure distance. The simulation results suggest that changes in brittle friction strength and failure distance are more likely to cause the observed changes in LFE bursts than the other mechanisms.
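
    The following toy sketch is in the spirit of such a spring-driven block slider with brittle and ductile friction elements; it is not the authors' BDF implementation, and all parameter values are invented for illustration:

```python
# Sketch: spring-driven block slider with a brittle (threshold) element and a
# ductile (viscous) element, in the spirit of a brittle-ductile friction model.
# All parameter values are illustrative, not those constrained in the study.
import numpy as np

k = 1.0          # spring stiffness
v_load = 1.0     # loading (plate) velocity
tau_b = 5.0      # brittle strength: failure when the spring force exceeds this
d_fail = 2.0     # brittle contact failure distance (slip released per failure)
eta = 10.0       # ductile (viscous) resistance coefficient
dt = 0.01

x_load, x_block = 0.0, 0.0
event_times = []
for step in range(200000):
    x_load += v_load * dt
    force = k * (x_load - x_block)
    if force > tau_b:
        # Brittle failure: the block jumps forward by the failure distance.
        x_block += d_fail
        event_times.append(step * dt)
    else:
        # Ductile creep: steady sliding at a rate set by the viscous element.
        x_block += (force / eta) * dt

intervals = np.diff(event_times)
print(f"{len(event_times)} failures, mean recurrence ~ {intervals.mean():.2f} time units")
# Raising tau_b or d_fail lengthens the recurrence of brittle failures, which is
# the kind of change invoked above to explain the evolving LFE burst behaviour.
```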

  9. Stabilizing and destabilizing effects of damping in non-conservative systems: Some new results

    NASA Astrophysics Data System (ADS)

    Abdullatif, Mahmoud; Mukherjee, Ranjan; Hellum, Aren

    2018-01-01

    Previous work has amply demonstrated that non-conservative systems can be made unstable by the application of damping. Systems with two neutrally-stable damping levels, whereby the system initially gains stability but later loses stability as the level of damping is increased, have also been observed. The phenomenon of three damping-induced stability transitions has not been reported in the literature. Here we show that the addition of damping can cause non-conservative systems to become stable, then unstable, then stable again at the same value of the non-conservative forcing variable. This combination of stability transitions is found to exist for several example systems, including linkages with follower end forces and fluid-conveying pipes. Another interesting observation is that a given system can exhibit different forms of stability transitions in different regions of its parameter space. In a particular example, the neutral stability curves corresponding to two different modes are observed to intersect, such that the boundary separating the stable and unstable regions is piecewise continuous. This observation requires that the accepted definitions of the "stabilizing" and "destabilizing" roles of damping be revised. All of these stability transition behaviors were found by applying the Routh-Hurwitz procedure, whereby the traditional procedure is first applied to the characteristic polynomial of the system, and then again to guarantee the existence of a second-order auxiliary polynomial in the Routh array. This procedure is developed in the context of examples, each of which concerns a classical apparatus whose dynamics are more interesting than previously believed.
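
    For orientation, a minimal generic sketch of the traditional Routh-Hurwitz step (building the Routh array and checking the signs of its first column) is given below; it is not tied to the linkage or pipe systems of the paper, and the example polynomial is arbitrary:

```python
# Sketch: Routh array and stability check for a polynomial with real coefficients
# a[0]*s^n + a[1]*s^(n-1) + ... + a[n]. Example coefficients are illustrative.
import numpy as np

def routh_array(coeffs):
    coeffs = list(map(float, coeffs))
    n = len(coeffs) - 1
    cols = n // 2 + 1
    R = np.zeros((n + 1, cols))
    R[0, :len(coeffs[0::2])] = coeffs[0::2]
    R[1, :len(coeffs[1::2])] = coeffs[1::2]
    for i in range(2, n + 1):
        for j in range(cols - 1):
            pivot = R[i - 1, 0]
            if pivot == 0.0:
                pivot = 1e-12   # crude epsilon substitution for a zero pivot
            R[i, j] = (pivot * R[i - 2, j + 1] - R[i - 2, 0] * R[i - 1, j + 1]) / pivot
    return R

def is_stable(coeffs):
    # Assumes a positive leading coefficient: stable iff the first column > 0.
    first_col = routh_array(coeffs)[:, 0]
    return bool(np.all(first_col > 0)), first_col

# Example: s^4 + 2 s^3 + 3 s^2 + 4 s + 5 (two sign changes -> unstable).
stable, col = is_stable([1, 2, 3, 4, 5])
print("stable:", stable)
print("first column of the Routh array:", np.round(col, 4))
```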

  10. Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi

    2018-05-01

    Slow-slip phenomena, including afterslip and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate- and state-dependent friction law. By varying only a few model parameters, this simple model can reproduce a variety of seismic slip behaviors within a single framework, including main shocks, precursory nucleation processes, afterslip, and silent earthquakes.
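
    A minimal sketch of a quasi-static Burridge-Knopoff chain is shown below; note that it uses a simple static/dynamic friction rule rather than the full rate- and state-dependent law of the study, and all parameters are illustrative:

```python
# Sketch: quasi-static 1-D Burridge-Knopoff chain with a simple static/dynamic
# friction rule. This is a simplification for illustration; the study uses the
# full rate- and state-dependent friction law. Parameter values are arbitrary.
import numpy as np

N = 100                     # number of blocks
kc, kp = 1.0, 0.2           # inter-block spring and plate (leaf) spring
f_dynamic = 0.6             # residual friction level after slip
rng = np.random.default_rng(1)
f_static = np.clip(1.0 + 0.1 * rng.standard_normal(N), 0.7, None)  # thresholds

u = np.zeros(N)             # block displacements
u_plate = 0.0
event_sizes = []
for step in range(20000):
    u_plate += 0.01         # slow, uniform plate loading
    slipped = np.zeros(N, dtype=bool)
    while True:
        left = np.roll(u, 1);   left[0] = u[0]      # free ends: no outside springs
        right = np.roll(u, -1); right[-1] = u[-1]
        force = kp * (u_plate - u) + kc * (left - 2.0 * u + right)
        unstable = force > f_static
        if not unstable.any():
            break
        # Unstable blocks slip until their local force drops to the dynamic level.
        u[unstable] += (force[unstable] - f_dynamic) / (kp + 2.0 * kc)
        slipped |= unstable
    if slipped.any():
        event_sizes.append(int(slipped.sum()))

print(f"{len(event_sizes)} events; largest involved {max(event_sizes)} blocks")
```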

  11. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
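
    One way to make such a Poisson comparison quantitative is to fit an exponential distribution to interevent times and compare survival functions; the sketch below uses synthetic times, not the classroom data:

```python
# Sketch: compare interevent times with the exponential distribution implied by
# a Poisson process. The catalog below is synthetic, not the classroom data.
import numpy as np

rng = np.random.default_rng(42)
event_times = np.cumsum(rng.exponential(scale=30.0, size=500))  # synthetic catalog

dt = np.diff(event_times)                 # interevent times
rate = 1.0 / dt.mean()                    # maximum-likelihood Poisson rate

# Compare the empirical survival function with exp(-rate * t).
t = np.sort(dt)
empirical = 1.0 - np.arange(1, len(t) + 1) / len(t)
poisson = np.exp(-rate * t)
max_diff = np.max(np.abs(empirical - poisson))
print(f"MLE rate = {rate:.4f} events per unit time")
print(f"max |empirical - exponential| survival difference = {max_diff:.3f}")
```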

  12. Italian Case Studies Modelling Complex Earthquake Sources In PSHA

    NASA Astrophysics Data System (ADS)

    Gee, Robin; Peruzza, Laura; Pagani, Marco

    2017-04-01

    This study presents two examples of modelling complex seismic sources in Italy, carried out in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred on Collalto Stoccaggio, a natural gas storage facility in Northern Italy, located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between the gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where the rates of earthquakes are assumed to be time-independent. The source model consists of faults and distributed seismicity to account for earthquakes that cannot be associated with specific structures. All potentially active faults within 50 km of the site are considered, and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study, in which we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of Oct 26th and 30th (M6.1 and M6.6 respectively, no deaths). The aftershock hazard is modelled using a fault source with complex geometry, based on literature data

  13. Viscoelastic shear zone model of a strike-slip earthquake cycle

    USGS Publications Warehouse

    Pollitz, F.F.

    2001-01-01

    I examine the behavior of a two-dimensional (2-D) strike-slip fault system embedded in a 1-D elastic layer (schizosphere) overlying a uniform viscoelastic half-space (plastosphere) and within the boundaries of a finite width shear zone. The viscoelastic coupling model of Savage and Prescott [1978] considers the viscoelastic response of this system, in the absence of the shear zone boundaries, to an earthquake occurring within the upper elastic layer, steady slip beneath a prescribed depth, and the superposition of the responses of multiple earthquakes with characteristic slip occurring at regular intervals. So formulated, the viscoelastic coupling model predicts that sufficiently long after initiation of the system, (1) average fault-parallel velocity at any point is the average slip rate of that side of the fault and (2) far-field velocities equal the same constant rate. Because of the sensitivity to the mechanical properties of the schizosphere-plastosphere system (i.e., elastic layer thickness, plastosphere viscosity), this model has been used to infer such properties from measurements of interseismic velocity. Such inferences exploit the predicted behavior at a known time within the earthquake cycle. By modifying the viscoelastic coupling model to satisfy the additional constraint that the absolute velocity at prescribed shear zone boundaries is constant, I find that even though the time-averaged behavior remains the same, the spatiotemporal pattern of surface deformation (particularly its temporal variation within an earthquake cycle) is markedly different from that predicted by the conventional viscoelastic coupling model. These differences are magnified as plastosphere viscosity is reduced or as the recurrence interval of periodic earthquakes is lengthened. Application to the interseismic velocity field along the Mojave section of the San Andreas fault suggests that the region behaves mechanically like a ~600-km-wide shear zone accommodating 50 mm/yr fault

  14. Evaluation of CAMEL - comprehensive areal model of earthquake-induced landslides

    USGS Publications Warehouse

    Miles, S.B.; Keefer, D.K.

    2009-01-01

    A new comprehensive areal model of earthquake-induced landslides (CAMEL) has been developed to assist in planning decisions related to disaster risk reduction. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using fuzzy logic systems and geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and of knowledge about the effect of these conditions on the likely areal concentration of each landslide type. CAMEL has been empirically evaluated with respect to disrupted landslides (Category I) using a case study of the 1989 M = 6.9 Loma Prieta, CA earthquake. In this case, CAMEL performs best for disrupted slides and falls in soil. For disrupted rock falls and slides, CAMEL's performance was slightly poorer. The model predicted a low occurrence of rock avalanches, when none in fact occurred. A similar comparison with the Loma Prieta case study was also conducted using a simplified Newmark displacement model. The area-under-the-curve method of evaluation was used in order to draw comparisons between both models, revealing improved performance with CAMEL. CAMEL should not, however, be viewed as a strict alternative to Newmark displacement models. CAMEL can be used to integrate Newmark displacements with other, previously incompatible, types of knowledge. © 2008 Elsevier B.V.
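
    For context, the simplified Newmark displacement model mentioned above integrates ground acceleration in excess of a critical (yield) acceleration to obtain permanent block displacement; the following generic sketch uses a synthetic accelerogram and an assumed critical acceleration, not the study's inputs:

```python
# Sketch: rigid-block Newmark displacement from a synthetic accelerogram.
# The input motion and critical acceleration are illustrative values only.
import numpy as np

def newmark_displacement(acc, dt, a_crit):
    """acc in m/s^2, dt in s, a_crit in m/s^2. Returns displacement in metres."""
    vel = 0.0
    disp = 0.0
    for a in acc:
        # The block slides relative to the ground while ground acceleration
        # exceeds the critical acceleration, and keeps moving until its relative
        # velocity decays back to zero.
        if a > a_crit or vel > 0.0:
            vel += (a - a_crit) * dt
            vel = max(vel, 0.0)          # one-directional sliding in this sketch
            disp += vel * dt
    return disp

dt = 0.01
t = np.arange(0.0, 20.0, dt)
# Synthetic pulse-like motion (purely illustrative).
acc = 3.0 * np.exp(-0.3 * t) * np.sin(2.0 * np.pi * 1.5 * t)
print(f"Newmark displacement: {newmark_displacement(acc, dt, a_crit=0.5):.3f} m")
```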

  15. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  16. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  17. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
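
    A numerical sketch of the idea, assuming a Brownian passage-time renewal distribution and a stationary (equilibrium) distribution of elapsed times when the date of the last event is unknown, is given below; the mean recurrence, aperiodicity, and forecast duration are illustrative values, not those of the paper:

```python
# Sketch: probability of an event in the next T_f years for a renewal model when
# the date of the last event is unknown, assuming the elapsed time follows the
# stationary (equilibrium) distribution. Parameters are illustrative.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

mu = 200.0          # mean recurrence interval, years
alpha = 0.5         # aperiodicity (coefficient of variation)
T_f = 50.0          # forecast duration, years

# Brownian passage time == inverse Gaussian with mean mu and shape mu/alpha^2.
bpt = stats.invgauss(mu=alpha**2, scale=mu / alpha**2)

# With the last-event date unknown, averaging the conditional probability over
# the equilibrium elapsed-time density S(t)/mu collapses to
#   P = (1/mu) * integral_0^inf [F(t + T_f) - F(t)] dt.
t = np.linspace(0.0, 20.0 * mu, 100001)
prob_renewal = trapezoid(bpt.cdf(t + T_f) - bpt.cdf(t), t) / mu
prob_poisson = 1.0 - np.exp(-T_f / mu)

print(f"renewal (date unknown): {prob_renewal:.4f}")
print(f"Poisson approximation : {prob_poisson:.4f}")
```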

  18. Understanding earthquake from the granular physics point of view — Causes of earthquake, earthquake precursors and predictions

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    We treat the Earth's crust and mantle as large-scale discrete matter based on the principles of granular physics and existing experimental observations. The main outcomes are: a granular model of the structure and movement of the Earth's crust and mantle is established. The formation mechanism of the tectonic forces that cause earthquakes, and a model for the propagation of precursory information, are proposed. Properties of the seismic precursory information and its relevance to earthquake occurrence are illustrated, and the principles of ways to detect effective seismic precursors are elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena that were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Due to the discrete nature of the Earth's crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and a new understanding, different from the traditional seismological viewpoint, is obtained.

  19. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means to predict strong ground motion using the Mw 7.4 and Mw 7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point-source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed a simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment, corner frequency, and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model out of small-to-moderate-sized earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
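
    As a reminder of the Brune source model used in the deconvolution step, the omega-squared spectral shape and the standard corner-frequency relation can be sketched as follows (textbook relations with illustrative parameter values, not the authors' processing code):

```python
# Sketch: Brune omega-squared source spectrum and corner frequency.
# Generic textbook relations with illustrative parameter values.
import numpy as np

def brune_spectrum(f, M0, fc):
    """Far-field displacement source spectrum shape: M0 / (1 + (f/fc)^2)."""
    return M0 / (1.0 + (f / fc) ** 2)

def corner_frequency(M0, stress_drop, beta=3500.0):
    """Brune corner frequency: fc = 0.49 * beta * (stress_drop / M0)^(1/3).
    M0 in N*m, stress_drop in Pa, beta (shear-wave speed) in m/s."""
    return 0.49 * beta * (stress_drop / M0) ** (1.0 / 3.0)

M0 = 1.3e16             # roughly Mw 4.7, illustrative
stress_drop = 3.0e6     # 3 MPa, illustrative
fc = corner_frequency(M0, stress_drop)
f = np.logspace(-1, 1.5, 50)
spec = brune_spectrum(f, M0, fc)
print(f"corner frequency ~ {fc:.2f} Hz")
print(f"spectral level near 10*fc is {spec[np.argmin(np.abs(f - 10*fc))] / M0:.3e} of M0")
```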

  20. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using computed values of the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.

  1. Large Occurrence Patterns of New Zealand Deep Earthquakes: Characterization by Use of a Switching Poisson Model

    NASA Astrophysics Data System (ADS)

    Shaochuan, Lu; Vere-Jones, David

    2011-10-01

    The paper studies the statistical properties of deep earthquakes around the North Island, New Zealand. We first evaluate the catalogue coverage and completeness of deep events according to cusum (cumulative sum) statistics and earlier literature. The epicentral, depth, and magnitude distributions of deep earthquakes are then discussed. It is worth noting that strong grouping effects are observed in the epicentral distribution of these deep earthquakes. Also, although the spatial distribution of deep earthquakes does not change, their occurrence frequencies vary over time: active in one period, relatively quiescent in another. The depth distribution of deep earthquakes also hardly changes, except for events with focal depth less than 100 km. On the basis of spatial concentration we partition the deep earthquakes into several groups: the Taupo-Bay of Plenty group, the Taranaki group, and the Cook Strait group. Second-order moment analysis via the two-point correlation function reveals only very small-scale clustering of deep earthquakes, presumably limited to a few hot spots. We also suggest that some models usually used for shallow earthquakes fit deep earthquakes unsatisfactorily. Instead, we propose a switching Poisson model for the occurrence patterns of deep earthquakes. The goodness-of-fit test suggests that the time-varying activity is well characterized by a switching Poisson model. Furthermore, detailed analysis carried out on each deep group by use of switching Poisson models reveals similar time-varying behavior in occurrence frequencies in each group.
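
    To illustrate what a switching Poisson model implies for occurrence patterns, the following generic simulation alternates between an active and a quiescent rate; the rates and mean regime durations are invented, not the values estimated for New Zealand:

```python
# Sketch: simulate a switching Poisson process that alternates between an
# 'active' and a 'quiescent' occurrence rate. All values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
rate_active, rate_quiet = 20.0, 4.0     # events per year in each regime
mean_active, mean_quiet = 3.0, 8.0      # mean duration (years) of each regime
total_years = 200.0

events, t, active = [], 0.0, True
while t < total_years:
    duration = rng.exponential(mean_active if active else mean_quiet)
    rate = rate_active if active else rate_quiet
    # For a Poisson process over this regime, draw the count and place the
    # event times uniformly within the regime window.
    n = rng.poisson(rate * duration)
    events.extend(np.sort(t + rng.uniform(0.0, duration, size=n)))
    t += duration
    active = not active

events = np.asarray(events)
events = events[events < total_years]
per_decade, _ = np.histogram(events, bins=np.arange(0.0, total_years + 10.0, 10.0))
print("events per decade:", per_decade)
```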

  2. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes related to earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  3. Modeling Crustal Deformation Due to the Landers, Hector Mine Earthquakes Using the SCEC Community Fault Model

    NASA Astrophysics Data System (ADS)

    Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.

    2006-12-01

    More realistic models of crustal deformation are possible due to advances in measurement and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC) Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical, and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three-dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on the faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step, the methodology of incorporating all these data is described. Results of the time history of stress and strain transfer between 1992 and 1999 are analyzed, as well as the time history of deformation from 1999 to the present.

  4. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses, and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The Turkish Catastrophe Insurance Pool (TCIP) system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with roughly one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure, and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the incurred building losses in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140-300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers, and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground-motion levels and damage. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  5. M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model

    USGS Publications Warehouse

    Parsons, Thomas E.

    2006-01-01

     Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
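
    In equation form, the recurrence proxy used here can be summarized as a stress recovery time (generic notation, for orientation only):

```latex
% Stress recovery time as a proxy for M >= 7.0 recurrence (generic notation):
% \Delta\sigma is the coseismic static stress drop on the ruptured plane and
% \dot{\sigma} the tectonic stressing rate derived from geodetic displacements.
T_r \;=\; \frac{\Delta\sigma}{\dot{\sigma}},
\qquad
\text{expected rate of } M \ge 7.0 \text{ events} \;\approx\; \frac{1}{T_r}.
```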

  6. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked, west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington City to become more resilient, through an encompassing study of the likelihood of large earthquakes, and of the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we entertain a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations of historical large world subduction events translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground

  7. Combined GPS and InSAR models of postseismic deformation from the Northridge Earthquake

    NASA Technical Reports Server (NTRS)

    Donnellan, A.; Parker, J. W.; Peltzer, G.

    2002-01-01

    Models of combined Global Positioning System and Interferometric Synthetic Aperture Radar data collected in the region of the Northridge earthquake indicate that significant afterslip on the main fault occurred following the earthquake.

  8. Spatial modeling for estimation of earthquakes economic loss in West Java

    NASA Astrophysics Data System (ADS)

    Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma

    2017-07-01

    Indonesia is highly vulnerable to earthquakes. Its low adaptive capacity means that earthquakes can become disasters of serious concern. That is why risk management should be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults. These active faults are the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and also the Megathrust subduction zone. This research estimates the economic loss from several earthquake sources in West Java. The economic loss is calculated by using the HAZUS method. The components that must be known are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to access by showing distribution maps rather than only tabular data. As a result, West Java could suffer an economic loss of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703.00 IDR, estimated from six earthquake sources with maximum possible magnitudes. However, this estimate corresponds to a worst-case occurrence of earthquakes and is probably an over-estimate.

  9. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may
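
    For reference, the Brownian passage-time interevent density referred to above can be written in the standard form below, with mean recurrence time \mu and aperiodicity (coefficient of variation) \alpha; the notation is generic:

```latex
% Brownian passage-time (inverse Gaussian) interevent density with mean
% recurrence \mu and aperiodicity \alpha (generic notation):
f(t;\mu,\alpha) \;=\;
\sqrt{\frac{\mu}{2\pi\,\alpha^{2}\,t^{3}}}\,
\exp\!\left[-\,\frac{(t-\mu)^{2}}{2\,\alpha^{2}\,\mu\,t}\right],
\qquad t>0 .
```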

  10. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and of SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
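
    As a schematic of the kernel-density ingredient of such a model (not the SHARE implementation, and with an arbitrary fixed bandwidth rather than the optimized one described above), past epicentres can be smoothed onto a grid as follows:

```python
# Sketch: Gaussian kernel density smoothing of past epicentres onto a grid,
# as one ingredient of a smoothed-seismicity rate model. Illustrative only.
import numpy as np

def smoothed_density(epi_x, epi_y, grid_x, grid_y, bandwidth_km=30.0):
    """Return a (ny, nx) density map from epicentre coordinates given in km."""
    GX, GY = np.meshgrid(grid_x, grid_y)
    dens = np.zeros_like(GX)
    for x, y in zip(epi_x, epi_y):
        r2 = (GX - x) ** 2 + (GY - y) ** 2
        dens += np.exp(-0.5 * r2 / bandwidth_km**2)
    dens /= 2.0 * np.pi * bandwidth_km**2 * len(epi_x)   # normalize the kernels
    return dens

# Synthetic catalogue: two clusters of epicentres (coordinates in km).
rng = np.random.default_rng(3)
epi_x = np.concatenate([rng.normal(100, 15, 80), rng.normal(300, 25, 40)])
epi_y = np.concatenate([rng.normal(150, 15, 80), rng.normal(250, 25, 40)])
gx = np.linspace(0, 400, 81)
gy = np.linspace(0, 400, 81)
dens = smoothed_density(epi_x, epi_y, gx, gy)
print("peak density cell:", np.unravel_index(dens.argmax(), dens.shape))
```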

  11. Simple Physical Model for the Probability of a Subduction- Zone Earthquake Following Slow Slip Events and Earthquakes: Application to the Hikurangi Megathrust, New Zealand

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.

    2018-05-01

    Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce any transient stress perturbations. This method has been applied to the locked Hikurangi megathrust (New Zealand) surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate the annual probability of a M≥7.8 earthquake over 1 year after the Kaikoura earthquake increases by 1.3-18 times relative to the pre-Kaikoura probability, and the absolute probability is in the range of 0.6-7%. We find that probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.

  12. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained-to varying degrees-by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically-derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT; (1) simple objective ranking of country vulnerability to earthquakes, and; (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global

  13. An Earthquake Rupture Forecast model for central Italy submitted to CSEP project

    NASA Astrophysics Data System (ADS)

    Pace, B.; Peruzza, L.

    2009-04-01

    We defined a seismogenic source model for central Italy and computed the corresponding forecast scenario, in order to submit the results to the CSEP (Collaboratory for the Study of Earthquake Predictability, www.cseptesting.org) project. The goal of the CSEP project is to develop a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three different layers of sources: the first collects the individual faults liable to generate major earthquakes (M > 5.5); the second layer is given by the instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ~< 5.0); the third layer utilizes all the instrumental earthquakes and the historical events not correlated to known structures (M >~ 4.5). Recurrence on the individual fault sources is characterized by a Brownian passage time distribution. Besides the original model, updated earthquake rupture forecasts for the individual sources only are also released, in the light of recent analyses (Peruzza et al., 2008; Zoeller et al., 2008). We computed forecasts based on the LaSS-CI model for two time windows: 5 and 10 years. Each model to be tested defines a forecast earthquake rate in magnitude bins of 0.1 unit steps in the range M5-9, for the periods 1st April 2009 to 1st April 2014, and 1st April 2009 to 1st April 2019. B. Pace, L. Peruzza, G. Lavecchia, and P. Boncio (2006) Layered Seismogenic Source

  14. Viscoelastic-coupling model for the earthquake cycle driven from below

    USGS Publications Warehouse

    Savage, J.C.

    2000-01-01

    In a linear system the earthquake cycle can be represented as the sum of a solution which reproduces the earthquake cycle itself (viscoelastic-coupling model) and a solution that provides the driving force. We consider two cases, one in which the earthquake cycle is driven by stresses transmitted along the schizosphere and a second in which the cycle is driven from below by stresses transmitted along the upper mantle (i.e., the schizosphere and upper mantle, respectively, act as stress guides in the lithosphere). In both cases the driving stress is attributed to steady motion of the stress guide, and the upper crust is assumed to be elastic. The surface deformation that accumulates during the interseismic interval depends solely upon the earthquake-cycle solution (viscoelastic-coupling model) not upon the driving source solution. Thus geodetic observations of interseismic deformation are insensitive to the source of the driving forces in a linear system. In particular, the suggestion of Bourne et al. [1998] that the deformation that accumulates across a transform fault system in the interseismic interval is a replica of the deformation that accumulates in the upper mantle during the same interval does not appear to be correct for linear systems.

  15. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
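
    A toy reading of the long-term memory idea (our sketch of the description above, not the authors' code) lets a strain-like state variable grow with time and drop by only a fraction at each event:

```python
# Sketch: toy 'long-term fault memory' recurrence model in which each event
# releases only part of the accumulated strain, so the hazard depends on the
# fault's history over several cycles. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(11)
loading_rate = 1.0        # strain accumulated per year (arbitrary units)
release_fraction = 0.6    # fraction of stored strain released by an event
prob_scale = 5.0e-4       # converts stored strain into an annual probability

strain, times = 0.0, []
for year in range(20000):
    strain += loading_rate
    p = min(1.0, prob_scale * strain)        # hazard grows with stored strain
    if rng.random() < p:
        times.append(year)
        strain *= (1.0 - release_fraction)   # partial, not complete, release

intervals = np.diff(times)
cv = intervals.std() / intervals.mean()
print(f"{len(times)} events, mean interval {intervals.mean():.1f} yr, CV {cv:.2f}")
```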

  16. Parity Nonconservation in Proton-water Scattering at 800 MeV

    DOE R&D Accomplishments Database

    Nagle, D. E.; Bowman, J. D.; Carlini, R.; Mischke, R. E.; Frauenfelder, H.; Harper, R. W.; Yuan, V.; McDonald, A. B.; Talaga, R.

    1982-01-01

    A search has been made for parity nonconservation in the scattering of 800 MeV polarized protons from an unpolarized water target. The result for the longitudinal asymmetry is A_L = +(6.6 ± 3.2) × 10^-7. Control runs with Pb, using a thickness which gave equivalent beam broadening from Coulomb multiple scattering, but a factor of ten less nuclear interactions than the water target, gave A_L = -(0.5 ± 6.0) × 10^-7.

  17. Optimized volume models of earthquake-triggered landslides

    PubMed Central

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-01-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the three optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original-data-based nonlinear least squares, were applied to the four models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect gives the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The total volume predicted from the existing relationship between earthquake magnitude and total landslide volume is much less than that obtained in this study, which highlights the need to update the power-law relationship. PMID:27404212
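
    The conventional volume-area scaling referred to above has the form V = a A^b; the sketch below fits it to synthetic data with both of the fitting strategies compared in the study (log-transformed linear and original-data nonlinear least squares); the data and starting values are assumptions:

```python
# Sketch: fit the landslide volume-area power law V = a * A**b to synthetic data
# by (1) log-transformed linear least squares and (2) nonlinear least squares on
# the original data. Data here are synthetic, not the Wenchuan inventory.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
area = 10 ** rng.uniform(2.0, 5.0, size=500)              # landslide areas, m^2
vol_true = 0.05 * area ** 1.3                             # assumed "true" scaling
volume = vol_true * rng.lognormal(mean=0.0, sigma=0.4, size=area.size)

# (1) Log-transformed linear fit.
b_log, log_a = np.polyfit(np.log10(area), np.log10(volume), 1)
a_log = 10 ** log_a

# (2) Nonlinear least squares on the original data.
def power_law(A, a, b):
    return a * A ** b

(a_nl, b_nl), _ = curve_fit(power_law, area, volume, p0=[a_log, b_log])

print(f"log-linear fit : a = {a_log:.3f}, b = {b_log:.3f}")
print(f"nonlinear fit  : a = {a_nl:.3f}, b = {b_nl:.3f}")
```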

  18. Optimized volume models of earthquake-triggered landslides.

    PubMed

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-07-12

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the three optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original-data-based nonlinear least squares, were applied to the four models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect gives the best performance. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas, respectively. The total volume predicted from the existing relationship between earthquake magnitude and total landslide volume is much less than that obtained in this study, which highlights the need to update the power-law relationship.

  19. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, a mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time scale, including a narrow subduction channel with low static friction, for two settings: one similar to Southern Chile in the region of the great Chile earthquake of 1960, and one similar to Japan in the region of the Tohoku earthquake of 2011. We next introduce into the same models the classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models then generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformation of the Chile and Japan earthquakes. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the integration step from 40 s during the earthquake to minutes-to-5 years during the postseismic and interseismic periods. We show that for the case of the Chile earthquake, visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations over the day-to-4-year time range. We will demonstrate and discuss the modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can be best distinguished.

  20. Inferring rupture characteristics using new databases for 3D slab geometry and earthquake rupture models

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Plescia, S. M.; Moore, G.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center has recently published a database of finite fault models for globally distributed M7.5+ earthquakes since 1990. Concurrently, we have compiled a database of three-dimensional slab geometry models for all global subduction zones, to update and replace Slab1.0. Here, we use these two new and valuable resources to infer characteristics of earthquake rupture and propagation in subduction zones, where the vast majority of large-to-great-sized earthquakes occur. For example, we can test questions that are fairly prevalent in the seismological literature. Do large ruptures preferentially occur where subduction zones are flat (e.g., Bletery et al., 2016)? Can `flatness' be mapped to understand and quantify earthquake potential? Do the ends of ruptures correlate with significant changes in slab geometry, and/or bathymetric features entering the subduction zone? Do local subduction zone geometry changes spatially correlate with areas of low slip in rupture models (e.g., Moreno et al., 2012)? Is there a correlation between average seismogenic zone dip, and/or seismogenic zone width, and earthquake size (e.g., Hayes et al., 2012; Heuret et al., 2011)? These issues are fundamental to the understanding of earthquake rupture dynamics and subduction zone seismogenesis, and yet many are poorly understood or still debated in the scientific literature. We attempt to address these questions and similar issues in this presentation, and show how these models can be used to improve our understanding of earthquake hazard in subduction zones.

  1. A combined geodetic and seismic model for the Mw 8.3 2015 Illapel (Chile) earthquake

    NASA Astrophysics Data System (ADS)

    Simons, M.; Duputel, Z.; Jiang, J.; Liang, C.; Fielding, E. J.; Agram, P. S.; Owen, S. E.; Moore, A. W.; Kanamori, H.; Rivera, L. A.; Riel, B. V.; Ortega, F.

    2015-12-01

    The 2015 September 16 Mw 8.3 Illapel earthquake occurred on the subduction megathrust offshore of the Chilean coastline between the towns of Valparaiso and Coquimbo. This earthquake is the third event with Mw > 8 to strike coastal Chile in the last six years. It occurred north of both the 2010 Mw 8.8 Maule earthquake and the 1985 Mw 8.0 Valparaiso earthquake. While the location of the 2015 earthquake is close to the inferred location of a large earthquake in 1943, comparison of seismograms from the two earthquakes suggests the recent event is not clearly a repeat of the 1943 event. To gain a better understanding of the distribution of coseismic fault slip, we develop a finite fault model constrained by a combination of static GPS offsets, Sentinel-1A ascending and descending radar interferograms, tsunami waveform measurements made at selected DART buoys, high-rate (1 sample/s) GPS waveforms, and strong-motion seismic data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing, thereby allowing us to maximize the spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the assumed forward models. At the inherent resolution of the model, the posterior ensemble of purely static models (without high-rate GPS and strong-motion data) is characterized by a distribution of slip that reaches as much as 10 m in localized regions, with significant slip apparently reaching the trench or at least very close to the trench. Based on our W-phase point-source estimate, the event duration is approximately 1.7 minutes. We also present a joint kinematic model and describe the relationship of the coseismic model to the spatial distribution of aftershocks and post-seismic slip.

  2. Laboratory constraints on models of earthquake recurrence

    USGS Publications Warehouse

    Beeler, Nicholas M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian D.; Goldsby, David L.

    2014-01-01

    In this study, rock friction ‘stick-slip’ experiments are used to develop constraints on models of earthquake recurrence. Constant-rate loading of bare rock surfaces in high quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip rate-dependent process that also determines the size of the stress drop [Dieterich, 1979; Ruina, 1983] and as a consequence, stress drop varies weakly but systematically with loading rate [e.g., Gu and Wong, 1991; Karner and Marone, 2000; McLaskey et al., 2012]. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. As follows from the previous studies referred to above, experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a non-linear slip-predictable model. The fault’s rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence co-vary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability and successive stress drops are strongly correlated indicating a ‘memory’ of prior slip history that extends over at least one recurrence cycle.
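
    The logarithmic dependence of stress drop on recurrence interval described above can be written as a simple slip-predictable relation; the sketch below uses hypothetical laboratory-scale coefficients to illustrate the form, not fitted values.

    ```python
    import numpy as np

    # Hypothetical parameters: baseline stress drop and its growth per e-fold of
    # recurrence time, reflecting the fault's rate dependence of strength.
    DTAU0 = 3.0e6   # Pa
    C     = 0.2e6   # Pa per e-fold of recurrence time
    T0    = 1.0     # reference recurrence time, s

    def stress_drop(recurrence_time_s):
        """Stress drop (Pa) as a logarithmic function of the recurrence interval."""
        return DTAU0 + C * np.log(recurrence_time_s / T0)

    for tr in (10.0, 100.0, 1000.0):   # seconds between stick-slip events
        print(f"t_r = {tr:6.0f} s  ->  stress drop ~ {stress_drop(tr) / 1e6:.2f} MPa")
    ```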

  3. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M > 6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit this definition of traditional aftershocks, occurring within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extending > 150 km NW from the hypocenter, longer than the slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show that the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  4. Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2011-04-01

    Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and the tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.

  5. Earthquake recurrence models and occurrence probabilities of strong earthquakes in the North Aegean Trough (Greece)

    NASA Astrophysics Data System (ADS)

    Christos, Kourouklas; Eleftheria, Papadimitriou; George, Tsaklidis; Vassilios, Karakostas

    2018-06-01

    The determination of the recurrence time of strong earthquakes above a predefined magnitude, associated with specific fault segments, is an important component of seismic hazard assessment. The occurrence of these earthquakes is neither periodic nor completely random but often clustered in time. This fact, together with their limited number owing to the short span of the available catalogs, prevents a deterministic approach to recurrence-time calculation, and for this reason the application of stochastic processes is required. In this study, recurrence times in the area of the North Aegean Trough (NAT) are determined by applying time-dependent stochastic models, introducing an elastic-rebound-motivated concept for individual fault segments located in the study area. For this purpose, all the available information on strong earthquakes (historical and instrumental) with Mw ≥ 6.5 is compiled and examined for magnitude completeness. Two possible starting dates of the catalog are assumed with the same magnitude threshold, Mw ≥ 6.5, and the data are divided into five sets according to a new segmentation model for the study area. Three Brownian Passage Time (BPT) models with different levels of aperiodicity are applied and evaluated with the Anderson-Darling test for each segment, for both catalog start dates where possible. The preferred models are then used to estimate the occurrence probabilities of Mw ≥ 6.5 shocks on each segment of the NAT for the next 10, 20, and 30 years from 01/01/2016. Uncertainties in the probability calculations are also estimated using a Monte Carlo procedure. The results should be treated carefully because of their dependence on the initial assumptions, which exhibit large variability; alternative choices may yield different results.
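
    As a sketch of the probability calculation described above: identifying the BPT distribution with the inverse Gaussian (the usual convention), the conditional probability of a rupture in the next ΔT years given the time elapsed since the last event can be computed as below. The segment parameters are hypothetical, not the paper's estimates, and the mapping to scipy's invgauss parameterization is an assumption stated in the comments.

    ```python
    from scipy.stats import invgauss

    def bpt_conditional_prob(mean_rt, alpha, t_elapsed, dt):
        """P(event in (t, t+dt] | no event by t) for a Brownian Passage Time
        (inverse Gaussian) renewal model with mean recurrence `mean_rt` (years)
        and aperiodicity `alpha`. Assumes scipy's invgauss(mu, scale) has mean
        mu*scale and coefficient of variation sqrt(mu)."""
        scale = mean_rt / alpha ** 2
        dist = invgauss(mu=alpha ** 2, scale=scale)
        return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

    # Hypothetical NAT-like segment: mean recurrence 150 yr, aperiodicity 0.5,
    # 80 yr elapsed since the last Mw >= 6.5 event, 30-yr forecast window.
    print(bpt_conditional_prob(mean_rt=150.0, alpha=0.5, t_elapsed=80.0, dt=30.0))
    ```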

  6. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results for earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes at the crustal scale. Constant-stress and constant-strain-rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed with the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to observe any trends of dependency between the magnitudes of aftershocks and the earthquakes that trigger them.
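
    The ETAS conditional intensity referred to above combines a constant background rate with Omori-Utsu aftershock kernels; a minimal sketch with illustrative (not fitted) parameters is shown below, including the background-probability ratio commonly used for stochastic declustering.

    ```python
    import numpy as np

    MU = 0.1  # background rate, events/day (illustrative)

    def etas_intensity(t, event_times, event_mags, mu=MU, K=0.02,
                       alpha=1.0, c=0.01, p=1.1, m_ref=3.0):
        """ETAS conditional intensity (events/day) at time t given the history:
        lambda(t) = mu + sum_i K * 10**(alpha*(M_i - m_ref)) * (t - t_i + c)**(-p)."""
        past = event_times < t
        trig = K * 10 ** (alpha * (event_mags[past] - m_ref)) \
                 * (t - event_times[past] + c) ** (-p)
        return mu + trig.sum()

    # In stochastic declustering, mu / lambda(t_i) approximates the probability
    # that event i belongs to the background (is a mainshock) rather than being
    # triggered by an earlier event.
    times = np.array([0.0, 0.3, 0.4, 2.0])   # days
    mags  = np.array([5.0, 3.2, 3.5, 4.1])
    for t, m in zip(times[1:], mags[1:]):
        lam = etas_intensity(t, times, mags)
        print(f"t={t:4.1f} d, M={m}: lambda={lam:.3f}/day, P(background)={MU / lam:.2f}")
    ```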

  7. Effects of a Non-Conservative Sequence on the Properties of β-glucuronidase from Aspergillus terreus Li-20

    PubMed Central

    Liu, Yanli; Huangfu, Jie; Qi, Feng; Kaleem, Imdad; E, Wenwen; Li, Chun

    2012-01-01

    We cloned the β-glucuronidase gene (AtGUS) from Aspergillus terreus Li-20 encoding 657 amino acids (aa), which can transform glycyrrhizin into glycyrrhetinic acid monoglucuronide (GAMG) and glycyrrhetinic acid (GA). Based on sequence alignment, the C-terminal non-conservative sequence showed low identity with those of other species; thus, the partial sequence AtGUS(-3t) (1–592 aa) was amplified to determine the effects of the non-conservative sequence on the enzymatic properties. AtGUS and AtGUS(-3t) were expressed in E. coli BL21, producing AtGUS-E and AtGUS(-3t)-E, respectively. At similar optimum temperature (55°C) and pH (AtGUS-E, 6.6; AtGUS(-3t)-E, 7.0) conditions, the thermal stability of AtGUS(-3t)-E was enhanced at 65°C, and the metal ions Co2+, Ca2+ and Ni2+ showed opposite effects on AtGUS-E and AtGUS(-3t)-E. Furthermore, the Km of AtGUS(-3t)-E (1.95 mM) was nearly one-seventh that of AtGUS-E (12.9 mM), whereas the catalytic efficiency of AtGUS(-3t)-E was 3.2-fold higher than that of AtGUS-E (7.16 vs. 2.24 mM s−1), revealing that truncation of the non-conservative sequence can significantly improve the catalytic efficiency of AtGUS. Conformational analysis by circular dichroism (CD) revealed a significant difference in secondary structure between AtGUS-E and AtGUS(-3t)-E. The results show that truncation of the non-conservative sequence can markedly alter the stability and catalytic efficiency of the enzyme. PMID:22347419

  8. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in the format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real-time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, used more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.

  9. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  10. Scattering of glue by glue on the light-cone worldsheet: Helicity nonconserving amplitudes

    NASA Astrophysics Data System (ADS)

    Chakrabarti, D.; Qiu, J.; Thorn, C. B.

    2005-09-01

    We give the light-cone gauge calculation of the one-loop on-shell scattering amplitudes for gluon-gluon scattering which violate helicity conservation. We regulate infrared divergences by discretizing the p+ integrations, omitting the terms with p+=0. Collinear divergences are absent diagram by diagram for the helicity nonconserving amplitudes. We also employ a novel ultraviolet regulator that is natural for the light-cone worldsheet description of planar Feynman diagrams. We show that these regulators give the known answers for the helicity nonconserving one-loop amplitudes, which do not suffer from the usual infrared vagaries of massless particle scattering. For the maximal helicity violating process we elucidate the physics of the remarkable fact that the loop momentum integrand for the on-shell Green function associated with this process, with a suitable momentum routing of the different contributing topologies, is identically zero. We enumerate the counterterms that must be included to give Lorentz covariant results to this order, and we show that they can be described locally in the light-cone worldsheet formulation of the sum of planar diagrams.

  11. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

    The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line-source information (FinDer), as well as geodetically constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. The ShakeAlert system therefore requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening firehouse doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
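
    As a toy illustration of the kind of probabilistic fusion a central mediator must perform, the sketch below combines independent Gaussian magnitude estimates from several algorithms by inverse-variance weighting. This is not the actual ShakeAlert CDM implementation; the algorithm names come from the abstract, but the numbers and the Gaussian assumption are placeholders.

    ```python
    import numpy as np

    def combine_gaussian_estimates(means, sigmas):
        """Fuse independent Gaussian estimates of the same quantity (here, magnitude)
        by inverse-variance weighting; returns the posterior mean and standard deviation."""
        means, sigmas = np.asarray(means, float), np.asarray(sigmas, float)
        w = 1.0 / sigmas ** 2
        post_var = 1.0 / w.sum()
        post_mean = post_var * (w * means).sum()
        return post_mean, np.sqrt(post_var)

    # Hypothetical reports from three contributing algorithms for one event.
    reports = {"OnSite": (5.8, 0.5), "ElarmS": (6.1, 0.3), "BEFORES": (6.3, 0.4)}
    m, s = combine_gaussian_estimates([v[0] for v in reports.values()],
                                      [v[1] for v in reports.values()])
    print(f"combined magnitude estimate: {m:.2f} +/- {s:.2f}")
    ```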

  12. REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, W R; Pasyanos, M E; Matzel, E

    2008-07-08

    We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full waveform modeling (e.g. Ford et al., 2008). For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing

  13. Dynamic models of an earthquake and tsunami offshore Ventura, California

    USGS Publications Warehouse

    Kenny J. Ryan,; Geist, Eric L.; Barall, Michael; David D. Oglesby,

    2015-01-01

    The Ventura basin in Southern California includes coastal dip-slip faults that can likely produce earthquakes of magnitude 7 or greater and significant local tsunamis. We construct a 3-D dynamic rupture model of an earthquake on the Pitas Point and Lower Red Mountain faults to model low-frequency ground motion and the resulting tsunami, with the goal of elucidating the seismic and tsunami hazard in this area. Our model results in an average stress drop of 6 MPa, an average fault slip of 7.4 m, and a moment magnitude of 7.7, consistent with regional paleoseismic data. Our corresponding tsunami model uses the final seafloor displacement from the rupture model as the initial condition to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. Modeled inundation in the Ventura area is significantly greater than that indicated by the State of California's current reference inundation line.

  14. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M = 6.65 (A = 537 km²) and was also tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power-law relation, which fits the newly expanded Hanks and Bakun (2007) data best and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
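
    For reference, the two scaling relations can be evaluated as below; the coefficients are the commonly quoted forms of the Ellsworth-B and Hanks and Bakun relations and should be checked against the report, and the simple averaging of the two magnitudes is only a stand-in for the Working Group's equal weighting.

    ```python
    import numpy as np

    def ellsworth_b(area_km2):
        """Ellsworth-B magnitude-area relation (commonly quoted form)."""
        return 4.2 + np.log10(area_km2)

    def hanks_bakun(area_km2):
        """Hanks & Bakun bilinear relation with the slope change near A = 537 km^2
        noted above; coefficients as commonly quoted."""
        a = np.asarray(area_km2, dtype=float)
        return np.where(a <= 537.0,
                        3.98 + np.log10(a),
                        3.07 + (4.0 / 3.0) * np.log10(a))

    # Hypothetical fault: inferred down-dip width W and mapped length L.
    W, L = 15.0, 200.0          # km
    A = W * L                   # fault area, km^2
    mw = 0.5 * ellsworth_b(A) + 0.5 * hanks_bakun(A)
    print(f"A = {A:.0f} km^2  ->  Mw ~ {float(mw):.2f}")
    ```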

  15. The Global Earthquake Model and Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Smolka, A. J.

    2015-12-01

    Advanced, reliable, and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets, and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure, and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples of how GEM is bridging the gap between science and disaster risk reduction include: - Several countries, including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea, and Taiwan (with more to follow), are computing national seismic hazard using the OpenQuake engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools, and data for the development of hazard, exposure, vulnerability, and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito, Ecuador. In agreement with GEM's collaborative approach, all

  17. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2013-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for the sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, have been submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, model performance has been strongly affected. In addition, because of problems with the authorized catalogue related to the magnitude of completeness, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  18. Improving vulnerability models: lessons learned from a comparison between flood and earthquake assessments

    NASA Astrophysics Data System (ADS)

    de Ruiter, Marleen; Ward, Philip; Daniell, James; Aerts, Jeroen

    2017-04-01

    In a cross-discipline study, an extensive literature review was conducted to increase the understanding of the vulnerability indicators used in both earthquake and flood vulnerability assessments, and to provide insights into potential improvements of both. The study identifies and compares indicators used to quantitatively assess earthquake and flood vulnerability, and discusses their respective differences and similarities. Indicators are categorized into physical and social categories, and further subdivided, where possible, into measurable and comparable indicators. Physical vulnerability indicators are differentiated by exposed assets such as buildings and infrastructure. Social indicators are grouped into subcategories such as demographics, economics, and awareness. Next, two different types of vulnerability model that use these indicators are described: index-based and curve-based vulnerability models. A selection of these models (e.g., HAZUS) is described and compared on several characteristics, such as temporal and spatial aspects. It appears that earthquake vulnerability methods are traditionally strongly oriented toward physical attributes at the object scale and used in vulnerability-curve models, whereas flood vulnerability studies focus more on indicators applied at aggregated land-use scales. Flood risk studies could be improved using approaches from earthquake studies, such as incorporating more detailed lifeline and building indicators and developing object-based vulnerability-curve assessments of physical vulnerability, for example by defining flood vulnerability curves based on building material. Related to this is the incorporation of time-of-day-based building occupancy patterns (at 2 a.m. most people will be at home, while at 2 p.m. most people will be in the office). Earthquake assessments could learn from flood studies when it comes to the refined selection of social vulnerability indicators

  19. Sensitivity analysis of the FEMA HAZUS-MH MR4 Earthquake Model using seismic events affecting King County Washington

    NASA Astrophysics Data System (ADS)

    Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.

    2010-12-01

    HAZUS-MH MR4 (HAZards U.S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment, and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma, and 2001 Mw 6.8 Nisqually normal-fault intraslab events and scenario large-magnitude Seattle reverse-fault crustal events are modeled. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships, developed from source parameters of both regional and global historical earthquakes, to estimate strong ground motion. Ground motion and the resulting ground failure are then used to calculate direct physical damage for the general building stock, essential facilities, and lifelines, including transportation and utility systems. Earthquake losses are expressed in structural, economic, and social terms. Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region

  20. Is there a basis for preferring characteristic earthquakes over a Gutenberg–Richter distribution in probabilistic earthquake forecasting?

    USGS Publications Warehouse

    Parsons, Thomas E.; Geist, Eric L.

    2009-01-01

    The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice, however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for earthquake forecasting as well as a characteristic model does. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
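
    A sketch of the Gutenberg-Richter extrapolation in question: given an a- and b-value estimated from the smaller, well-recorded earthquakes in a fault zone, the rate of large events follows directly, with the usual Poisson probability for a time window. All numbers below are hypothetical.

    ```python
    import numpy as np

    def gr_rate(m, a, b):
        """Cumulative Gutenberg-Richter rate: N(>= m) = 10**(a - b*m) per year."""
        return 10 ** (a - b * m)

    # Hypothetical fault-zone catalog: about 0.5 M>=4 events per year, b = 1.0.
    b = 1.0
    a = np.log10(0.5) + b * 4.0

    rate_m7 = gr_rate(7.0, a, b)                     # extrapolated M>=7 rate
    print(f"M>=7 rate ~ {rate_m7:.4f}/yr, mean recurrence ~ {1.0 / rate_m7:.0f} yr")
    print(f"P(at least one M>=7 in 30 yr) ~ {1.0 - np.exp(-rate_m7 * 30.0):.3f}")
    ```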

  1. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    NASA Astrophysics Data System (ADS)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy of the earthquake coordinates. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.

  2. Transportations Systems Modeling and Applications in Earthquake Engineering

    DTIC Science & Technology

    2010-07-01

    [Only fragments of this report are recoverable: Figure 6, a PGA map of a M7.7 earthquake on all three New Madrid fault segments (g); Table 1, fragility parameters for an MSC steel bridge (Padgett 2007); and a partial sentence noting that the NMSZ, near Memphis, Tennessee, was responsible for the devastating 1811-1812 New Madrid earthquakes.]

  3. Influence of the Dirac-Hartree-Fock starting potential on the parity-nonconserving electric-dipole-transition amplitudes in cesium and thallium

    NASA Technical Reports Server (NTRS)

    Perger, W. F.; Das, B. P.

    1987-01-01

    The parity-nonconserving electric-dipole-transition amplitudes for the 6s1/2-7s1/2 transition in cesium and the 6p1/2-7p1/2 transition in thallium have been calculated by the Dirac-Hartree-Fock method. The effects of using different Dirac-Hartree-Fock atomic core potentials are examined and the transition amplitudes for both the length and velocity gauges are given. It is found that the parity-nonconserving transition amplitudes exhibit a greater dependence on the starting potential for thallium than for cesium.

  4. Phase segregation and spontaneous symmetry breaking in a bidirectional two-channel non-conserving model with narrow entrances

    NASA Astrophysics Data System (ADS)

    Sharma, Natasha; Gupta, A. K.

    2017-04-01

    Motivated by connections between the inputs and outputs of several transport mechanisms and multi-species functionalities, we study an open two-species totally asymmetric simple exclusion process with narrow entrances, in which the particles exchange with the surrounding environment through Langmuir kinetics (LK). We analyze the model within the framework of mean-field theory and examine complex phenomena such as boundary-induced phase transitions and spontaneous symmetry breaking under varying attachment and detachment rates. Based on the theoretical investigations we obtain the phase boundaries for various symmetric and asymmetric phases. Our findings display rich behavior, highlighting the significant effect of the LK rates on symmetry breaking. It is found that for lower LK rates the number of symmetric and asymmetric phases increases notably, while for higher rates symmetry breaking disappears, revealing that bulk non-conserving processes can restore or break the uniformity between the two species. The critical value of the LK rates beyond which the asymmetric phases disappear is identified. The theoretical findings are verified by extensive Monte Carlo simulations. The effects of system size and of symmetry breaking on the Monte Carlo results are also examined using particle-density histograms.

  5. Coseismic source model of the 2003 Mw 6.8 Chengkung earthquake, Taiwan, determined from GPS measurements

    USGS Publications Warehouse

    Ching, K.-E.; Rau, R.-J.; Zeng, Y.

    2007-01-01

    A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting the GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with a peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake appears to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.

  6. Models of recurrent strike-slip earthquake cycles and the state of crustal stress

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.

    1991-01-01

    Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.

  7. Benchmark calculations of nonconservative charged-particle swarms in dc electric and magnetic fields crossed at arbitrary angles.

    PubMed

    Dujko, S; White, R D; Petrović, Z Lj; Robson, R E

    2010-04-01

    A multiterm solution of the Boltzmann equation has been developed and used to calculate transport coefficients of charged-particle swarms in gases under the influence of electric and magnetic fields crossed at arbitrary angles when nonconservative collisions are present. The hierarchy resulting from a spherical-harmonic decomposition of the Boltzmann equation in the hydrodynamic regime is solved numerically by representing the speed dependence of the phase-space distribution function in terms of an expansion in Sonine polynomials about a Maxwellian velocity distribution at an internally determined temperature. Results are given for electron swarms in certain collisional models for ionization and attachment over a range of angles between the fields and field strengths. The implicit and explicit effects of ionization and attachment on the electron-transport coefficients are considered using physical arguments. It is found that the difference between the two sets of transport coefficients, bulk and flux, resulting from the explicit effects of nonconservative collisions, can be controlled either by the variation in the magnetic field strengths or by the angles between the fields. In addition, it is shown that the phenomena of ionization cooling and/or attachment cooling/heating previously reported for dc electric fields carry over directly to the crossed electric and magnetic fields. The results of the Boltzmann equation analysis are compared with those obtained by a Monte Carlo simulation technique. The comparison confirms the theoretical basis and numerical integrity of the moment method for solving the Boltzmann equation and gives a set of well-established data that can be used to test future codes and plasma models.

  8. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models for Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year, and 3 years) and 3 testing regions covering an area of Japan including the sea area, the Japanese mainland, and the Kanto district. We evaluate the performance of the models with the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been carried out. These results provide new knowledge concerning statistical forecasting models. We have started a study to construct a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments, under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity in the area extends from shallow depths down to about 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year, and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HISTETAS models (Ogata, 2011) to see whether those models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP

  9. Non-conserved magnetization operator and 'fire-and-ice' ground states in the Ising-Heisenberg diamond chain

    NASA Astrophysics Data System (ADS)

    Torrico, Jordana; Ohanyan, Vadim; Rojas, Onofre

    2018-05-01

    We consider the diamond chain with S = 1/2 XYZ vertical dimers which interact with the intermediate sites via an Ising-type interaction. We also suppose that all four spins forming the diamond-shaped plaquette have different g-factors. The non-uniform g-factors within the quantum spin dimer, as well as the XY anisotropy of the exchange interaction, lead to non-conservation of the magnetization of the chain. We analyze the effects of the non-conserved magnetization as well as the effects of negative g-factors among the spins of the unit cell. A number of unusual frustrated states for ferromagnetic couplings and g-factors with non-uniform signs are found. These frustrated states generalize the "half-fire-half-ice" state introduced in Yin et al. (2015). The corresponding zero-temperature ground-state phase diagrams are presented.

  10. Modified two-layer social force model for emergency earthquake evacuation

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Hong; Qin, Xin; Liu, Baoxi

    2018-02-01

    Studies of crowd behavior, together with related research on computer simulation, provide an effective basis for architectural design and effective crowd management. Based on low-density group organization patterns, a modified two-layer social force model is proposed in this paper to simulate and reproduce the group gathering process. First, this paper studies evacuation videos from the 2012 Luan'xian earthquake and extends the study of group organization patterns to higher densities. Furthermore, taking advantage of its strength in crowd-gathering simulations, a new method for grouping and guidance based on crowd dynamics is proposed. Second, a real-life grouping situation in earthquake evacuation is simulated and reproduced. Compared with the fundamental social force model and an existing guided-crowd model, the modified model reduces congestion time and more faithfully reflects group behaviors. The experimental results also show that a stable group pattern and a suitable leader can decrease collisions and allow a safer evacuation process.
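
    For context, the single-layer social force model that the two-layer model builds on expresses each pedestrian's acceleration as a relaxation toward a desired velocity plus exponential repulsion from neighbours; the sketch below uses the standard textbook form with placeholder parameters, not the modified two-layer model of the paper.

    ```python
    import numpy as np

    def social_force(pos, vel, goal, others, v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
        """Acceleration on one pedestrian: drive toward the desired velocity plus
        exponential repulsion from the other pedestrians (parameters are placeholders)."""
        desired_dir = (goal - pos) / np.linalg.norm(goal - pos)
        accel = (v0 * desired_dir - vel) / tau
        for other in others:
            d_vec = pos - other
            d = np.linalg.norm(d_vec)
            accel += A * np.exp((2.0 * radius - d) / B) * d_vec / d
        return accel

    # One explicit Euler step for a single pedestrian heading toward an exit.
    pos, vel, goal = np.array([0.0, 0.0]), np.array([0.0, 0.0]), np.array([10.0, 0.0])
    others = [np.array([1.0, 0.2])]
    dt = 0.1
    vel = vel + dt * social_force(pos, vel, goal, others)
    pos = pos + dt * vel
    print(pos, vel)
    ```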

  11. Modeling earthquake rate changes in Oklahoma and Arkansas: possible signatures of induced seismicity

    USGS Publications Warehouse

    Llenos, Andrea L.; Michael, Andrew J.

    2013-01-01

    The rate of ML≥3 earthquakes in the central and eastern United States increased beginning in 2009, particularly in Oklahoma and central Arkansas, where fluid injection has occurred. We find evidence that suggests these rate increases are man‐made by examining the rate changes in a catalog of ML≥3 earthquakes in Oklahoma, which had a low background seismicity rate before 2009, as well as rate changes in a catalog of ML≥2.2 earthquakes in central Arkansas, which had a history of earthquake swarms prior to the start of injection in 2009. In both cases, stochastic epidemic‐type aftershock sequence models and statistical tests demonstrate that the earthquake rate change is statistically significant, and both the background rate of independent earthquakes and the aftershock productivity must increase in 2009 to explain the observed increase in seismicity. This suggests that a significant change in the underlying triggering process occurred. Both parameters vary, even when comparing natural to potentially induced swarms in Arkansas, which suggests that changes in both the background rate and the aftershock productivity may provide a way to distinguish man‐made from natural earthquake rate changes. In Arkansas we also compare earthquake and injection well locations, finding that earthquakes within 6 km of an active injection well tend to occur closer together than those that occur before, after, or far from active injection. Thus, like a change in productivity, a change in interevent distance distribution may also be an indicator of induced seismicity.

  12. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
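
    The clustering calculation contrasted above can be sketched generically: integrate an Omori-type rate for events at or above the target magnitude over the forecast window and convert it to a probability. The parameter values below are commonly quoted generic ones and are used only for illustration; they will not reproduce the published numbers.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def clustering_probability(m_fore, m_target, t1=0.0, t2=3.0,
                               a=-1.67, b=0.91, p=1.08, c=0.05):
        """Probability of an event with M >= m_target in (t1, t2] days after a
        possible foreshock of magnitude m_fore, using a Reasenberg-Jones style
        rate lambda(t) = 10**(a + b*(m_fore - m_target)) * (t + c)**(-p)."""
        rate = lambda t: 10 ** (a + b * (m_fore - m_target)) * (t + c) ** (-p)
        n_expected, _ = quad(rate, t1, t2)
        return 1.0 - np.exp(-n_expected)

    # Example mirroring the abstract's setting: an M >= 7 event within three days
    # of an ML 4.8 shock (illustrative parameters, not the published models).
    print(clustering_probability(m_fore=4.8, m_target=7.0))
    ```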

  13. Earthquakes: Recurrence and Interoccurrence Times

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.

    2008-04-01

    The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
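
    A short sketch of the Weibull recurrence calculation the paper advocates: the hazard function h(t) = (beta/tau)*(t/tau)**(beta-1) and the conditional probability of an event in the next ΔT years given the time elapsed since the last one. The scale and shape values are placeholders, not fitted to any of the catalogs discussed.

    ```python
    import numpy as np

    def weibull_hazard(t, tau, beta):
        """Weibull hazard rate h(t) = (beta/tau) * (t/tau)**(beta - 1)."""
        return (beta / tau) * (t / tau) ** (beta - 1.0)

    def weibull_conditional_prob(t_elapsed, dt, tau, beta):
        """P(event in (t, t+dt] | no event by t) for Weibull recurrence times."""
        survival = lambda t: np.exp(-(t / tau) ** beta)
        return 1.0 - survival(t_elapsed + dt) / survival(t_elapsed)

    # Hypothetical quasi-periodic sequence: scale ~25 yr, shape beta = 2 (> 1).
    tau, beta = 25.0, 2.0
    print(weibull_hazard(20.0, tau, beta))                    # hazard 20 yr after the last event
    print(weibull_conditional_prob(20.0, 10.0, tau, beta))    # P(event in next 10 yr)
    ```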

  14. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). The OEF system uses the two most popular short-term models: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).
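
    As a point of reference for the first of the two models named above, the sketch below evaluates a purely temporal ETAS conditional intensity, λ(t) = μ + Σ K·10^(α(m_i − m0))·(t − t_i + c)^(−p); the parameter values and the mini-catalog are generic placeholders, not the operational CPS settings.

      import numpy as np

      def etas_rate(t, event_times, event_mags,
                    mu=0.2, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=3.0):
          # Temporal ETAS conditional intensity (events per day) at time t (days).
          past = event_times < t
          trig = K * 10.0 ** (alpha * (event_mags[past] - m0)) \
                 * (t - event_times[past] + c) ** (-p)
          return mu + trig.sum()

      # Hypothetical mini-catalog: origin times in days and magnitudes.
      times = np.array([0.0, 0.3, 1.2])
      mags = np.array([5.9, 4.2, 4.8])
      print(etas_rate(2.0, times, mags))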

  15. Modelling low-frequency volcanic earthquakes in a viscoelastic medium with topography

    NASA Astrophysics Data System (ADS)

    Jousset, P.; Neuberg, J.

    2003-04-01

    Magma properties are fundamental to explain the volcanic eruption style as well as the generation and propagation of seismic waves. This study focusses on rheological magma properties and their impact on low-frequency volcanic earthquakes. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency earthquakes. Using a 2D finite difference scheme, we model the propagation of seismic energy initiated in a fluid-filled conduit embedded in a 2D homogeneous viscoelastic medium with topography. Topography is introduced by using a mapping procedure that stretches the computational rectangular grid into a grid which follows the topography. We model intrinsic attenuation by linear viscoelastic theory and we show that volcanic media can be approximated by a standard linear solid for seismic frequencies (i.e., above 2 Hz). Results demonstrate that attenuation modifies both amplitude and dispersive characteristics of low-frequency earthquakes. Low-frequency events are dispersive by nature; however, if attenuation is introduced, their dispersion characteristics will be altered. The topography modifies the amplitudes, depending on the position of seismographs at the surface. This study shows that we need to take into account attenuation and topography to interpret correctly observed low-frequency volcanic earthquakes. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.
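
    To give a concrete sense of the standard-linear-solid approximation mentioned above, the sketch below evaluates the attenuation of a single Zener element, 1/Q(ω) = ω(τ_ε − τ_σ)/(1 + ω²τ_ετ_σ); the relaxation times are assumed values chosen to place the absorption peak near the seismic band, not parameters taken from the paper.

      import numpy as np

      def sls_Q_inverse(freq_hz, tau_eps, tau_sigma):
          # Attenuation 1/Q(omega) of one standard linear solid (Zener) element.
          w = 2.0 * np.pi * freq_hz
          return w * (tau_eps - tau_sigma) / (1.0 + w ** 2 * tau_eps * tau_sigma)

      # Relaxation times chosen (illustratively) so the absorption peak sits near 5 Hz.
      tau0 = 1.0 / (2.0 * np.pi * 5.0)
      tau_eps, tau_sigma = tau0 * 1.1, tau0 / 1.1   # small contrast -> weak attenuation
      freqs = np.array([2.0, 5.0, 10.0, 20.0])
      print(1.0 / sls_Q_inverse(freqs, tau_eps, tau_sigma))   # corresponding Q values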

  16. Nonextensive models for earthquakes.

    PubMed

    Silva, R; França, G S; Vilar, C S; Alcaniz, J S

    2006-02-01

    We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy ε and the size of the fragments r, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.

  17. GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 earthquake and source modelling

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.

    2017-12-01

    On 25 November 2016, an Ms 6.7 earthquake occurred in Aktao, a county of Xinjiang, China. It was the largest earthquake to occur on the northeastern margin of the Pamir Plateau in the last 30 years. From GPS observations, we obtain the coseismic displacements of this earthquake. The site with the largest displacement is located in the Muji Basin, 15 km south of the causative fault, where the maximum deformation is 0.12 m downward, with 0.10 m of coseismic displacement. Our results indicate that the earthquake had the characteristics of dextral strike-slip and normal-fault rupture. Based on the GPS results, we invert for the rupture distribution of the earthquake. The source model consists of two approximately independent slip zones at depths of less than 20 km; the maximum displacement of one zone is 0.6 m and that of the other is 0.4 m. The total seismic moment calculated from the geodetic inversion corresponds to Mw 6.6. The GPS-derived source model is basically consistent with that of the seismic waveform inversion, and is consistent with the surface rupture distribution obtained from field investigation. According to our inversion, the recurrence period of strong earthquakes similar to this one should be 30-60 years, and the seismic risk of the eastern segment of the Muji fault deserves attention. This research is financially supported by the National Natural Science Foundation of China (Grant No. 41374030).

  18. Bayesian exploration of recent Chilean earthquakes

    NASA Astrophysics Data System (ADS)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah

    2016-04-01

    The South-American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It is also a playground to develop novel modeling techniques combining different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain better understanding of the distribution of co-seismic slip for those two earthquakes, we derive joint kinematic finite fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing thereby allowing us to maximize spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench with significant slip apparently reaching the trench or at least very close to the trench. At the inherent resolution of our models, we also present the relationship of co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.

  19. Prospective testing of Coulomb short-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily and allows daily updates of the models. However, much can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough, and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be determined automatically by computer, without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of

  20. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can therefore represent the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and the stimulating perspectives, there exist problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and if, the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to

  1. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which has not been observed previously for the original sandpile. For this critical range of probability values, the model statistics show remarkable agreement with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes while introducing only a single additional parameter into the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
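
    A minimal sketch of the targeted-triggering rule (our simplification, not the authors' code): a sandpile on a square grid in which each added grain goes to the most susceptible (highest) site with probability p_target and to a random site otherwise, with avalanche sizes recorded for later statistics.

      import numpy as np

      rng = np.random.default_rng(0)

      def drive_and_relax(grid, p_target, zc=4):
          # Add one grain (to the most susceptible site with probability p_target,
          # otherwise to a random site), relax, and return the avalanche size.
          n = grid.shape[0]
          if rng.random() < p_target:
              i, j = np.unravel_index(np.argmax(grid), grid.shape)
          else:
              i, j = rng.integers(n, size=2)
          grid[i, j] += 1
          size = 0
          while True:
              unstable = np.argwhere(grid >= zc)
              if len(unstable) == 0:
                  return size
              for i, j in unstable:
                  grid[i, j] -= zc
                  size += 1
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < n and 0 <= nj < n:   # open boundaries dissipate grains
                          grid[ni, nj] += 1

      grid = np.zeros((32, 32), dtype=int)
      sizes = [drive_and_relax(grid, p_target=0.005) for _ in range(10000)]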

  2. Modeling of the strong ground motion of 25th April 2015 Nepal earthquake using modified semi-empirical technique

    NASA Astrophysics Data System (ADS)

    Lal, Sohan; Joshi, A.; Sandeep; Tomer, Monu; Kumar, Parveen; Kuo, Chun-Hsiang; Lin, Che-Min; Wen, Kuo-Liang; Sharma, M. L.

    2018-05-01

    On 25 April 2015, a hazardous earthquake of moment magnitude 7.9 occurred in Nepal. The earthquake was recorded by accelerographs installed in the Kumaon region of the Himalayan state of Uttarakhand, at epicentral distances of about 420-515 km. A modified semi-empirical technique for modeling finite faults has been used in this paper to simulate the strong ground motion at these stations. Source parameters of a Nepal aftershock have also been calculated using the Brune model and are used in the modeling of the Nepal mainshock. The obtained values of the seismic moment and stress drop for the aftershock from the Brune model are 8.26 × 10²⁵ dyn cm and 10.48 bar, respectively. The simulated time series were compared with the observed records of the earthquake; the comparison of the full waveforms and their response spectra was used to finalize the rupture parameters and the rupture location. According to the compared results, the rupture propagated in the NE-SW direction from the hypocenter with a rupture velocity of 3.0 km/s, starting at a depth of 12 km about 80 km NW of Kathmandu.
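
    For context, the Brune-model quantities quoted above are linked by the standard relations r = 2.34β/(2πf_c) and Δσ = 7M₀/(16r³). The sketch below applies them using the abstract's seismic moment together with an assumed corner frequency and shear-wave speed (our placeholders, not values reported in the paper); with f_c ≈ 0.09 Hz the stress drop comes out on the order of ten bars, i.e., of the same order as the quoted 10.48 bar.

      import numpy as np

      def brune_stress_drop(M0_Nm, fc_Hz, beta_ms=3500.0):
          # Brune (1970): source radius r = 2.34*beta/(2*pi*fc), stress drop = 7*M0/(16*r**3).
          r = 2.34 * beta_ms / (2.0 * np.pi * fc_Hz)
          dsigma_pa = 7.0 * M0_Nm / (16.0 * r ** 3)
          return r, dsigma_pa

      # Moment quoted in the abstract (8.26e25 dyn cm = 8.26e18 N m); the corner
      # frequency and shear-wave speed are assumed values, not taken from the paper.
      r, dsigma = brune_stress_drop(8.26e18, fc_Hz=0.09)
      print(r / 1e3, dsigma / 1e5)   # source radius in km, stress drop in bars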

  3. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    NASA Astrophysics Data System (ADS)

    Thomas, Marion Y.; Bhat, Harsha S.

    2018-05-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for the dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage strongly affects the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  4. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    NASA Astrophysics Data System (ADS)

    Thomas, M. Y.; Bhat, H. S.

    2017-12-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for the dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage strongly affects the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  5. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aagaard, B; Brocher, T; Dreger, D

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to over predict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  6. Comprehensive Areal Model of Earthquake-Induced Landslides: Technical Specification and User Guide

    USGS Publications Warehouse

    Miles, Scott B.; Keefer, David K.

    2007-01-01

    This report describes the complete design of a comprehensive areal model of earthquake-induced landslides (CAMEL), presenting the design process and the technical specification of CAMEL. It also provides a guide to using the CAMEL source code and the template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale model of earthquake-induced landslide hazard developed using fuzzy logic systems. CAMEL currently estimates areal landslide concentration (number of landslides per square kilometer) for six aggregated types of earthquake-induced landslides - three types each for rock and soil.

  7. Scoring annual earthquake predictions in China

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Jiang, Changsheng

    2012-02-01

    The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions that are of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these earthquake predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using the Poisson model, which is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes as the reference model, we evaluate the CEA predictions based on (1) a partial score that evaluates whether the issued alarmed regions are based on information that differs from the reference model (knowledge of the average seismicity level) and (2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but the overall performance is close to that of the reference model.
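
    As a schematic illustration of the scoring idea (one common form of the gambling score; the exact reward scheme and normalization used in the paper may differ): each bet is placed against a reference probability p0, a successful alarm earns (1 − p0)/p0, a failed alarm loses 1, and bets on non-occurrence are rewarded symmetrically.

      def gambling_score(alarms, outcomes, p_ref):
          # alarms: True if an earthquake is predicted; outcomes: True if one occurred;
          # p_ref: reference-model probabilities for the same regions and period.
          total = 0.0
          for alarm, hit, p0 in zip(alarms, outcomes, p_ref):
              if alarm:                                    # bet on occurrence
                  total += (1.0 - p0) / p0 if hit else -1.0
              else:                                        # bet on non-occurrence
                  total += p0 / (1.0 - p0) if not hit else -1.0
          return total

      # Three hypothetical regional predictions scored against reference probabilities.
      print(gambling_score([True, True, False], [True, False, False], [0.1, 0.3, 0.05]))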

  8. Non-conservative perturbations of homoclinic snaking scenarios

    NASA Astrophysics Data System (ADS)

    Knobloch, Jürgen; Vielitz, Martin

    2016-01-01

    Homoclinic snaking refers to the continuation of homoclinic orbits to an equilibrium E near a heteroclinic cycle connecting E and a periodic orbit P. Typically homoclinic snaking appears in one-parameter families of reversible, conservative systems. Here we discuss perturbations of this scenario which are both non-reversible and non-conservative. We treat this problem analytically in the spirit of the work [3]. The continuation of homoclinic orbits happens with respect to both the original continuation parameter μ and the perturbation parameter λ. The continuation curves are parametrised by the dwelling time L of the homoclinic orbit near P. It turns out that λ(L) tends to zero while the μ vs. L diagram displays isolas or criss-cross snaking curves in a neighbourhood of the original snakes-and-ladder structure. In the course of our studies we adapt both Fenichel coordinates near P and the analysis of Shilnikov problems near P to the present situation.

  9. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    USGS Publications Warehouse

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to over predict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  10. Seismic quiescence in a frictional earthquake model

    NASA Astrophysics Data System (ADS)

    Braun, Oleg M.; Peyrard, Michel

    2018-04-01

    We investigate the origin of seismic quiescence with a generalized version of the Burridge-Knopoff model for earthquakes and show that it can be generated by a multipeaked probability distribution of the thresholds at which contacts break. Such a distribution is not assumed a priori but naturally results from the aging of the contacts. We show that the model can exhibit quiescence as well as enhanced foreshock activity, depending on the value of some parameters. This provides a generic understanding for seismic quiescence, which encompasses earlier specific explanations and could provide a pathway for a classification of faults.
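
    To make the threshold-distribution idea concrete, here is a quasi-static slider-block sketch (an Olami-Feder-Christensen-style simplification, not the authors' generalized Burridge-Knopoff model): blocks are loaded uniformly, fail when they reach a breaking threshold drawn from a two-peaked mixture standing in for contact aging, and pass part of their stress to their neighbors.

      import numpy as np

      rng = np.random.default_rng(1)

      def draw_thresholds(size, peaks=(1.0, 2.0), widths=(0.1, 0.2), weights=(0.7, 0.3)):
          # Two-peaked mixture of breaking thresholds (a stand-in for contact aging).
          which = rng.choice(len(peaks), size=size, p=weights)
          return rng.normal(np.take(peaks, which), np.take(widths, which))

      def run(n_blocks=200, alpha=0.25, n_events=5000):
          stress = rng.random(n_blocks)
          thresh = draw_thresholds(n_blocks)
          sizes = []
          for _ in range(n_events):
              stress += (thresh - stress).min()          # uniform loading to next failure
              failing = np.flatnonzero(stress >= thresh - 1e-12)
              size = 0
              while failing.size:
                  for i in failing:
                      drop = stress[i]
                      stress[i] = 0.0
                      stress[(i - 1) % n_blocks] += alpha * drop   # stress transfer
                      stress[(i + 1) % n_blocks] += alpha * drop
                      thresh[i] = draw_thresholds(1)[0]            # new contact, new threshold
                      size += 1
                  failing = np.flatnonzero(stress >= thresh)
              sizes.append(size)
          return sizes

      sizes = run()   # event-size catalog for magnitude-frequency statistics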

  11. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles and exhibiting intricate surface deformation patterns. The complexity of this event has motivated multidisciplinary geophysical studies to get at the underlying source physics and better inform earthquake hazard models in the future. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time while still satisfying the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first compute simple point-source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as on simple finite faults determined from source-scaling studies, and validate them against true recordings of peak ground acceleration and velocity. Second, we perform a slip inversion based upon the CMT fault orientations and forward model the near-field maximum expected tsunami wave heights for comparison against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of the disagreement in ground motions attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
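
    The peak-ground-displacement scaling step can be illustrated with the usual functional form log10(PGD) = A + B·Mw + C·Mw·log10(R), inverted for Mw station by station; the coefficients, displacements, and distances below are placeholder values for illustration, not the G-FAST settings.

      import numpy as np

      # Placeholder coefficients for log10(PGD in cm) = A + B*Mw + C*Mw*log10(R in km).
      A, B, C = -4.434, 1.047, -0.138

      def mw_from_pgd(pgd_cm, hypo_dist_km):
          # Invert the scaling law for moment magnitude at each station.
          return (np.log10(pgd_cm) - A) / (B + C * np.log10(hypo_dist_km))

      # Hypothetical peak ground displacements (cm) at three GPS stations.
      pgd = np.array([35.0, 12.0, 6.0])
      dist = np.array([60.0, 120.0, 200.0])
      print(mw_from_pgd(pgd, dist).mean())   # average the per-station estimates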

  12. GEM1: First-year modeling and IT activities for the Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Anderson, G.; Giardini, D.; Wiemer, S.

    2009-04-01

    GEM is a public-private partnership initiated by the Organisation for Economic Co-operation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global

  13. Implications of the 26 December 2004 Sumatra-Andaman earthquake on tsunami forecast and assessment models for great subduction-zone earthquakes

    USGS Publications Warehouse

    Geist, Eric L.; Titov, Vasily V.; Arcas, Diego; Pollitz, Fred F.; Bilek, Susan L.

    2007-01-01

    Results from different tsunami forecasting and hazard assessment models are compared with observed tsunami wave heights from the 26 December 2004 Indian Ocean tsunami. Forecast models are based on initial earthquake information and are used to estimate tsunami wave heights during propagation. An empirical forecast relationship based only on seismic moment provides a close estimate to the observed mean regional and maximum local tsunami runup heights for the 2004 Indian Ocean tsunami but underestimates mean regional tsunami heights at azimuths in line with the tsunami beaming pattern (e.g., Sri Lanka, Thailand). Standard forecast models developed from subfault discretization of earthquake rupture, in which deep-ocean sea level observations are used to constrain slip, are also tested. Forecast models of this type use tsunami time-series measurements at points in the deep ocean. As a proxy for the 2004 Indian Ocean tsunami, a transect of deep-ocean tsunami amplitudes recorded by satellite altimetry is used to constrain slip along four subfaults of the M >9 Sumatra–Andaman earthquake. This proxy model performs well in comparison to observed tsunami wave heights, travel times, and inundation patterns at Banda Aceh. Hypothetical tsunami hazard assessment models based on end-member estimates for average slip and rupture length (Mw 9.0–9.3) are compared with tsunami observations. Using average slip (low end member) and rupture length (high end member) (Mw 9.14) consistent with many seismic, geodetic, and tsunami inversions adequately estimates tsunami runup in most regions, except the extreme runup in the western Aceh province. The high slip that occurred in the southern part of the rupture zone linked to runup in this location is a larger fluctuation than expected from standard stochastic slip models. In addition, excess moment release (∼9%) deduced from geodetic studies in comparison to seismic moment estimates may generate additional tsunami energy, if the

  14. Wetting of nonconserved residue-backbones: A feature indicative of aggregation associated regions of proteins.

    PubMed

    Pradhan, Mohan R; Pal, Arumay; Hu, Zhongqiao; Kannan, Srinivasaraghavan; Chee Keong, Kwoh; Lane, David P; Verma, Chandra S

    2016-02-01

    Aggregation is an irreversible form of protein complexation and often toxic to cells. The process entails partial or major unfolding that is largely driven by hydration. We model the role of hydration in aggregation using "Dehydrons." "Dehydrons" are unsatisfied backbone hydrogen bonds in proteins that seek shielding from water molecules by associating with ligands or proteins. We find that the residues at aggregation interfaces have hydrated backbones, and in contrast to other forms of protein-protein interactions, are under less evolutionary pressure to be conserved. Combining evolutionary conservation of residues and extent of backbone hydration allows us to distinguish regions on proteins associated with aggregation (non-conserved dehydron-residues) from other interaction interfaces (conserved dehydron-residues). This novel feature can complement the existing strategies used to investigate protein aggregation/complexation. © 2015 Wiley Periodicals, Inc.

  15. Collective properties of injection-induced earthquake sequences: 1. Model description and directivity bias

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Suckale, Jenny

    2016-05-01

    Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. Modeling plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. Here we study the collective properties of induced earthquake sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a model that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers earthquakes. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the earthquake hypocenter. Under tectonic loading conditions, our model exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers earthquakes, the modeled directivity bias is small and may be too weak for practical detection. For two hypothetical injection scenarios, we estimate the number of earthquake observations required to detect directivity bias.
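
    For concreteness, one plausible way to quantify the directivity of a simulated rupture (an assumed definition for illustration; the paper's exact definition may differ) is the normalized difference between the rupture lengths of the two crack tips measured from the hypocenter, so that 0 corresponds to a purely bilateral rupture and 1 to a purely unilateral one.

      import numpy as np

      def directivity_ratio(tip_lengths):
          # tip_lengths: (L_plus, L_minus) rupture lengths of the two crack tips,
          # measured from the hypocenter, one row per simulated event.
          L = np.asarray(tip_lengths, dtype=float)
          return np.abs(L[:, 0] - L[:, 1]) / (L[:, 0] + L[:, 1])

      # Hypothetical catalog: (length toward the well, length away from it) in km.
      catalog = [(3.2, 0.4), (1.1, 0.9), (2.5, 0.1)]
      print(directivity_ratio(catalog))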

  16. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    NASA Astrophysics Data System (ADS)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  17. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  18. Nonextensive models for earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, R.; Franca, G.S.; Vilar, C.S.

    2006-02-15

    We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy ε and the size of the fragments r, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.

  19. A spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an operational earthquake forecast

    USGS Publications Warehouse

    Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximillan J.

    2017-01-01

    We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as

  20. Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps

    NASA Astrophysics Data System (ADS)

    Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.

    2014-12-01

    Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and most importantly, what actions users plan to take for various scenarios. Actions could include personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, and the funding requirements to implement their automated controls. The use of models and mobile apps is beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.

  1. Experimental validation of finite element model analysis of a steel frame in simulated post-earthquake fire environments

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Bevans, W. J.; Xiao, Hai; Zhou, Zhi; Chen, Genda

    2012-04-01

    During or after an earthquake event, building systems often experience large strains due to shaking effects, as observed during recent earthquakes, causing permanent inelastic deformation. In addition to the inelastic deformation induced by the earthquake itself, post-earthquake fires associated with short-circuiting of electrical systems and leakage from gas devices can further strain the already damaged structures, potentially leading to progressive collapse of buildings. Under these harsh environments, measurements of the affected building by various sensors can provide only limited structural health information. Finite element model analysis, on the other hand, if validated by predesigned experiments, can provide detailed information on the behavior of the entire structure. In this paper, a temperature-dependent nonlinear 3-D finite element model (FEM) of a one-story steel frame is set up in ABAQUS based on the material properties of steel cited from EN 1993-1-2 and the AISC manuals. The FEM is validated by testing the modeled steel frame in simulated post-earthquake fire environments. Comparisons between the FEM analysis and the experimental results show that the FEM predicts the structural behavior of the steel frame in post-earthquake fire conditions reasonably well. With experimental validation, FEM analysis can be used to predict the behavior of critical structures in these harsh environments, better assisting firefighters in their rescue efforts and helping to save fire victims.

  2. E-DECIDER: Using Earth Science Data and Modeling Tools to Develop Decision Support for Earthquake Disaster Response

    NASA Astrophysics Data System (ADS)

    Glasscoe, Margaret T.; Wang, Jun; Pierce, Marlon E.; Yoder, Mark R.; Parker, Jay W.; Burl, Michael C.; Stough, Timothy M.; Granat, Robert A.; Donnellan, Andrea; Rundle, John B.; Ma, Yu; Bawden, Gerald W.; Yuen, Karen

    2015-08-01

    Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new capabilities for decision making utilizing remote sensing data and modeling software to provide decision support for earthquake disaster management and response. E-DECIDER incorporates the earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools allows us to provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). This in turn is delivered through standards-compliant web services for desktop and hand-held devices.

  3. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    NASA Astrophysics Data System (ADS)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-07-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled records of interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (i.e., significant and bracketed duration) as functions of earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites), using data compiled from the intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with increasing magnitude and hypocentral distance. Both significant and bracketed durations were predicted to be longer at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the

  4. Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan

    NASA Astrophysics Data System (ADS)

    Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.

    2017-12-01

    An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which hereafter we call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism and a moderate seismic magnitude (M5.7), but it generated tsunami waves larger than expected for its size, with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the earthquake can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996, and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges located 100 km northeast of the epicenter. The waves began with a small downward signal of 0.1 cm, reached peak amplitudes of 1.5-2.0 cm in the leading upward signals, and were followed by continuous oscillations [Fukao et al., 2016]. To model the tsunami source, or sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adopted, using the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we find that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal of the kind observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift and examine the sizes and amplitudes of the main uplift and the subsidence ring. As a result, a model with a main uplift of around 1.0 m and a radius of 4 km, surrounded by a ring of small subsidence, shows good agreement between synthetic and observed waveforms. The results yield two implications for the deformation process that help us to understand
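
    The preferred source geometry described above can be sketched as a Gaussian main uplift of about 1 m with a 4 km radius surrounded by a ring of small subsidence; the ring amplitude and width used below are assumed values for illustration only.

      import numpy as np

      def caldera_source(x_km, y_km, uplift_m=1.0, r_uplift_km=4.0,
                         ring_m=-0.1, r_ring_km=6.0, ring_width_km=1.5):
          # Axisymmetric sea-surface displacement: Gaussian uplift plus a subsidence ring.
          r = np.hypot(x_km, y_km)
          uplift = uplift_m * np.exp(-(r / r_uplift_km) ** 2)
          ring = ring_m * np.exp(-((r - r_ring_km) / ring_width_km) ** 2)
          return uplift + ring

      # Evaluate on a 40 km x 40 km grid centered on the caldera.
      x, y = np.meshgrid(np.linspace(-20, 20, 201), np.linspace(-20, 20, 201))
      dz = caldera_source(x, y)
      print(dz.max(), dz.min())   # peak uplift and peak subsidence (m)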

  5. Dynamic modeling of normal faults of the 2016 Central Italy earthquake sequence

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo

    2017-04-01

    The 2016 Central Italy earthquake sequence is characterized mainly by the Mw 6.0 24 August, Mw 5.9 26 October, and Mw 6.4 30 October events, as well as by two Mw 5.4 earthquakes (24 August and 26 October) (INGV catalogue). They all show normal-faulting mechanisms corresponding to the tectonics of the Apennines. They are roughly aligned along a NNW-SSE axis, and they may not lie on a single continuous fault plane. Therefore, dynamic rupture modeling of such sequences should be carried out assuming multiple co-planar normal-fault segments. We apply a Boundary Domain Method (BDM; Goto and Bielak, GJI, 2008), coupling a boundary integral equation method with a domain-based method, in this study a finite difference method. The Mw 6.0 24 August earthquake is modeled. We use basic information on the hypocenter position, focal mechanism, and potential rupture dimensions from the INGV catalogue and Tinti et al. (GRL, 2016), and begin with a simple condition (a homogeneous boundary condition). Our preliminary simulations show that a uniformly extended rupture model does not fit the near-field ground motions and that localized heterogeneity would be required.

  6. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative " a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model
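
    The "Gutenberg-Richter distribution with a corner magnitude" used here can be made concrete with a Kagan-style tapered distribution in seismic moment, as sketched below; the b-value, corner magnitude, and rates are placeholders, not the estimates of this study.

      import numpy as np

      def tapered_gr_rate(m, rate_mmin, b=1.0, m_corner=8.0, m_min=5.4):
          # Annual rate of events with magnitude >= m: Gutenberg-Richter power law in
          # seismic moment, tapered by an exponential corner (Kagan-style form).
          beta = 2.0 * b / 3.0                      # moment-space exponent
          M = 10.0 ** (1.5 * m + 9.1)               # moment (N m) from magnitude
          M_min = 10.0 ** (1.5 * m_min + 9.1)
          M_c = 10.0 ** (1.5 * m_corner + 9.1)
          return rate_mmin * (M_min / M) ** beta * np.exp((M_min - M) / M_c)

      # Example: a region with 2 events/yr of M >= 5.4; expected rate of M >= 7 events.
      print(tapered_gr_rate(7.0, rate_mmin=2.0))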

  7. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks, which are generally omitted from hazard assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates makes them difficult to forecast even on time scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations, and changes in rate can be detected by applying change-point analysis in ETAS-transformed time with methods already developed for Poisson processes.
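
    For readers unfamiliar with ETAS, the temporal form of the model referred to above (Ogata, 1988) expresses the earthquake rate at time t as a constant background rate plus Omori-Utsu aftershock contributions from every prior event, scaled by an exponential productivity term in magnitude. The sketch below evaluates that conditional intensity; all parameter values are illustrative, not the fitted values of the study.

    ```python
    import numpy as np

    def etas_rate(t, event_times, event_mags, mu=0.5, K=0.02,
                  alpha=1.0, c=0.01, p=1.2, m0=3.0):
        """Temporal ETAS conditional intensity (events/day) at time t (days):
        background rate mu plus Omori-Utsu aftershock contributions from all
        prior events.  Parameter values are illustrative, not fitted."""
        event_times = np.asarray(event_times, dtype=float)
        event_mags = np.asarray(event_mags, dtype=float)
        past = event_times < t
        trig = K * np.exp(alpha * (event_mags[past] - m0)) \
                 * (t - event_times[past] + c) ** (-p)
        return mu + trig.sum()

    # rate 10 days into a toy catalog containing an M4.5 and an M5.2 event
    print(etas_rate(10.0, [2.0, 9.0], [4.5, 5.2]))
    ```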

  8. A fragmentation model of earthquake-like behavior in internet access activity

    NASA Astrophysics Data System (ADS)

    Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.

    We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. Thus, the dataset is similar to earthquake data except that two power-law regimes are observed. Using the proposed model, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The model is universal enough to be applicable to any physical or human dynamics limited by finite resources such as space, energy, time or opportunity.
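
    As a practical footnote to the power-law statistics discussed above, a single power-law exponent above a cutoff can be estimated from data with the standard continuous maximum-likelihood (Hill-type) estimator, e.g. as in Clauset et al. (2009). The sketch below checks the estimator on synthetic samples; fitting a dual-scaled (two-regime) distribution of the kind reported in the paper would require a piecewise extension not attempted here.

    ```python
    import numpy as np

    def powerlaw_mle(x, x_min):
        """Continuous maximum-likelihood estimate of the exponent alpha of
        p(x) ~ x**(-alpha) for x >= x_min (Hill/Clauset-style estimator)."""
        x = np.asarray(x, dtype=float)
        tail = x[x >= x_min]
        alpha = 1.0 + tail.size / np.log(tail / x_min).sum()
        return alpha, tail.size

    # synthetic check: draw from a pure power law with alpha = 2.5
    rng = np.random.default_rng(0)
    u = rng.random(100_000)
    samples = 1.0 * (1.0 - u) ** (-1.0 / (2.5 - 1.0))  # inverse-CDF sampling
    print(powerlaw_mle(samples, x_min=1.0))            # ~ (2.5, 100000)
    ```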

  9. Simulation of the Burridge-Knopoff model of earthquakes with variable range stress transfer.

    PubMed

    Xia, Junchao; Gould, Harvey; Klein, W; Rundle, J B

    2005-12-09

    Simple models of earthquake faults are important for understanding the mechanisms for their observed behavior, such as Gutenberg-Richter scaling and the relation between large and small events, which is the basis for various forecasting methods. Although cellular automaton models have been studied extensively in the long-range stress transfer limit, this limit has not been studied for the Burridge-Knopoff model, which includes more realistic friction forces and inertia. We find that the latter model with long-range stress transfer exhibits qualitatively different behavior than both the long-range cellular automaton models and the usual Burridge-Knopoff model with nearest-neighbor springs, depending on the nature of the velocity-weakening friction force. These results have important implications for our understanding of earthquakes and other driven dissipative systems.

  10. Constructing a Laser Stabilization System for a Parity Non-Conservation Experiment with Francium

    NASA Astrophysics Data System (ADS)

    Dehart, A. C.; Gwinner, Gerald; Kossin, Michael; Behr, John; Gorelov, Alexandre; Kalita, Mukut; Pearson, Matthew; Aubin, Seth; Gomez Garcia, Eduardo; Orozco, Luis

    2017-04-01

    We are developing an experiment at TRIUMF to test the Standard model at low energies by measuring Parity Non-Conservation (PNC) effects in francium. Current efforts include preparations to study the 7s - 8s electric dipole (E1) forbidden transition in francium at 507 nm under the influence of an electric field. Fr has no stable isotope; therefore to frequency-stabilize our laser at 507 nm, we are developing a laser stabilization system by using the Pound-Drever-Hall technique with a Fabry-Perot cavity made of Ultra Low Expansion Glass (ULE) as our stable frequency reference. The system will stabilize a 1014 nm laser, which will be frequency doubled to 507 nm, before sending the light to our cold and trapped francium sample. We will report on our recent experiences with the laser stabilization system. Supported by NSERC, NRC/TRIUMF, DOE, NSF, CONACYT, Fulbright, and U. of Manitoba.

  11. Modeling Channel Movement Response to Rainfall Variability and Potential Threats to Post-earthquake Reconstruction

    NASA Astrophysics Data System (ADS)

    Xie, J.; Wang, M.; Liu, K.

    2017-12-01

    The 2008 Wenchuan Ms 8.0 earthquake caused overwhelming destruction across vast mountainous areas of Sichuan province. Numerous seismic landslides damaged forest and vegetation cover and left substantial loose sediment piled up in the valleys. The movement and accumulation of this loose material led to riverbed aggradation, making the earthquake-struck area more susceptible to flash floods as the frequency and intensity of extreme rainfall increase. This study investigated the response of sediment and river channel evolution to different rainfall scenarios after the Wenchuan earthquake. The study area is a catchment affected by the earthquake in northeast Sichuan province, China. We employed the landscape evolution model CAESAR-lisflood to explore how material migrates and then assessed the potential effects under two rainfall scenarios. The model parameters were calibrated using the 2013 extreme rainfall event, and the experimental rainfall scenarios differed in intensity and frequency over a 10-year period. The results indicated that CAESAR-lisflood was well suited to replicating sediment migration, particularly the fluvial processes after the earthquake. With respect to rainfall intensity, erosion severity in upstream gullies and deposition severity in downstream channels both increased with the increasing intensity of extreme rainfall. The modelling results showed that the number of buildings in the catchment affected by flash floods increased by more than a quarter from the normal to the enhanced rainfall scenario over ten years, indicating a potential threat to exposed assets near the river channel in the context of climate change. Simulating landscape change is of great significance and contributes to early warning of potential geological risks after an earthquake; we suggest that local governments and the public pay close attention to the high-risk areas identified in our study.

  12. Earthquake precursors: activation or quiescence?

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Holliday, James R.; Yoder, Mark; Sachs, Michael K.; Donnellan, Andrea; Turcotte, Donald L.; Tiampo, Kristy F.; Klein, William; Kellogg, Louise H.

    2011-10-01

    We discuss the long-standing question of whether the probability for large earthquake occurrence (magnitudes m > 6.0) is highest during time periods of smaller event activation, or highest during time periods of smaller event quiescence. The physics of the activation model are based on an idea from the theory of nucleation, that a small magnitude earthquake has a finite probability of growing into a large earthquake. The physics of the quiescence model is based on the idea that the occurrence of smaller earthquakes (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast models illustrating, respectively, activation and quiescence. The activation model assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no choice as to which mechanism is preferred. Both of these models are in fact a means of filtering the seismicity time-series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of earthquakes, we show that of the two models, activation and quiescence, the latter appears to be the better model, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic earthquake model for California seismicity, Virtual California. This model includes not only earthquakes produced from increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast models to the simulated data, we come to the opposite conclusion. Here, the activation forecast model is preferred to the quiescence model, presumably due to the fact that the BASS component of the model is essentially a model for activated seismicity. These

  13. Three-dimensional compressional wavespeed model, earthquake relocations, and focal mechanisms for the Parkfield, California, region

    USGS Publications Warehouse

    Thurber, C.; Zhang, H.; Waldhauser, F.; Hardebeck, J.; Michael, A.; Eberhart-Phillips, D.

    2006-01-01

    We present a new three-dimensional (3D) compressional wavespeed (Vp) model for the Parkfield region, taking advantage of the recent seismicity associated with the 2003 San Simeon and 2004 Parkfield earthquake sequences to provide increased model resolution compared to the work of Eberhart-Phillips and Michael (1993) (EPM93). Taking the EPM93 3D model as our starting model, we invert the arrival-time data from about 2100 earthquakes and 250 shots recorded on both permanent network and temporary stations in a region 130 km northeast-southwest by 120 km northwest-southeast. We include catalog picks and cross-correlation and catalog differential times in the inversion, using the double-difference tomography method of Zhang and Thurber (2003). The principal Vp features reported by EPM93 and Michelini and McEvilly (1991) are recovered, but with locally improved resolution along the San Andreas Fault (SAF) and near the active-source profiles. We image the previously identified strong wavespeed contrast (faster on the southwest side) across most of the length of the SAF, and we also improve the image of a high-Vp body on the northeast side of the fault reported by EPM93. This narrow body lies at about 5- to 12-km depth and extends approximately from the locked section of the SAF to the town of Parkfield. The footwall of the thrust fault responsible for the 1983 Coalinga earthquake is imaged as a northeast-dipping high-wavespeed body. In between, relatively low wavespeeds (<5 km/sec) extend to as much as 10-km depth. We use this model to derive absolute locations for about 16,000 earthquakes from 1966 to 2005 and high-precision double-difference locations for 9,000 earthquakes from 1984 to 2005, and also to determine focal mechanisms for 446 earthquakes. These earthquake locations and mechanisms show that the seismogenic fault is a simple planar structure. The aftershock sequence of the 2004 mainshock concentrates into the same structures defined by the pre-2004 seismicity

  14. Modelling Psychological Responses to the Great East Japan Earthquake and Nuclear Incident

    PubMed Central

    Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O.

    2012-01-01

    The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11–13 weeks following these events. A total of 844 young respondents completed a questionnaire in three regions of Japan: Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events. PMID:22666380

  15. Hydromechanical Earthquake Nucleation Model Forecasts Onset, Peak, and Falling Rates of Induced Seismicity in Oklahoma and Kansas

    NASA Astrophysics Data System (ADS)

    Norbeck, J. H.; Rubinstein, J. L.

    2018-04-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. We develop a reservoir model to calculate the hydrologic conditions associated with the activity of 902 saltwater disposal wells injecting into the Arbuckle aquifer. Estimates of basement fault stressing conditions inform a rate-and-state friction earthquake nucleation model to forecast the seismic response to injection. Our model replicates many salient features of the induced earthquake sequence, including the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. We present evidence for variable time lags between changes in injection and seismicity rates, consistent with the prediction from rate-and-state theory that seismicity rate transients occur over timescales inversely proportional to stressing rate. Given the efficacy of the hydromechanical model, as confirmed through a likelihood statistical test, the results of this study support broader integration of earthquake physics within seismic hazard analysis.
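
    The coupling the authors describe between stressing rate and seismicity rate is the rate-and-state earthquake nucleation framework of Dieterich (1994), in which a seismicity state variable relaxes toward the current stressing conditions over a timescale inversely proportional to the stressing rate. The sketch below integrates that model for an assumed step-up and step-down in stressing rate, reproducing the lagged rise and decay described in the abstract; all parameter values and units are illustrative and are not taken from the study.

    ```python
    import numpy as np

    def dieterich_rate(t, stressing_rate, r0=1.0, sdot_ref=1.0e-3, a_sigma=0.1):
        """Dieterich (1994) seismicity-rate model: the state variable gamma obeys
        d(gamma)/dt = (1 - gamma * Sdot(t)) / (a*sigma), and the seismicity rate
        is R(t) = r0 / (gamma * Sdot_ref).  Stress in MPa and time in years here
        are arbitrary illustrative choices, not the paper's calibration."""
        dt = t[1] - t[0]                       # uniform time step
        gamma = 1.0 / sdot_ref                 # steady state under the reference stressing rate
        rates = np.empty_like(t)
        for i in range(t.size):
            gamma += dt * (1.0 - gamma * stressing_rate[i]) / a_sigma
            rates[i] = r0 / (gamma * sdot_ref)
        return rates

    t = np.linspace(0.0, 30.0, 3001)                          # years
    sdot = np.where((t > 5.0) & (t < 20.0), 1.0e-2, 1.0e-3)   # injection raises stressing rate x10
    R = dieterich_rate(t, sdot)
    print(R.max(), R[-1])                                     # delayed peak, then relaxation
    ```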

  16. Self-Organized Criticality in an Anisotropic Earthquake Model

    NASA Astrophysics Data System (ADS)

    Li, Bin-Quan; Wang, Sheng-Jun

    2018-03-01

    We have made an extensive numerical study of a modified model proposed by Olami, Feder, and Christensen (OFC) to describe earthquake behavior. Two situations are considered in this paper. In one, the energy of an unstable site is redistributed to its nearest neighbors randomly, rather than equally, and the site itself is reset to zero. In the other, the energy of an unstable site is redistributed to its nearest neighbors randomly and the site retains some energy instead of being reset to zero. Different boundary conditions are considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality can be excited only in the conservative or approximately conservative case in the above situations. Some evidence indicates that the critical exponents of both situations and of the original OFC model tend to the same value in the conservative case. The only difference is that the avalanche sizes in the original model are larger. This result may be closer to the real world; after all, every crustal plate has a different size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
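
    For reference, the baseline model being modified here is the standard Olami-Feder-Christensen cellular automaton: sites on a grid accumulate "energy," and when a site exceeds a threshold it topples, passing a fraction alpha of its energy to each nearest neighbor (alpha = 0.25 is the conservative case on a square lattice, with open boundaries leaking energy). The sketch below implements that baseline with the conventional equal split and reset to zero; the paper's random-redistribution and energy-retention variants would replace the redistribution step, as noted in the comments.

    ```python
    import numpy as np

    def ofc(L=32, alpha=0.25, n_quakes=2000, f_th=1.0, seed=0):
        """Minimal Olami-Feder-Christensen model on an L x L grid with open
        boundaries.  alpha = 0.25 is the conservative case; the paper's variants
        redistribute the toppled energy randomly among neighbours and/or retain
        part of it, instead of the equal split and reset used here.
        Returns the list of avalanche sizes."""
        rng = np.random.default_rng(seed)
        F = rng.random((L, L)) * f_th
        sizes = []
        for _ in range(n_quakes):
            F += f_th - F.max()                       # drive uniformly to threshold
            unstable = list(zip(*np.where(F >= f_th)))
            size = 0
            while unstable:
                i, j = unstable.pop()
                if F[i, j] < f_th:
                    continue                          # stale entry, already relaxed
                load = F[i, j]
                F[i, j] = 0.0                         # reset toppled site
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:   # open boundaries lose energy
                        F[ni, nj] += alpha * load
                        if F[ni, nj] >= f_th:
                            unstable.append((ni, nj))
                # (variant: split `load` into random fractions before distributing)
            sizes.append(size)
        return sizes

    sizes = ofc(L=24, n_quakes=2000)
    print(max(sizes), float(np.mean(sizes)))
    ```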

  17. Strategies for rapid global earthquake impact estimation: the Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, D.J.

    2013-01-01

    This chapter summarizes the state-of-the-art for rapid earthquake impact estimation. It details the needs and challenges associated with quick estimation of earthquake losses following global earthquakes, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational earthquake loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global Earthquakes for Response). It also details some of the ongoing developments of PAGER’s loss estimation models to better supplement the operational empirical models, and to produce value-added web content for a variety of PAGER users.

  18. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake

    USGS Publications Warehouse

    Hayes, Gavin P.; Herman, Matthew W.; Barnhart, William D.; Furlong, Kevin P.; Riquelme, Sebástian; Benz, Harley M.; Bergman, Eric; Barrientos, Sergio; Earle, Paul S.; Samsonov, Sergey

    2014-01-01

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile which had not ruptured in a megathrust earthquake since a M ~8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March–April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  19. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake.

    PubMed

    Hayes, Gavin P; Herman, Matthew W; Barnhart, William D; Furlong, Kevin P; Riquelme, Sebástian; Benz, Harley M; Bergman, Eric; Barrientos, Sergio; Earle, Paul S; Samsonov, Sergey

    2014-08-21

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which had not ruptured in a megathrust earthquake since a M ∼8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March-April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  20. Children's emotional experience two years after an earthquake: An exploration of knowledge of earthquakes and associated emotions.

    PubMed

    Raccanello, Daniela; Burro, Roberto; Hall, Rob

    2017-01-01

    We explored whether and how the exposure to a natural disaster such as the 2012 Emilia Romagna earthquake affected the development of children's emotional competence in terms of understanding, regulating, and expressing emotions, after two years, when compared with a control group not exposed to the earthquake. We also examined the role of class level and gender. The sample included two groups of children (n = 127) attending primary school: The experimental group (n = 65) experienced the 2012 Emilia Romagna earthquake, while the control group (n = 62) did not. The data collection took place two years after the earthquake, when children were seven or ten-year-olds. Beyond assessing the children's understanding of emotions and regulating abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of earthquakes and associated emotions, and a structured task on the intensity of some target emotions. We applied Generalized Linear Mixed Models. Exposure to the earthquake did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of earthquakes, emotional language, and emotions associated with earthquakes were, respectively, more complex, frequent, and intense for children who had experienced the earthquake, and at increasing ages. Our data extend the generalizability of theoretical models on children's psychological functioning following disasters, such as the dose-response model and the organizational-developmental model for child resilience, and provide further knowledge on children's emotional resources related to natural disasters, as a basis for planning educational prevention programs.

  1. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of
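
    The core of such a binned likelihood test can be illustrated with a Poisson joint log-likelihood: each bin's forecast rate is treated as an expected count, and two forecasts defined over the same bins are compared by the difference of their log-likelihoods given the observed counts. The sketch below shows that computation on toy numbers; it is only an illustration of the idea, not the RELM test statistics themselves, which also handle simulation-based significance, zero-rate bins, and other details.

    ```python
    import numpy as np
    from scipy.special import gammaln

    def poisson_loglik(forecast_rates, observed_counts):
        """Joint Poisson log-likelihood of observed bin counts given a vector
        of forecast rates (expected counts per bin over the test period)."""
        lam = np.asarray(forecast_rates, dtype=float)
        n = np.asarray(observed_counts, dtype=float)
        return np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0))

    # toy comparison of two forecasts over the same five bins
    obs = [0, 2, 1, 0, 3]
    model_a = [0.5, 1.5, 1.0, 0.2, 2.0]
    model_b = [1.0, 1.0, 1.0, 1.0, 1.0]
    print(poisson_loglik(model_a, obs) - poisson_loglik(model_b, obs))  # > 0 favours A
    ```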

  2. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  3. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who rush to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic," earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivist viewpoint on probability, we cannot claim "probabilities" to be adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure") and, therefore, cannot objectively quantify the performance of a forecast/prediction method. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by careful verification of the generic probability models against empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor provides a means to judge ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting, with confidence above 97%, "the generic California clustering model" used in the automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency

  4. Children's emotional experience two years after an earthquake: An exploration of knowledge of earthquakes and associated emotions

    PubMed Central

    Burro, Roberto; Hall, Rob

    2017-01-01

    A major earthquake has a potentially highly traumatic impact on children’s psychological functioning. However, while many studies on children describe negative consequences in terms of mental health and psychiatric disorders, little is known regarding how the developmental processes of emotions can be affected following exposure to disasters. Objectives We explored whether and how the exposure to a natural disaster such as the 2012 Emilia Romagna earthquake affected the development of children’s emotional competence in terms of understanding, regulating, and expressing emotions, after two years, when compared with a control group not exposed to the earthquake. We also examined the role of class level and gender. Method The sample included two groups of children (n = 127) attending primary school: The experimental group (n = 65) experienced the 2012 Emilia Romagna earthquake, while the control group (n = 62) did not. The data collection took place two years after the earthquake, when children were seven or ten-year-olds. Beyond assessing the children’s understanding of emotions and regulating abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of earthquakes and associated emotions, and a structured task on the intensity of some target emotions. Results We applied Generalized Linear Mixed Models. Exposure to the earthquake did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of earthquakes, emotional language, and emotions associated with earthquakes were, respectively, more complex, frequent, and intense for children who had experienced the earthquake, and at increasing ages. Conclusions Our data extend the generalizability of theoretical models on children’s psychological functioning following disasters, such as the dose-response model and the organizational-developmental model for child resilience, and

  5. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so
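
    One concrete way to carry out the uniform-transform check of memorylessness described above: under a memoryless (Poisson) model the inter-event times are exponentially distributed, so mapping them through the fitted exponential CDF (the probability integral transform) should give values indistinguishable from a uniform distribution. The sketch below applies that test to synthetic clustered data; it is an illustration of the idea, not the authors' exact procedure for the Parkfield or Pallett Creek records.

    ```python
    import numpy as np
    from scipy import stats

    def uniform_transform_test(interevent_times):
        """Fit an exponential (memoryless) model to inter-event times, map them
        through its CDF, and KS-test the result against a uniform distribution.
        A small p-value suggests the memoryless model is inadequate."""
        t = np.asarray(interevent_times, dtype=float)
        rate = 1.0 / t.mean()                    # MLE of the exponential rate
        u = 1.0 - np.exp(-rate * t)              # probability integral transform
        return stats.kstest(u, "uniform")

    # synthetic example: clustered record (mixture of short and long intervals)
    rng = np.random.default_rng(1)
    clustered = np.concatenate([rng.exponential(20.0, 50),
                                rng.exponential(300.0, 50)])
    print(uniform_transform_test(clustered))
    ```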

  6. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    PubMed

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.

  7. Extending earthquakes' reach through cascading.

    PubMed

    Marsan, David; Lengliné, Olivier

    2008-02-22

    Earthquakes, whatever their size, can trigger other earthquakes. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-lasting difficulty is to determine which earthquakes are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori model nor parameterization. Large regional earthquakes are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small earthquakes collectively have a greater effect on triggering. Hence, cascade triggering is a key component in earthquake interactions.

  8. Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions

    NASA Astrophysics Data System (ADS)

    Kim, A.; Dreger, D.; Larsen, S.

    2008-12-01

    We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and the detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects finite fault inverse solutions. Various studies (e.g. Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. The fault zone at Parkfield is also wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and 2004 Parkfield earthquakes. For high-resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for the 1989 Loma Prieta earthquake, using the same source parameterization and data but different Green's functions, and found that the models were quite different. This indicates that the choice of velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions. In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0

  9. Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models

    NASA Astrophysics Data System (ADS)

    Biemiller, J.; Lavier, L. L.; Wallace, L.

    2016-12-01

    Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.

  10. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
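
    As a complement to the description above, the conditional-mean part of a multi-scenario CS can be written as a deaggregation-weighted combination of conditional mean spectra, one per causal earthquake and GMPM: for each scenario, the conditional mean of lnSa at period Ti equals the scenario's predicted mean plus the cross-period correlation times the scenario sigma times the epsilon implied by the target lnSa(T*). The sketch below shows that computation on toy numbers; the full calculation in the paper also propagates a conditional standard deviation (via the law of total variance), which is omitted here, and the function and variable names are hypothetical.

    ```python
    import numpy as np

    def conditional_mean_spectrum(target_ln_sa, mu_ln_sa, sigma_ln_sa,
                                  mu_star, sigma_star, rho, weights):
        """Conditional-mean part of a multi-scenario conditional spectrum.  For
        each causal scenario k (row), the conditional mean at period Ti is
        mu_k(Ti) + rho(Ti) * sigma_k(Ti) * eps_k, where eps_k is the number of
        standard deviations separating the target lnSa(T*) from the scenario
        median.  Scenario weights come from hazard deaggregation.  This is a
        sketch only; the exact CS also carries a conditional variance."""
        mu_ln_sa = np.asarray(mu_ln_sa)          # shape (n_scenarios, n_periods)
        sigma_ln_sa = np.asarray(sigma_ln_sa)    # shape (n_scenarios, n_periods)
        rho = np.asarray(rho)                    # shape (n_periods,)
        w = np.asarray(weights) / np.sum(weights)
        eps = (target_ln_sa - np.asarray(mu_star)) / np.asarray(sigma_star)
        cond_mu = mu_ln_sa + rho[None, :] * sigma_ln_sa * eps[:, None]
        return w @ cond_mu                       # weighted mean over scenarios

    # toy example: two causal scenarios, three periods, conditioning on Sa(T*) = 0.4 g
    cms = conditional_mean_spectrum(
        target_ln_sa=np.log(0.4),
        mu_ln_sa=[[-1.2, -1.0, -1.4], [-1.5, -1.3, -1.7]],
        sigma_ln_sa=[[0.6, 0.65, 0.7], [0.6, 0.65, 0.7]],
        mu_star=[-1.0, -1.3], sigma_star=[0.65, 0.65],
        rho=[0.8, 1.0, 0.7], weights=[0.7, 0.3])
    print(np.exp(cms))                           # conditional mean Sa (g) at each period
    ```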

  11. The use of the Finite Element method for the earthquakes modelling in different geodynamic environments

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Tizzani, Pietro

    2016-04-01

    Many numerical models have been developed to simulate the deformation and stress changes associated with the faulting process, an important topic in fracture mechanics. In the proposed study, we investigate the impact of deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different earthquake phenomena. We incorporate structural-geological data into a Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquakes (Italy), and the Mw 8.3 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse and thrust faulting processes, respectively. In order to simulate the kinematics of the analyzed natural phenomena, we assume linear elastic behavior of the involved media under the plane-stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero). The finite element procedure consists of two stages: (i) compacting under the weight of the rock successions (gravity loading), the deformation model reaches a stable equilibrium; (ii) the co-seismic stage simulates the released stresses through a distributed slip along the active fault. To constrain the model solutions, we exploit the DInSAR deformation velocity maps retrieved from satellite data acquired by old and new generation sensors, such as ENVISAT, RADARSAT-2 and SENTINEL-1A, encompassing the studied earthquakes. More specifically, we first generate several 2D forward mechanical models, then compare these with the recorded ground deformation fields in order to select the best boundary settings and parameters. Finally

  12. Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model

    NASA Astrophysics Data System (ADS)

    Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza

    2017-08-01

    Producing accurate seismic hazard maps and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinear and chaotic behavior of data where the number of data points is limited. In this paper, the earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Regarding our achievements, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.

  13. The Mw 7.7 Bhuj earthquake: Global lessons for earthquake hazard in intra-plate regions

    USGS Publications Warehouse

    Schweig, E.; Gomberg, J.; Petersen, M.; Ellis, M.; Bodin, P.; Mayrose, L.; Rastogi, B.K.

    2003-01-01

    The Mw 7.7 Bhuj earthquake occurred in the Kachchh District of the State of Gujarat, India on 26 January 2001, and was one of the most damaging intraplate earthquakes ever recorded. This earthquake is in many ways similar to the three great New Madrid earthquakes that occurred in the central United States in 1811-1812. An Indo-US team is studying the similarities and differences between these sequences in order to learn lessons for earthquake hazard in intraplate regions. Herein we present some preliminary conclusions from that study. Both the Kutch and New Madrid regions have rift-type geotectonic settings. In both regions the strain rates are of the order of 10^-9/yr, and attenuation of seismic waves, as inferred from observations of intensity and liquefaction, is low. These strain rates predict recurrence intervals for Bhuj- or New Madrid-sized earthquakes of several thousand years or more. In contrast, intervals estimated from paleoseismic studies and from other independent data are significantly shorter, probably hundreds of years. Taken together, these observations may suggest that earthquakes relax high ambient stresses that are locally concentrated by rheologic heterogeneities, rather than stresses accumulated by plate-tectonic loading. The latter model generally underlies the basic assumptions made in earthquake hazard assessment: that the long-term average rate of energy released by earthquakes is determined by the tectonic loading rate, which thus implies an inherent average periodicity of earthquake occurrence. Interpreting the observations in terms of the former model may therefore require re-examining the basic assumptions of hazard assessment.

  14. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  15. A catalog of coseismic uniform-slip models of geodetically unstudied earthquakes along the Sumatran plate boundary

    NASA Astrophysics Data System (ADS)

    Wong, N. Z.; Feng, L.; Hill, E.

    2017-12-01

    The Sumatran plate boundary has experienced five Mw > 8 great earthquakes, a handful of Mw 7-8 earthquakes and numerous small to moderate events since the 2004 Mw 9.2 Sumatra-Andaman earthquake. The geodetic studies of these moderate earthquakes have mostly been passed over in favour of larger events. We therefore in this study present a catalog of coseismic uniform-slip models of one Mw 7.2 earthquake and 17 Mw 5.9-6.9 events that have mostly gone geodetically unstudied. These events occurred close to various continuous stations within the Sumatran GPS Array (SuGAr), allowing the network to record their surface deformation. However, due to their relatively small magnitudes, most of these moderate earthquakes were recorded by only 1-4 GPS stations. With the limited observations per event, we first constrain most of the model parameters (e.g. location, slip, patch size, strike, dip, rake) using various external sources (e.g., the ANSS catalog, gCMT, Slab1.0, and empirical relationships). We then use grid-search forward models to explore a range of some of these parameters (geographic position for all events and additionally depth for some events). Our results indicate the gCMT centroid locations in the Sumatran subduction zone might be biased towards the west for smaller events, while ANSS epicentres might be biased towards the east. The more accurate locations of these events are potentially useful in understanding the nature of various structures along the megathrust, particularly the persistent rupture barriers.

  16. Forecasting Induced Seismicity Using Saltwater Disposal Data and a Hydromechanical Earthquake Nucleation Model

    NASA Astrophysics Data System (ADS)

    Norbeck, J. H.; Rubinstein, J. L.

    2017-12-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.

  17. Self-organized criticality occurs in non-conservative neuronal networks during `up' states

    NASA Astrophysics Data System (ADS)

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-10-01

    During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states-such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex-is self-organized criticality (SOC). SOC is characterized by activity `avalanches' with a branching parameter near unity and size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic `leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states and that up states are critical and down states are subcritical.
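
    The avalanche statistics quoted above (branching parameter near unity and a size distribution with exponent about -3/2) are those of a critical branching process. The sketch below is not the authors' integrate-and-fire network; it is a minimal branching-process simulation, included only to illustrate the heavy-tailed avalanche-size statistics that define SOC in this sense.

    ```python
    import numpy as np

    def branching_avalanche(sigma=1.0, max_size=10_000, rng=None):
        """Size of one avalanche of a branching process in which each active
        unit activates a Poisson(sigma) number of offspring.  sigma = 1 is the
        critical case; sizes then follow roughly P(s) ~ s**(-3/2) up to the
        cutoff imposed by max_size."""
        rng = rng or np.random.default_rng()
        active, size = 1, 0
        while active and size < max_size:
            size += active
            active = rng.poisson(sigma * active)   # total offspring of this generation
        return size

    rng = np.random.default_rng(42)
    sizes = np.array([branching_avalanche(rng=rng) for _ in range(20_000)])
    # crude tail check: the fraction of avalanches larger than s should fall off ~ s**(-1/2)
    for s in (10, 100, 1000):
        print(s, (sizes > s).mean())
    ```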

  18. Sensing the earthquake

    NASA Astrophysics Data System (ADS)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  19. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of the fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
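
    The power-law smoothing step described above can be sketched as spreading each simulated epicenter's rate over the test grid with a kernel that decays with epicentral distance; an ETAS-like choice is (r^2 + d^2)^(-q). The snippet below implements that redistribution on a toy grid; the kernel form, parameters and normalization are illustrative and may differ from those used in the study.

    ```python
    import numpy as np

    def smoothed_rate_map(event_xy, grid_xy, d=5.0, q=1.5):
        """Distribute one unit of rate per simulated epicenter over grid cells
        with a power-law kernel ~ (r**2 + d**2)**(-q) of epicentral distance r
        (km), normalised so each event contributes the same total rate."""
        events = np.asarray(event_xy, dtype=float)   # (n_events, 2)
        grid = np.asarray(grid_xy, dtype=float)      # (n_cells, 2)
        rate = np.zeros(grid.shape[0])
        for ex, ey in events:
            r2 = (grid[:, 0] - ex) ** 2 + (grid[:, 1] - ey) ** 2
            kernel = (r2 + d ** 2) ** (-q)
            rate += kernel / kernel.sum()            # each event adds unit total rate
        return rate

    # toy grid (10 km spacing) and two simulated epicenters
    gx, gy = np.meshgrid(np.arange(0, 100, 10.0), np.arange(0, 100, 10.0))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(smoothed_rate_map([[20.0, 20.0], [70.0, 50.0]], grid).round(3))
    ```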

  20. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
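
    The idea of shifting volunteer effort toward the most informative training examples can be illustrated with a generic uncertainty-sampling active-learning round; this is a sketch of the general technique, not the authors' system, and the feature vectors below are synthetic stand-ins for per-tile image descriptors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_round(model, X_labeled, y_labeled, X_pool, batch_size=10):
    """One active-learning round: train on the current labels, then pick the pool
    items the classifier is least certain about for the next volunteer batch."""
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = 1.0 - np.abs(proba - 0.5) * 2.0   # 1 at p=0.5, 0 at p=0 or 1
    query_idx = np.argsort(uncertainty)[-batch_size:]
    return query_idx

# Toy usage with synthetic features standing in for damaged/undamaged tile descriptors.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(50, 8)), rng.integers(0, 2, 50)
X_pool = rng.normal(size=(500, 8))
idx = uncertainty_sampling_round(LogisticRegression(max_iter=200), X_lab, y_lab, X_pool)
```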

  1. Issues on the Japanese Earthquake Hazard Evaluation

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Fukushima, Y.; Sagiya, T.

    2013-12-01

    The 2011 Great East Japan Earthquake forced the policy of countermeasures against earthquake disasters, including earthquake hazard evaluations, to be changed in Japan. Before March 11, Japanese earthquake hazard evaluation was based on the history of earthquakes that repeatedly occur and on the characteristic earthquake model. The source region of an earthquake was identified and its occurrence history was revealed. Then the conditional probability was estimated using the renewal model. However, the Japanese authorities changed the policy after the megathrust earthquake in 2011 such that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. According to this policy, three important reports were issued during these two years. First, the Central Disaster Management Council issued a new estimate of damages by a hypothetical Mw9 earthquake along the Nankai trough during 2011 and 2012. The model predicts a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most areas of Southwest Japan as the maximum. Next, the Earthquake Research Council revised the long-term earthquake hazard evaluation of earthquakes along the Nankai trough in May 2013, which discarded the characteristic earthquake model and put much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, with the current knowledge, it is hard to predict the occurrence of large earthquakes along the Nankai trough using present techniques, given the diversity of earthquake phenomena. These reports created sensations throughout the country, and local governments are struggling to prepare countermeasures. Near their ends, these reports commented on the large uncertainty in their evaluations, but have these messages been transmitted properly to the public? Earthquake scientists, including authors, are involved in

  2. The 1906 earthquake and a century of progress in understanding earthquakes and their hazards

    USGS Publications Warehouse

    Zoback, M.L.

    2006-01-01

    The 18 April 1906 San Francisco earthquake killed nearly 3000 people and left 225,000 residents homeless. Three days after the earthquake, an eight-person Earthquake Investigation Commission composed of 25 geologists, seismologists, geodesists, biologists and engineers, as well as some 300 others, started work under the supervision of Andrew Lawson to collect and document physical phenomena related to the quake. On 31 May 1906, the commission published a preliminary 17-page report titled "The Report of the State Earthquake Investigation Commission". The report included the bulk of the geological and morphological descriptions of the faulting, detailed reports on shaking intensity, as well as an impressive atlas of 40 oversized maps and folios. Nearly 100 years after its publication, the Commission Report remains a model for post-earthquake investigations. Because the diverse data sets were so complete and carefully documented, researchers continue to apply modern analysis techniques to learn from the 1906 earthquake. While the earthquake marked a seminal event in the history of California, it served as impetus for the birth of modern earthquake science in the United States.

  3. Magnitude and location of historical earthquakes in Japan and implications for the 1855 Ansei Edo earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    2005-01-01

    Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42 MJMA - 0.00887 Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^(1/2), and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting-plate intensity attenuation model where intensity is equal to -8.33 + 2.19 MJMA - 0.00550 Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting-plate model. Using the subducting-plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or a Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
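
    A direct transcription of the two attenuation relations quoted above, assuming the "log" term is a base-10 logarithm (a common convention in such relations, stated here as our assumption):

```python
import numpy as np

def ijma_predicted(mjma, delta_km, depth_km, crustal=True):
    """Predicted JMA intensity from the attenuation models quoted in the abstract.
    delta_km: epicentral distance (km); depth_km: focal depth (km)."""
    dh = np.sqrt(delta_km**2 + depth_km**2)   # slant (hypocentral) distance
    if crustal:   # shallow crustal earthquakes on Honshu
        return -1.89 + 1.42 * mjma - 0.00887 * dh - 1.66 * np.log10(dh)
    else:         # subducting-plate model
        return -8.33 + 2.19 * mjma - 0.00550 * dh - 1.14 * np.log10(dh)

# Example: intensity expected 25 km from an MJMA 7.9 interface event at 30 km depth.
print(round(ijma_predicted(7.9, 25.0, 30.0, crustal=False), 1))
```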

  4. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.

  5. A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.

    2014-12-01

    A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov () and McGarr () linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. (). In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
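
    A minimal sketch of the Monte Carlo step described above, under illustrative assumptions: magnitudes are drawn from a truncated Gutenberg-Richter (Pareto-type) distribution and accumulated until an available moment budget is exhausted. The parameter values and the moment-magnitude conversion M0 = 10^(1.5 Mw + 9.1) N·m are standard conventions, not values taken from the paper.

```python
import numpy as np

def synthetic_catalog(moment_budget_nm, b=1.0, m_min=1.5, m_max=5.5, seed=0):
    """Draw magnitudes from a doubly truncated Gutenberg-Richter distribution until
    the summed seismic moment reaches the available moment budget (in N*m)."""
    rng = np.random.default_rng(seed)
    mags, total = [], 0.0
    beta = b * np.log(10.0)
    while total < moment_budget_nm:
        # Inverse-CDF sampling of the truncated exponential magnitude law.
        u = rng.random()
        m = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta
        m0 = 10.0 ** (1.5 * m + 9.1)      # seismic moment (N*m) from magnitude
        mags.append(m)
        total += m0
    return np.array(mags)

# Toy usage: a catalog whose summed moment matches a 1e16 N*m budget.
catalog = synthetic_catalog(moment_budget_nm=1e16)
```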

  6. Quasi-dynamic earthquake fault systems with rheological heterogeneity

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.

    2009-12-01

    Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the seismicity they describe. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of the physical parameters of the two approaches.

  7. 3-D velocity structure model for long-period ground motion simulation of the hypothetical Nankai Earthquake

    NASA Astrophysics Data System (ADS)

    Kagawa, T.; Petukhin, A.; Koketsu, K.; Miyake, H.; Murotani, S.; Tsurugi, M.

    2010-12-01

    A three-dimensional velocity structure model of southwest Japan is provided to simulate long-period ground motions due to hypothetical subduction earthquakes. The model is constructed from numerous physical explorations conducted in land and offshore areas and from observational studies of natural earthquakes. All available information is incorporated to constrain the crustal and sedimentary structure. Figure 1 shows an example of a cross section with P-wave velocities. The model has been revised through a number of simulations of small to moderate earthquakes so as to achieve good agreement with observed arrival times, amplitudes, and waveforms, including surface waves. Figure 2 shows a comparison between observed (dashed line) and simulated (solid line) waveforms. Low-velocity layers have been added on top of the seismological basement to reproduce the observed records. The thickness of the layer has been adjusted through iterative analysis. The final result is found to agree well with the results from other physical explorations, e.g. gravity anomalies. We are planning to make long-period (about 2 to 10 s or longer) simulations of ground motion due to the hypothetical Nankai Earthquake with the 3-D velocity structure model. As a first step, we will simulate the observed ground motions of the latest event, which occurred in 1946, to check the source model and the newly developed velocity structure model. This project is partly supported by the Integrated Research Project for Long-Period Ground Motion Hazard Maps of the Ministry of Education, Culture, Sports, Science and Technology (MEXT). The ground motion data used in this study were provided by the National Research Institute for Earth Science and Disaster Prevention (NIED). (Figure 1: an example of a cross section with P-wave velocities. Figure 2: observed (dashed line) and simulated (solid line) waveforms due to a small earthquake.)

  8. Earthquake triggering in the peri-adriatic regions induced by stress diffusion: insights from numerical modelling

    NASA Astrophysics Data System (ADS)

    D'Onza, F.; Viti, M.; Mantovani, E.; Albarello, D.

    2003-04-01

    Significant evidence suggests that major earthquakes in the peri-Adriatic Balkan zones may influence the seismicity pattern in the Italian area. In particular, a seismic correlation has been recognized between major earthquakes in the southern Dinaric belt and those in southern Italy. It is widely recognized that such kinds of regularities may be an effect of postseismic relaxation triggered by strong earthquakes. In this note, we describe an attempt to investigate quantitatively, by numerical modelling, the reliability of the above interpretation. In particular, we have explored the possibility of explaining the last example of the presumed correlation (triggering event: April 1979 Montenegro earthquake, MS=6.7; induced event: November 1980 Irpinia event, MS=6.9) as an effect of postseismic relaxation through the Adriatic plate. The triggering event is modelled by imposing a sudden dislocation on the Montenegro seismic fault, taking into account the fault parameters (length and average slip) recognized from seismological observations. The perturbation induced by the seismic source in the neighbouring lithosphere is obtained from the Elsasser diffusion equation for an elastic lithosphere coupled with a viscous asthenosphere. The results obtained by numerical experiments indicate that the strain regime induced by the Montenegro event in southern Italy is compatible with the tensional strain field observed in this last zone, that the amplitude of the induced strain is significantly higher than that induced by Earth tides, and that this amplitude is comparable with the strain perturbation recognized as responsible for earthquake triggering. The time delay between the triggering and the induced

  9. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    USGS Publications Warehouse

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
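
    A minimal sketch of the rigid-block (Newmark) sliding analysis that such programs implement, assuming a constant critical (yield) acceleration: the portion of ground acceleration exceeding the critical value is double-integrated, with the sliding velocity not allowed to become negative. The synthetic record and critical acceleration below are placeholders, and downslope asymmetry is ignored for brevity.

```python
import numpy as np

def newmark_displacement(accel_g, dt, crit_accel_g):
    """Rigid-block sliding displacement (m) from an acceleration-time history.
    accel_g: ground acceleration in g; dt: time step (s); crit_accel_g: yield acceleration in g."""
    g = 9.81
    vel, disp = 0.0, 0.0
    for a in accel_g:
        # The block slides only while ground acceleration exceeds the yield value,
        # or while it is still moving (then it decelerates at the yield acceleration).
        if vel > 0.0 or a > crit_accel_g:
            rel_acc = (a - crit_accel_g) * g
            vel = max(vel + rel_acc * dt, 0.0)
            disp += vel * dt
        # Otherwise the block moves with the ground (no relative displacement).
    return disp

# Placeholder record: ten seconds of decaying sinusoidal shaking sampled at 100 Hz.
t = np.arange(0.0, 10.0, 0.01)
record = 0.3 * np.sin(2.0 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
print(newmark_displacement(record, 0.01, crit_accel_g=0.1))
```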

  10. Pre-earthquake magnetic pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J.; Freund, F.

    2015-08-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.

  11. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore.

  12. Geodetic Imaging of the Earthquake Cycle

    NASA Astrophysics Data System (ADS)

    Tong, Xiaopeng

    In this dissertation I used Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data to recover crustal deformation caused by earthquake cycle processes. The studied areas span three different types of tectonic boundaries: a continental thrust earthquake (M7.9 Wenchuan, China) at the eastern margin of the Tibet plateau, a mega-thrust earthquake (M8.8 Maule, Chile) at the Chile subduction zone, and the interseismic deformation of the San Andreas Fault System (SAFS). A new L-band radar onboard the Japanese satellite ALOS allows us to image high-resolution surface deformation in vegetated areas, which is not possible with older C-band radar systems. In particular, both the Wenchuan and Maule InSAR analyses involved L-band ScanSAR interferometry, which had not been attempted before. I integrated a large InSAR dataset with dense GPS networks over the entire SAFS. The integration approach features combining the long-wavelength deformation from GPS with the short-wavelength deformation from InSAR through a physical model. The recovered fine-scale surface deformation leads us to better understand the underlying earthquake cycle processes. The geodetic slip inversion reveals that the fault slip of the Wenchuan earthquake is maximum near the surface and decreases with depth. The coseismic slip model of the Maule earthquake constrains the down-dip extent of the fault slip to be at 45 km depth, similar to the Moho depth. I inverted for the slip rate on 51 major faults of the SAFS using Green's functions for a 3-dimensional earthquake cycle model that includes kinematically prescribed slip events for the past earthquakes since the year 1000. A 60 km thick plate model with effective viscosity of 10^19 Pa·s is preferred based on the geodetic and geological observations. The slip rates recovered from the plate models are compared to the half-space model. The InSAR observation reveals that the creeping section of the SAFS is partially locked. This high

  13. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of

  14. Purposes and methods of scoring earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2010-12-01

    Studies of earthquake prediction or forecasting serve two kinds of purposes: one is to give a systematic estimate of earthquake risk in a particular region and period in order to advise governments and enterprises on disaster reduction; the other is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first purpose, a complete score is necessary, while for the latter a partial score, which evaluates whether the forecasts or predictions have some advantage over a well-known reference model, is needed. This study reviews different scoring methods for evaluating the performance of earthquake predictions and forecasts. In particular, the recently developed gambling scoring method shows its capacity for finding good points in an earthquake prediction algorithm or model that are not captured by a reference model, even if its overall performance is no better than that of the reference model.
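
    A minimal sketch of a gambling-style partial score, under our assumption (not necessarily the paper's exact rule) that a forecaster wagers one point on each space-time bin it flags: if a target event occurs in a flagged bin the forecaster wins (1 - p0)/p0 points, where p0 is the reference-model probability for that bin; otherwise it loses the point wagered.

```python
def gambling_score(bets, outcomes, p_ref):
    """bets: whether the forecaster bet on each bin; outcomes: whether an event occurred;
    p_ref: reference-model probability of an event in each bin."""
    score = 0.0
    for bet, hit, p0 in zip(bets, outcomes, p_ref):
        if not bet:
            continue
        score += (1.0 - p0) / p0 if hit else -1.0
    return score

# Toy usage: three bins bet on, with reference probabilities 0.1, 0.5 and 0.2.
print(gambling_score([True, True, True], [True, False, False], [0.1, 0.5, 0.2]))
```

    A positive total indicates the forecaster outperformed blind reliance on the reference model over the bins it chose to bet on, which is the "partial" character referred to above.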

  15. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high death tolls, numbers of casualties, and numbers of survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people with serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.) on casualties. The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
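
    A minimal illustration of the semi-empirical chain described above (occupants per building class, damage probability per class at a given intensity, fatality rate given damage). The building-class shares, damage probabilities and fatality rates below are invented placeholders, not CEQID values.

```python
def expected_fatalities(population, class_share, p_collapse, fatality_rate):
    """Sum over building classes: occupants in the class x probability of heavy
    damage/collapse at the given intensity x fatality rate given collapse."""
    total = 0.0
    for cls in class_share:
        total += population * class_share[cls] * p_collapse[cls] * fatality_rate[cls]
    return total

share     = {"adobe": 0.3, "masonry": 0.5, "rc_frame": 0.2}      # occupancy shares (placeholder)
collapse  = {"adobe": 0.20, "masonry": 0.10, "rc_frame": 0.02}   # P(collapse | intensity) (placeholder)
lethality = {"adobe": 0.06, "masonry": 0.08, "rc_frame": 0.15}   # fatality rate given collapse (placeholder)
print(expected_fatalities(100_000, share, collapse, lethality))
```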

  16. Constraining Source Locations of Shallow Subduction Megathrust Earthquakes in 1-D and 3-D Velocity Models - A Case Study of the 2002 Mw=6.4 Osa Earthquake, Costa Rica

    NASA Astrophysics Data System (ADS)

    Grevemeyer, I.; Arroyo, I. G.

    2015-12-01

    Earthquake source locations are generally routinely constrained using a global 1-D Earth model. However, the source location might be associated with large uncertainties. This is definitively the case for earthquakes occurring at active continental margins, where thin oceanic crust subducts below thick continental crust and hence large lateral changes in crustal thickness occur as a function of distance to the deep-sea trench. Here, we conducted a case study of the 2002 Mw 6.4 Osa thrust earthquake in Costa Rica, which was followed by an aftershock sequence. Initial relocations indicated that the main shock occurred farther trenchward than most large earthquakes along the Middle America Trench off central Costa Rica. The earthquake sequence occurred while a temporary network of ocean-bottom hydrophones and land stations 80 km to the northwest was deployed. By adding readings from permanent Costa Rican stations, we obtain unusually good P-wave coverage of a large subduction zone earthquake. We relocated this catalog using a nonlinear probabilistic approach with a 1-D and two 3-D P-wave velocity models. The 3-D models were derived either from 3-D tomography based on onshore stations or from an a priori model based on seismic refraction data. All epicentres occurred close to the trench axis, but depth estimates vary by several tens of kilometres. Based on the epicentres and constraints from seismic reflection data, the main shock occurred 25 km from the trench and probably along the plate interface at 5-10 km depth. The source location that agreed best with the geology was based on the 3-D velocity model derived from a priori data. Aftershocks propagated downdip to the area of a 1999 Mw 6.9 sequence and partially overlapped it. The results indicate that underthrusting of the young and buoyant Cocos Ridge has created conditions for interplate seismogenesis shallower and closer to the trench axis than elsewhere along the central Costa Rica margin.

  17. A New Simplified Source Model to Explain Strong Ground Motions from a Mega-Thrust Earthquake - Application to the 2011 Tohoku Earthquake (Mw9.0) -

    NASA Astrophysics Data System (ADS)

    Nozu, A.

    2013-12-01

    A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which itself is a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and is assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of the subevent can be taken into account because its corner frequency, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered as an
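
    A minimal sketch of the omega-square source spectrum assumed for each subevent, in its standard form |S(f)| = M0 / (1 + (f/fc)^2); the moment and corner frequency below are placeholders, and the combination with path and site terms described in the abstract is indicated only as a comment.

```python
import numpy as np

def omega_square_spectrum(freq_hz, m0_nm, fc_hz):
    """Moment-rate amplitude spectrum of a subevent following the omega-square model."""
    return m0_nm / (1.0 + (freq_hz / fc_hz) ** 2)

f = np.logspace(-2, 1, 200)                          # 0.01-10 Hz
source = omega_square_spectrum(f, m0_nm=1e20, fc_hz=0.1)
# The Fourier amplitude at a site would then be source * path_effect(f) * site_amplification(f),
# with the path and site terms supplied separately (not shown here).
```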

  18. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; the consideration of such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
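
    A minimal sketch of the magnitude-frequency model referred to above (the Gutenberg-Richter law multiplied by a detection rate function). Following the Ogata & Katsura formulation, the detection rate is taken here as a cumulative normal in magnitude, so the parameter mu is the magnitude at which 50% of events are detected; Mc can then be read off as the magnitude at which detection exceeds a chosen level. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def detected_rate(m, a=5.0, b=1.0, mu=1.2, sigma=0.3):
    """Expected rate of *detected* events of magnitude m:
    Gutenberg-Richter occurrence rate times detection probability."""
    gr = 10.0 ** (a - b * m)               # Gutenberg-Richter occurrence rate
    q = norm.cdf(m, loc=mu, scale=sigma)   # detection probability (50% at m = mu)
    return gr * q

def completeness_magnitude(mu=1.2, sigma=0.3, level=0.99):
    """Magnitude above which the detection probability exceeds `level`."""
    return norm.ppf(level, loc=mu, scale=sigma)

print(completeness_magnitude())
```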

  19. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  20. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666

  1. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  2. Numerical Modeling of Initial Slip and Poroelastic Effects of the 2012 Costa Rica Earthquake Using GPS Data

    NASA Astrophysics Data System (ADS)

    McCormack, K. A.; Hesse, M. A.; Stadler, G.

    2015-12-01

    Remote sensing and geodetic measurements are providing a new wealth of spatially distributed, time-series data that have the ability to improve our understanding of co-seismic rupture and post-seismic processes in subduction zones. We formulate a Bayesian inverse problem to infer the slip distribution on the plate interface using an elastic finite element model and GPS surface deformation measurements. We present an application to the co-seismic displacement during the 2012 earthquake on the Nicoya Peninsula in Costa Rica, which is uniquely positioned close to the Middle America Trench and directly over the seismogenic zone of the plate interface. The results of our inversion are then used as an initial condition in a coupled poroelastic forward model to investigate the role of poroelastic effects on post-seismic deformation and stress transfer. From this study we identify a horseshoe-shaped rupture area with a maximum slip of approximately 2.5 meters surrounding a locked patch that is likely to release stress in the future. We model the co-seismic pore pressure change as well as the pressure evolution and resulting deformation in the months after the earthquake. The results of the forward model indicate that earthquake-induced pore pressure changes dissipate quickly near the surface, resulting in relaxation of the surface in the seven to ten days following the earthquake. Near the subducting slab interface, pore pressure changes are an order of magnitude larger and may persist for many months after the earthquake.
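
    A minimal sketch of a linear-Gaussian Bayesian slip inversion of the kind described (surface displacements d related to slip m through elastic Green's functions G). The closed-form posterior below assumes Gaussian data and prior covariances and is illustrative only, not the authors' finite-element formulation; the matrices and values are placeholders.

```python
import numpy as np

def gaussian_posterior(G, d, Cd, Cm, m_prior):
    """Posterior mean and covariance for d = G m + noise, with Gaussian prior
    N(m_prior, Cm) on the slip and Gaussian data errors N(0, Cd)."""
    Cd_inv = np.linalg.inv(Cd)
    Cm_inv = np.linalg.inv(Cm)
    C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    m_post = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ m_prior)
    return m_post, C_post

# Toy example: 6 GPS displacement components, 3 slip patches.
rng = np.random.default_rng(1)
G = rng.normal(size=(6, 3))                 # placeholder Green's functions
m_true = np.array([2.5, 1.0, 0.0])          # metres of slip (placeholder)
d = G @ m_true + rng.normal(scale=0.01, size=6)
m_post, _ = gaussian_posterior(G, d, 0.01**2 * np.eye(6), np.eye(3), np.zeros(3))
```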

  3. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  4. Strong Ground Motion Analysis and Afterslip Modeling of Earthquakes near Mendocino Triple Junction

    NASA Astrophysics Data System (ADS)

    Gong, J.; McGuire, J. J.

    2017-12-01

    The Mendocino Triple Junction (MTJ) is one of the most seismically active regions in North America in response to the ongoing motions between the North America, Pacific and Gorda plates. Earthquakes near the MTJ come from multiple types of faults due to the interacting boundaries between the three plates and the strong internal deformation within them. Understanding the stress levels that drive earthquake rupture on the various types of faults and estimating the locking state of the subduction interface are especially important for earthquake hazard assessment. However, due to the lack of direct offshore seismic and geodetic records, only a few earthquakes' rupture processes have been well studied and the locking state of the subducted slab is not well constrained. In this study we first use the second moment inversion method to study the rupture process of the January 28, 2015 Mw 5.7 strike-slip earthquake on the Mendocino transform fault using strong ground motion records from the Cascadia Initiative community experiment as well as onshore seismic networks. We estimate the rupture dimensions to be 6 km by 3 km and a stress drop of 7 MPa on the transform fault. Next we investigate the frictional locking state on the subduction interface through afterslip simulation based on coseismic rupture models of this 2015 earthquake and a Mw 6.5 intraplate earthquake inside the Gorda plate, whose slip distribution was inverted using the onshore geodetic network in a previous study. Different depths at which velocity-strengthening frictional properties begin downdip of the locked zone are used to simulate afterslip scenarios and predict the corresponding surface deformation (GPS) movements onshore. Our simulations indicate that the locking depth on the slab surface is at least 14 km, which confirms that the next M8 earthquake rupture will likely reach the coastline and strong shaking should be expected near the coast.

  5. Long-range dependence in earthquake-moment release and implications for earthquake occurrence probability.

    PubMed

    Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco

    2018-03-28

    Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.
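
    A minimal sketch of Hurst's rescaled-range (R/S) analysis applied to a series such as cumulative seismic moment increments: the series is split into windows, R/S is computed per window, and H is estimated as the slope of log(R/S) against log(window length). This is the textbook procedure, not the authors' exact implementation, and the window sizes are illustrative.

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of series x by rescaled-range analysis."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())      # cumulative deviation from the window mean
            r = dev.max() - dev.min()          # range of the cumulative deviation
            s = w.std(ddof=1)                  # standard deviation of the window
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)    # H is the slope of the log-log fit
    return slope

print(hurst_rs(np.random.default_rng(0).normal(size=1024)))  # ~0.5 for white noise
```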

  6. Field Investigations and a Tsunami Modeling for the 1766 Marmara Sea Earthquake, Turkey

    NASA Astrophysics Data System (ADS)

    Aykurt Vardar, H.; Altinok, Y.; Alpar, B.; Unlu, S.; Yalciner, A. C.

    2016-12-01

    Turkey is located on one of the world's most hazardous earthquake zones. The northern branch of the North Anatolian fault beneath the Sea of Marmara, where the population is most concentrated, is the most active fault branch, at least since the late Pliocene. The Sea of Marmara region has been affected by many large tsunamigenic earthquakes; the most destructive ones are the 549, 553, 557, 740, 989, 1332, 1343, 1509, 1766, 1894, 1912 and 1999 events. In order to understand and determine the tsunami potential and its possible effects along the coasts of this inland sea, detailed documentary, geophysical and numerical modelling studies are needed on past earthquakes and their associated tsunamis, whose effects are presently unknown. On the northern coast of the Sea of Marmara region, the Kucukcekmece Lagoon has a high potential to trap and preserve tsunami deposits. Within the scope of this study, the lithological content, composition and sources of organic matter in the lagoon's bottom sediments were studied along a 4.63 m-long piston core recovered from the SE margin of the lagoon. The sedimentary composition and possible sources of the organic matter along the core were analysed, and the results were correlated with historical events on the basis of dating results. Finally, a tsunami scenario was tested for the May 22nd 1766 Marmara Sea Earthquake by using a widely used tsunami simulation model called NAMIDANCE. The results show that the candidate tsunami deposits at depths of 180-200 cm below the lagoon's bottom were related to the 1766 (May) earthquake. This work was supported by the Scientific Research Projects Coordination Unit of Istanbul University (Project 6384) and by the EU project TRANSFER for coring.

  7. Security Implications of Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Jha, B.; Rao, A.

    2016-12-01

    The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing and magnitude of an earthquake by activating a fault via injection and production of fluids into the subsurface. Here, we investigate the relationship of the magnitude and trigger time of an induced earthquake to the well-to-fault distance. The relationship between magnitude and distance is important to determine the farthest striking distance from which one could intentionally activate a fault to cause a certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. Evolution of the maximum Coulomb stress on the fault provides insights into the effect of the distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to assess the feasibility of weaponizing earthquakes and to develop effective defense mechanisms against such attacks.
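
    A minimal expression of the Coulomb failure criterion that underlies the "Coulomb destabilization" of a fault by injection, under the common convention ΔCFS = Δτ + μ(Δσn + ΔP) with tensile normal stress positive; the friction coefficient and stress values below are placeholders, not values from the study.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, d_pore_mpa, mu=0.6):
    """Coulomb failure stress change (MPa). Positive values move the fault toward failure.
    Convention: tensile normal stress positive; a pore-pressure increase unclamps the fault."""
    return d_shear_mpa + mu * (d_normal_mpa + d_pore_mpa)

# Example: no shear or normal stress change, 1 MPa pore-pressure rise from injection.
print(coulomb_stress_change(0.0, 0.0, 1.0))   # 0.6 MPa toward failure
```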

  8. Source mechanism inversion and ground motion modeling of induced earthquakes in Kuwait - A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.

    2016-12-01

    The increasing seismic activity in the regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. Source mechanism and triggering stress of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs, and predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes - Mw4.5 in 03/21/2015 and Mw4.1 in 08/18/2015 - have been recorded by both the Incorporated Research Institutions for Seismology (IRIS) and KNSN, and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations close to the oil/gas fields in Kuwait (see the uploaded image). The earthquakes are generally small (Mw < 5) and are shallow with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the location and source mechanism of these local earthquakes, with the uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated based on the source mechanisms results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, these events, occurring in the reservoirs, are very shallow with focal depths less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high

  9. SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records

    USGS Publications Warehouse

    Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.

    2013-01-01

    This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.

  10. Motif finding in DNA sequences based on skipping nonconserved positions in background Markov chains.

    PubMed

    Zhao, Xiaoyan; Sze, Sing-Hoi

    2011-05-01

    One strategy to identify transcription factor binding sites is through motif finding in upstream DNA sequences of potentially co-regulated genes. Despite extensive efforts, none of the existing algorithms perform very well. We consider a string representation that allows arbitrary ignored positions within the nonconserved portion of single motifs, and use O(2^l) Markov chains to model the background distributions of motifs of length l while skipping these positions within each Markov chain. By focusing initially on positions that have fixed nucleotides to define core occurrences, we develop an algorithm to identify motifs of moderate lengths. We compare the performance of our algorithm to other motif finding algorithms on a few benchmark data sets, and show that significant improvement in accuracy can be obtained when the sites are sufficiently conserved within a given sample, while comparable performance is obtained when the site conservation rate is low. A software program (PosMotif) and detailed results are available online at http://faculty.cse.tamu.edu/shsze/posmotif.

  11. Flexible kinematic earthquake rupture inversion of tele-seismic waveforms: Application to the 2013 Balochistan, Pakistan earthquake

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.

    2017-12-01

    Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, the kinematic rupture models for the same earthquake are often different from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduce the uncertainty of the Green's function into tele-seismic waveform inversion, and show that a stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the unsolved problems in the inversion arises from the modeling error originating from the uncertainty of the fault-model setting. The Green's function near the nodal plane of the focal mechanism is known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by the modeling error originating from the uncertainty of the fault model. We propose a new method accounting for the complexity in the fault geometry by additionally solving for the focal mechanism on each space knot. Since a finite source inversion becomes unstable as the flexibility of the model increases, we try to estimate a stable spatiotemporal distribution of focal mechanisms in the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. On the other hand, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane. These results show that the modeling error caused by simplifying the

  12. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
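
    For orientation, the propagation speed of a tsunami in the open ocean follows from linear long-wave theory, c = sqrt(g h); the run-up stage requires the nonlinear theory cited above. A minimal sketch with illustrative depths:

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def long_wave_speed(depth_m):
        """Linear long-wave (shallow-water) phase speed c = sqrt(g h)."""
        return math.sqrt(G * depth_m)

    def travel_time_hours(distance_km, mean_depth_m):
        return distance_km * 1000.0 / long_wave_speed(mean_depth_m) / 3600.0

    # e.g. 4000 m of water gives c ~ 198 m/s (roughly 710 km/h)
    c = long_wave_speed(4000.0)
    ```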

  13. The costs and benefits of reconstruction options in Nepal using the CEDIM FDA modelled and empirical analysis following the 2015 earthquake

    NASA Astrophysics Data System (ADS)

    Daniell, James; Schaefer, Andreas; Wenzel, Friedemann; Khazai, Bijan; Girard, Trevor; Kunz-Plapp, Tina; Kunz, Michael; Muehr, Bernhard

    2016-04-01

    Over the days following the 2015 Nepal earthquake, rapid loss estimates of deaths, economic loss and reconstruction cost were undertaken by our research group in conjunction with the World Bank. This modelling relied on historic losses from other Nepal earthquakes as well as detailed socioeconomic data and earthquake loss information via CATDAT. The modelled results were very close to the final death toll and reconstruction cost for the 2015 earthquake of around 9000 deaths and a direct building loss of ca. 3 billion (a). The process undertaken to produce these loss estimates is described, along with its potential for use in analysing reconstruction costs from future Nepal earthquakes in rapid time post-event. The reconstruction cost and death toll model is then used as the base model for examining the effect of spending money on earthquake retrofitting of buildings versus complete reconstruction of buildings. This is undertaken for future events using empirical statistics from past events along with further analytical modelling. The effect of investment versus the timing of a future event is also explored. Preliminary low-cost options (b), along the lines of other country studies for retrofitting (ca. 100), are examined versus the option of different building typologies in Nepal as well as investment in various sectors of construction. The effect of public vs. private capital expenditure post-earthquake is also explored as part of this analysis, as well as spending on other components outside of earthquakes. a) http://www.scientificamerican.com/article/experts-calculate-new-loss-predictions-for-nepal-quake/ b) http://www.aees.org.au/wp-content/uploads/2015/06/23-Daniell.pdf
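
    The retrofit-versus-reconstruction comparison described above reduces, in its simplest form, to weighing an up-front retrofit cost against the expected avoided loss. The sketch below illustrates only that arithmetic, with entirely hypothetical probabilities and costs rather than the study's figures.

    ```python
    def expected_net_benefit(p_event, loss_unretrofitted, loss_retrofitted, retrofit_cost):
        """Expected net benefit of retrofitting before a possible future earthquake:
        avoided expected loss minus the up-front retrofit cost (all inputs hypothetical)."""
        return p_event * (loss_unretrofitted - loss_retrofitted) - retrofit_cost

    # Hypothetical illustration: 20% chance of a damaging event over the planning horizon
    net = expected_net_benefit(0.20, loss_unretrofitted=10_000, loss_retrofitted=2_000, retrofit_cost=500)
    ```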

  14. The non-conserved region of MRP is involved in the virulence of Streptococcus suis serotype 2

    PubMed Central

    Li, Quan; Fu, Yang; Ma, Caifeng; He, Yanan; Yu, Yanfei; Du, Dechao; Yao, Huochun; Lu, Chengping; Zhang, Wei

    2017-01-01

    ABSTRACT Muramidase-released protein (MRP) of Streptococcus suis serotype 2 (SS2) is an important epidemic virulence marker with an unclear role in bacterial infection. To investigate the biologic functions of MRP, 3 mutants named Δmrp, Δmrp domain 1 (Δmrp-d1), and Δmrp domain 2 (Δmrp-d2) were constructed to assess the phenotypic changes between the parental strain and the mutant strains. The results indicated that MRP domain 1 (MRP-D1, the non-conserved region of MRP from a virulent strain, a.a. 242–596) played a critical role in adherence of SS2 to host cells, compared with MRP domain 1* (MRP-D1*, the non-conserved region of MRP from a low virulent strain, a.a. 239–598) or MRP domain 2 (MRP-D2, the conserved region of MRP, a.a. 848–1222). We found that MRP-D1 but not MRP-D2, could bind specifically to fibronectin (FN), factor H (FH), fibrinogen (FG), and immunoglobulin G (IgG). Additionally, we confirmed that mrp-d1 mutation significantly inhibited bacteremia and brain invasion in a mouse infection model. The mrp-d1 mutation also attenuated the intracellular survival of SS2 in RAW246.7 macrophages, shortened the growth ability in pig blood and decreased the virulence of SS2 in BALB/c mice. Furthermore, antiserum against MRP-D1 was found to dramatically impede SS2 survival in pig blood. Finally, immunization with recombinant MRP-D1 efficiently enhanced murine viability after SS2 challenge, indicating its potential use in vaccination strategies. Collectively, these results indicated that MRP-D1 is involved in SS2 virulence and eloquently demonstrate the function of MRP in pathogenesis of infection. PMID:28362221

  15. The non-conserved region of MRP is involved in the virulence of Streptococcus suis serotype 2.

    PubMed

    Li, Quan; Fu, Yang; Ma, Caifeng; He, Yanan; Yu, Yanfei; Du, Dechao; Yao, Huochun; Lu, Chengping; Zhang, Wei

    2017-10-03

    Muramidase-released protein (MRP) of Streptococcus suis serotype 2 (SS2) is an important epidemic virulence marker with an unclear role in bacterial infection. To investigate the biologic functions of MRP, 3 mutants named Δmrp, Δmrp domain 1 (Δmrp-d1), and Δmrp domain 2 (Δmrp-d2) were constructed to assess the phenotypic changes between the parental strain and the mutant strains. The results indicated that MRP domain 1 (MRP-D1, the non-conserved region of MRP from a virulent strain, a.a. 242-596) played a critical role in adherence of SS2 to host cells, compared with MRP domain 1* (MRP-D1*, the non-conserved region of MRP from a low virulent strain, a.a. 239-598) or MRP domain 2 (MRP-D2, the conserved region of MRP, a.a. 848-1222). We found that MRP-D1 but not MRP-D2, could bind specifically to fibronectin (FN), factor H (FH), fibrinogen (FG), and immunoglobulin G (IgG). Additionally, we confirmed that mrp-d1 mutation significantly inhibited bacteremia and brain invasion in a mouse infection model. The mrp-d1 mutation also attenuated the intracellular survival of SS2 in RAW246.7 macrophages, shortened the growth ability in pig blood and decreased the virulence of SS2 in BALB/c mice. Furthermore, antiserum against MRP-D1 was found to dramatically impede SS2 survival in pig blood. Finally, immunization with recombinant MRP-D1 efficiently enhanced murine viability after SS2 challenge, indicating its potential use in vaccination strategies. Collectively, these results indicated that MRP-D1 is involved in SS2 virulence and eloquently demonstrate the function of MRP in pathogenesis of infection.

  16. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    NASA Astrophysics Data System (ADS)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue, these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, or the tail of the distributions, in addition to the main distributions. The generalised Pareto distribution family is widely used for modelling events which cross a specified threshold value. The Pareto, Truncated Pareto, and Tapered Pareto are special cases of the generalised Pareto family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study we consider the Himalaya, one of the most seismically active zones of the world, whose ongoing orogeny is accompanied by the generation of large earthquakes. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. Estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution better describes seismicity for the seismic source zones in comparison to the other distributions considered in the present study.
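
    For reference, the tapered Pareto exceedance probability used in such studies can be written as P(M > m) = (m_t/m)^beta * exp((m_t - m)/m_c) for seismic moment above a threshold m_t, with the corner moment m_c controlling the exponential taper. The sketch below implements this form with hypothetical parameter values, not those fitted for the Himalayan source zones.

    ```python
    import math

    def tapered_pareto_sf(m, m_t, beta, m_c):
        """Exceedance probability of the tapered Pareto distribution for seismic moment
        m >= threshold m_t: P(M > m) = (m_t / m)**beta * exp((m_t - m) / m_c)."""
        return (m_t / m) ** beta * math.exp((m_t - m) / m_c)

    # Hypothetical example: beta = 0.65, threshold Mw 5.8, corner moment near Mw 8.5
    m_t = 10 ** (1.5 * 5.8 + 9.1)
    m_c = 10 ** (1.5 * 8.5 + 9.1)
    p_exceed_mw8 = tapered_pareto_sf(10 ** (1.5 * 8.0 + 9.1), m_t, 0.65, m_c)
    ```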

  17. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone has probably experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region.
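
    The moment-budget arithmetic behind such an estimate can be sketched as follows. The characteristic moment here is taken from the catalogue Mw 7.9 via the Hanks-Kanamori relation, so the resulting interval (~3,300 yr) differs somewhat from the published 3900 ± 400 yr, which is based on the geodetically derived seismogenic model.

    ```python
    def moment_from_mw(mw):
        """Seismic moment in N m from moment magnitude (Hanks & Kanamori relation)."""
        return 10 ** (1.5 * mw + 9.1)

    moment_rate = 2.7e17                   # N m/yr, value quoted in the abstract
    m0_char = moment_from_mw(7.9)          # characteristic event comparable to 2008 Wenchuan
    recurrence_yr = m0_char / moment_rate  # ~3,300 yr with this crude budget
    ```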

  18. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone has probably experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  19. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  20. Temporal stress changes caused by earthquakes: A review

    USGS Publications Warehouse

    Hardebeck, Jeanne L.; Okada, Tomomi

    2018-01-01

    Earthquakes can change the stress field in the Earth’s lithosphere as they relieve and redistribute stress. Earthquake-induced stress changes have been observed as temporal rotations of the principal stress axes following major earthquakes in a variety of tectonic settings. The stress changes due to the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake were particularly well documented. Earthquake stress rotations can inform our understanding of earthquake physics, most notably addressing the long-standing problem of whether the Earth’s crust at plate boundaries is “strong” or “weak.” Many of the observed stress rotations, including that due to the Tohoku-Oki earthquake, indicate near-complete stress drop in the mainshock. This implies low background differential stress, on the order of earthquake stress drop, supporting the weak crust model. Earthquake stress rotations can also be used to address other important geophysical questions, such as the level of crustal stress heterogeneity and the mechanisms of postseismic stress reloading. The quantitative interpretation of stress rotations is evolving from those based on simple analytical methods to those based on more sophisticated numerical modeling that can capture the spatial-temporal complexity of the earthquake stress changes.
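
    The sense of such stress rotations can be illustrated by adding a coseismic stress-change tensor to a background deviatoric stress and comparing the principal-axis orientations before and after. The 2-D tensors below are hypothetical and chosen only to show that the rotation is large when the stress change is comparable to the background differential stress, the inference emphasized in the review.

    ```python
    import numpy as np

    def principal_axis_deg(stress_2d):
        """Orientation (degrees from the x-axis) of the eigenvector belonging to the
        largest eigenvalue of a 2-D symmetric stress tensor."""
        vals, vecs = np.linalg.eigh(stress_2d)
        v = vecs[:, np.argmax(vals)]
        return np.degrees(np.arctan2(v[1], v[0]))

    # Hypothetical background deviatoric stress and coseismic stress change (Pa)
    sigma_bg = np.array([[10e6, 2e6], [2e6, -10e6]])
    d_sigma = np.array([[-8e6, 4e6], [4e6, 8e6]])

    rotation_deg = principal_axis_deg(sigma_bg + d_sigma) - principal_axis_deg(sigma_bg)
    ```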

  1. Temporal Stress Changes Caused by Earthquakes: A Review

    NASA Astrophysics Data System (ADS)

    Hardebeck, Jeanne L.; Okada, Tomomi

    2018-02-01

    Earthquakes can change the stress field in the Earth's lithosphere as they relieve and redistribute stress. Earthquake-induced stress changes have been observed as temporal rotations of the principal stress axes following major earthquakes in a variety of tectonic settings. The stress changes due to the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake were particularly well documented. Earthquake stress rotations can inform our understanding of earthquake physics, most notably addressing the long-standing problem of whether the Earth's crust at plate boundaries is "strong" or "weak." Many of the observed stress rotations, including that due to the Tohoku-Oki earthquake, indicate near-complete stress drop in the mainshock. This implies low background differential stress, on the order of earthquake stress drop, supporting the weak crust model. Earthquake stress rotations can also be used to address other important geophysical questions, such as the level of crustal stress heterogeneity and the mechanisms of postseismic stress reloading. The quantitative interpretation of stress rotations is evolving from those based on simple analytical methods to those based on more sophisticated numerical modeling that can capture the spatial-temporal complexity of the earthquake stress changes.

  2. Source model for the Mw 6.7, 23 October 2002, Nenana Mountain Earthquake (Alaska) from InSAR

    USGS Publications Warehouse

    Wright, Tim J.; Lu, Z.; Wicks, Charles

    2003-01-01

    The 23 October 2002 Nenana Mountain Earthquake (Mw ∼ 6.7) occurred on the Denali Fault (Alaska), to the west of the Mw ∼ 7.9 Denali Earthquake that ruptured the same fault 11 days later. We used 6 interferograms, constructed using radar images from the Canadian Radarsat-1 and European ERS-2 satellites, to determine the coseismic surface deformation and a source model. Data were acquired on ascending and descending satellite passes, with incidence angles between 23 and 45 degrees, and time intervals of 72 days or less. Modeling the event as dislocations in an elastic half space suggests that there was nearly 0.9 m of right-lateral strike-slip motion at depth, on a near-vertical fault, and that the maximum slip in the top 4 km of crust was less than 0.2 m. The Nenana Mountain Earthquake increased the Coulomb stress at the future hypocenter of the 3 November 2002, Denali Earthquake by 30–60 kPa.

  3. Source model for the Mw 6.7, 23 October 2002, Nenana Mountain Earthquake (Alaska) from InSAR

    USGS Publications Warehouse

    Wright, Tim J.; Lu, Zhong; Wicks, Chuck

    2003-01-01

    The 23 October 2002 Nenana Mountain Earthquake (Mw ∼ 6.7) occurred on the Denali Fault (Alaska), to the west of the Mw ∼ 7.9 Denali Earthquake that ruptured the same fault 11 days later. We used 6 interferograms, constructed using radar images from the Canadian Radarsat-1 and European ERS-2 satellites, to determine the coseismic surface deformation and a source model. Data were acquired on ascending and descending satellite passes, with incidence angles between 23 and 45 degrees, and time intervals of 72 days or less. Modeling the event as dislocations in an elastic half space suggests that there was nearly 0.9 m of right-lateral strike-slip motion at depth, on a near-vertical fault, and that the maximum slip in the top 4 km of crust was less than 0.2 m. The Nenana Mountain Earthquake increased the Coulomb stress at the future hypocenter of the 3 November 2002, Denali Earthquake by 30–60 kPa.
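
    The Coulomb stress calculation referred to in both records reduces to dCFS = d_tau + mu' * d_sigma_n once the stress changes are resolved onto the receiver fault. A minimal sketch, with an assumed effective friction coefficient and hypothetical stress changes of the same order as the 30-60 kPa quoted above:

    ```python
    def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
        """Static Coulomb failure stress change dCFS = d_tau + mu' * d_sigma_n, with the
        shear stress change resolved in the slip direction and the normal stress change
        taken positive in extension (unclamping); mu_eff = 0.4 is an assumed value."""
        return d_tau + mu_eff * d_sigma_n

    # e.g. 40 kPa of shear loading plus 25 kPa of unclamping -> dCFS = 50 kPa
    dcfs = coulomb_stress_change(40e3, 25e3)
    ```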

  4. Fault model of the M7.1 intraslab earthquake on April 7 following the 2011 Great Tohoku earthquake (M9.0) estimated by the dense GPS network data

    NASA Astrophysics Data System (ADS)

    Miura, S.; Ohta, Y.; Ohzono, M.; Kita, S.; Iinuma, T.; Demachi, T.; Tachibana, K.; Nakayama, T.; Hirahara, S.; Suzuki, S.; Sato, T.; Uchida, N.; Hasegawa, A.; Umino, N.

    2011-12-01

    We propose a source fault model for the large M7.1 intraslab earthquake deduced from a dense GPS network. The coseismic displacements obtained by GPS data analysis clearly show the spatial pattern characteristic of intraslab earthquakes, not only in the horizontal components but also in the vertical ones. A rectangular fault with uniform slip was estimated by a non-linear inversion approach. The results indicate that this simple rectangular fault model can explain the overall features of the observations. The moment released is equivalent to Mw 7.17. The hypocenter depth of the main shock estimated by the Japan Meteorological Agency is slightly deeper than the neutral plane between the down-dip compression (DC) and down-dip extension (DE) stress zones of the double-planed seismic zone. This suggests that the neutral plane was deepened by the huge slip of the 2011 M9.0 Tohoku earthquake, and that the rupture of the M7.1 thrust earthquake initiated at that depth, although more investigations are required to confirm this idea. The estimated fault plane has an angle of ~60 degrees to the surface of the subducting Pacific plate. This is consistent with the hypothesis that intraslab earthquakes are reactivations of preexisting hydrated weak zones formed during bending of the oceanic plate around the outer-rise region.
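
    The conversion from a uniform-slip rectangular fault to a moment magnitude, as quoted above (Mw 7.17), follows M0 = mu * A * D and Mw = (2/3)(log10 M0 - 9.1). The sketch below uses an assumed rigidity and hypothetical fault dimensions, not the inverted ones.

    ```python
    import math

    def moment_magnitude(mean_slip_m, fault_area_m2, rigidity_pa=3.0e10):
        """Moment magnitude of a uniform-slip rectangular fault: M0 = mu * A * D,
        Mw = (2/3)(log10 M0 - 9.1); the rigidity is an assumed typical value."""
        m0 = rigidity_pa * fault_area_m2 * mean_slip_m
        return (2.0 / 3.0) * (math.log10(m0) - 9.1)

    # Hypothetical example: 2.5 m of slip over a 40 km x 20 km plane gives Mw ~ 7.1
    mw = moment_magnitude(2.5, 40e3 * 20e3)
    ```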

  5. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and
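
    A heavily simplified sketch of the counting idea behind such seismicity-based forecasts: the probability that the next large event has arrived is expressed as a Weibull function of the number of small earthquakes counted since the last large one. The functional form and parameter values below are placeholders for illustration, not the published NTW formulation.

    ```python
    import math

    def large_quake_cdf(n_small, n_mean, shape=1.4):
        """Weibull cumulative probability in 'natural time' (the running count of small
        earthquakes since the last large one). n_mean is the long-term mean count between
        large events and `shape` is a Weibull shape parameter; both are placeholders."""
        return 1.0 - math.exp(-((n_small / n_mean) ** shape))

    # e.g. if on average 500 small events separate large events, a current count of 400
    # gives a cumulative probability of roughly 0.5 under these placeholder parameters
    p = large_quake_cdf(400, 500)
    ```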

  6. Defeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  7. Physics of Earthquake Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  8. "ABC's Earthquake" (Experiments and models in seismology)

    NASA Astrophysics Data System (ADS)

    Almeida, Ana

    2017-04-01

    The purpose of this poster presentation is to describe an activity that I planned and carried out at a school in the north of Portugal (Escola Básica e Secundária Dr. Vieira de Carvalho, Moreira da Maia), using a simple, easy-to-use kit of materials - the sismo-box. The activity "ABC's Earthquake" was developed within the discipline of Natural Sciences, with 7th-grade students, geosciences teachers, and teachers from other areas. The opportunity to work with the sismo-box was seen as an exciting and promising way to promote science, and seismology in particular; to do science by using the models in the box and applying the scientific method; to work on and consolidate content and skills in Natural Sciences; and to share these materials with classmates and with teachers from different areas. Throughout the activity, with both students and teachers, it was possible to see admiration for the models in the sismo-box, as well as interest and enthusiasm in wanting to handle them and understand the results of the procedure proposed in the script. With this activity we managed to promote: educational success in this subject; a "school culture" of active participation, with quality, rules, discipline and citizenship values; full integration of students with special educational needs; a stronger role for the school as a cultural, informational and formative institution; up-to-date and innovative activities; knowledge of "being and doing"; and a moment of joy and discovery. Learn by doing!

  9. Slip model and Synthetic Broad-band Strong Motions for the 2015 Mw 8.3 Illapel (Chile) Earthquake.

    NASA Astrophysics Data System (ADS)

    Aguirre, P.; Fortuno, C.; de la Llera, J. C.

    2017-12-01

    The MW 8.3 earthquake that occurred on September 16th 2015 west of Illapel, Chile, ruptured a 200 km section of the plate boundary between 29º S and 33º S. SAR data acquired by the Sentinel 1A satellite were used to obtain the interferogram of the earthquake and, from it, the component of the surface displacement field in the line of sight of the satellite. Based on this interferogram, the corresponding coseismic slip distribution was determined for different plausible finite-fault geometries. The model that best fits the data is one whose rupture surface is consistent with the Slab 1.0 model, with a constant strike angle of 4º and a variable dip angle ranging from 2.7º near the trench to 24.3º down dip. Using this geometry, the maximum slip obtained is 7.52 m and the corresponding seismic moment is 3.78 × 10²¹ N m, equivalent to a moment magnitude Mw 8.3. Calculation of the Coulomb failure stress change induced by this slip distribution evidences a strong correlation between regions where stress is increased as a consequence of the earthquake and the occurrence of the most relevant aftershocks, providing a consistency check for the inversion procedure and its results. The finite fault model for the Illapel earthquake is used to test a hybrid methodology for generating synthetic ground motions that combines a deterministic calculation of the low-frequency content with stochastic modelling of the high-frequency signal. Strong ground motions are estimated at the locations of seismic stations that recorded the Illapel earthquake. These simulations include the effect of local soil conditions, which are modelled empirically based on H/V ratios obtained from a large database of historical seismic records. Comparison of observed and synthetic records based on the 5%-damped response spectra yields satisfactory results for locations where the site response function is more robustly estimated.

  10. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  11. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
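
    The observable at the heart of this model, the correlation between a periodic tidal-stress proxy and daily small-earthquake counts, can be computed as below. The synthetic series are hypothetical stand-ins for real catalogue and tide data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    days = 365
    # Hypothetical daily tidal-stress proxy (fortnightly modulation) and small-quake counts
    tidal_stress = np.sin(2 * np.pi * np.arange(days) / 14.7)
    quake_counts = rng.poisson(5 + 2 * np.clip(tidal_stress, 0, None))

    # Pearson correlation between the periodic stress and the daily small-earthquake rate;
    # in the model summarized above, intervals of significant correlation signal increased
    # fault susceptibility to a large earthquake.
    r = np.corrcoef(tidal_stress, quake_counts)[0, 1]
    ```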

  12. A Simple Model for the Earthquake Cycle Combining Self-Organized Criticality with Critical Point Behavior

    NASA Astrophysics Data System (ADS)

    Newman, W. I.; Turcotte, D. L.

    2002-12-01

    We have studied a hybrid model combining the forest-fire model with the site-percolation model in order to better understand the earthquake cycle. We consider a square array of sites. At each time step, a "tree" is dropped on a randomly chosen site and is planted if the site is unoccupied. When a cluster of "trees" spans the array (a percolating cluster), all the trees in the cluster are removed ("burned") in a "fire." The removal of the cluster is analogous to a characteristic earthquake and planting "trees" is analogous to increasing the regional stress. The clusters are analogous to the metastable regions of a fault over which an earthquake rupture can propagate once triggered. We find that the frequency-area statistics of the metastable regions are power-law with a negative exponent of two (as in the forest-fire model). This is analogous to the Gutenberg-Richter distribution of seismicity. This "self-organized critical behavior" can be explained in terms of an inverse cascade of clusters. Individual trees move from small to larger clusters until they are destroyed. This inverse cascade of clusters is self-similar and the power-law distribution of cluster sizes has been shown to have an exponent of two. We have quantified the forecasting of the spanning fires using error diagrams. The assumption that "fires" (earthquakes) are quasi-periodic has moderate predictability. The density of trees gives an improved degree of predictability, while the size of the largest cluster of trees provides a substantial improvement in forecasting a "fire."
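
    A minimal, unoptimized sketch of the hybrid model described above: trees are planted at random sites, and any cluster that spans opposite edges of the lattice is burned and its size recorded. It relabels the lattice at every step for clarity rather than speed, and uses nearest-neighbour (site-percolation) connectivity; lattice size and step count are arbitrary.

    ```python
    import numpy as np
    from scipy.ndimage import label

    def spanning_cluster_sizes(size=64, steps=50000, seed=0):
        """Plant 'trees' on random sites of a square lattice; when the cluster containing
        the new tree spans opposite edges (a percolating cluster), remove ('burn') it and
        record its size."""
        rng = np.random.default_rng(seed)
        grid = np.zeros((size, size), dtype=bool)
        burned = []
        for _ in range(steps):
            i, j = rng.integers(size, size=2)
            grid[i, j] = True
            labels, _ = label(grid)           # nearest-neighbour cluster labelling
            c = labels[i, j]
            spans = ((labels[0, :] == c).any() and (labels[-1, :] == c).any()) or \
                    ((labels[:, 0] == c).any() and (labels[:, -1] == c).any())
            if spans:
                burned.append(int((labels == c).sum()))
                grid[labels == c] = False
        return burned
    ```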

  13. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated ~ 2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively) that provide detailed observations on the heights and extension of past tsunamis and damage in coastal zones. We performed the modelling of wave propagation using NAMI-DANCE code and tested different fault sources from synthetic tide gauges. We observe that the characteristics of seismic sources control the size and directivity of tsunami wave propagation on both northern and southern coasts of the western Mediterranean.

  14. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
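
    The time-independent probabilities quoted above follow from the Poisson relation P = 1 - exp(-rate * T). The sketch below also inverts that relation to recover the equivalent annual rate implied by a quoted 30-yr probability; the example value mirrors the abstract but the calculation is illustrative only.

    ```python
    import math

    def prob_at_least_one(annual_rate, years=30.0):
        """Time-independent (Poisson) probability of one or more events in a window."""
        return 1.0 - math.exp(-annual_rate * years)

    def equivalent_annual_rate(p_window, years=30.0):
        """Annual rate implied by a quoted window probability under the Poisson model."""
        return -math.log(1.0 - p_window) / years

    # e.g. a quoted 30-yr probability of 0.997 corresponds to ~0.19 events/yr on average
    rate = equivalent_annual_rate(0.997)
    ```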

  15. A Large Nonconserved Region of the Tethering Protein Leashin Is Involved in Regulating the Position, Movement, and Function of Woronin Bodies in Aspergillus oryzae

    PubMed Central

    Han, Pei; Jin, Feng Jie; Kitamoto, Katsuhiko

    2014-01-01

    The Woronin body is a Pezizomycotina-specific organelle that is typically tethered to the septum, but upon hyphal wounding, it plugs the septal pore to prevent excessive cytoplasmic loss. Leashin (LAH) is a large Woronin body tethering protein that contains highly conserved N- and C-terminal regions and a long (∼2,500-amino-acid) nonconserved middle region. As the involvement of the nonconserved region in Woronin body function has not been investigated, here, we functionally characterized individual regions of the LAH protein of Aspergillus oryzae (AoLAH). In an Aolah disruptant, no Woronin bodies were tethered to the septum, and hyphae had a reduced ability to prevent excessive cytoplasmic loss upon hyphal wounding. Localization analysis revealed that the N-terminal region of AoLAH associated with Woronin bodies dependently on AoWSC, which is homologous to Neurospora crassa WSC (Woronin body sorting complex), and that the C-terminal region was localized to the septum. Elastic movement of Woronin bodies was observed when visualized with an AoLAH N-terminal-region–enhanced green fluorescent protein (EGFP) fusion protein. An N- and C-terminal fusion construct lacking the nonconserved middle region of AoLAH was sufficient for the tethering of Woronin bodies to the septum. However, Woronin bodies were located closer to the septum and exhibited impaired elastic movement. Moreover, expression of middle-region-deleted AoLAH in the Aolah disruptant did not restore the ability to prevent excessive cytoplasmic loss. These findings indicate that the nonconserved middle region of AoLAH has functional importance for regulating the position, movement, and function of Woronin bodies. PMID:24813188

  16. Extinction of Liquid Conservation by Observation: Effects of Model's Age and Presence

    ERIC Educational Resources Information Center

    Robert, Michele; Charbonneau, Claude

    1977-01-01

    Seventy second-grade children who had succeeded on pretests involving liquid conservation observed a nonconserving model and were subsequently retested. Regression to nonconservation was obtained only among some of the children who had been submitted to maximal pressure in the presence of an adult model. (Author/JMB)

  17. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. For the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of the rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observed spatial distribution is poorly matched by most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is improving the evaluation system for the 1-day class so that forecasting and testing can be completed within one day. The special issue of 1st part titled Earthquake Forecast

  18. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    DTIC Science & Technology

    2008-09-01

    explosions from earthquakes, using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites ...Battone et al., 2002). For example, in Figure 1 we compare an earthquake and an explosion at each of four major test sites (rows), bandpass filtered...explosions as the frequency increases. Note also there are interesting differences between the test sites , indicating that emplacement conditions (depth

  19. A prospective earthquake forecast experiment in the western Pacific

    NASA Astrophysics Data System (ADS)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
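
    The likelihood-based consistency metric at the core of such CSEP evaluations is the joint Poisson log-likelihood of the observed counts per forecast cell. A minimal sketch with hypothetical rates and counts; the published tests add simulation-based quantiles and the comparison tests named above.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def gridded_poisson_loglik(forecast_rates, observed_counts):
        """Joint Poisson log-likelihood of observed earthquake counts in each spatial cell,
        given the forecast expected number per cell (the basic CSEP-style statistic)."""
        rates = np.asarray(forecast_rates, dtype=float)
        counts = np.asarray(observed_counts)
        return float(poisson.logpmf(counts, rates).sum())

    # Hypothetical 3-cell forecast and observation
    ll = gridded_poisson_loglik([0.2, 1.5, 0.05], [0, 2, 0])
    ```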

  20. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    USGS Publications Warehouse

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  1. Non-conservation of global charges in the Brane Universe and baryogenesis

    NASA Astrophysics Data System (ADS)

    Dvali, Gia; Gabadadze, Gregory

    1999-08-01

    We argue that global charges, such as baryon or lepton number, are not conserved in theories with the Standard Model fields localized on the brane which propagates in higher-dimensional space-time. The global-charge non-conservation is due to quantum fluctuations of the brane surface. These fluctuations create "baby branes" that can capture some global charges and carry them away into the bulk of higher-dimensional space. Such processes are exponentially suppressed at low energies, but can be significant at high enough temperatures or energies. These effects can lead to a new, intrinsically high-dimensional mechanism of baryogenesis. Baryon asymmetry might be produced due either to "evaporation" into the baby branes, or to creation of a baryon number excess in collisions of two Brane Universes. As an example we discuss a possible cosmological scenario within the recently proposed "Brane Inflation" framework. Inflation is driven by displaced branes which slowly fall on top of each other. When the branes collide, inflation stops and the Brane Universe reheats. During this non-equilibrium collision, baryon number can be transported from one brane to the other. This results in a baryon number excess in our Universe which exactly equals the hidden "baryon number" deficit in the other Brane Universe.

  2. Self-organized criticality occurs in non-conservative neuronal networks during Up states

    PubMed Central

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-01-01

    During sleep, under anesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between Up and Down states (UDS) characterized by distinct membrane potentials and spike rates [1, 2, 3, 4, 5]. Another phenomenon observed in preparations similar to those that exhibit UDS, such as anesthetized rats [6], brain slices and cultures devoid of sensory input [7], as well as awake monkey cortex [8], is self-organized criticality (SOC). This is characterized by activity "avalanches" whose size distributions obey a power law with critical exponent of about −3/2 and branching parameter near unity. Recent work has demonstrated SOC in conservative neuronal network models [9, 10]; however, critical behavior breaks down when biologically realistic non-conservatism is introduced [9]. We here report robust SOC behavior in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have 2 stable activity levels corresponding to Up and Down states, that the networks switch spontaneously between them, and that Up states are critical and Down states are subcritical. PMID:21804861
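
    The −3/2 avalanche-size exponent and near-unity branching parameter quoted above are the signature of a critical branching process. The sketch below reproduces them with a Galton-Watson process using Poisson offspring; it is only a caricature of the leaky integrate-and-fire network studied in the paper, and all parameters are illustrative.

    ```python
    import numpy as np

    def avalanche_sizes(n_avalanches=20000, branching=1.0, cap=10**6, seed=0):
        """Galton-Watson branching-process sketch: each active unit triggers
        Poisson(branching) new units. At the critical value branching = 1, the
        avalanche-size distribution follows a power law with exponent near -3/2."""
        rng = np.random.default_rng(seed)
        sizes = np.empty(n_avalanches, dtype=np.int64)
        for k in range(n_avalanches):
            active, size = 1, 1
            while active > 0 and size < cap:
                active = rng.poisson(branching * active)
                size += active
            sizes[k] = size
        return sizes
    ```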

  3. Designing an Earthquake-Resistant Building

    ERIC Educational Resources Information Center

    English, Lyn D.; King, Donna T.

    2016-01-01

    How do cross-bracing, geometry, and base isolation help buildings withstand earthquakes? These important structural design features involve fundamental geometry that elementary school students can readily model and understand. The problem activity, Designing an Earthquake-Resistant Building, was undertaken by several classes of sixth-grade…

  4. Mega-earthquakes rupture flat megathrusts.

    PubMed

    Bletery, Quentin; Thomas, Amanda M; Rempel, Alan W; Karlstrom, Leif; Sladen, Anthony; De Barros, Louis

    2016-11-25

    The 2004 Sumatra-Andaman and 2011 Tohoku-Oki earthquakes highlighted gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution: A fast convergence rate and young buoyant lithosphere are not required to produce mega-earthquakes. We calculated the curvature along the major subduction zones of the world, showing that mega-earthquakes preferentially rupture flat (low-curvature) interfaces. A simplified analytic model demonstrates that heterogeneity in shear strength increases with curvature. Shear strength on flat megathrusts is more homogeneous, and hence more likely to be exceeded simultaneously over large areas, than on highly curved faults.
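
    The curvature measure underlying this analysis can be illustrated on a 2-D depth profile of the plate interface using the standard formula kappa = |z''| / (1 + z'^2)^(3/2). The finite-difference sketch below is a simplified stand-in for the full calculation on slab geometry models.

    ```python
    import numpy as np

    def profile_curvature(x_m, depth_m):
        """Curvature of a megathrust depth profile z(x) from finite differences:
        kappa = |z''| / (1 + z'**2)**1.5. Low curvature corresponds to the 'flat'
        interfaces that the study associates with mega-earthquakes."""
        dz = np.gradient(depth_m, x_m)
        d2z = np.gradient(dz, x_m)
        return np.abs(d2z) / (1.0 + dz ** 2) ** 1.5
    ```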

  5. Relation of landslides triggered by the Kiholo Bay earthquake to modeled ground motion

    USGS Publications Warehouse

    Harp, Edwin L.; Hartzell, Stephen H.; Jibson, Randall W.; Ramirez-Guzman, L.; Schmitt, Robert G.

    2014-01-01

    The 2006 Kiholo Bay, Hawaii, earthquake triggered high concentrations of rock falls and slides in the steep canyons of the Kohala Mountains along the north coast of Hawaii. Within these mountains and canyons a complex distribution of landslides was triggered by the earthquake shaking. In parts of the area, landslides were preferentially located on east‐facing slopes, whereas in other parts of the canyons no systematic pattern prevailed with respect to slope aspect or vertical position on the slopes. The geology within the canyons is homogeneous, so we hypothesize that the variable landslide distribution is the result of localized variation in ground shaking; therefore, we used a state‐of‐the‐art, high‐resolution ground‐motion simulation model to see if it could reproduce the landslide‐distribution patterns. We used a 3D finite‐element analysis to model earthquake shaking using a 10 m digital elevation model and slip on a finite‐fault model constructed from teleseismic records of the mainshock. Ground velocity time histories were calculated up to a frequency of 5 Hz. Dynamic shear strain also was calculated and compared with the landslide distribution. Results were mixed for the velocity simulations, with some areas showing correlation of landslide locations with peak modeled ground motions but many other areas showing no such correlation. Results were much improved for the comparison with dynamic shear strain. This suggests that (1) rock falls and slides are possibly triggered by higher frequency ground motions (velocities) than those in our simulations, (2) the ground‐motion velocity model needs more refinement, or (3) dynamic shear strain may be a more fundamental measurement of the decoupling process of slope materials during seismic shaking.
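
    A common first-order link between modeled ground motion and the dynamic shear strain compared with the landslide distribution above is gamma ~ PGV / Vs. The sketch below applies this approximation with hypothetical values; it is not the 3-D finite-element calculation used in the study.

    ```python
    def peak_shear_strain(pgv_m_s, vs_m_s):
        """First-order estimate of peak dynamic shear strain from peak ground velocity
        and near-surface shear-wave velocity: gamma ~ PGV / Vs (inputs hypothetical)."""
        return pgv_m_s / vs_m_s

    # e.g. PGV of 0.3 m/s over material with Vs = 300 m/s gives a strain of about 1e-3
    gamma = peak_shear_strain(0.3, 300.0)
    ```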

  6. Towards coupled earthquake dynamic rupture and tsunami simulations: The 2011 Tohoku earthquake.

    NASA Astrophysics Data System (ADS)

    Galvez, Percy; van Dinther, Ylona

    2016-04-01

    The 2011 Mw9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. Seismic stations in the Miyagi region (station MYGH013) show two clearly distinct waveforms separated by 40 seconds, suggesting two rupture fronts, possibly due to slip reactivation caused by frictional melting and thermal fluid pressurization effects. We created a 3D dynamic rupture model to reproduce this rupture reactivation pattern using SPECFEM3D (Galvez et al, 2014), based on a slip-weakening friction law with two sudden sequential stress drops (Galvez et al, 2015). Our model starts like an M7-8 earthquake that barely breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The seismograms agree roughly with seismic records along the coast of Japan. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The simulated sea floor displacement reaches 8-10 meters of uplift close to the trench, which may explain the devastating tsunami that followed the Tohoku earthquake. To investigate the impact of such a large uplift, we ran tsunami simulations with the slip reactivation model and fed the sea floor displacements into GeoClaw (a finite-volume code for tsunami simulations; George and LeVeque, 2006). Our recent results compare well with the water heights at tsunami DART buoys 21401, 21413, 21418 and 21419 and show the potential of using fully dynamic rupture results for tsunami studies of earthquake-tsunami scenarios.

  7. Reconciling postseismic and interseismic surface deformation around strike-slip faults: Earthquake-cycle models with finite ruptures and viscous shear zones

    NASA Astrophysics Data System (ADS)

    Hearn, E. H.

    2013-12-01

    Geodetic surface velocity data show that after an energetic but brief phase of postseismic deformation, surface deformation around most major strike-slip faults tends to be localized and stationary, and can be modeled with a buried elastic dislocation creeping at or near the Holocene slip rate. Earthquake-cycle models incorporating an elastic layer over a Maxwell viscoelastic halfspace cannot explain this, even when the earliest postseismic deformation is ignored or modeled (e.g., as frictional afterslip). Models with heterogeneously distributed low-viscosity materials or power-law rheologies perform better, but to explain all phases of earthquake-cycle deformation, Burgers viscoelastic materials with extreme differences between their Maxwell and Kelvin element viscosities seem to be required. I present a suite of earthquake-cycle models to show that postseismic and interseismic deformation may be reconciled for a range of lithosphere architectures and rheologies if finite rupture length is taken into account. These models incorporate high-viscosity lithosphere optionally cut by a viscous shear zone, and a lower-viscosity mantle asthenosphere (all with a range of viscoelastic rheologies and parameters). Characteristic earthquakes with Mw = 7.0 - 7.9 are investigated, with interseismic intervals adjusted to maintain the same slip rate (10, 20 or 40 mm/yr). I find that a high-viscosity lower crust/uppermost mantle (or a high viscosity per unit width viscous shear zone at these depths) is required for localized and stationary interseismic deformation. For Mw = 7.9 characteristic earthquakes, the shear zone viscosity per unit width in the lower crust and uppermost mantle must exceed about 10^16 Pa s /m. For a layered viscoelastic model the lower crust and uppermost mantle effective viscosity must exceed about 10^20 Pa s. The range of admissible shear zone and lower lithosphere rheologies broadens considerably for faults producing more frequent but smaller
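
    A useful scale for interpreting the viscosity bounds quoted above is the Maxwell relaxation time tau = eta / mu. The sketch below converts viscosities to relaxation times assuming a typical crustal shear modulus; the values are illustrative, not those of the cited models.

    ```python
    def maxwell_relaxation_time_yr(viscosity_pa_s, shear_modulus_pa=3.0e10):
        """Maxwell relaxation time tau = eta / mu, converted to years; the shear modulus
        is an assumed typical crustal value."""
        seconds_per_year = 3.156e7
        return viscosity_pa_s / shear_modulus_pa / seconds_per_year

    # e.g. eta = 1e20 Pa s relaxes over ~100 yr, while eta = 1e18 Pa s relaxes within ~1 yr,
    # which is why low-viscosity material dominates early postseismic transients while
    # stiffer material controls late-interseismic localization.
    t_slow = maxwell_relaxation_time_yr(1e20)
    ```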

  8. Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.

    PubMed

    Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M

    2017-01-01

    The purpose of this study is to explore the grief experiences of mothers who lost their children in the 2008 China earthquake. Informed by the dual process model, this study conducted in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered from intense grief and coped primarily with loss-oriented stressors. As time passed, they began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process in which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight into extending the existing empirical evidence for the dual process model.


  10. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  11. A prospective earthquake forecast experiment for Japan

    NASA Astrophysics Data System (ADS)

    Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi

    2013-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the first earthquake forecast testing experiment for the Japan area. We use the unified JMA catalogue compiled by the Japan Meteorological Agency as the authoritative catalogue. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the official CSEP suite of tests of forecast performance. In this presentation, we show the results of the experiment of the 3-month testing class for 5 rounds. The HIST-ETAS7pa, MARFS and RI10K models, corresponding to the All Japan, Mainland and Kanto regions, showed the best scores based on the total log-likelihood. It is also clarified that time dependency of model parameters is not an effective factor in passing the CSEP consistency tests for the 3-month testing class in any region. In particular, the spatial consistency test was difficult to pass in the All Japan region because of multiple events occurring in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations in all rounds, which resulted in rejections of the consistency test because of overestimation. In the Kanto region, the pass ratios of the consistency tests for each model exceeded 80%, which was associated with well-balanced forecasting of event
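
    For reference, the total log-likelihood used to rank such gridded forecasts is commonly a sum of independent Poisson terms over space-magnitude bins. The sketch below assumes that standard Poisson form and uses made-up rates and counts purely for illustration.

      # Total log-likelihood score for a gridded forecast: each space-magnitude bin
      # is treated as an independent Poisson variable with forecast rate lambda_i and
      # observed count n_i. The arrays here are illustrative.
      import numpy as np
      from scipy.special import gammaln

      def poisson_joint_loglik(forecast_rates, observed_counts):
          lam = np.asarray(forecast_rates, dtype=float)
          n = np.asarray(observed_counts, dtype=float)
          return np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0))

      rates = np.array([0.2, 0.05, 1.3, 0.7])   # expected events per bin per round
      counts = np.array([0, 0, 2, 1])           # observed events per bin
      print(poisson_joint_loglik(rates, counts))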

  12. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Usable and realistic ground motion maps for urban areas are generated either from the assumption of a "reference earthquake" or directly, showing values of macroseismic intensity produced by a damaging real earthquake. In this study, applying a deterministic approach, an earthquake scenario in macroseismic intensity (a "model" earthquake scenario) is generated for the city of Sofia. The deterministic "model" intensity scenario, based on the assumption of a "reference earthquake", is compared with a scenario based on observed macroseismic effects caused by the damaging 2012 earthquake (Mw 5.6). The difference between observed (Io) and predicted (Ip) intensity values is analyzed.

  13. Probing Low-Mass Vector Bosons with Parity Nonconservation and Nuclear Anapole Moment Measurements in Atoms and Molecules

    NASA Astrophysics Data System (ADS)

    Dzuba, V. A.; Flambaum, V. V.; Stadnik, Y. V.

    2017-12-01

    In the presence of P -violating interactions, the exchange of vector bosons between electrons and nucleons induces parity-nonconserving (PNC) effects in atoms and molecules, while the exchange of vector bosons between nucleons induces anapole moments of nuclei. We perform calculations of such vector-mediated PNC effects in Cs, Ba+ , Yb, Tl, Fr, and Ra+ using the same relativistic many-body approaches as in earlier calculations of standard-model PNC effects, but with the long-range operator of the weak interaction. We calculate nuclear anapole moments due to vector-boson exchange using a simple nuclear model. From measured and predicted (within the standard model) values for the PNC amplitudes in Cs, Yb, and Tl, as well as the nuclear anapole moment of 133Cs, we constrain the P -violating vector-pseudovector nucleon-electron and nucleon-proton interactions mediated by a generic vector boson of arbitrary mass. Our limits improve on existing bounds from other experiments by many orders of magnitude over a very large range of vector-boson masses.

  14. Earthquake Clustering in Noisy Viscoelastic Systems

    NASA Astrophysics Data System (ADS)

    Dicaprio, C. J.; Simons, M.; Williams, C. A.; Kenner, S. J.

    2006-12-01

    Geologic studies show evidence for temporal clustering of earthquakes on certain fault systems. Since post-seismic deformation may result in a variable loading rate on a fault throughout the inter-seismic period, it is reasonable to expect that the rheology of the non-seismogenic lower crust and mantle lithosphere may play a role in controlling earthquake recurrence times. Previously, the role of the rheology of the lithosphere in the seismic cycle had been studied with a one-dimensional spring-dashpot-slider model (Kenner and Simons [2005]). In this study we use the finite element code PyLith to construct a two-dimensional continuum model of a strike-slip fault in an elastic medium overlying one or more linear Maxwell viscoelastic layers, loaded in the far field by a constant velocity boundary condition. Taking advantage of the linear properties of the model, we use the finite element solution to one earthquake as a spatio-temporal Green's function. Multiple Green's function solutions, scaled by the size of each earthquake, are then summed to form an earthquake sequence. When the shear stress on the fault reaches a predefined yield stress it is allowed to slip, relieving all accumulated shear stress. Random variation in the fault yield stress from one earthquake to the next results in a temporally clustered earthquake sequence. The amount of clustering depends on a non-dimensional number, W, called the Wallace number. For models with one viscoelastic layer, W is equal to the standard deviation of the earthquake stress drop divided by the product of the viscosity and the tectonic loading rate. This definition of W is modified from the original one used in Kenner and Simons [2005] by using the standard deviation of the stress drop instead of the mean stress drop. We also use a new, more appropriate metric to measure the amount of temporal clustering of the system. W is the ratio of the viscoelastic relaxation rate of the system to the tectonic loading rate of the system. For values of
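
    A minimal sketch of the superposition idea described above, assuming (purely for illustration) that the postseismic response to a single event can be represented by one exponential relaxation term; the study itself uses finite element Green's functions.

      # Sketch of the Green's-function superposition: the postseismic response to one
      # earthquake is reused, scaled by each event's stress drop and shifted to its
      # origin time, and the contributions are summed on top of the tectonic rate.
      # The single exponential relaxation (time constant tau_R) is an assumption.
      import numpy as np

      def sequence_velocity(t, event_times, stress_drops, tau_R, v_tectonic):
          """Surface velocity perturbation from a summed earthquake sequence."""
          v = np.full_like(t, v_tectonic, dtype=float)
          for t0, ds in zip(event_times, stress_drops):
              active = t >= t0
              v[active] += ds * np.exp(-(t[active] - t0) / tau_R) / tau_R
          return v

      t = np.linspace(0.0, 1000.0, 2001)        # years
      events = [100.0, 280.0, 300.0, 650.0]     # clustered event times (yr)
      drops = [1.0, 0.8, 1.2, 1.0]              # relative stress drops
      print(sequence_velocity(t, events, drops, tau_R=50.0, v_tectonic=0.01)[:5])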

  15. Verlet scheme non-conservativeness for simulation of spherical particles collisional dynamics and method of its compensation

    NASA Astrophysics Data System (ADS)

    Savin, Andrei V.; Smirnov, Petr G.

    2018-05-01

    Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used to integrate the equations of motion. The finite-difference scheme is found to be non-conservative to a degree that depends on the time step, which is equivalent to the appearance of a purely numerical energy source during collisions. A compensation method for this source is proposed and tested.
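
    The effect can be reproduced with a minimal velocity-Verlet experiment: a head-on collision between two spheres interacting through a linear repulsive spring during overlap, with the relative energy change after the collision measured for several time steps. The contact law and all parameter values below are arbitrary illustrative choices, not those of the paper.

      # Time-step dependence of the numerical energy error during a binary collision
      # integrated with the velocity-Verlet scheme (linear repulsive spring contact).
      import numpy as np

      def collide(dt, m=1.0, R=0.5, k=1.0e4, v0=1.0, t_end=2.0):
          x = np.array([-1.0, 1.0]); v = np.array([v0, -v0])
          def force(x):
              overlap = 2.0 * R - (x[1] - x[0])
              f = k * overlap if overlap > 0.0 else 0.0
              return np.array([-f, f])          # repulsion pushes the spheres apart
          f = force(x)
          e0 = 0.5 * m * np.sum(v**2)           # kinetic energy before contact
          for _ in range(int(t_end / dt)):
              x = x + v * dt + 0.5 * (f / m) * dt**2
              f_new = force(x)
              v = v + 0.5 * (f + f_new) / m * dt
              f = f_new
          e1 = 0.5 * m * np.sum(v**2)           # kinetic energy after separation
          return (e1 - e0) / e0

      for dt in (1e-2, 1e-3, 1e-4):
          print(f"dt = {dt:.0e}  relative energy change after collision = {collide(dt):+.2e}")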

  16. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project, a government-sponsored Turkish Catastrophe Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (mostly small, 226 earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in south-east Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about 90 USD for a 100 square meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake engineering shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings), with only a 2% deductible, is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which the expected earthquake performance of a unit (and consequently its insurance premium) can be assessed on the basis of its location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP could in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.

  17. Large Subduction Earthquake Simulations using Finite Source Modeling and the Offshore-Onshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2016-12-01

    Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with existing physics-based simulations to improve seismic hazard assessment.
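
    A minimal sketch of the deconvolution step used to retrieve such Green's functions, assuming synthetic traces and a simple water-level regularization of the spectral division; the actual processing chain of the study is more elaborate.

      # Deconvolution interferometry sketch: the onshore record is deconvolved by the
      # offshore (virtual source) record in the frequency domain, with a water-level
      # term stabilizing the division. Traces and regularization level are illustrative.
      import numpy as np

      def deconvolve(u_receiver, u_source, water_level=0.01):
          n = len(u_receiver)
          Ur = np.fft.rfft(u_receiver)
          Us = np.fft.rfft(u_source)
          power = np.abs(Us) ** 2
          eps = water_level * power.max()
          D = Ur * np.conj(Us) / (power + eps)
          return np.fft.irfft(D, n)              # deconvolved trace ~ Green's function

      # Synthetic test: receiver trace = source trace delayed by 1 s plus noise
      dt, n = 0.01, 4096
      t = np.arange(n) * dt
      src = np.exp(-((t - 5.0) / 0.2) ** 2) * np.sin(2 * np.pi * 2.0 * t)
      rec = np.roll(src, int(1.0 / dt)) + 0.01 * np.random.default_rng(1).standard_normal(n)
      g = deconvolve(rec, src)
      print("peak of deconvolved trace at t =", t[np.argmax(np.abs(g))], "s  (expected ~1 s)")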

  18. Evaluating spatial and temporal relationships between an earthquake cluster near Entiat, central Washington, and the large December 1872 Entiat earthquake

    USGS Publications Warehouse

    Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian

    2017-01-01

    We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1-2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long-lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long-lived aftershock sequence can account for these observations.

  19. Post-earthquake dilatancy recovery

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.

    1974-01-01

    Geodetic measurements of the 1964 Niigata, Japan earthquake and of three other examples are briefly examined. They show exponentially decaying subsidence for a year after the quakes. The observations confirm the dilatancy-fluid diffusion model of earthquake precursors and clarify the extent and properties of the dilatant zone. An analysis using one-dimensional consolidation theory is included which agrees well with this interpretation.

  20. Magnitude Dependent Seismic Quiescence of 2008 Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sacks, S. I.; Takanami, T.; Smith, D. E.; Rydelek, P. A.

    2014-12-01

    The change in seismicity leading up to the Wenchuan earthquake of 2008 (Mw 7.9) has been studied by various authors based on statistics and/or pattern recognition (Huang, 2008; Yan et al., 2009; Chen and Wang, 2010; Yi et al., 2011). We show, in particular, that magnitude-dependent seismic quiescence is observed for the Wenchuan earthquake and that it adds to other similar observations. Such studies of seismic quiescence prior to major earthquakes include the 1982 Urakawa-Oki earthquake (M 7.1) (Taylor et al., 1992), the 1994 Hokkaido-Toho-Oki earthquake (Mw 8.2) (Takanami et al., 1996), and the 2011 Tohoku earthquake (Mw 9.0) (Katsumata, 2011). Smith and Sacks (2013) proposed a magnitude-dependent quiescence based on a physical earthquake model (Rydelek and Sacks, 1995) and demonstrated that the quiescence can be reproduced by the introduction of "asperities" (dilatancy-hardened zones). Actual observations indicate the change occurs in a broader area than the eventual earthquake fault zone. In order to accept this explanation, we need to verify the model, as it predicts somewhat controversial features of earthquakes, such as magnitude-dependent stress drop at the lower magnitude range and dynamically appearing asperities with repeating slip in some parts of the rupture zone. We show supportive observations. We will also need to verify that dilatancy diffusion is taking place. So far, we only seem to have indirect evidence, which needs to be more quantitatively substantiated.

  1. Observations and models of Co- and Post-Seismic Deformation Due to the 2015 Mw 7.8 Gorkha (Nepal) Earthquake

    NASA Astrophysics Data System (ADS)

    Wang, K.; Fialko, Y. A.

    2016-12-01

    The 2015 Mw 7.8 Gorkha (Nepal) earthquake occurred along the central Himalayan arc, a convergent boundary between the Indian and Eurasian plates. We use space geodetic data to investigate co- and post-seismic deformation due to the Gorkha earthquake. Because the epicentral area of the earthquake is characterized by strong variations in surface relief and material properties, we developed finite element models that explicitly account for topography and 3-D elastic structure. Compared with slip models obtained using homogeneous elastic half-space models, the model including elastic heterogeneity and topography exhibits greater (up to 10%) slip amplitude. GPS observations spanning more than 1 year following the earthquake show overall southward movement and uplift after the Gorkha earthquake, qualitatively similar to the coseismic deformation pattern. Kinematic inversions of GPS data and forward modeling of stress-driven creep indicate that the observed post-seismic transient is consistent with afterslip on a down-dip extension of the seismic rupture. The Main Himalayan Thrust (MHT) has negligible creep updip of the 2015 rupture, underscoring a continuing seismic hazard. A poro-elastic rebound may contribute to the observed uplift and southward motion, but the predicted surface displacements are small (on the order of 1 cm or less). We also tested a wide range of visco-elastic relaxation models, including 1-D and 3-D variations in the viscosity structure. All tested visco-elastic models predict the opposite signs of horizontal and vertical displacements compared to those observed. Available surface deformation data allow one to rule out a model of a low-viscosity channel beneath the Tibetan Plateau invoked to explain variations in surface relief at the plateau margins.

  2. Phase response curves for models of earthquake fault dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franović, Igor, E-mail: franovic@ipb.ac.rs; Kostić, Srdjan; Perc, Matjaž

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative of the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  3. Depth to the Juan De Fuca slab beneath the Cascadia subduction margin - a 3-D model for sorting earthquakes

    USGS Publications Warehouse

    McCrory, Patricia A.; Blair, J. Luke; Oppenheimer, David H.; Walter, Stephen R.

    2004-01-01

    We present an updated model of the Juan de Fuca slab beneath southern British Columbia, Washington, Oregon, and northern California, and use this model to separate earthquakes occurring above and below the slab surface. The model is based on depth contours previously published by Fluck and others (1997). Our model attempts to rectify a number of shortcomings in the original model and update it with new work. The most significant improvements include (1) a gridded slab surface in geo-referenced (ArcGIS) format, (2) continuation of the slab surface to its full northern and southern edges, (3) extension of the slab surface from 50 km depth down to 110 km beneath the Cascade arc volcanoes, and (4) revision of the slab shape based on new seismic-reflection and seismic-refraction studies. We have used this surface to sort earthquakes and present some general observations and interpretations of the seismicity patterns revealed by our analysis. For example, deep earthquakes within the Juan de Fuca Plate beneath western Washington define a linear trend that may mark a tear within the subducting plate. Also, earthquakes associated with the northern strands of the San Andreas Fault abruptly terminate at the inferred southern boundary of the Juan de Fuca slab. In addition, we provide files of earthquakes above and below the slab surface and a 3-D animation or fly-through showing a shaded-relief map with plate boundaries, the slab surface, and hypocenters for use as a visualization tool.

  4. Application and evaluation of a rapid response earthquake-triggered landslide model to the 25 April 2015 Mw 7.8 Gorkha earthquake, Nepal

    USGS Publications Warehouse

    Gallen, Sean F.; Clark, Marin K.; Godt, Jonathan W.; Roback, Kevin; Niemi, Nathan A

    2017-01-01

    The 25 April 2015 Mw 7.8 Gorkha earthquake produced strong ground motions across an approximately 250 km by 100 km swath in central Nepal. To assist disaster response activities, we modified an existing earthquake-triggered landslide model based on a Newmark sliding block analysis to estimate the extent and intensity of landsliding and landslide dam hazard. Landslide hazard maps were produced using Shuttle Radar Topography Mission (SRTM) digital topography, peak ground acceleration (PGA) information from the U.S. Geological Survey (USGS) ShakeMap program, and assumptions about the regional rock strength based on end-member values from previous studies. The instrumental record of seismicity in Nepal is poor, so PGA estimates were based on empirical Ground Motion Prediction Equations (GMPEs) constrained by teleseismic data and felt reports. We demonstrate a non-linear dependence of modeled landsliding on aggregate rock strength, where the number of landslides decreases exponentially with increasing rock strength. Model estimates are less sensitive to PGA at steep slopes (> 60°) compared to moderate slopes (30-60°). We compare forward model results to an inventory of landslides triggered by the Gorkha earthquake. We show that moderate rock strength inputs overestimate landsliding in regions beyond the main slip patch, which may in part be related to poorly constrained PGA estimates for this event at far distances from the source area. Directly above the main slip patch, however, the moderate strength model accurately estimates the total number of landslides within the resolution of the model (landslides ≥ 0.0162 km2; observed n = 2214, modeled n = 2987), but the pattern of landsliding differs from observations. This discrepancy is likely due to the unaccounted-for effects of variable material strength and local topographic amplification of strong ground motion, as well as other simplifying assumptions about source characteristics and their
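
    For orientation, the main ingredients of this class of Newmark sliding-block models can be sketched as follows: an infinite-slope factor of safety, the corresponding critical acceleration, and an empirical displacement regression. The regression below assumes the form of a published relation (Jibson, 2007); the strength parameters, slab thickness, slope and PGA values are illustrative, not those used in the study.

      # Newmark sliding-block ingredients: dry infinite-slope factor of safety,
      # critical acceleration, and an assumed empirical displacement regression.
      import numpy as np

      def factor_of_safety(slope_deg, c=20e3, phi_deg=30.0, gamma=27e3, thickness=2.5):
          """Dry infinite-slope factor of safety (illustrative parameters)."""
          a = np.radians(slope_deg)
          phi = np.radians(phi_deg)
          return c / (gamma * thickness * np.sin(a)) + np.tan(phi) / np.tan(a)

      def critical_acceleration(fs, slope_deg):
          """Newmark critical acceleration, in units of g."""
          return (fs - 1.0) * np.sin(np.radians(slope_deg))

      def newmark_displacement_cm(ac, pga):
          """Displacement regression (assumed Jibson-2007-type form); ac, pga in g."""
          ratio = np.clip(ac / pga, 1e-6, 1.0 - 1e-6)
          return 10 ** (0.215 + np.log10((1 - ratio) ** 2.341 * ratio ** -1.438))

      slope, pga = 35.0, 0.5                    # degrees, g
      fs = factor_of_safety(slope)
      ac = critical_acceleration(fs, slope)
      print(f"FS = {fs:.2f}, ac = {ac:.2f} g, Dn ~ {newmark_displacement_cm(ac, pga):.1f} cm")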

  5. Source Model of the MJMA 6.5 Plate-Boundary Earthquake at the Nankai Trough, Southwest Japan, on April 1, 2016, Based on Strong Motion Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Asano, K.

    2017-12-01

    An MJMA 6.5 earthquake occurred offshore of the Kii peninsula, southwest Japan, on April 1, 2016. This event was interpreted as a thrust event on the plate boundary along the Nankai trough (Wallace et al., 2016), and it is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) since that event. The significant point of this event with regard to seismic observation is that it occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquakes of this focal type have been very rare in this region over the last half century, it is a good opportunity to investigate the source characteristics relating to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai trough. Knowledge obtained from the study of this earthquake will contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai trough. In this study, the source model of the 2016 offshore Kii peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by a strong motion generation area (SMGA) (Miyake et al., 2003), which is defined as a rectangular area with high stress drop or high slip velocity. An SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time histories over a broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected for the empirical Green's functions. The source parameters of the SMGA are optimized by waveform modeling in the frequency range 0.4-10 Hz. The best estimate of the SMGA size is 19.4 km2, and the SMGA of this event does not follow the source scaling

  6. Multi-scale coarse-graining of non-conservative interactions in molecular liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izvekov, Sergei, E-mail: sergiy.izvyekov.civ@mail.mil; Rice, Betsy M.

    2014-03-14

    A new bottom-up procedure for constructing non-conservative (dissipative and stochastic) interactions for dissipative particle dynamics (DPD) models is described and applied to perform hierarchical coarse-graining of a polar molecular liquid (nitromethane). The distance-dependent radial and shear frictions in functional-free form are derived consistently with a chosen form for conservative interactions by matching two-body force-velocity and three-body velocity-velocity correlations along the microscopic trajectories of the centroids of Voronoi cells (clusters), which represent the dissipative particles within the DPD description. The Voronoi tessellation is achieved by application of the K-means clustering algorithm at regular time intervals. Consistently with the notion of many-body DPD, the conservative interactions are determined through the multi-scale coarse-graining (MS-CG) method, which naturally implements a pairwise decomposition of the microscopic free energy. A hierarchy of MS-CG/DPD models starting with one molecule per Voronoi cell and up to 64 molecules per cell is derived. The radial contribution to the friction appears to be dominant for all models. As the Voronoi cell sizes increase, the dissipative forces rapidly become confined to the first coordination shell. For Voronoi cells of two and more molecules the time dependence of the velocity autocorrelation function becomes monotonic and is well reproduced by the respective MS-CG/DPD models. A comparative analysis of force and velocity correlations in the atomistic and CG ensembles indicates Markovian behavior with as few as two molecules per dissipative particle. The models with one and two molecules per Voronoi cell yield transport properties (diffusion and shear viscosity) that are in good agreement with the atomistic data. The coarser models produce slower dynamics that can be appreciably attributed to unaccounted dissipation introduced by regular Voronoi re-partitioning as well as by larger
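
    For readers unfamiliar with DPD, the standard pairwise force decomposition (conservative, dissipative, random) that the radial/shear friction construction above generalizes can be sketched as below, with the random-force amplitude tied to the dissipative one by the fluctuation-dissipation relation. Parameter values are the usual illustrative defaults, not those derived in the paper.

      # Standard (radial-only) DPD pair forces: conservative, dissipative, random,
      # with sigma^2 = 2*gamma*kT enforcing the fluctuation-dissipation relation.
      import numpy as np

      def dpd_pair_force(r_i, r_j, v_i, v_j, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.01,
                         rng=np.random.default_rng()):
          rij = r_i - r_j
          r = np.linalg.norm(rij)
          if r >= rc or r == 0.0:
              return np.zeros(3)
          e = rij / r
          w = 1.0 - r / rc                            # weight function w(r)
          vij = v_i - v_j
          sigma = np.sqrt(2.0 * gamma * kT)
          f_c = a * w * e                             # conservative
          f_d = -gamma * w**2 * np.dot(e, vij) * e    # dissipative (radial friction)
          f_r = sigma * w * rng.standard_normal() * e / np.sqrt(dt)  # random
          return f_c + f_d + f_r

      f = dpd_pair_force(np.array([0.0, 0.0, 0.0]), np.array([0.6, 0.0, 0.0]),
                         np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0]))
      print(f)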

  7. The 2016 central Italy earthquake sequence: surface effects, fault model and triggering scenarios

    NASA Astrophysics Data System (ADS)

    Chatzipetros, Alexandros; Pavlides, Spyros; Papathanassiou, George; Sboras, Sotiris; Valkaniotis, Sotiris; Georgiadis, George

    2017-04-01

    The results of fieldwork performed during the 2016 earthquake sequence around the karstic basins of Norcia and La Piana di Castelluccio, at an altitude of 1400 m, on Monte Vettore (altitude 2476 m) and Vettoretto, as well as on the three mapped seismogenic faults, striking NNW-SSE, are presented in this paper. Surface co-seismic ruptures were observed in the Vettore and Vettoretto segment of the fault for several kilometres (~7 km) at high altitudes in the August earthquakes, and were re-activated and expanded northwards during the October earthquakes. Coseismic ruptures and the neotectonic Mt. Vettore fault zone were modelled in detail using images acquired from specifically planned UAV (drone) flights. Ruptures, typically with displacements of up to 20 cm, were observed after the August event both in the scree and weathered mantle (eluvium) and in the bedrock, consisting mainly of fragmented carbonate rocks with small tectonic surfaces. These fractures expanded and new ones formed during the October events, typically with displacements of up to 50 cm, although locally higher displacements of up to almost 2 m were observed. Hundreds of rock falls and landslides were mapped through satellite imagery, using pre- and post-earthquake Sentinel 2A images. Several of them were also verified in the field. Based on field mapping results and seismological information, the causative faults were modelled. The model consists of five seismogenic sources, each one associated with a strong event in the sequence. The visualisation of the seismogenic sources follows INGV's DISS standards for the Individual Seismogenic Sources (ISS) layer, while strike, dip and rake of the seismic sources are obtained from selected focal mechanisms. Based on this model, the ground deformation pattern was inferred using Okada's dislocation solution formulae, which shows that the maximum calculated vertical displacement is 0.53 m. This is in good agreement with the statistical analysis of the

  8. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ~ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
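
    A reference point for the transition regime discussed above: for a circular crack with constant stress drop, M0 = (16/7) * stress_drop * (A/pi)^(3/2), i.e. M0 grows as A^1.5, so an M0 ~ A^2 branch requires a change in the shape factor such as the surface-rupture effect described. The sketch below evaluates the constant-stress-drop reference with an illustrative 3 MPa stress drop.

      # Constant-stress-drop circular-crack reference for the M0-A relation.
      import numpy as np

      def moment_circular_crack(area_km2, stress_drop_mpa=3.0):
          A = area_km2 * 1e6              # m^2
          ds = stress_drop_mpa * 1e6      # Pa
          return (16.0 / 7.0) * ds * (A / np.pi) ** 1.5   # N m, scales as A^1.5

      for A in (100.0, 1000.0, 10000.0):   # km^2
          M0 = moment_circular_crack(A)
          Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
          print(f"A = {A:7.0f} km^2  ->  M0 = {M0:.2e} N m, Mw = {Mw:.1f}")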

  9. Recent updates in developing a statistical pseudo-dynamic source-modeling framework to capture the variability of earthquake rupture scenarios

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee

    2017-04-01

    Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data from past events and empirical ground motion prediction equations (GMPEs) on the Broadband Platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting a nonparametric co-regionalization algorithm originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in capturing the variability of ground motions.
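
    As an illustration of the 2-point-statistics idea (not the study's co-regionalization algorithm), a spatially correlated random slip distribution can be synthesized spectrally from a prescribed autocorrelation model; the exponential correlation model, correlation lengths and grid dimensions below are assumptions.

      # Spectral synthesis of a random slip field with a prescribed (exponential)
      # spatial autocorrelation, as a toy stand-in for 2-point source statistics.
      import numpy as np

      def correlated_slip(nx=128, nz=64, dx=0.5, ax=10.0, az=5.0, mean=1.0, cv=0.5, seed=0):
          kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
          kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
          KX, KZ = np.meshgrid(kx, kz)
          # Power spectrum of a 2-D exponential autocorrelation function
          psd = (ax * az) / (1.0 + (KX * ax) ** 2 + (KZ * az) ** 2) ** 1.5
          rng = np.random.default_rng(seed)
          phase = np.exp(2j * np.pi * rng.random((nz, nx)))
          field = np.fft.ifft2(np.sqrt(psd) * phase).real
          field = (field - field.mean()) / field.std()           # standardize
          return (mean * (1.0 + cv * field)).clip(min=0.0)        # non-negative slip, m

      slip = correlated_slip()
      print(slip.shape, slip.mean(), slip.std())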

  10. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    USGS Publications Warehouse

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and most of the seismic gap was left unbroken, leaving the possibility of a future large earthquake in the region.

  11. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul

    2017-12-01

    This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.

  12. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  13. Earthquake location in island arcs

    USGS Publications Warehouse

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  14. Study on Earthquake Emergency Evacuation Drill Trainer Development

    NASA Astrophysics Data System (ADS)

    ChangJiang, L.

    2016-12-01

    With the continued urbanization of China, ensuring that people survive earthquakes requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we built simulation software for earthquake emergency evacuation drills. The software performs the simulation by building a spatial structural model and selecting people's locations according to the actual conditions of the buildings. Based on the simulation results, drills can then be conducted in the same building. RFID technology can be used for drill data collection: it reads personal information and sends it to the evacuation simulation software via WiFi. The simulation software then compares the simulated data with the actual evacuation process, including evacuation time, evacuation paths, congestion nodes and so on. Finally, it provides a comparative analysis report with assessment results and an optimization proposal. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable and scientific, increasing the coping capacity of cities facing earthquake hazard.
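
    A minimal sketch of the shortest-path ingredient mentioned above: breadth-first search over a grid of walkable cells gives, for every cell, the number of steps to the nearest exit, which evacuees can follow downhill. The grid layout and exit location are illustrative assumptions.

      # Breadth-first search distances to the nearest exit on a walkable grid.
      from collections import deque

      def evacuation_distances(grid, exits):
          """grid: list of strings, '#' = wall, '.' = walkable; exits: list of (row, col)."""
          rows, cols = len(grid), len(grid[0])
          dist = [[None] * cols for _ in range(rows)]
          q = deque()
          for r, c in exits:
              dist[r][c] = 0
              q.append((r, c))
          while q:
              r, c = q.popleft()
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                          and dist[nr][nc] is None:
                      dist[nr][nc] = dist[r][c] + 1
                      q.append((nr, nc))
          return dist   # steps to the nearest exit; follow decreasing values to evacuate

      grid = ["..........",
              ".####.....",
              ".#........",
              ".........."]
      print(evacuation_distances(grid, exits=[(0, 0)])[3][9])   # -> 12 steps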

  15. Modeling And Economics Of Extreme Subduction Earthquakes: Two Case Studies

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Emerson, D.; Perea, N.; Moulinec, C.

    2008-05-01

    The destructive effects of large-magnitude, thrust subduction superficial (TSS) earthquakes on Mexico City (MC) and Guadalajara (G) have been demonstrated in recent centuries. For example, on 7/04/1845 and 19/09/1985 two TSS earthquakes occurred on the coasts of the states of Guerrero and Michoacan, with Ms 7+ and 8.1; the economic losses for the latter were about 7 billion US dollars. Also, the largest instrumentally observed TSS earthquake in Mexico (Ms 8.2) occurred in the Colima-Jalisco region on 3/06/1932, and on 9/10/1995 another similar, Ms 7.4 event occurred in the same region; the latter produced economic losses of hundreds of thousands of US dollars. The frequency of occurrence of large TSS earthquakes in Mexico is poorly known, but it might vary from decades to centuries [1]. Therefore there is a lack of strong ground motion records for extreme TSS earthquakes in Mexico, which, as mentioned above, recently had an important economic impact on MC and could potentially have one on G. In this work we obtained samples of broadband synthetics [2,3] expected in MC and G, associated with extreme (plausible) magnitude Mw 8.5 TSS scenario earthquakes with epicenters in the so-called Guerrero gap and in the Colima-Jalisco zone, respectively. The economic impacts of the proposed extreme TSS earthquake scenarios for MC and G were assessed as follows: for MC, by using a risk-acceptability criterion, the probabilities of exceedance of the maximum seismic responses of its building stock under the assumed scenarios, and the economic losses observed for the 19/09/1985 earthquake; and for G, by estimating the expected economic losses based on a seismic vulnerability assessment of its building stock under the extreme seismic scenario considered. ----------------------- [1] Nishenko S.P. and Singh S.K., BSSA 77, 6, 1987 [2] Cabrera E., Chavez M., Madariaga R., Mai M., Frisenda M., Perea N., AGU Fall Meeting, 2005 [3] Chavez M., Olsen K

  16. Statistical aspects and risks of human-caused earthquakes

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2013-12-01

    The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth, environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in the cantons of Basel and St. Gallen. The public and the private non-extractive industry need access to information about earthquake risks in connection with sub/urban geoengineering activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (Klose, 2013). Findings discussed include the odds of dying in a medium-size earthquake set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.

  17. The Differences in Source Dynamics Between Intermediate-Depth and Deep EARTHQUAKES:A Comparative Study Between the 2014 Rat Islands Intermediate-Depth Earthquake and the 2015 Bonin Islands Deep Earthquake

    NASA Astrophysics Data System (ADS)

    Twardzik, C.; Ji, C.

    2015-12-01

    It has been proposed that the mechanisms for intermediate-depth and deep earthquakes might be different. While previous extensive seismological studies suggested that such potential differences do not significantly affect the scaling relationships of earthquake parameters, there have been only a few investigations of their dynamic characteristics, especially fracture energy. In this work, the 2014 Mw 7.9 Rat Islands intermediate-depth (105 km) earthquake and the 2015 Mw 7.8 Bonin Islands deep (680 km) earthquake are studied from two different perspectives. First, their kinematic rupture models are constrained using teleseismic body waves. Our analysis reveals that the Rat Islands earthquake broke the entire cold core of the subducting slab, defined as the depth of the 650°C isotherm. The inverted stress drop is 4 MPa, comparable to that of intra-plate earthquakes at shallow depths. On the other hand, the kinematic rupture model of the Bonin Islands earthquake, which occurred in a region lacking seismicity over the past forty years according to the GCMT catalog, exhibits an energetic rupture within a 35 km by 30 km slip patch and a high stress drop of 24 MPa. It is of interest to note that although complex rupture patterns are allowed to match the observations, the inverted slip distributions of these two earthquakes are simple enough to be approximated as the summation of a few circular/elliptical slip patches. Thus, we subsequently investigate their dynamic rupture models. We use a simple modelling approach in which we assume that the dynamic rupture propagation obeys a slip-weakening friction law, and we describe the distribution of stress and friction on the fault as a set of elliptical patches. We will constrain the three dynamic parameters, namely the yield stress, the background stress prior to rupture and the slip-weakening distance, as well as the shape of the elliptical patches, directly from teleseismic body wave observations. The study would help us

  18. Insights into the relationship between surface and subsurface activity from mechanical modeling of the 1992 Landers M7.3 earthquake

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; Pollard, D. D.

    2009-12-01

    Multi-fault, strike-slip earthquakes have proved difficult to incorporate into seismic hazard analyses due to the difficulty of determining the probability of these ruptures, despite collection of extensive data associated with such events. Modeling the mechanical behavior of these complex ruptures contributes to a better understanding of their occurrence by elucidating the relationship between surface and subsurface earthquake activity along transform faults. This insight is especially important for hazard mitigation, as multi-fault systems can produce earthquakes larger than those associated with any one fault involved. We present a linear elastic, quasi-static model of the southern portion of the 28 June 1992 Landers earthquake built in the boundary element software program Poly3D. This event did not rupture the extent of any one previously mapped fault, but trended 80 km N and NW across segments of five sub-parallel, N-S and NW-SE striking faults. At M7.3, the earthquake was larger than the potential earthquakes associated with the individual faults that ruptured. The model extends from the Johnson Valley Fault, across the Landers-Kickapoo Fault, to the Homestead Valley Fault, using data associated with a six-week time period following the mainshock. It honors the complex surface deformation associated with this earthquake, which was well exposed in the desert environment and mapped extensively in the field and from aerial photos in the days immediately following the earthquake. Thus, the model incorporates the non-linearity and segmentation of the main rupture traces, the irregularity of fault slip distributions, and the associated secondary structures such as strike-slip splays and thrust faults. Interferometric Synthetic Aperture Radar (InSAR) images of the Landers event provided the first satellite images of ground deformation caused by a single seismic event and provide constraints on off-fault surface displacement in this six-week period. Insight is gained

  19. Transient triggering of near and distant earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Blanpied, M.L.; Beeler, N.M.

    1997-01-01

    We demonstrate qualitatively that frictional instability theory provides a context for understanding how earthquakes may be triggered by transient loads associated with seismic waves from near and distant earthquakes. We assume that earthquake triggering is a stick-slip process and test two hypotheses about the effect of transients on the timing of instabilities using a simple spring-slider model and a rate- and state-dependent friction constitutive law. A critical triggering threshold is implicit in such a model formulation. Our first hypothesis is that transient loads lead to clock advances; i.e., transients hasten the time of earthquakes that would have happened eventually due to constant background loading alone. Modeling results demonstrate that transient loads do lead to clock advances and that the triggered instabilities may occur after the transient has ceased (i.e., triggering may be delayed). These simple "clock-advance" models predict complex relationships between the triggering delay, the clock advance, and the transient characteristics. The triggering delay and the degree of clock advance both depend nonlinearly on when in the earthquake cycle the transient load is applied. This implies that the stress required to bring about failure does not depend linearly on loading time, even when the fault is loaded at a constant rate. The timing of instability also depends nonlinearly on the transient loading rate, with faster rates more rapidly hastening instability. This implies that higher-frequency and/or longer-duration seismic waves should increase the amount of clock advance. These modeling results and simple calculations suggest that near (tens of kilometers) small/moderate earthquakes and remote (thousands of kilometers) earthquakes with magnitudes 2 to 3 units larger may be equally effective at triggering seismicity. Our second hypothesis is that some triggered seismicity represents earthquakes that would not have happened without the transient load (i
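
    The clock-advance experiment can be illustrated with a quasi-static spring-slider obeying rate- and state-dependent friction (aging law): load at a constant rate, apply a small stress step partway through the cycle, and compare failure times. The parameter values below are illustrative only and are not those of the study.

      # Quasi-static rate-and-state spring-slider: measure the clock advance caused
      # by a transient stress step applied at mid-cycle. All values are illustrative.
      import numpy as np

      sigma, mu0, a, b, Dc = 50e6, 0.6, 0.005, 0.010, 1e-4   # Pa, -, -, -, m
      V0, Vload, k = 1e-6, 1e-9, 2e9                          # m/s, m/s, Pa/m (k < k_critical)
      Vfail = 1e-3                                            # slip rate defining "failure"

      def time_to_failure(pulse_amp=0.0, pulse_time=None, t_max=1e8):
          theta = Dc / Vload        # state healed to the load-rate steady-state value (assumption)
          V = 1e-12                 # nearly locked just after the previous event
          tau = sigma * (mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc))
          t, pulsed = 0.0, False
          while t < t_max:
              dt = min(0.01 * theta, 0.01 * Dc / V)
              tau += k * (Vload - V) * dt               # elastic loading by the driver
              if pulse_time is not None and not pulsed and t >= pulse_time:
                  tau += pulse_amp                      # instantaneous transient stress step
                  pulsed = True
              theta += (1.0 - V * theta / Dc) * dt      # aging-law state evolution
              V = V0 * np.exp((tau / sigma - mu0 - b * np.log(V0 * theta / Dc)) / a)
              t += dt
              if V > Vfail:
                  return t
          return np.inf

      t_ref = time_to_failure()
      t_pert = time_to_failure(pulse_amp=0.1e6, pulse_time=0.5 * t_ref)   # 0.1 MPa step
      print(f"failure without transient after {t_ref/86400:.1f} days; "
            f"a 0.1 MPa step at mid-cycle advances failure by {(t_ref - t_pert)/86400:.2f} days")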

  20. Continuing Megathrust Earthquake Potential in northern Chile after the 2014 Iquique Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Herman, M. W.; Barnhart, W. D.; Furlong, K. P.; Riquelme, S.; Benz, H.; Bergman, E.; Barrientos, S. E.; Earle, P. S.; Samsonov, S. V.

    2014-12-01

    The seismic gap theory, which identifies regions of elevated hazard based on a lack of recent seismicity in comparison to other portions of a fault, has successfully explained past earthquakes and is useful for qualitatively describing where future large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which until recently had not ruptured in a megathrust earthquake since a M~8.8 event in 1877. On April 1 2014, a M 8.2 earthquake occurred within this northern Chile seismic gap, offshore of the city of Iquique; the size and spatial extent of the rupture indicate it was not the earthquake that had been anticipated. Here, we present a rapid assessment of the seismotectonics of the March-April 2014 seismic sequence offshore northern Chile, including analyses of earthquake (fore- and aftershock) relocations, moment tensors, finite fault models, moment deficit calculations, and cumulative Coulomb stress transfer calculations over the duration of the sequence. This ensemble of information allows us to place the current sequence within the context of historic seismicity in the region, and to assess areas of remaining and/or elevated hazard. Our results indicate that while accumulated strain has been released for a portion of the northern Chile seismic gap, significant sections have not ruptured in almost 150 years. These observations suggest that large-to-great sized megathrust earthquakes will occur north and south of the 2014 Iquique sequence sooner than might be expected had the 2014 events ruptured the entire seismic gap.

  1. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  2. Flood Simulation Using WMS Model in Small Watershed after Strong Earthquake -A Case Study of Longxihe Watershed, Sichuan province, China

    NASA Astrophysics Data System (ADS)

    Guo, B.

    2017-12-01

    Mountain watersheds in Western China are prone to flash floods. The Wenchuan earthquake on May 12, 2008 caused widespread surface destruction and frequent landslides and debris flows, which further exacerbated flash flood hazards. Two giant torrent and debris flows occurred due to heavy rainfall after the earthquake, one on August 13, 2010 and the other on August 18, 2010. Flash flood reduction and risk assessment are the key issues in post-disaster reconstruction. Hydrological prediction models are important and cost-efficient mitigation tools that are widely applied. In this paper, hydrological observations and simulations using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China. The hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate well the runoff process of small watersheds in mountainous areas. This methodology can be used in other earthquake-affected areas for risk assessment and to predict the magnitude of flash floods. Key Words: Rainfall-runoff modeling. Remote Sensing. Earthquake. WMS.

  3. A physical model for earthquakes. I - Fluctuations and interactions. II - Application to southern California

    NASA Technical Reports Server (NTRS)

    Rundle, John B.

    1988-01-01

    The idea that earthquakes represent a fluctuation about the long-term motion of plates is expressed mathematically through the fluctuation hypothesis, under which all physical quantities that pertain to the occurrence of earthquakes are required to depend on the difference between the present state of slip on the fault and its long-term average. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state, and that the latter states are stable for long intervals of time. For long enough faults, the dynamics are evidently chaotic. The methods developed are then used to construct a detailed model for earthquake dynamics in southern California. The result is a set of slip-time histories for all the major faults, which are similar to data obtained by geological trenching studies. Although there is an element of periodicity to the events, the patterns shift, change and evolve with time. Time scales for pattern evolution seem to be of the order of a thousand years for average recurrence intervals of about a hundred years.

  4. Sensitivity of Coulomb stress changes to slip models of source faults: A case study for the 2011 Mw 9.0 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.

    2017-12-01

    Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, realistic, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient, and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models. Based on those slip models, we compute the coseismic Coulomb stress changes induced by this mainshock. Our results indicate that nearby Coulomb stress changes for each slip model can be quite different, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nanki, Tonankai and Tokai are insignificant, only approximately 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.
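
    For reference, a standard way to evaluate the Coulomb stress changes discussed above (a textbook formulation, not quoted from the paper) resolves the stress perturbation onto the receiver fault as

        \[
        \Delta\mathrm{CFS} \;=\; \Delta\tau \;+\; \mu'\,\Delta\sigma_n, \qquad \mu' = \mu\,(1-B),
        \]

    where \(\Delta\tau\) is the shear stress change in the slip direction of the receiver fault, \(\Delta\sigma_n\) is the normal stress change (positive for unclamping), \(\mu\) is the friction coefficient, and \(B\) is Skempton's coefficient; a positive \(\Delta\mathrm{CFS}\) moves the receiver fault toward failure, which is the sense in which stress triggering zones and stress shadows are distinguished above.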

  5. Network structure, topology, and dynamics in generalized models of synchronization

    NASA Astrophysics Data System (ADS)

    Lerman, Kristina; Ghosh, Rumi

    2012-08-01

    The structure of a network is a product of both its topology and the interactions between its nodes. We explore this claim using the paradigm of distributed synchronization in a network of coupled oscillators. As the network evolves to a global steady state, nodes synchronize in stages, revealing the network's underlying community structure. Traditional models of synchronization assume that interactions between nodes are mediated by a conservative process similar to diffusion. However, social and biological processes are often nonconservative. We propose a model of synchronization in a network of oscillators coupled via nonconservative processes. We study the dynamics of synchronization in synthetic and real-world networks and show that the traditional and nonconservative models of synchronization reveal different structures within the same network.
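
    As a toy illustration of the conservative-versus-nonconservative distinction drawn above (and not the specific interaction operators used by the authors), the sketch below relaxes a state vector on a small graph with a diffusion-like graph-Laplacian operator, which conserves the total state, and with a row-normalized variant, which does not; both reach a synchronized state, but the conserved quantity differs.

        # Linearized "synchronization" on a toy graph: conservative (graph Laplacian)
        # versus a simple nonconservative (row-normalized) coupling operator.
        import numpy as np

        A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)  # toy 4-node graph
        D = np.diag(A.sum(axis=1))
        L_cons = D - A                                                   # column sums zero: total conserved
        L_noncons = np.eye(len(A)) - A / A.sum(axis=1, keepdims=True)    # random-walk form: not conservative

        x0 = np.array([0.0, 0.0, 0.0, 1.0])
        for name, L in [("conservative", L_cons), ("nonconservative", L_noncons)]:
            y, dt = x0.copy(), 0.01
            for _ in range(2000):
                y = y - dt * L @ y                                       # forward-Euler relaxation
            print(name, "final state:", np.round(y, 3), "total:", round(float(y.sum()), 3))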

  6. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  7. GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.

    2015-12-01

    The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena started to be considered by the GPS community as a way to mitigate GPS signal degradation over territories where earthquakes are in preparation. The question is still open whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies proved that ionospheric anomalies registered before earthquakes are initiated by processes in the atmospheric boundary layer over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecasting is possible only if we consider the final stage of earthquake preparation in a multidimensional space where every dimension is one of many precursors in an ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland) within the framework of the ISSI and ESA projects to identify the ionospheric precursors. They are also useful for determining all three parameters necessary for an earthquake forecast: the epicenter position of the impending earthquake, the expected time, and the magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. The results obtained by different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.

  8. About Block Dynamic Model of Earthquake Source.

    NASA Astrophysics Data System (ADS)

    Gusev, G. A.; Gufeld, I. L.

    One may state that little progress has been made in earthquake prediction research. Short-term prediction (on a diurnal time scale, with the location also predicted) is what has practical meaning. The failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in fault zones. Geological and geophysical monitoring provides the basis for treating the geological medium as an open, dissipative block system with limiting energy saturation. Variations of the volume stressed state close to critical states are associated with the interaction of an inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, an interaction that is more pronounced in the faults. In the background state, small blocks of the fault medium accommodate the sliding of large blocks along the faults, but under considerable variations of the ascending gas streams the formation of bound chains of small blocks becomes possible, so that a bound state of large blocks may result (an earthquake source). Using these notions, we recently proposed a dynamical earthquake-source model based on a generalized chain of nonlinearly coupled oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and various external actions imitating physical processes in a real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered, which permitted study of the FPU recurrence (return to the initial state); probabilistic properties of the quasi-periodic motion were found. The problem of chain decay due to nonlinearity and external perturbations was posed, the thresholds and the dependence of the chain lifetime were studied, and large fluctuations of the lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strong-dissipation case, when oscillatory motion is suppressed, specific effects are discovered. For noise action and constantly arising
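
    A minimal numerical sketch of the kind of chain invoked here is given below: a damped, weakly driven Fermi-Pasta-Ulam (FPU-beta) chain with fixed ends. The bond nonlinearity, damping, drive, and all parameter values are illustrative placeholders rather than the generalized, inhomogeneous model of the paper.

        # Damped, weakly driven FPU-beta chain with fixed ends (illustrative only).
        import numpy as np

        N, beta, gamma, dt, steps = 32, 1.0, 0.05, 0.01, 20000
        m = np.ones(N)                        # masses (could be made inhomogeneous)
        x, v = np.zeros(N), np.zeros(N)
        x[N // 2] = 0.5                       # initial localized displacement

        def accel(x, v, t):
            d = np.diff(np.concatenate(([0.0], x, [0.0])))        # N+1 bond extensions, fixed walls
            f = d + beta * d**3                                   # linear + cubic (FPU-beta) bond forces
            drive = 0.01 * np.sin(0.3 * t) * (np.arange(N) == 0)  # weak external action on one end
            return (f[1:] - f[:-1] - gamma * v + drive) / m

        E = []
        for n in range(steps):                                    # velocity-Verlet time stepping
            t = n * dt
            a = accel(x, v, t)
            v_half = v + 0.5 * dt * a
            x = x + dt * v_half
            v = v_half + 0.5 * dt * accel(x, v_half, t + dt)
            if n % 1000 == 0:
                d = np.diff(np.concatenate(([0.0], x, [0.0])))
                E.append(0.5 * (m * v**2).sum() + (0.5 * d**2 + 0.25 * beta * d**4).sum())
        print("total energy sampled every 1000 steps:", np.round(E, 4))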

  9. How fault geometry controls earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly-curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  10. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Other data, from ocean buoys and similar sources, are sometimes also included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and to results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  11. UCERF3: A new earthquake forecast for California's complex fault system

    USGS Publications Warehouse

    Field, Edward H.; ,

    2015-01-01

    With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF3" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  12. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake

    NASA Astrophysics Data System (ADS)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

    Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low-frequency earthquakes, slow slip events, and other modes of fault slip. These phenomena represent modes of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. The regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, whereas slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria and faces considerable earthquake risk. The Sofia seismic zone is located in South-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event to occur in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (of 95 years) for moderate events. Moreover, a reduced number of small earthquakes have also been registered in the recent past. The Mw5.6 earthquake was largely felt on the territory of Bulgaria and neighbouring countries. No casualties or severe injuries have been reported. Mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a

  13. The physics of an earthquake

    NASA Astrophysics Data System (ADS)

    McCloskey, John

    2008-03-01

    The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.

  14. Long-period effects of the Denali earthquake on water bodies in the Puget Lowland: Observations and modeling

    USGS Publications Warehouse

    Barberopoulou, A.; Qamar, A.; Pratt, T.L.; Steele, W.P.

    2006-01-01

    Analysis of strong-motion instrument recordings in Seattle, Washington, resulting from the 2002 Mw 7.9 Denali, Alaska, earthquake reveals that amplification in the 0.2-to 1.0-Hz frequency band is largely governed by the shallow sediments both inside and outside the sedimentary basins beneath the Puget Lowland. Sites above the deep sedimentary strata show additional seismic-wave amplification in the 0.04- to 0.2-Hz frequency range. Surface waves generated by the Mw 7.9 Denali, Alaska, earthquake of 3 November 2002 produced pronounced water waves across Washington state. The largest water waves coincided with the area of largest seismic-wave amplification underlain by the Seattle basin. In the current work, we present reports that show Lakes Union and Washington, both located on the Seattle basin, are susceptible to large water waves generated by large local earthquakes and teleseisms. A simple model of a water body is adopted to explain the generation of waves in water basins. This model provides reasonable estimates for the water-wave amplitudes in swimming pools during the Denali earthquake but appears to underestimate the waves observed in Lake Union.

  15. Pre-earthquake Magnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J. A.; Freund, F. T.

    2015-12-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.
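
    A one-dimensional sketch of the drift-diffusion picture invoked above is given below: a packet of positive charge carriers drifts and diffuses in a constant field while recombining, and the resulting transient net current is converted to a magnetic field at the observer with a crude long-wire approximation. Every parameter (diffusivity, mobility, field, recombination time, geometry, observer distance) is an illustrative placeholder, not a value from the cited model, and the full coupling to Maxwell's equations is not attempted here.

        # 1-D drift-diffusion of a carrier packet: the decaying net current gives a
        # single-signed (unipolar) magnetic transient in a long-wire approximation.
        import numpy as np

        nx, L = 400, 100.0                         # grid points, domain length (m)
        dx = L / nx
        D, mu_h, E = 1.0, 0.05, 10.0               # diffusivity (m^2/s), mobility (m^2/V/s), field (V/m)
        v = mu_h * E                               # drift velocity (m/s)
        tau = 20.0                                 # carrier recombination time (s)
        dt = 0.4 * min(dx**2 / (2 * D), dx / v)    # explicit-scheme stability limit
        q, area = 1.6e-19, 1.0                     # carrier charge (C), column cross-section (m^2)
        mu0, r = 4e-7 * np.pi, 2000.0              # vacuum permeability, observer distance (m)

        x = (np.arange(nx) + 0.5) * dx
        p = 1e15 * np.exp(-((x - 20.0) / 2.0) ** 2)     # initial carrier packet (1/m^3)

        B = []
        for step in range(4000):
            flux = v * p - D * np.gradient(p, dx)       # drift + diffusion particle flux (1/m^2/s)
            p = p - dt * np.gradient(flux, dx) - dt * p / tau
            p[0] = p[-1] = 0.0                          # absorbing boundaries
            I = q * area * flux.mean()                  # net current along the column (A)
            if step % 400 == 0:
                B.append(mu0 * I / (2 * np.pi * r))     # long-wire estimate of B at distance r (T)
        print("B(t) samples (T):", ["%.2e" % b for b in B])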

  16. Fractal dynamics of earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bak, P.; Chen, K.

    1995-05-01

    Many objects in nature, from mountain landscapes to electrical breakdown and turbulence, have a self-similar fractal spatial structure. It seems obvious that to understand the origin of self-similar structures, one must understand the nature of the dynamical processes that created them: temporal and spatial properties must necessarily be completely interwoven. This is particularly true for earthquakes, which have a variety of fractal aspects. The distribution of energy released during earthquakes is given by the Gutenberg-Richter power law. The distribution of epicenters appears to be fractal with dimension D ≈ 1-1.3. The number of aftershocks decays as a function of time according to the Omori power law. There have been several attempts to explain the Gutenberg-Richter law by starting from a fractal distribution of faults or stresses. But this is a hen-and-egg approach: to explain the Gutenberg-Richter law, one assumes the existence of another power law, the fractal distribution. The authors present results of a simple stick-slip model of earthquakes, which evolves to a self-organized critical state. Emphasis is on demonstrating that empirical power laws for earthquakes indicate that the Earth's crust is at the critical state, with no typical time, space, or energy scale. Of course the model is tremendously oversimplified; however, in analogy with equilibrium phenomena they do not expect criticality to depend on details of the model (universality).
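
    The flavor of such a stick-slip model can be conveyed in a few lines of code. The sketch below is a generic sandpile-style cellular automaton (uniform slow loading, threshold toppling to nearest neighbours, open dissipative boundaries), not the authors' specific model; its avalanche-size counts fall off roughly as a power law, the analogue of the Gutenberg-Richter relation discussed above.

        # Generic sandpile-style stick-slip automaton; avalanche sizes ~ power law.
        import numpy as np
        rng = np.random.default_rng(0)

        n, z_c = 24, 4                                         # lattice size, toppling threshold
        z = rng.integers(0, z_c, size=(n, n))                  # initial "stress" field
        sizes = []

        for _ in range(30000):
            i, j = rng.integers(0, n, 2)
            z[i, j] += 1                                       # slow, uniform "tectonic" loading
            stack, size = ([(i, j)] if z[i, j] >= z_c else []), 0
            while stack:
                a, b = stack.pop()
                if z[a, b] < z_c:
                    continue
                z[a, b] -= 4                                   # local "slip": shed stress to neighbours
                size += 1
                if z[a, b] >= z_c:
                    stack.append((a, b))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ai, bj = a + di, b + dj
                    if 0 <= ai < n and 0 <= bj < n:            # stress crossing the edge is lost
                        z[ai, bj] += 1
                        if z[ai, bj] >= z_c:
                            stack.append((ai, bj))
            if size:
                sizes.append(size)

        hist, edges = np.histogram(sizes, bins=np.logspace(0, 3, 13))
        print(np.column_stack((edges[:-1].round().astype(int), hist)))   # log-binned avalanche counts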

  17. Development of the Elastic Rebound Strike-slip (ERS) Fault Model for Teaching Earthquake Science to Non-science Students

    NASA Astrophysics Data System (ADS)

    Glesener, G. B.; Peltzer, G.; Stubailo, I.; Cochran, E. S.; Lawrence, J. F.

    2009-12-01

    The Modeling and Educational Demonstrations Laboratory (MEDL) at the University of California, Los Angeles has developed a fourth version of the Elastic Rebound Strike-slip (ERS) Fault Model to be used to educate students and the general public about the process and mechanics of earthquakes from strike-slip faults. The ERS Fault Model is an interactive hands-on teaching tool which produces failure on a predefined fault embedded in an elastic medium, with adjustable normal stress. With the addition of an accelerometer sensor, called the Joy Warrior, the user can experience what it is like for a field geophysicist to collect and observe ground shaking data from an earthquake without having to experience a real earthquake. Two knobs on the ERS Fault Model control the normal and shear stress on the fault. Adjusting the normal stress knob will increase or decrease the friction on the fault. The shear stress knob displaces one side of the elastic medium parallel to the strike of the fault, resulting in changing shear stress on the fault surface. When the shear stress exceeds the threshold defined by the static friction of the fault, an earthquake on the model occurs. The accelerometer sensor then sends the data to a computer where the shaking of the model due to the sudden slip on the fault can be displayed and analyzed by the student. The experiment clearly illustrates the relationship between earthquakes and seismic waves. One of the major benefits of using the ERS Fault Model in undergraduate courses is that it helps to connect non-science students with the work of scientists. When students who are not accustomed to scientific thought are able to experience the scientific process first hand, a connection is made between the scientists and students. Connections like this might inspire a student to become a scientist, or promote the advancement of scientific research through public policy.

  18. Demonstration of the Cascadia G‐FAST geodetic earthquake early warning system for the Nisqually, Washington, earthquake

    USGS Publications Warehouse

    Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan

    2016-01-01

    A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two-stage approach to EEW: (1) detection and initial characterization using strong-motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large-magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G-FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G-FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor-driven finite-fault-slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G-FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey's ShakeAlert EEW system.

  19. Measuring molecular parity nonconservation using nuclear-magnetic-resonance spectroscopy

    DOE PAGES

    Eills, J.; Blanchard, J. W.; Bougas, L.; ...

    2017-10-30

    Here, the weak interaction does not conserve parity and therefore induces energy shifts in chiral enantiomers that should in principle be detectable in molecular spectra. Unfortunately, the magnitudes of the expected shifts are small, and in spectra of a mixture of enantiomers the energy shifts are not resolvable. We propose a nuclear-magnetic-resonance (NMR) experiment in which we titrate the chirality (enantiomeric excess) of a solvent and measure the diastereomeric splitting in the spectra of a chiral solute in order to search for an anomalous offset due to parity nonconservation (PNC). We present a proof-of-principle experiment in which we search for PNC in the 13C resonances of small molecules, and use the 1H resonances, which are insensitive to PNC, as an internal reference. We set a constraint on molecular PNC in 13C chemical shifts at a level of 10⁻⁵ ppm, and provide a discussion of important considerations in the search for molecular PNC using NMR spectroscopy.

  1. Bi-directional volcano-earthquake interaction at Mauna Loa Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Amelung, F.

    2004-12-01

    At Mauna Loa volcano, Hawaii, large-magnitude earthquakes occur mostly at the west flank (Kona area), at the southeast flank (Hilea area), and at the east flank (Kaoiki area). Eruptions at Mauna Loa occur mostly at the summit region and along fissures at the southwest rift zone (SWRZ), or at the northeast rift zone (NERZ). Although historic earthquakes and eruptions at these zones appear to correlate in space and time, the mechanisms and implications of eruption-earthquake interaction were not clear. Our analysis of available factual data reveals the high statistical significance of eruption-earthquake pairs, with a random probability of 5 to 15 percent. We clarify this correlation with the help of elastic stress-field models, where (i) we simulate earthquakes and calculate the resulting normal stress change at volcanically active zones of Mauna Loa, and (ii) we simulate intrusions in Mauna Loa and calculate the Coulomb stress change at the active fault zones. Our models suggest that Hilea earthquakes encourage dike intrusion in the SWRZ, Kona earthquakes encourage dike intrusion at the summit and in the SWRZ, and Kaoiki earthquakes encourage dike intrusion in the NERZ. Moreover, a dike in the SWRZ encourages earthquakes in the Hilea and Kona areas. A dike in the NERZ may encourage and discourage earthquakes in the Hilea and Kaoiki areas. The modeled stress change patterns coincide remarkably with the patterns of several historic eruption-earthquake pairs, clarifying the mechanisms of bi-directional volcano-earthquake interaction for Mauna Loa. The results imply that at Mauna Loa volcanic activity influences the timing and location of earthquakes, and that earthquakes influence the timing, location and the volume of eruptions. In combination with near real-time geodetic and seismic monitoring, these findings may improve volcano-tectonic risk assessment.

  2. Modelling the Time Dependence of Frequency Content of Long-period Volcanic Earthquakes

    NASA Astrophysics Data System (ADS)

    Jousset, P.; Neuberg, J. W.

    2001-12-01

    Broad-band seismic networks provide a powerful tool for the observation and analysis of volcanic earthquakes. The amplitude spectrogram allows us to follow the frequency content of these signals with time. Observed amplitude spectrograms of long-period volcanic earthquakes display distinct spectral lines sometimes varying by several Hertz over time spans of minutes to hours. We first present several examples associated with various phases of volcanic activity at Soufrière Hills volcano, Montserrat. Then, we present and discuss two mechanisms to explain such frequency changes in the spectrograms: (i) change of physical properties within the magma and, (ii) change in the triggering frequency of repeated sources within the conduit. We use 2D and 3D finite-difference modelling methods to compute the propagation of seismic waves in simplified volcanic structures: (i) we model the gliding spectral lines by introducing continuously changing magma properties during the wavefield computation; (ii) we explore the resulting pressure distribution within the conduit and its potential role in triggering further events. We obtain constraints on both amplitude and time-scales for changes of magma properties that are required to model gliding lines in amplitude spectrograms.

  3. Crustal deformation in great California earthquake cycles

    NASA Technical Reports Server (NTRS)

    Li, Victor C.; Rice, James R.

    1986-01-01

    Periodic crustal deformation associated with repeated strike-slip earthquakes is computed for the following model: A depth L (less than or similar to H) extending downward from the Earth's surface at a transform boundary between uniform elastic lithospheric plates of thickness H is locked between earthquakes. It slips an amount consistent with remote plate velocity V_pl after each lapse of earthquake cycle time T_cy. Lower portions of the fault zone at the boundary slip continuously so as to maintain constant resistive shear stress. The plates are coupled at their base to a Maxwellian viscoelastic asthenosphere through which steady deep-seated mantle motions, compatible with plate velocity, are transmitted to the surface plates. The coupling is described approximately through a generalized Elsasser model. It is argued that the model gives a more realistic physical description of tectonic loading, including the time dependence of deep slip and crustal stress build-up throughout the earthquake cycle, than do simpler kinematic models in which loading is represented as imposed uniform dislocation slip on the fault below the locked zone.
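
    For context, the simpler kinematic baseline mentioned at the end of this abstract (uniform steady slip at the plate rate below a locking depth on a vertical strike-slip fault) yields the familiar interseismic surface-velocity profile

        \[
        v(x) \;=\; \frac{V_{\mathrm{pl}}}{\pi}\,\arctan\!\left(\frac{x}{D}\right),
        \]

    where \(x\) is the distance from the fault trace and \(D\) the locking depth; the model summarized above replaces this purely kinematic loading with time-dependent deep slip and viscoelastic coupling to the asthenosphere through a generalized Elsasser model.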

  4. Tsunami Modeling to Validate Slip Models of the 2007 Mw 8.0 Pisco Earthquake, Central Peru

    NASA Astrophysics Data System (ADS)

    Ioualalen, M.; Perfettini, H.; Condo, S. Yauri; Jimenez, C.; Tavera, H.

    2013-03-01

    Following the August 15th, 2007, Mw 8.0 Pisco earthquake in central Peru, Sladen et al. (J Geophys Res 115: B02405, 2010) derived several slip models of this event. They inverted teleseismic data together with geodetic (InSAR) measurements to look for the co-seismic slip distribution on the fault plane, considering those data sets separately or jointly. But how close to the real slip distribution are those inverted slip models? To answer this crucial question, the authors generated tsunami records based on their slip models and compared them to DART buoy tsunami records and available runup data. Such an approach requires a robust and accurate tsunami model (non-linear, dispersive, accurate bathymetry and topography, etc.); otherwise the differences between the data and the model may be attributed to the slip models themselves even though they arise from an incomplete tsunami simulation. The accuracy of a numerical tsunami simulation strongly depends, among other factors, on two important constraints: (i) a fine computational grid (and thus the bathymetry and topography data sets used), which is not always available, unfortunately, and (ii) a realistic tsunami propagation model including dispersion. Here, we extend Sladen's work using newly available data, namely a tide gauge record at Callao (Lima harbor) and the Chilean DART buoy record, while considering a complete set of runup data along with a more realistic tsunami numerical model that accounts for dispersion, and also considering a fine-resolution computational grid, which is essential. Through these accurate numerical simulations we infer that the InSAR-based model is in better agreement with the tsunami data; the case of the Pisco earthquake thus indicates that geodetic data seem essential to recover the final co-seismic slip distribution on the rupture plane. Slip models based on teleseismic data are unable to describe the observed tsunami, suggesting that a significant amount of co-seismic slip may have

  5. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely (with a 31 percent probability) fault in the Bay Area to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services. Sixty other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U. S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  6. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed and an example with empirical data is presented in the current contribution.
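
    In standard notation (not copied from the paper), the truncated exponential magnitude density referred to above is

        \[
        f_{\mathrm{TED}}(m) \;=\; \frac{\beta\,e^{-\beta(m-m_{\min})}}{1-e^{-\beta(m_{\max}-m_{\min})}}, \qquad m_{\min}\le m\le m_{\max},
        \]

    and the weakness noted above is that a mixture \(w\,f_{\mathrm{TED}}(m; m_{\max,1}) + (1-w)\,f_{\mathrm{TED}}(m; m_{\max,2})\) with a common \(\beta\) is in general not itself a TED, so the family is not closed under combining or splitting seismic regions; the GTED is constructed to remove this closure problem.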

  7. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.

    2016-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC on September 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as

  8. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    NASA Astrophysics Data System (ADS)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a "really big one" will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago, than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write and display spreadsheets, like those shown below. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record, or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of
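
    The two choices described above (which data, which model) can be mirrored in a few lines of spreadsheet-style code. In the sketch below the recurrence intervals, elapsed time since the last great earthquake, and coefficient of variation are illustrative placeholders, not the paleoseismic values behind the published estimates.

        # How data choice (full record vs. recent cluster) and model choice
        # (time-independent Poisson vs. lognormal renewal) change a 50-year probability.
        import math
        from scipy import stats

        horizon = 50.0            # forecast window, years
        elapsed = 320.0           # years since the last great earthquake (AD 1700)
        mean_recurrence = {"full 10,000-yr record": 530.0, "recent cluster only": 330.0}

        for label, mu in mean_recurrence.items():
            p_poisson = 1.0 - math.exp(-horizon / mu)             # time-independent model
            cv = 0.5                                              # coefficient of variation (assumed)
            sigma = math.sqrt(math.log(1.0 + cv**2))
            scale = mu / math.sqrt(1.0 + cv**2)                   # lognormal with mean mu
            dist = stats.lognorm(s=sigma, scale=scale)
            survive = dist.sf(elapsed)
            p_renewal = ((dist.sf(elapsed) - dist.sf(elapsed + horizon)) / survive
                         if survive > 0 else 1.0)                 # conditional on having survived
            print(f"{label:25s}  Poisson: {p_poisson:.2f}   lognormal renewal: {p_renewal:.2f}")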

  9. A comparison study of 2006 Java earthquake and other Tsunami earthquakes

    NASA Astrophysics Data System (ADS)

    Ji, C.; Shao, G.

    2006-12-01

    We revise the slip process of the July 17, 2006 Java earthquake by jointly inverting teleseismic body waves, long-period surface waves, and the broadband records at Christmas Island (XMIS), which is 220 km away from the hypocenter and so far the closest observation for a tsunami earthquake. Compared with previous studies, our approach considers the amplitude variations of surface waves with source depth as well as the contribution of the ScS phase, which usually has amplitudes comparable with that of the direct S phase for such low-angle thrust earthquakes. The fault dip angles are also refined using the Love waves observed along the fault strike direction. Our results indicate that the 2006 event initiated at a depth around 12 km and ruptured unilaterally southeast for 150 s with a speed of 1.0 km/s. The revised fault dip is only about 6 degrees, smaller than the Harvard CMT (10.5 degrees) but consistent with that of the 1994 Java earthquake. The smaller fault dip results in a larger moment magnitude (Mw=7.9) for a PREM earth, though it is dependent on the velocity structure used. After verification with 3D SEM forward simulations, we compare the inverted result with the revised slip models of the 1994 Java and 1992 Nicaragua earthquakes derived using the same wavelet-based finite fault inversion methodology.

  10. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6Vs except in the depth range of 530 to 600 km. This is consistent with the depth
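
    The back-projection idea used above can be sketched compactly: for each candidate source grid point, shift every station record by its predicted travel time and stack; coherent energy produces a peak in stack power at the radiating region. The travel-time calculation below uses a constant velocity purely for illustration, not a real 1-D Earth model, and real applications use tapered, aligned, and normalized waveforms.

        # Schematic back-projection stack over a grid of candidate source points.
        import numpy as np

        def back_project(waveforms, station_xyz, grid_xyz, dt, vp=10.0):
            """waveforms: (n_sta, n_t) array; returns stack power for every grid point."""
            n_sta, n_t = waveforms.shape
            power = np.zeros(len(grid_xyz))
            for g, gp in enumerate(grid_xyz):
                stack = np.zeros(n_t)
                for wf, sp in zip(waveforms, station_xyz):
                    shift = int(round(np.linalg.norm(sp - gp) / vp / dt))  # predicted travel time, in samples
                    shift = min(shift, n_t - 1)
                    stack[: n_t - shift] += wf[shift:]                     # align on the grid point and sum
                power[g] = np.sum(stack ** 2)                              # coherent energy -> large power
            return power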

  11. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes.

    PubMed

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a possible decrease in the expected number of casualties.
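
    The way such human-related factors can enter a casualty algorithm is illustrated by the small, hypothetical calculation below: a baseline (HAZUS-style) injury probability is converted to log-odds, shifted by the log odds ratio of each risk factor present, and converted back. The factor names and odds ratios are placeholders, not the meta-analysis estimates of the study.

        # Hypothetical logistic-regression adjustment of a baseline injury probability.
        import math

        def adjusted_probability(p_base, odds_ratios, present):
            """p_base: baseline injury probability; odds_ratios: factor -> odds ratio;
            present: factor -> 0/1 indicator for the exposed group."""
            logit = math.log(p_base / (1.0 - p_base))
            for factor, orr in odds_ratios.items():
                logit += math.log(orr) * present.get(factor, 0)   # shift log-odds by each factor present
            return 1.0 / (1.0 + math.exp(-logit))

        print(adjusted_probability(0.02,
                                   {"age>65": 2.0, "disability": 1.8, "low SES": 1.4},
                                   {"age>65": 1, "low SES": 1}))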

  12. Earthquake Damping Device for Steel Frame

    NASA Astrophysics Data System (ADS)

    Zamri Ramli, Mohd; Delfy, Dezoura; Adnan, Azlan; Torman, Zaida

    2018-04-01

    Structures such as buildings, bridges and towers are prone to collapse when natural phenomena like earthquakes occur. Therefore, many design codes are reviewed and new technologies are introduced to resist earthquake energy, especially in buildings, to avoid collapse. The tuned mass damper is one of the earthquake reduction products introduced on structures to minimise the earthquake effect. This study aims to analyse the effectiveness of a tuned mass damper through experimental work and finite element modelling. Comparisons are made between these two models under harmonic excitation. Based on the results, installing a tuned mass damper reduces the dynamic response of the frame, but only at certain input frequencies. At the highest input frequency applied, the tuned mass damper failed to reduce the responses. In conclusion, a proper damper design requires detailed analysis to ensure the design is sufficient for the location of the structure and the specific ground accelerations expected there.
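
    The frequency dependence reported above (reduction near the tuned frequency, little or no benefit elsewhere) can be illustrated with the steady-state harmonic response of a one-storey frame with and without a tuned mass damper; all structural parameters below are illustrative, not those of the tested steel frame.

        # Steady-state response per unit force of a SDOF frame with/without a TMD.
        import numpy as np

        M, K, zeta = 1000.0, 4.0e5, 0.02             # frame mass (kg), stiffness (N/m), damping ratio
        C = 2 * zeta * np.sqrt(K * M)
        mu = 0.05                                    # TMD mass ratio
        m = mu * M
        k = m * K / M                                # TMD tuned to the frame's natural frequency
        c = 2 * 0.1 * np.sqrt(k * m)

        def amp(omega, with_tmd):
            if not with_tmd:
                return abs(1.0 / (K + 1j * omega * C - omega**2 * M))
            A = np.array([[K + k + 1j * omega * (C + c) - omega**2 * M, -(k + 1j * omega * c)],
                          [-(k + 1j * omega * c), k + 1j * omega * c - omega**2 * m]])
            x = np.linalg.solve(A, np.array([1.0, 0.0]))
            return abs(x[0])                         # frame displacement per unit force amplitude

        wn = np.sqrt(K / M)
        for w in wn * np.array([0.5, 0.9, 1.0, 1.1, 1.5, 2.5]):
            print(f"omega/wn = {w/wn:4.2f}   no TMD: {amp(w, False):.2e}   with TMD: {amp(w, True):.2e}")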

  13. Nonconservation of lepton current and asymmetry of relic neutrinos

    NASA Astrophysics Data System (ADS)

    Dvornikov, M. S.; Semikoz, V. B.

    2017-05-01

    The neutrino asymmetry, $n_\nu - n_{\bar\nu}$, in the plasma of the early Universe generated both before and after the electroweak phase transition (EWPT) is calculated. It is well known that in the Standard Model the leptogenesis before the EWPT, in particular, for neutrinos, owes to the Abelian anomaly in a massless hypercharge field. At the same time, the generation of neutrino asymmetry in the Higgs phase after the EWPT has not been considered previously due to the absence of any quantum anomaly in an external electromagnetic field for such electroneutral particles as neutrinos, in contrast to the Adler anomaly for charged left- and right-handed massless electrons in the same electromagnetic field. Using the Boltzmann equation for neutrinos modified to include the Berry curvature term in momentum space, we establish a violation of the macroscopic neutrino current in the plasma after the EWPT and exactly reproduce the non-conservation of the lepton current in the symmetric phase before the EWPT that owes to the contribution of the triangle anomaly in an external hypercharge field but already without computing the corresponding Feynman diagrams. We apply the new kinetic equation to calculate the neutrino asymmetry by taking into account the Berry curvature and the electroweak interaction with plasma particles in the Higgs phase, including that after the neutrino decoupling in the absence of their collisions in the plasma. We find that this asymmetry is too small for observations. Thus, a difference between the relic neutrino and antineutrino densities, if it exists, must appear already in the symmetric phase of the early Universe before the EWPT.

  14. Pre-Earthquake Unipolar Electromagnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Freund, F.

    2013-12-01

    Transient ultralow frequency (ULF) electromagnetic (EM) emissions have been reported to occur before earthquakes [1,2]. They suggest powerful transient electric currents flowing deep in the crust [3,4]. Prior to the M=5.4 Alum Rock earthquake of Oct. 21, 2007 in California a QuakeFinder triaxial search-coil magnetometer located about 2 km from the epicenter recorded unusual unipolar pulses with the approximate shape of a half-cycle of a sine wave, reaching amplitudes up to 30 nT. The number of these unipolar pulses increased as the day of the earthquake approached. These pulses clearly originated around the hypocenter. The same pulses have since been recorded prior to several medium to moderate earthquakes in Peru, where they have been used to triangulate the location of the impending earthquakes [5]. To understand the mechanism of the unipolar pulses, we first have to address the question how single current pulses can be generated deep in the Earth's crust. Key to this question appears to be the break-up of peroxy defects in the rocks in the hypocenter as a result of the increase in tectonic stresses prior to an earthquake. We investigate the mechanism of the unipolar pulses by coupling the drift-diffusion model of semiconductor theory to Maxwell's equations, thereby producing a model describing the rock volume that generates the pulses in terms of electromagnetism and semiconductor physics. The system of equations is then solved numerically to explore the electromagnetic radiation associated with drift-diffusion currents of electron-hole pairs. [1] Sharma, A. K., P. A. V., and R. N. Haridas (2011), Investigation of ULF magnetic anomaly before moderate earthquakes, Exploration Geophysics 43, 36-46. [2] Hayakawa, M., Y. Hobara, K. Ohta, and K. Hattori (2011), The ultra-low-frequency magnetic disturbances associated with earthquakes, Earthquake Science, 24, 523-534. [3] Bortnik, J., T. E. Bleier, C. Dunson, and F. Freund (2010), Estimating the seismotelluric current

  15. Sedimentary Signatures of Submarine Earthquakes: Deciphering the Extent of Sediment Remobilization from the 2011 Tohoku Earthquake and Tsunami and 2010 Haiti Earthquake

    NASA Astrophysics Data System (ADS)

    McHugh, C. M.; Seeber, L.; Moernaut, J.; Strasser, M.; Kanamatsu, T.; Ikehara, K.; Bopp, R.; Mustaque, S.; Usami, K.; Schwestermann, T.; Kioka, A.; Moore, L. M.

    2017-12-01

    The 2004 Sumatra-Andaman Mw 9.3 and the 2011 Tohoku (Japan) Mw 9.0 earthquakes and tsunamis were huge geological events with major societal consequences. Both occurred along subduction boundaries and ruptured portions of those boundaries that had been deemed incapable of such events. Submarine strike-slip earthquakes, such as the 2010 Mw 7.0 event in Haiti, are smaller but may be closer to population centers and can be similarly catastrophic. Both classes of earthquakes remobilize sediment and leave distinct signatures in the geologic record through a wide range of processes that depend on both the environment and earthquake characteristics. Understanding them has the potential of greatly expanding the record of past earthquakes, which is critical for geohazard analysis. Recent events offer precious ground truth about the earthquakes, and short-lived radioisotopes offer invaluable tools to identify the sediments they remobilized. For the 2011 Mw 9.0 Japan earthquake, they document the spatial extent of remobilized sediment from water depths of 626 m on the forearc slope to trench depths of 8000 m. Subbottom profiles, multibeam bathymetry and 40 piston cores collected by the R/V Natsushima and R/V Sonne expeditions to the Japan Trench document multiple turbidites and high-density flows. Core tops enriched in excess 210Pb, 137Cs and 134Cs reveal sediment deposited by the 2011 Tohoku earthquake and tsunami. The thickest deposits (2 m) were documented on a mid-slope terrace and in the trench (4000-8000 m). Sediment was deposited on some terraces (600-3000 m) but shed from the steep forearc slope (3000-4000 m). The 2010 Haiti mainshock ruptured along the southern flank of Canal du Sud and triggered multiple nearshore sediment failures, generated turbidity currents and stirred fine sediment into suspension throughout this basin. A tsunami was modeled to stem from both the sediment failures and tectonics. Remobilized sediment was tracked with short-lived radioisotopes from the nearshore, slope, in fault basins including the

  16. Tsunami Simulations in the Western Makran Using Hypothetical Heterogeneous Source Models from World's Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser

    2018-03-01

    The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquake for several centuries. A possible explanation for this behavior is that this segment is currently locked, accumulating energy to generate possible great future earthquakes. Under this assumption, a hypothetical rupture area is considered in the western Makran to set up different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, the 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and the 2006 Kuril Islands Mw 8.3 events, are scaled into the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes. However, its effect diminishes from local tsunamis to regional and distant tsunamis. Among all the scenarios considered for the western Makran, only a tsunamigenic earthquake similar to the 2011 Tohoku-Oki event can produce a significant far-field tsunami and is considered the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that, along the entire Makran coast, the southeastern coast of Iran is the area most exposed to tsunami hazard.
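
    As an illustration of what scaling a slip model into a hypothetical rupture area can involve (a sketch under simple assumptions, not necessarily the authors' procedure), the function below resamples a published slip grid onto a target fault grid and rescales the amplitudes to a prescribed seismic moment; all numbers are placeholders.

```python
import numpy as np

def scale_slip_model(slip, target_shape, target_moment, rigidity, cell_area):
    """Resample a source slip grid onto a target rupture grid and rescale the slip
    so that the scenario matches a prescribed seismic moment.

    slip          : 2D array of slip values [m] from a published source model
    target_shape  : (rows, cols) of the hypothetical rupture grid
    target_moment : desired scalar seismic moment M0 [N m]
    rigidity      : shear modulus [Pa]
    cell_area     : area of one target subfault [m^2]
    """
    src_rows, src_cols = slip.shape
    rows, cols = target_shape
    # nearest-neighbour resampling of the slip pattern onto the target grid
    r_idx = np.arange(rows) * src_rows // rows
    c_idx = np.arange(cols) * src_cols // cols
    pattern = slip[np.ix_(r_idx, c_idx)]
    # rescale amplitudes so that mu * sum(slip) * cell_area equals the target moment
    current_moment = rigidity * pattern.sum() * cell_area
    return pattern * (target_moment / current_moment)

# Example: map a coarse 10 x 20 slip model onto a 25 x 50 hypothetical rupture grid
# (all numbers below are illustrative placeholders, not values from the record).
rng = np.random.default_rng(0)
source_slip = rng.gamma(2.0, 2.0, size=(10, 20))            # heterogeneous slip [m]
scenario = scale_slip_model(source_slip, (25, 50),
                            target_moment=4.0e21,            # roughly Mw 8.3
                            rigidity=30e9, cell_area=(10e3) ** 2)
print(scenario.mean(), scenario.max())
```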

  17. Tsunami Simulations in the Western Makran Using Hypothetical Heterogeneous Source Models from World's Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser

    2018-04-01

    The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquake for several centuries. A possible explanation for this behavior is that this segment is currently locked, accumulating energy to generate possible great future earthquakes. Under this assumption, a hypothetical rupture area is considered in the western Makran to set up different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, the 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and the 2006 Kuril Islands Mw 8.3 events, are scaled into the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes. However, its effect diminishes from local tsunamis to regional and distant tsunamis. Among all the scenarios considered for the western Makran, only a tsunamigenic earthquake similar to the 2011 Tohoku-Oki event can produce a significant far-field tsunami and is considered the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that, along the entire Makran coast, the southeastern coast of Iran is the area most exposed to tsunami hazard.

  18. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss the advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of its application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distributions in regions where very large earthquakes have not yet occurred.
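
    The Poisson-gamma mixture representation of the NBD mentioned above can be checked numerically; in the sketch below, synthetic counts drawn from a gamma-distributed Poisson rate are compared with the closed-form negative binomial probability mass function (the gamma parameters are placeholders).

```python
import numpy as np
from scipy import stats

# Negative binomial as a Poisson-gamma mixture (illustrative parameters).
tau, theta = 2.0, 5.0            # gamma shape and scale; mean rate = tau * theta
n_windows = 100_000

rng = np.random.default_rng(1)
rates  = rng.gamma(tau, theta, size=n_windows)   # random Poisson rate per time window
counts = rng.poisson(rates)                      # earthquake counts per window

# Equivalent closed-form NBD: size parameter r = tau, success probability p = 1/(1+theta)
r, p = tau, 1.0 / (1.0 + theta)
k = np.arange(0, 60)
pmf = stats.nbinom.pmf(k, r, p)

empirical = np.bincount(counts, minlength=k.size)[:k.size] / n_windows
print(np.abs(empirical - pmf).max())   # small value -> the two representations agree
```

    With this parameterization the variance-to-mean ratio of the counts is 1 + theta, so the second parameter directly measures the overdispersion (clustering) discussed above.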

  19. InSAR Analysis of the 2011 Hawthorne (Nevada) Earthquake Swarm: Implications of Earthquake Migration and Stress Transfer

    NASA Astrophysics Data System (ADS)

    Zha, X.; Dai, Z.; Lu, Z.

    2015-12-01

    The 2011 Hawthorne earthquake swarm occurred in the central Walker Lane zone, near the border between California and Nevada. The swarm included an Mw 4.4 event on April 13, an Mw 4.6 event on April 17, and an Mw 3.9 event on April 27. Due to the lack of near-field seismic instruments, it is difficult to obtain accurate source information from the seismic data for these moderate-magnitude events. ENVISAT InSAR observations captured the deformation caused mainly by three events during the 2011 Hawthorne earthquake swarm. The surface traces of three seismogenic sources could be identified from the local topography and interferogram phase discontinuities. The epicenters could be determined using the interferograms and the relocated earthquake distribution. An apparent earthquake migration is revealed by the InSAR observations and the earthquake distribution. Analysis and modeling of the InSAR data show that the three moderate-magnitude earthquakes were produced by slip on three previously unrecognized faults in the central Walker Lane. Two of the seismogenic sources are northwest-striking, right-lateral strike-slip faults with some thrust-slip component, and the other source is a northeast-striking, thrust-slip fault with some strike-slip component. The former two faults are roughly parallel to each other and almost perpendicular to the latter one. This spatial relationship between the three seismogenic faults, together with their faulting styles, suggests that the central Walker Lane has been undergoing southeast-northwest horizontal compressive deformation, consistent with the regional crustal movement revealed by GPS measurements. The Coulomb failure stresses on the fault planes were calculated using the preferred slip model and the Coulomb 3.4 software package. For the Mw 4.6 earthquake, the Coulomb stress change caused by the Mw 4.4 event was an increase of about 0.1 bar. For the Mw 3.9 event, the Coulomb stress change caused by the Mw 4.6 earthquake was an increase of about 1.0 bar. This indicates that the preceding
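
    The Coulomb failure stress change referred to above is conventionally written as Delta CFS = Delta tau + mu' * Delta sigma_n, with Delta tau the shear stress change resolved in the slip direction and Delta sigma_n the normal stress change (positive for unclamping). The sketch below evaluates this expression for placeholder values, not those of the study; 1 bar corresponds to 0.1 MPa.

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change: Delta CFS = d_tau + mu_eff * d_sigma_n.

    d_tau      : shear stress change resolved in the slip direction [MPa]
                 (positive when it promotes slip)
    d_sigma_n  : normal stress change [MPa], positive for unclamping
    mu_eff     : effective friction coefficient (pore-pressure effects folded in)
    """
    return d_tau + mu_eff * d_sigma_n

# Placeholder example: a receiver fault loaded by 0.08 MPa of shear stress change
# and unclamped by 0.05 MPa; 0.1 MPa = 1 bar, the unit quoted in the abstract.
dcfs = coulomb_stress_change(0.08, 0.05)
print(f"Delta CFS = {dcfs:.2f} MPa = {dcfs * 10:.1f} bar")
```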

  20. Use of GPS and InSAR Technology and its Further Development in Earthquake Modeling

    NASA Technical Reports Server (NTRS)

    Donnellan, A.; Lyzenga, G.; Argus, D.; Peltzer, G.; Parker, J.; Webb, F.; Heflin, M.; Zumberge, J.

    1999-01-01

    Global Positioning System (GPS) data are useful for understanding both interseismic and postseismic deformation. Models of GPS data suggest that the lower crust, lateral heterogeneity, and fault slip all play a role in the earthquake cycle.

  1. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  2. Continuous forearc extension following the 2010 Maule megathrust earthquake: InSAR and seismic observations and modelling

    NASA Astrophysics Data System (ADS)

    Bie, L.; Rietbrock, A.; Agurto-Detzel, H.

    2017-12-01

    The forearc region in subduction zones deforms in response to relative movement on the plate interface throughout the earthquake cycle. Megathrust earthquakes may alter the stress field in forearc areas from compression to extension, resulting in normal-faulting earthquakes. Recent cases include the 2011 Iwaki sequence following the Tohoku-Oki earthquake in Japan, and the 2010 Pichilemu sequence after the Maule earthquake in central Chile. Given the closeness of these normal-fault events to residential areas, and their shallow depth, they may pose an equivalent, if not higher, seismic risk compared to earthquakes on the megathrust. Here, we focus on the 2010 Pichilemu sequence following the Mw 8.8 Maule earthquake in central Chile, where the Nazca Plate subducts beneath the South American Plate. Previous studies have clearly delineated the Pichilemu normal-fault structure. However, it is not clear whether the Pichilemu events fully released the extensional stress exerted by the Maule mainshock, or whether the forearc area is still controlled by extensional stress. A 3-month displacement time series, constructed from radar satellite images, clearly shows continuous aseismic deformation along the Pichilemu fault. Kinematic inversion reveals peak afterslip of 25 cm at shallow depth, equivalent to an Mw 5.4 earthquake. We identified an Mw 5.3 earthquake 2 months after the Pichilemu sequence from both geodetic and seismic observations. Nonlinear inversion of the geodetic data suggests that this event ruptured a normal fault conjugate to the Pichilemu fault, at a depth of 4.5 km, consistent with the result obtained from an independent moment tensor inversion. We relocated aftershocks in the Pichilemu area using relative arrival times and a 3D velocity model. The spatial correlation between geodetic deformation and aftershocks reveals three additional areas which may have experienced aseismic slip at depth. Both geodetic displacement and aftershock distribution show a conjugated L

  3. Regional and Local Glacial-Earthquake Patterns in Greenland

    NASA Astrophysics Data System (ADS)

    Olsen, K.; Nettles, M.

    2016-12-01

    Icebergs calved from marine-terminating glaciers currently account for up to half of the 400 Gt of ice lost annually from the Greenland ice sheet (Enderlin et al., 2014). When large capsizing icebergs (~1 Gt of ice) calve, they produce elastic waves that propagate through the solid earth and are observed as teleseismically detectable MSW ~5 glacial earthquakes (e.g., Ekström et al., 2003; Nettles & Ekström, 2010; Tsai & Ekström, 2007; Veitch & Nettles, 2012). The annual number of these events has increased dramatically over the past two decades. We analyze glacial earthquakes from 2011-2013, which expands the glacial-earthquake catalog by 50%. The number of glacial-earthquake solutions now available allows us to investigate regional patterns across Greenland and link earthquake characteristics to changes in ice dynamics at individual glaciers. During the years of our study, Greenland's west coast dominated glacial-earthquake production. Kong Oscar Glacier, Upernavik Isstrøm, and Jakobshavn Isbræ all produced more glacial earthquakes during this time than in preceding years. We link patterns in glacial-earthquake production and cessation to the presence or absence of floating ice tongues at glaciers on both coasts of Greenland. The calving model predicts glacial-earthquake force azimuths oriented perpendicular to the calving front, and comparisons between seismic data and satellite imagery confirm this in most instances. At two glaciers we document force azimuths that have recently changed orientation and confirm that similar changes have occurred in the calving-front geometry. We also document glacial earthquakes at one previously quiescent glacier. Consistent with previous work, we model the glacial-earthquake force-time function as a boxcar with horizontal and vertical force components that vary synchronously. We investigate limitations of this approach and explore improvements that could lead to a more accurate representation of the glacial-earthquake source.

  4. Evidence for non-conservative current-induced forces in the breaking of Au and Pt atomic chains.

    PubMed

    Sabater, Carlos; Untiedt, Carlos; van Ruitenbeek, Jan M

    2015-01-01

    This experimental work aims at probing current-induced forces at the atomic scale. Specifically, it addresses predictions in recent work regarding the appearance of runaway modes as a result of a combined effect of the non-conservative wind force and a 'Berry force'. The systems we consider here are atomic chains of Au and Pt atoms, for which we investigate the distribution of breakdown voltage values. We observe two distinct modes of breaking for Au atomic chains. The breaking at high voltage appears to behave as expected for regular breakdown by thermal excitation due to Joule heating. However, there is a low-voltage breaking mode that has the characteristics expected for the mechanism of current-induced forces. Although a full comparison would require more detailed information on the individual atomic configurations, the systems we consider are very similar to those considered in recent model calculations, and the comparison between experiment and theory is very encouraging for the interpretation we propose.

  5. Evidence for non-conservative current-induced forces in the breaking of Au and Pt atomic chains

    PubMed Central

    Sabater, Carlos; Untiedt, Carlos

    2015-01-01

    Summary This experimental work aims at probing current-induced forces at the atomic scale. Specifically, it addresses predictions in recent work regarding the appearance of runaway modes as a result of a combined effect of the non-conservative wind force and a ‘Berry force’. The systems we consider here are atomic chains of Au and Pt atoms, for which we investigate the distribution of breakdown voltage values. We observe two distinct modes of breaking for Au atomic chains. The breaking at high voltage appears to behave as expected for regular breakdown by thermal excitation due to Joule heating. However, there is a low-voltage breaking mode that has the characteristics expected for the mechanism of current-induced forces. Although a full comparison would require more detailed information on the individual atomic configurations, the systems we consider are very similar to those considered in recent model calculations, and the comparison between experiment and theory is very encouraging for the interpretation we propose. PMID:26734525

  6. Seismotectonic Models of the Three Recent Devastating SCR Earthquakes in India

    NASA Astrophysics Data System (ADS)

    Mooney, W. D.; Kayal, J.

    2007-12-01

    During the last decade, three devastating earthquakes, the Killari 1993 (Mb 6.3), Jabalpur 1997 (Mb 6.0) and Bhuj 2001 (Mw 7.7) events, occurred in the Stable Continental Region (SCR) of Peninsular India. First, the September 30, 1993 Killari earthquake (Mb 6.3) occurred in the Deccan province of central India, in the Latur district of Maharashtra state. The local geology in the area is obscured by the late Cretaceous-Eocene basalt flows, referred to as the Deccan traps. This makes it difficult to recognize the geological surface faults that could be associated with the Killari earthquake. The epicentre was reported at 18.09°N and 76.62°E, and the focal depth of 7 +/- 1 km was precisely estimated by waveform inversion (Chen and Kao, 1995). The maximum intensity reached VIII, and the earthquake caused a loss of about 10,000 lives and severe damage to property. The May 22, 1997 Jabalpur earthquake (Mb 6.0), with epicentre at 23.08°N and 80.06°E, is a well studied earthquake in the Son-Narmada-Tapti (SONATA) seismic zone. A notable aspect of this earthquake is that it was the first significant event in India to be recorded by the 10 broadband seismic stations established in 1996 by the India Meteorological Department (IMD). The focal depth, well estimated using the "converted phases" of the broadband seismograms, was placed in the lower crust at 35 +/- 1 km, similar to the moderate earthquakes reported from the Amazonas ancient rift system in the SCR of South America. The maximum intensity of the Jabalpur earthquake reached VIII on the MSK scale, and the earthquake killed about 50 people in the Jabalpur area. Finally, the Bhuj earthquake (Mw 7.7) of January 26, 2001 in Gujarat state, northwestern India, was felt across the whole country and killed about 20,000 people. The maximum intensity level reached X. The epicentre of the earthquake was reported at 23.40°N and 70.28°E, with a well estimated focal depth of 25 km. A total of about

  7. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

    ,

    2008-01-01

    California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).

  8. Failure of self-similarity for large (Mw > 8 1/4) earthquakes.

    USGS Publications Warehouse

    Hartzell, S.H.; Heaton, T.H.

    1988-01-01

    Teleseismic P-wave records for earthquakes in the magnitude range 6.0-9.5 are compared with synthetics for a self-similar, omega-squared source model, and it is concluded that the energy radiated by very large earthquakes (Mw > 8 1/4) is not self-similar to that radiated by smaller earthquakes (Mw < 8 1/4). Furthermore, in the period band from 2 s to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of omega^-1.5. This spectral decay rate is consistent with a previously noted tendency of the omega-squared model to overestimate Ms for large earthquakes.-Authors
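
    The omega-squared source model referred to above is commonly written, in the Aki/Brune convention, as a moment-rate spectrum that is flat at low frequencies and decays as the inverse square of frequency above a corner frequency, with self-similarity entering through the scaling of the corner frequency with seismic moment (the record's exact normalization may differ):

```latex
% omega-squared source spectrum (common Aki/Brune form) and self-similar corner scaling:
|\dot{M}(\omega)| \;=\; \frac{M_0}{1 + \left(\omega/\omega_c\right)^{2}},
\qquad \omega_c \;\propto\; M_0^{-1/3}.
```

    An average high-frequency decay rate of omega^-1.5 for great subduction earthquakes, as inferred above, is not compatible with this self-similar form.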

  9. A dynamic model for slab development associated with the 2015 Mw 7.9 Bonin Islands deep earthquake

    NASA Astrophysics Data System (ADS)

    Zhan, Z.; Yang, T.; Gurnis, M.

    2016-12-01

    The 680 km deep May 30, 2015 Mw 7.9 Bonin Islands earthquake is isolated from the nearest earthquakes by more than 150 km. The geodynamic context leading to this isolated deep event is unclear. Tomographic models and seismicity indicate that the morphology of the west-dipping Pacific slab changes rapidly along the strike of the Izu-Bonin-Mariana trench. To the north, the Izu-Bonin section of the Pacific slab lies horizontally above the 660 km discontinuity and extends more than 500 km westward. Several degrees south, the Mariana section dips vertically and penetrates directly into the lower mantle. The observed slab morphology is consistent with plate reconstructions suggesting that the northern section of the IBM trench retreated rapidly since the late Eocene while the southern section of the IBM trench was relatively stable during the same period. We suggest that the location of the isolated 2015 Bonin Islands deep earthquake can be explained by the buckling of the Pacific slab beneath the Bonin Islands. We use geodynamic models to investigate the slab morphology, temperature and stress regimes under different trench motion histories. Models confirm previous results that the slab often lies horizontally within the transition zone when the trench retreats, but buckles when the trench position becomes fixed with respect to the lower mantle. We show that a slab-buckling model is consistent with the observed deep earthquake P-axis directions (assumed to be the axis of principal compressional stress) regionally. The influences of various physical parameters on slab morphology, temperature and stress regime are investigated. In the models investigated, the horizontal width of the buckled slab is no more than 400 km.

  10. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    USGS Publications Warehouse

    Hayes, Gavin

    2017-01-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques.I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  11. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    NASA Astrophysics Data System (ADS)

    Hayes, Gavin P.

    2017-06-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  12. Geological and historical evidence of irregular recurrent earthquakes in Japan.

    PubMed

    Satake, Kenji

    2015-10-28

    Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres releasing strains accumulated from decades to centuries of plate motions. Assuming a simple 'characteristic earthquake' model that similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes including geological traces from giant (M∼9) earthquakes indicate a variety of size and recurrence interval of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that average recurrence interval of great earthquakes is approximately 100 years, but the tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence interval of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts such as use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone are being conducted by government committees. © 2015 The Author(s).

  13. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that a potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  14. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are essential for better understanding earthquake source physics and for improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and

  15. Joint Far-field and Near-field GPS Observations to Modify the Fault Slip Models of the 2011 Tohoku-Oki Earthquake (Mw 9.0)

    NASA Astrophysics Data System (ADS)

    Yang, J.; Yi, S.; Sun, W.

    2016-12-01

    Significant displacements caused by the 2011 Tohoku-Oki earthquake (Mw 9.0) were detected by GPS stations in the north and northeast of the Asian continent belonging to the Crustal Movement Observation Network of China (CMONOC). Clear horizontal displacements of up to almost 3 cm and 2 cm were recorded at many stations, most of them directed eastward toward the epicenter of this earthquake. These data can be acquired rapidly from CMONOC after the earthquake. Here, we discuss how to calculate the seismic moment from such far-field GPS observations. The far-field displacements can constrain the pattern of the finite slip model and the seismic moment using a spherically stratified Earth model (PREM). We give a general rule of thumb showing how far-field GPS observations are affected by the earthquake parameters. Worldwide, 27 large earthquakes (magnitude greater than Mw 8.0) have occurred since 1990, most of them subduction-type events with low rake angles. Their far-field GPS signals are mainly controlled by the Y22 component. Far-field GPS observations therefore have the potential to constrain one or two components of the focal mechanism. When we jointly invert far-field and near-field GPS data for the 2011 Tohoku-Oki earthquake, we obtain a more accurate finite slip model. The article presents a new method that uses far-field GPS data to constrain the fault slip model.

  16. Space Earthquake Perturbation Simulation (SEPS): an application based on Geant4 tools to model and simulate the interaction between earthquakes and particles trapped in the Van Allen belt

    NASA Astrophysics Data System (ADS)

    Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu

    2014-05-01

    During the last few decades, several space experiments have revealed anomalous bursts of charged particles, mainly electrons with energies larger than a few MeV. A possible source of these bursts is low-frequency seismo-electromagnetic emission, which can cause the precipitation of electrons from the lower boundary of the inner radiation belt. Studies of these bursts have also reported a short-term pre-seismic excess. Starting from simulation tools traditionally used in high-energy physics, we developed a dedicated application, SEPS (Space Earthquake Perturbation Simulation), based on the Geant4 toolkit and the PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between an earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field, also taking into account possible interactions with the Earth's atmosphere. SEPS provides the possibility of testing different models of interaction between electromagnetic waves and trapped particles, defining the mechanism of interaction as well as the region in which it takes place, assessing the effects of perturbations in the magnetic field on particle paths, performing back-tracking analysis, and modelling the interaction with electric fields. SEPS is at an advanced stage of development, so it can already be exploited to test in detail the results of correlation analyses between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis (tracing particles from the position of the earthquake to check whether any paths are compatible with the detected burst) and with a back-tracking analysis (tracing particles from the burst detection point and checking the compatibility of the trajectory with the position of the associated earthquake).

  17. Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models

    USGS Publications Warehouse

    Geist, Eric L.

    2016-01-01

    Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy the resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least variable slip examined, involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental-mode wave with wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad are subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined, involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines in which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20-30 wave periods after the first arrival and thus may be observed before the tsunami coda is completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.
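
    The resonance conditions mentioned above can be stated, for a triad of edge waves with alongshore wavenumbers k and frequencies omega, together with the classical shallow-water (Eckart) edge-wave dispersion relation on a plane beach of slope beta, as follows (a generic statement; the paper's non-linear coupling coefficients are not reproduced here):

```latex
% kinematic resonance conditions for a triad of edge waves:
k_1 \pm k_2 = k_3, \qquad \omega_1 \pm \omega_2 = \omega_3,
% shallow-water (Eckart) edge-wave dispersion relation for mode n on a beach of slope beta:
\omega_n^{2} \;=\; g\,k\,(2n+1)\tan\beta .
```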

  18. Centrality in earthquake multiplex networks

    NASA Astrophysics Data System (ADS)

    Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.

    2018-06-01

    Seismic time series have been mapped as complex networks, in which a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
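
    Eigenvector centrality on a single layer of such an earthquake network can be computed by power iteration on the weighted adjacency matrix, as in the sketch below (the toy matrix is a placeholder); how layer-wise centralities are aggregated into the multiplex measure follows the authors' scheme and is not reproduced here.

```python
import numpy as np

def eigenvector_centrality(adj, n_iter=1000, tol=1e-10):
    """Eigenvector centrality by power iteration on a non-negative adjacency matrix."""
    n = adj.shape[0]
    x = np.ones(n) / n
    for _ in range(n_iter):
        x_new = adj @ x
        norm = np.linalg.norm(x_new)
        if norm == 0:
            return x                      # graph with no edges
        x_new /= norm
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: 4 cells with earthquake-sequence links between them (weights = counts).
A = np.array([[0, 3, 1, 0],
              [3, 0, 2, 0],
              [1, 2, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(eigenvector_centrality(A))
```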

  19. A crack-like rupture model for the 19 September 1985 Michoacan, Mexico, earthquake

    NASA Astrophysics Data System (ADS)

    Ruppert, Stanley D.; Yomogida, Kiyoshi

    1992-09-01

    Evidence supporting a smooth, crack-like rupture process is obtained from a major earthquake, the 1985 Michoacan event, for the first time. Digital strong-motion data from three stations (Caleta de Campos, La Villita, and La Union), recording near-field radiation from the fault, show unusually simple ramped displacements and permanent offsets previously seen only in theoretical models. The recording of low-frequency (0 to 1 Hz) near-field waves, together with the apparently smooth rupture, favors a crack-like model over a step or Haskell-type dislocation model under the constraint of the slip distribution obtained by previous studies. A crack-like rupture, characterized by an approximated dynamic slip function and a systematic decrease in slip duration away from the point of rupture nucleation, produces the best fit to the simple ramped displacements observed. Spatially varying rupture duration controls several important aspects of the synthetic seismograms, including the variation in displacement rise times between components of motion observed at Caleta de Campos. Ground motion observed at Caleta de Campos can be explained remarkably well with a smoothly propagating crack model. However, data from La Villita and La Union suggest a more complex rupture process than the simple crack-like model for the south-eastern portion of the fault.

  20. Volcano-earthquake interaction at Mauna Loa volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, Thomas R.; Amelung, Falk

    2006-05-01

    The activity at Mauna Loa volcano, Hawaii, is characterized by eruptive fissures that propagate into the Southwest Rift Zone (SWRZ) or into the Northeast Rift Zone (NERZ) and by large earthquakes at the basal decollement fault. In this paper we examine the historic eruption and earthquake catalogues, and we test the hypothesis that the events are interconnected in time and space. Earthquakes in the Kaoiki area occur in sequence with eruptions from the NERZ, and earthquakes in the Kona and Hilea areas occur in sequence with eruptions from the SWRZ. Using three-dimensional numerical models, we demonstrate that elastic stress transfer can explain the observed volcano-earthquake interaction. We examine stress changes due to typical intrusions and earthquakes. We find that intrusions change the Coulomb failure stress along the decollement fault so that NERZ intrusions encourage Kaoiki earthquakes and SWRZ intrusions encourage Kona and Hilea earthquakes. On the other hand, earthquakes decompress the magma chamber and unclamp part of the Mauna Loa rift zone, i.e., Kaoiki earthquakes encourage NERZ intrusions, whereas Kona and Hilea earthquakes encourage SWRZ intrusions. We discuss how changes of the static stress field affect the occurrence of earthquakes as well as the occurrence, location, and volume of dikes and of associated eruptions and also the lava composition and fumarolic activity.

  1. The GED4GEM project: development of a Global Exposure Database for the Global Earthquake Model initiative

    USGS Publications Warehouse

    Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.

    2012-01-01

    In order to quantify earthquake risk of any selected region or a country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through the consortium of leading institutions and by engaging the domain-experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM’s OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.

  2. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

    Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. As the choice of ionospheric baseline generally differs between previous works, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here: the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model and the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI and the TIE-GCM generally underestimating, approximately describing and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that, in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as thermospheric coupling.
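
    The running 30-day median baseline mentioned above is often evaluated hour by hour, i.e. for each observation the median of the same hour of day over the preceding 30 days. The sketch below implements that baseline on synthetic hourly foF2 values; the data are placeholders, and flagging an anomaly would additionally require a threshold on the deviation.

```python
import numpy as np

def hourly_median_baseline(fof2, window_days=30, samples_per_day=24):
    """Running 30-day median baseline: for each sample, the median of the same
    hour of day over the preceding `window_days` days."""
    w = window_days * samples_per_day
    baseline = np.full(fof2.shape, np.nan)
    for i in range(w, len(fof2)):
        same_hour = fof2[i - w:i:samples_per_day]   # one value per preceding day
        baseline[i] = np.median(same_hour)
    return baseline

# Placeholder data: 90 days of hourly foF2 [MHz] with a diurnal cycle plus noise.
rng = np.random.default_rng(2)
t = np.arange(24 * 90)
fof2 = 7.0 + 3.0 * np.sin(2.0 * np.pi * t / 24.0) + rng.normal(0.0, 0.3, t.size)

baseline = hourly_median_baseline(fof2)
deviation = fof2 - baseline        # candidate perturbation relative to the baseline
print(np.nanmax(np.abs(deviation)))
```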

  3. Induced seismicity provides insight into why earthquake ruptures stop.

    PubMed

    Galis, Martin; Ampuero, Jean Paul; Mai, P Martin; Cappa, Frédéric

    2017-12-01

    Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures.

  4. Induced seismicity provides insight into why earthquake ruptures stop

    PubMed Central

    Galis, Martin; Ampuero, Jean Paul; Mai, P. Martin; Cappa, Frédéric

    2017-01-01

    Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures. PMID:29291250

  5. Finite-fault slip model of the 2016 Mw 7.5 Chiloé earthquake, southern Chile, estimated from Sentinel-1 data

    NASA Astrophysics Data System (ADS)

    Xu, Wenbin

    2017-05-01

    Subduction earthquakes have been widely studied in the Chilean subduction zone, but earthquakes occurring in its southern part have attracted less research interest, primarily because of the lower rate of seismic activity there. Here I use Sentinel-1 interferometric synthetic aperture radar (InSAR) data and range offset measurements to generate coseismic crustal deformation maps of the 2016 Mw 7.5 Chiloé earthquake in southern Chile. I find concentrated crustal deformation with ground displacement of approximately 50 cm in the southern part of Chiloé island. The best-fitting fault model shows pure thrust-fault motion on a shallowly dipping plane oriented 4° NNE. The InSAR-determined moment is 2.4 × 10^20 N m with a shear modulus of 30 GPa, equivalent to Mw 7.56, which is slightly lower than the seismic moment. The model shows that the slip did not reach the trench, and that it reruptured part of the fault that ruptured in the 1960 Mw 9.5 earthquake. The 2016 event released only a small portion of the strain energy accumulated on the 1960 rupture zone, suggesting that the hazard of future great earthquakes in southern Chile remains high.
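
    The moment-to-magnitude conversion implied above follows the standard Hanks-Kanamori relation; small differences in the additive constant (9.1 versus 9.05) shift Mw by a few hundredths, consistent with the Mw 7.56 quoted for a moment of 2.4 × 10^20 N m.

```python
import math

def moment_magnitude(m0_newton_metres, constant=9.1):
    """Hanks-Kanamori moment magnitude: Mw = (2/3) * (log10(M0 [N m]) - constant)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - constant)

m0 = 2.4e20                                   # geodetic moment from the InSAR model [N m]
print(round(moment_magnitude(m0), 2))         # ~7.52 with the 9.1 constant
print(round(moment_magnitude(m0, 9.05), 2))   # ~7.55 with the 9.05 convention
```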

  6. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    NASA Astrophysics Data System (ADS)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

    It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, the transfer of these results to the authorities managing the post-disaster situation is not straightforward, and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and to create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. We then use this model to evaluate the moment deficit on the subduction interface and the changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussions with local authorities.

  7. Stress triggering of the 1999 Hector Mine earthquake by transient deformation following the 1992 Landers earthquake

    USGS Publications Warehouse

    Pollitz, F.F.; Sacks, I.S.

    2002-01-01

    The M 7.3 June 28, 1992 Landers and M 7.1 October 16, 1999 Hector Mine earthquakes, California, both right lateral strike-slip events on NNW-trending subvertical faults, occurred in close proximity in space and time in a region where recurrence times for surface-rupturing earthquakes are thousands of years. This suggests a causal role for the Landers earthquake in triggering the Hector Mine earthquake. Previous modeling of the static stress change associated with the Landers earthquake shows that the area of peak Hector Mine slip lies where the Coulomb failure stress promoting right-lateral strike-slip failure was high, but the nucleation point of the Hector Mine rupture was neutrally to weakly promoted, depending on the assumed coefficient of friction. Possible explanations that could account for the 7-year delay between the two ruptures include background tectonic stressing, dissipation of fluid pressure gradients, rate- and state-dependent friction effects, and post-Landers viscoelastic relaxation of the lower crust and upper mantle. By employing a viscoelastic model calibrated by geodetic data collected during the time period between the Landers and Hector Mine events, we calculate that postseismic relaxation produced a transient increase in Coulomb failure stress of about 0.7 bars on the impending Hector Mine rupture surface. The increase is greatest over the broad surface that includes the 1999 nucleation point and the site of peak slip further north. Since stress changes of magnitude greater than or equal to 0.1 bar are associated with documented causal fault interactions elsewhere, viscoelastic relaxation likely contributed to the triggering of the Hector Mine earthquake. This interpretation relies on the assumption that the faults occupying the central Mojave Desert (i.e., both the Landers and Hector Mine rupturing faults) were critically stressed just prior to the Landers earthquake.

  8. Analysis of the Seismicity Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2016-12-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this aspect empirically, exploring whether spatio-temporal variations in seismicity encode some information on the magnitude of future earthquakes. For this purpose, and to verify the universality of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Zaliapin (2013) to distinguish triggered and background earthquakes, using nearest-neighbor clustering analysis in a two-dimensional plane defined by rescaled time and space. In particular, we generalize the nearest-neighbor metric to a metric based on k-nearest-neighbors clustering analysis, which allows us to consider the overall space-time-magnitude distribution of the k earthquakes (k foreshocks) that anticipate one target event (the mainshock); we then analyze the statistical properties of the clusters identified in this rescaled space. In essence, the main goal of this study is to verify whether different classes of mainshock magnitude are characterized by distinctive k-foreshock distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
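
    A common form of the Zaliapin nearest-neighbour metric combines the inter-event time, the epicentral distance raised to the fractal dimension of epicentres, and a magnitude weight of the earlier event. The toy implementation below computes, for each event, the nearest neighbour among all earlier events; extending it to the k nearest neighbours, as in the abstract, is a direct generalization. The catalogue and parameter values are placeholders.

```python
import numpy as np

def nearest_neighbor_distances(times, x, y, mags, d_f=1.6, b=1.0):
    """Zaliapin-style nearest-neighbour distance eta for each event (toy implementation).

    times : event times, sorted (e.g. in years)
    x, y  : event coordinates (e.g. km)
    mags  : event magnitudes
    Returns eta[j] = min over i < j of  t_ij * r_ij**d_f * 10**(-b * m_i), plus the parent index.
    """
    n = len(times)
    eta = np.full(n, np.inf)
    parent = np.full(n, -1)
    for j in range(1, n):
        t_ij = times[j] - times[:j]
        r_ij = np.hypot(x[j] - x[:j], y[j] - y[:j])
        d = t_ij * np.maximum(r_ij, 1e-3) ** d_f * 10.0 ** (-b * mags[:j])
        i = int(np.argmin(d))
        eta[j], parent[j] = d[i], i
    return eta, parent

# Toy catalogue (placeholder values): small eta flags likely triggered (clustered) events.
rng = np.random.default_rng(3)
times = np.sort(rng.uniform(0, 10, 200))
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
mags = rng.exponential(0.5, 200) + 2.0
eta, parent = nearest_neighbor_distances(times, x, y, mags)
print(np.median(eta[1:]))
```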

  9. The influence of one earthquake on another

    NASA Astrophysics Data System (ADS)

    Kilb, Deborah Lyman

    1999-12-01

    Part one of my dissertation examines the initiation of earthquake rupture. We study the initial subevent (ISE) of the Mw 6.7 1994 Northridge, California earthquake to distinguish between two end-member hypotheses of an organized and predictable earthquake rupture initiation process or, alternatively, a random process. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both end-member models, and do not allow us to distinguish between them. However, further tests show the ISE's waveform characteristics are similar to those of typical nearby small earthquakes (i.e., dynamic ruptures). The second part of my dissertation examines aftershocks of the M 7.1 1989 Loma Prieta, California earthquake to determine if theoretical models of static Coulomb stress changes correctly predict the fault plane geometries and slip directions of Loma Prieta aftershocks. Our work shows individual aftershock mechanisms cannot be successfully predicted because a similar degree of predictability can be obtained using a randomized catalogue. This result is probably a function of combined errors in the models of mainshock slip distribution, background stress field, and aftershock locations. In the final part of my dissertation, we test the idea that earthquake triggering occurs when properties of a fault and/or its loading are modified by Coulomb failure stress changes that may be transient and oscillatory (i.e., dynamic) or permanent (i.e., static). We propose that a triggering threshold failure stress change exists, above which the earthquake nucleation process begins, although failure need not occur instantaneously. We test these ideas using data from the 1992 M 7.4 Landers earthquake and its aftershocks. Stress changes can be categorized as either dynamic (generated during the passage of seismic waves), static (associated with permanent fault offsets

  10. Source model of an earthquake doublet that occurred in a pull-apart basin along the Sumatran fault, Indonesia

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Toda, S.; Ando, R.; Yamashina, T.; Inoue, H.; Sunarjo

    2010-04-01

    On 2007 March 6, an earthquake doublet occurred along the Sumatran fault, Indonesia. The epicentres were located near Padang Panjang, central Sumatra, Indonesia. The first earthquake, with a moment magnitude (Mw) of 6.4, occurred at 03:49 UTC and was followed two hours later (05:49 UTC) by an earthquake of similar size (Mw = 6.3). We studied the earthquake doublet by a waveform inversion analysis using data from a broadband seismograph network in Indonesia (JISNET). The focal mechanisms of the two earthquakes indicate almost identical right-lateral strike-slip faults, consistent with the geometry of the Sumatran fault. Both earthquakes nucleated below the northern end of Lake Singkarak, which is in a pull-apart basin between the Sumani and Sianok segments of the Sumatran fault system, but the earthquakes ruptured different fault segments. The first earthquake occurred along the southern Sumani segment and its rupture propagated southeastward, whereas the second one ruptured the northern Sianok segment northwestward. Along these fault segments, earthquake doublets, in which the two adjacent fault segments rupture one after the other, have occurred repeatedly. We investigated the state of stress at a segment boundary of a fault system based on the Coulomb stress changes. The stress on faults increases during interseismic periods and is released by faulting. At a segment boundary, on the other hand, the stress increases both interseismically and coseismically, and may not be released unless new fractures are created. Accordingly, ruptures may tend to initiate at a pull-apart basin. When an earthquake occurs on one of the fault segments, the stress increases coseismically around the basin. The stress changes caused by that earthquake may trigger a rupture on the other segment after a short time interval. We also examined the mechanism of the delayed rupture based on a theory of a fluid-saturated poroelastic medium and dynamic rupture simulations incorporating a

  11. Global Instrumental Seismic Catalog: earthquake relocations for 1900-present

    NASA Astrophysics Data System (ADS)

    Villasenor, A.; Engdahl, E.; Storchak, D. A.; Bondar, I.

    2010-12-01

    We present the current status of our efforts to produce a set of homogeneous earthquake locations and improved focal depths towards the compilation of a Global Catalog of instrumentally recorded earthquakes that will be complete down to the lowest magnitude threshold possible on a global scale and for the time period considered. This project is currently being carried out under the auspices of GEM (Global Earthquake Model). The resulting earthquake catalog will be a fundamental dataset not only for earthquake risk modeling and assessment on a global scale, but also for a large number of studies such as global and regional seismotectonics; the rupture zones and return time of large, damaging earthquakes; the spatial-temporal pattern of moment release along seismic zones and faults, etc. Our current goal is to re-locate all earthquakes with available station arrival data using the following magnitude thresholds: M5.5 for 1964-present, M6.25 for 1918-1963, M7.5 (complemented with significant events in continental regions) for 1900-1917. Phase arrival time data for earthquakes after 1963 are available in digital form from the International Seismological Centre (ISC). For earthquakes in the time period 1918-1963, phase data are obtained by scanning the printed International Seismological Summary (ISS) bulletins and applying optical character recognition routines. For earlier earthquakes we will collect phase data from individual station bulletins. We will illustrate some of the most significant results of this relocation effort, including aftershock distributions for large earthquakes, systematic differences in epicenter and depth with respect to previous locations, examples of grossly mislocated events, etc.

  12. Simulating Earthquakes for Science and Society: New Earthquake Visualizations Ideal for Use in Science Communication

    NASA Astrophysics Data System (ADS)

    de Groot, R. M.; Benthien, M. L.

    2006-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. These types of visualizations are becoming pervasive in the teaching and learning of concepts related to earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7 -- the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), theoretically visualization should make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  13. Lisbon 1755, a multiple-rupture earthquake

    NASA Astrophysics Data System (ADS)

    Fonseca, J. F. B. D.

    2017-12-01

    The Lisbon earthquake of 1755 poses a challenge to seismic hazard assessment. Reports pointing to MMI 8 or above at distances of the order of 500 km led to magnitude estimates near M9 in classic studies. A refined analysis of the coeval sources lowered the estimates to 8.7 (Johnston, 1998) and 8.5 (Martinez-Solares, 2004). I posit that even these lower magnitude values reflect the combined effect of multiple ruptures. Attempts to identify a single source capable of explaining the damage reports with published ground motion models did not gather consensus and, compounding the challenge, the analysis of tsunami traveltimes has led to disparate source models, sometimes separated by a few hundred kilometers. From this viewpoint, the most credible source would combine a sub-set of the multiple active structures identifiable in SW Iberia. No individual moment magnitude needs to be above M8.1, thus rendering the search for candidate structures less challenging. The possible combinations of active structures should be ranked as a function of their explaining power, for macroseismic intensities and tsunami traveltimes taken together. I argue that the Lisbon 1755 earthquake is an example of a distinct, previously unrecognized class of intraplate earthquake, of which the Indian Ocean earthquake of 2012 is the first instrumentally recorded example, showing space and time correlation over scales of the order of a few hundred km and a few minutes. Other examples may exist in the historical record, such as the M8 1556 Shaanxi earthquake, with an unusually large damage footprint (MMI equal to or above 6 in 10 provinces; 830,000 fatalities). The ability to trigger seismicity globally, observed after the 2012 Indian Ocean earthquake, may be a characteristic of this type of event: occurrences in Massachusetts (M5.9 Cape Ann earthquake on 18/11/1755), Morocco (M6.5 Fez earthquake on 27/11/1755) and Germany (M6.1 Duren earthquake, on 18/02/1756) had in all likelihood a causal link to the

  14. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    Summary: To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation that changes slope at M = 6.65 (A = 537 km²).
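
    A hedged sketch of the two magnitude-area scaling forms described above follows. The single log-linear relation with intercept ~4.2 and the slope-1 / slope-4/3 bilinear form are the commonly quoted versions and are assumed here, with the bilinear relation anchored at the hinge quoted in the abstract; consult Appendix D for the exact coefficients.

```python
# Hedged sketch of magnitude-area scaling: a single log-linear relation
# (Ellsworth-B style, intercept assumed) and a bilinear relation that changes
# slope at the hinge quoted in the abstract (A = 537 km^2, M = 6.65).
import math

HINGE_A = 537.0   # km^2, from the abstract
HINGE_M = 6.65    # from the abstract

def m_single(area_km2, intercept=4.2):
    """Single relation: M = log10(A) + intercept (intercept assumed)."""
    return math.log10(area_km2) + intercept

def m_bilinear(area_km2):
    """Bilinear relation: slope 1 below the hinge, slope 4/3 above (assumed slopes)."""
    if area_km2 <= HINGE_A:
        return HINGE_M + (math.log10(area_km2) - math.log10(HINGE_A))
    return HINGE_M + (4.0 / 3.0) * (math.log10(area_km2) - math.log10(HINGE_A))

for A in (100.0, 537.0, 2000.0):   # rupture areas in km^2
    print(f"A = {A:7.1f} km^2  single: M = {m_single(A):.2f}  "
          f"bilinear: M = {m_bilinear(A):.2f}")
```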

  15. Numerical models of pore pressure and stress changes along basement faults due to wastewater injection: Applications to the 2014 Milan, Kansas Earthquake

    USGS Publications Warehouse

    Hearn, Elizabeth H.; Koltermann, Christine; Rubinstein, Justin R.

    2018-01-01

    We have developed groundwater flow models to explore the possible relationship between wastewater injection and the 12 November 2014 Mw 4.8 Milan, Kansas earthquake. We calculate pore pressure increases in the uppermost crust using a suite of models in which hydraulic properties of the Arbuckle Formation and the Milan earthquake fault zone, the Milan earthquake hypocenter depth, and fault zone geometry are varied. Given pre‐earthquake injection volumes and reasonable hydrogeologic properties, significantly increasing pore pressure at the Milan hypocenter requires that most flow occur through a conductive channel (i.e., the lower Arbuckle and the fault zone) rather than a conductive 3‐D volume. For a range of reasonable lower Arbuckle and fault zone hydraulic parameters, the modeled pore pressure increase at the Milan hypocenter exceeds a minimum triggering threshold of 0.01 MPa at the time of the earthquake. Critical factors include injection into the base of the Arbuckle Formation and proximity of the injection point to a narrow fault damage zone or conductive fracture in the pre‐Cambrian basement with a hydraulic diffusivity of about 3–30 m2/s. The maximum pore pressure increase we obtain at the Milan hypocenter before the earthquake is 0.06 MPa. This suggests that the Milan earthquake occurred on a fault segment that was critically stressed prior to significant wastewater injection in the area. Given continued wastewater injection into the upper Arbuckle in the Milan region, assessment of the middle Arbuckle as a hydraulic barrier remains an important research priority.
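
    The role of hydraulic diffusivity in this kind of triggering argument can be illustrated with the textbook point-source solution for constant-rate injection, dp(r, t) = (C / r) * erfc(r / sqrt(4 D t)). This is only a closed-form stand-in for the calibrated groundwater-flow models used in the study; the lumped constant C and the geometry below are illustrative assumptions, while the diffusivity range and the 0.01 MPa threshold come from the abstract.

```python
# Minimal sketch of pore-pressure diffusion from an idealized constant-rate
# injection point: dp(r, t) = (C / r) * erfc(r / sqrt(4 * D * t)),
# where D is hydraulic diffusivity and C lumps injection rate and hydraulic
# properties. C, r, and t are illustrative assumptions.
import math

def pore_pressure_increase(r_m, t_s, D_m2_s, C_pa_m):
    """Pore-pressure change (Pa) at distance r (m) and time t (s)."""
    return (C_pa_m / r_m) * math.erfc(r_m / math.sqrt(4.0 * D_m2_s * t_s))

if __name__ == "__main__":
    r = 10_000.0                  # m, roughly an injector-to-hypocenter distance
    t = 2.0 * 365.25 * 86400.0    # s, two years of injection (illustrative)
    for D in (3.0, 30.0):         # m^2/s, diffusivity range quoted in the abstract
        dp = pore_pressure_increase(r, t, D, C_pa_m=5.0e8)  # C is illustrative
        print(f"D = {D:4.1f} m^2/s -> dp = {dp / 1e6:.3f} MPa "
              f"(0.01 MPa threshold exceeded: {dp >= 1e4})")
```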

  16. Differential energy radiation from two earthquakes in Japan with identical Mw: The Kyushu 1996 and Tottori 2000 earthquakes

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2009-01-01

    We examine two closely located earthquakes in Japan that had identical moment magnitudes Mw but significantly different energy magnitudes Me. We use teleseismic data from the Global Seismograph Network and strong-motion data from the National Research Institute for Earth Science and Disaster Prevention's K-Net to analyze the 19 October 1996 Kyushu earthquake (Mw 6.7, Me 6.6) and the 6 October 2000 Tottori earthquake (Mw 6.7, Me 7.4). To obtain regional estimates of radiated energy ES we apply a spectral technique to regional (<200 km) waveforms that are dominated by S and Lg waves. For the thrust-fault Kyushu earthquake, we estimate an average regional attenuation Q(f) = 230 f^0.65. For the strike-slip Tottori earthquake, the average regional attenuation is Q(f) = 180 f^0.6. These attenuation functions are similar to those derived from studies of both California and Japan earthquakes. The regional estimate of ES for the Kyushu earthquake, 3.8 × 10^14 J, is significantly smaller than that for the Tottori earthquake, ES = 1.3 × 10^15 J. These estimates correspond well with the teleseismic estimates of 3.9 × 10^14 J and 1.8 × 10^15 J, respectively. The apparent stress (τa = μ ES/M0, with μ equal to rigidity) for the Kyushu earthquake is 4 times smaller than the apparent stress for the Tottori earthquake. In terms of the fault maturity model, the significantly greater release of energy by the strike-slip Tottori earthquake can be related to strong deformation in an immature intraplate setting. The relatively lower energy release of the thrust-fault Kyushu earthquake can be related to rupture on mature faults in a subduction environment. The consistency between teleseismic and regional estimates of ES is particularly significant, as teleseismic data for computing ES are routinely available for all large earthquakes, whereas near-field data are often unavailable.
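
    The apparent-stress comparison can be sketched directly from the quantities quoted above. The rigidity value and the Hanks-Kanamori moment-magnitude conversion used below are standard assumptions, not values given in the abstract; with them, the ratio between the two events comes out at roughly a factor of 3 to 4, consistent with the stated factor-of-4 difference.

```python
# Sketch of the apparent-stress calculation tau_a = mu * E_S / M_0.
# The rigidity mu = 3e10 Pa and the Hanks-Kanamori conversion
# M0 = 10**(1.5*Mw + 9.05) N*m are standard assumptions.

MU = 3.0e10  # Pa, crustal rigidity (assumed)

def moment_from_mw(mw):
    """Seismic moment (N*m) from moment magnitude, Hanks-Kanamori convention."""
    return 10.0 ** (1.5 * mw + 9.05)

def apparent_stress(es_joules, mw):
    """Apparent stress in Pa."""
    return MU * es_joules / moment_from_mw(mw)

for name, es, mw in [("Kyushu 1996", 3.8e14, 6.7), ("Tottori 2000", 1.3e15, 6.7)]:
    print(f"{name}: tau_a = {apparent_stress(es, mw) / 1e6:.2f} MPa")
```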

  17. Focal mechanisms of earthquakes in Mongolia

    NASA Astrophysics Data System (ADS)

    Sodnomsambuu, D.; Natalia, R.; Gangaadorj, B.; Munkhuu, U.; Davaasuren, G.; Danzansan, E.; Yan, R.; Valentina, M.; Battsetseg, B.

    2011-12-01

    Focal mechanism data provide information on the relative magnitudes of the principal stresses, so that a tectonic regime can be assigned. Such information is especially useful for the study of intraplate seismically active regions. We conducted a study of earthquake focal mechanisms in the territory of Mongolia, a landlocked, intraplate region. We present a map of focal mechanisms of earthquakes with M ≥ 4.5 that occurred in Mongolia and neighboring regions. Focal mechanism solutions were constrained by first-motion solutions, as well as by waveform modeling, particularly CMT solutions. Four great earthquakes with magnitudes of about 8 or larger were recorded in Mongolia in the twentieth century: the 1905 M7.9 Tsetserleg and M8.4 Bolnai earthquakes, the 1931 M8.0 Fu Yun earthquake, and the 1957 M8.1 Gobi-Altai earthquake. However, the map of focal mechanisms of earthquakes in Mongolia reveals all the seismically active structures: the Gobi Altay, the Mongolian Altay, the active fringe of the Hangay dome, the Hentii range, etc. Earthquakes in most of the Mongolian territory and in neighboring regions of China are characterized by strike-slip and reverse movements. Strike-slip movements are also typical of earthquakes in the Altay Range in Russia. Northern Mongolia and the southern part of the Baikal area form a region where earthquakes with different focal mechanisms have occurred. This region is a transition zone between the compressive regime associated with the India-Eurasia collision and the extensional structures located in the north of the country, such as the Huvsgul area and the Baikal rift. Earthquakes in the Baikal basin itself are characterized by normal movements. Earthquakes in the Trans-Baikal zone and in northwestern Mongolia are dominated by strike-slip movements. An analysis of stress-axis orientations and the resulting tectonic stress tensor is presented. The map of focal mechanisms of earthquakes in Mongolia could be a useful tool for researchers studying the geodynamics of Central Asia, particularly the Mongolian and Baikal regions.

  18. Coseismic and postseismic slip distribution of the 2003 Mw = 6.5 Chengkung earthquake in eastern Taiwan: Elastic modeling from inversion of GPS data

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Wei; Lee, Jian-Cheng; Hu, Jyr-Ching; Chen, Horng-Yue

    2009-03-01

    The Chengkung earthquake, with ML = 6.6, occurred in eastern Taiwan at 12:38 local time on December 10, 2003. Based on the mainshock relocation and aftershock distribution, the Chengkung earthquake occurred along the previously recognized N20°E-trending Chihshang fault. The event did not cause loss of life, but significant cracks developed at the ground surface and damaged some buildings. After the 1951 Taitung earthquake, no ML > 6 earthquake occurred in this region until the Chengkung earthquake, so this event provides a good opportunity to study the seismogenic structure of the Chihshang fault. The coseismic displacements recorded by GPS show a fan-shaped distribution with a maximum displacement of about 30 cm near the epicenter. The aftershocks of the Chengkung earthquake, which reveal an apparent linear distribution, help us to constrain the fault geometry of the Chihshang fault. In this study, we employ a half-space angular elastic dislocation model with GPS observations to determine the slip distribution and seismological behavior of the Chengkung earthquake on the Chihshang fault. The elastic half-space dislocation model reveals that the Chengkung earthquake was a thrust event with a minor left-lateral strike-slip component. The maximum coseismic slip, up to 1.1 m, is located around a depth of 20 km. Slip gradually decreases to less than 10 cm near the surface part of the Chihshang fault. The seismogenic fault plane, constructed from the delineation of the aftershocks, demonstrates that the Chihshang fault is a high-angle fault. However, the fault plane flattens at a depth of about 20 km. In addition, a significant part of the measured deformation across the surface fault zone for this earthquake can be attributed to postseismic creep. The postseismic elastic dislocation model shows that most afterslip is distributed on the upper part of the Chihshang fault, and most afterslip consists of both dip

  19. Simulating Earthquakes for Science and Society: Earthquake Visualizations Ideal for use in Science Communication and Education

    NASA Astrophysics Data System (ADS)

    de Groot, R.

    2008-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). For example, The Great Southern California ShakeOut was based on a potential magnitude 7.8 earthquake on the southern San Andreas fault. The visualization created for the ShakeOut was a key scientific and communication tool for the earthquake drill. This presentation will also feature SCEC Virtual Display of Objects visualization software developed by SCEC Undergraduate Studies in Earthquake Information Technology interns. According to Gordin and Pea (1995), theoretically visualization should make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  20. Fractals and Forecasting in Earthquakes and Finance

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with similar ideas as large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that both earthquakes and financial crashes can be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for financial markets, it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
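
    The Gutenberg-Richter relation mentioned above, log10 N(≥M) = a - b M, is the basic scale-invariant ingredient on the earthquake side. A small sketch of estimating its b-value with the standard Aki maximum-likelihood formula follows; the synthetic catalogue and completeness magnitude are illustrative assumptions.

```python
# Sketch of the Gutenberg-Richter magnitude-frequency relation:
# log10 N(>=M) = a - b*M, with the b-value estimated by the Aki (1965)
# maximum-likelihood formula. Synthetic catalogue and Mc are illustrative.
import numpy as np

def b_value_mle(mags, mc):
    """Aki maximum-likelihood b-value for magnitudes >= mc
    (for binned catalogues, mc is usually replaced by mc - bin/2)."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

rng = np.random.default_rng(1)
mags = rng.exponential(scale=1 / np.log(10), size=5000) + 2.0  # b ~= 1 by construction
print(f"estimated b-value: {b_value_mle(mags, mc=2.0):.2f}")
```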

  1. A Comparison of Geodetic and Geologic Rates Prior to Large Strike-Slip Earthquakes: A Diversity of Earthquake-Cycle Behaviors?

    NASA Astrophysics Data System (ADS)

    Dolan, James F.; Meade, Brendan J.

    2017-12-01

    Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 MW = 7.6 Izmit rupture are ˜40% faster than Holocene geologic rates. In contrast, geodetic rates of ˜6-8 mm/yr along the Denali fault prior to the 2002 MW = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ˜12 mm/yr. In the third example where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 MW = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ˜11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.

  2. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  3. Focal Depth of the WenChuan Earthquake Aftershocks from modeling of Seismic Depth Phases

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Zeng, X.; Chong, J.; Ni, S.; Chen, Y.

    2008-12-01

    After the great WenChuan earthquake of May 12, 2008 in Sichuan Province, China, tens of thousands of earthquakes occurred, with hundreds of them stronger than M4. These aftershocks provide valuable information about the seismotectonics and rupture processes of the mainshock; in particular, an accurate spatial distribution of aftershocks is very informative for determining rupture fault planes. However, focal depth cannot be well resolved with first arrivals alone, recorded by the relatively sparse network in Sichuan Province, so the 3D seismicity distribution is difficult to obtain even though horizontal locations can be determined with an accuracy of 5 km. Instead, local/regional depth phases such as sPmP, sPn, and sPL and teleseismic pP and sP are very sensitive to depth and can readily be modeled to determine depth with an accuracy of 2 km. With a reference 1D velocity structure resolved from receiver functions and seismic refraction studies, local/regional depth phases such as sPmP, sPn, and sPL are identified by comparing observed waveforms with synthetic seismograms computed with generalized ray theory and reflectivity methods. For teleseismic depth phases, well observed for M5.5 and stronger events, we developed an algorithm to invert both depth and focal mechanism from P and SH waveforms. We also employed the Cut and Paste (CAP) method developed by Zhao and Helmberger to model mechanism and depth with local waveforms, which constrains depth by fitting Pnl waveforms and the relative weight between surface waves and Pnl. After modeling the depth phases for hundreds of events, we find that most of the M4 earthquakes occur between 2 and 18 km depth, with aftershock depths ranging from 4 to 12 km in the southern half of the Longmenshan fault, while aftershocks in the northern half feature a larger depth range, up to 18 km. Therefore the seismogenic zone in the northern segment is deeper than in the southern segment. All the aftershocks occur in the upper crust, given that the Moho is deeper than 40 km, or even 60 km west of the

  4. Intraplate triggered earthquakes: Observations and interpretation

    USGS Publications Warehouse

    Hough, S.E.; Seeber, L.; Armbruster, J.G.

    2003-01-01

    We present evidence that at least two of the three 1811-1812 New Madrid, central United States, mainshocks and the 1886 Charleston, South Carolina, earthquake triggered earthquakes at regional distances. In addition to previously published evidence for triggered earthquakes in the northern Kentucky/southern Ohio region in 1812, we present evidence suggesting that triggered events might have occurred in the Wabash Valley, to the south of the New Madrid Seismic Zone, and near Charleston, South Carolina. We also discuss evidence that earthquakes might have been triggered in northern Kentucky within seconds of the passage of surface waves from the 23 January 1812 New Madrid mainshock. After the 1886 Charleston earthquake, accounts suggest that triggered events occurred near Moodus, Connecticut, and in southern Indiana. Notwithstanding the uncertainty associated with analysis of historical accounts, there is evidence that at least three out of the four known Mw 7 earthquakes in the central and eastern United States seem to have triggered earthquakes at distances beyond the typically assumed aftershock zone of 1-2 mainshock fault lengths. We explore the possibility that remotely triggered earthquakes might be common in low-strain-rate regions. We suggest that in a low-strain-rate environment, permanent, nonelastic deformation might play a more important role in stress accumulation than it does in interplate crust. Using a simple model incorporating elastic and anelastic strain release, we show that, for realistic parameter values, faults in intraplate crust remain close to their failure stress for a longer part of the earthquake cycle than do faults in high-strain-rate regions. Our results further suggest that remotely triggered earthquakes occur preferentially in regions of recent and/or future seismic activity, which suggests that faults are at a critical stress state in only some areas. Remotely triggered earthquakes may thus serve as beacons that identify regions of

  5. Low frequency (<1Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

    We present the analysis of simulations at low frequency (<1 Hz) of historical and hypothetical earthquakes in Central Mexico, by using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding regarding the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters that Mexico has faced in the last decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates on the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g. Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, and noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006), and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a

  6. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly-available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground-motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  7. Heterogeneous slip distribution on faults responsible for large earthquakes: characterization and implications for tsunami modelling

    NASA Astrophysics Data System (ADS)

    Baglione, Enrico; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano

    2017-04-01

    The fact that ruptures on the generating faults of large earthquakes are strongly heterogeneous has been demonstrated over the last few decades by a large number of studies. The effort to retrieve reliable finite-fault models (FFMs) for large earthquakes that occurred worldwide, mainly by means of the inversion of different kinds of geophysical data, has been accompanied in recent years by the systematic collection and format homogenisation of the published/proposed FFMs for different earthquakes into specifically conceived databases, such as SRCMOD. The main aim of this study is to explore characteristic patterns in the slip distribution of large earthquakes, using a subset of the FFMs contained in SRCMOD covering events with moment magnitude equal to or larger than 6 that occurred worldwide over the last 25 years. We focus on those FFMs that exhibit a single, clear region of high slip (i.e. a single asperity), which are found to represent the majority of the events. For these FFMs, it is reasonable to best-fit the slip model by means of a 2D Gaussian distribution. Two different methods are used (least-squares and highest-similarity), and correspondingly two "best-fit" indexes are introduced. As a result, two distinct 2D Gaussian distributions are obtained for each FFM. To quantify how well these distributions are able to mimic the original slip heterogeneity, we calculate and compare the vertical displacements at the Earth's surface in the near field induced by the original FFM slip, by an equivalent uniform-slip model, by a depth-dependent slip model, and by the two "best" Gaussian slip models. The coseismic vertical surface displacement is used as the metric for comparison. Results show that, on average, the best results are obtained with 2D Gaussian distributions based on similarity-index fitting. Finally, we restrict our attention to those single-asperity FFMs associated with earthquakes that generated tsunamis. We choose a few events for which tsunami
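
    A minimal sketch of approximating a single-asperity slip patch with a 2D Gaussian is given below. Here the Gaussian centroid and covariance are simply the slip-weighted spatial moments, a simpler stand-in for the least-squares and highest-similarity fits used in the study; the synthetic slip model is an illustrative assumption.

```python
# Approximate a single-asperity slip distribution with a 2D Gaussian whose
# centroid and covariance are the slip-weighted spatial moments (stand-in for
# the paper's least-squares / highest-similarity fits). Toy slip model only.
import numpy as np

def gaussian_from_slip(x, y, slip):
    """Slip-weighted centroid and covariance of a slip grid (x, y in km)."""
    w = slip / slip.sum()
    mx, my = (w * x).sum(), (w * y).sum()
    cxx = (w * (x - mx) ** 2).sum()
    cyy = (w * (y - my) ** 2).sum()
    cxy = (w * (x - mx) * (y - my)).sum()
    return (mx, my), np.array([[cxx, cxy], [cxy, cyy]])

def gaussian_slip(x, y, mean, cov, peak):
    """Evaluate a 2D Gaussian slip surface with the given peak value."""
    d = np.stack([x - mean[0], y - mean[1]], axis=-1)
    q = np.einsum("...i,ij,...j->...", d, np.linalg.inv(cov), d)
    return peak * np.exp(-0.5 * q)

# Synthetic "true" slip patch on a 40 km x 20 km fault (illustrative).
x, y = np.meshgrid(np.linspace(0, 40, 81), np.linspace(0, 20, 41))
true_slip = gaussian_slip(x, y, (25.0, 8.0), np.array([[30.0, 5.0], [5.0, 10.0]]), 3.0)

mean, cov = gaussian_from_slip(x, y, true_slip)
model = gaussian_slip(x, y, mean, cov, true_slip.max())
misfit = np.sqrt(np.mean((model - true_slip) ** 2))
print("recovered centroid (km):", np.round(mean, 2), " rms misfit (m):", round(misfit, 3))
```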

  8. Earthquake source properties from pseudotachylite

    USGS Publications Warehouse

    Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan

    2016-01-01

    The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006] as have been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate sized earthquakes of the Gole Larghe fault zone in the Italian Alps where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. More generally, these events have inferred earthquake source parameters that are rare; for instance, a few percent of the global earthquake population has stress drops as large, unless: fracture energy is routinely greater than existing models allow

  9. Local earthquake tomography of Scotland

    NASA Astrophysics Data System (ADS)

    Luckett, Richard; Baptie, Brian

    2015-03-01

    Scotland is a relatively aseismic region for the use of local earthquake tomography, but 40 yr of earthquakes recorded by a good and growing network make it possible. A careful selection is made from the earthquakes located by the British Geological Survey (BGS) over the last four decades to provide a data set maximising arrival time accuracy and ray path coverage of Scotland. A large number of 1-D velocity models with different layer geometries are considered and differentiated by employing quarry blasts as ground-truth events. Then, SIMULPS14 is used to produce a robust 3-D tomographic P-wave velocity model for Scotland. In areas of high resolution the model shows good agreement with previously published interpretations of seismic refraction and reflection experiments. However, the model shows relatively little lateral variation in seismic velocity except at shallow depths, where sedimentary basins such as the Midland Valley are apparent. At greater depths, higher velocities in the northwest parts of the model suggest that the thickness of crust increases towards the south and east. This observation is also in agreement with previous studies. Quarry blasts used as ground truth events and relocated with the preferred 3-D model are shown to be markedly more accurate than when located with the existing BGS 1-D velocity model.

  10. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    NASA Astrophysics Data System (ADS)

    Gunardi, Setiawan, Ezra Putranda

    2015-12-01

    Indonesia is a country with high earthquake risk because of its position on the boundaries of the Earth's tectonic plates. An earthquake can cause a very large amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk away from the government or the (re)insurance company, so that enough money can be collected to implement the rehabilitation and reconstruction program. One such mechanism is to issue a catastrophe bond, 'act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and then invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is reduced or stopped, and is instead paid to the sponsor company to compensate its loss from the catastrophe event. When only earthquakes are considered, the amount of the reduced cash flow can be determined from the earthquake's magnitude. A case study with Indonesian earthquake magnitude data shows that the distribution of the maximum magnitude can be modeled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) interest rate model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, 'coupon only at risk' bonds, and 'principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model parameters, the GEV parameters, the coupon percentage, and the cash-flow reduction rule is then explained via Monte Carlo simulation.
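
    A simplified Monte Carlo sketch of this pricing idea follows: annual maximum magnitudes drawn from a GEV distribution, a CIR short-rate path for discounting, and coupons/principal reduced when trigger magnitudes are exceeded. All parameter values, trigger levels, and the payout rule are illustrative assumptions, not the paper's calibration or formulas.

```python
# Toy Monte Carlo pricing of a 'principal and coupon at risk' CAT bond:
# GEV annual-maximum magnitudes, CIR short-rate discounting, magnitude triggers.
# Every number below is illustrative.
import numpy as np

rng = np.random.default_rng(42)

def gev_sample(n, mu, sigma, xi):
    """Draw n annual-maximum magnitudes from a GEV(mu, sigma, xi) via inverse CDF."""
    u = rng.uniform(size=n)
    if abs(xi) < 1e-9:
        return mu - sigma * np.log(-np.log(u))
    return mu + sigma * ((-np.log(u)) ** (-xi) - 1.0) / xi

def cir_discount_factors(T, r0=0.06, kappa=0.5, theta=0.06, vol=0.05, steps=100):
    """Simulate one CIR short-rate path (Euler) and return year-end discount factors."""
    dt = 1.0 / steps
    r, integral, dfs = r0, 0.0, []
    for _ in range(T):
        for _ in range(steps):
            r = abs(r + kappa * (theta - r) * dt
                    + vol * np.sqrt(max(r, 0.0) * dt) * rng.normal())
            integral += r * dt
        dfs.append(np.exp(-integral))
    return np.array(dfs)

def price_cat_bond(n_paths=2000, T=3, face=100.0, coupon=8.0,
                   m_coupon_trigger=7.0, m_principal_trigger=7.8,
                   gev=(6.0, 0.4, 0.1)):
    """A year's coupon is lost if the annual maximum magnitude exceeds the coupon
    trigger; principal is lost and the bond terminates above the principal trigger."""
    prices = []
    for _ in range(n_paths):
        dfs = cir_discount_factors(T)
        mags = gev_sample(T, *gev)
        pv, alive = 0.0, True
        for year in range(T):
            if not alive:
                break
            if mags[year] > m_principal_trigger:
                alive = False                  # principal wiped out, no more payments
                continue
            if mags[year] <= m_coupon_trigger:
                pv += coupon * dfs[year]
            if year == T - 1:
                pv += face * dfs[year]         # principal repaid at maturity
        prices.append(pv)
    return float(np.mean(prices))

print(f"simulated CAT bond price: {price_cat_bond():.2f}")
```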

  11. The repetition of large-earthquake ruptures.

    PubMed Central

    Sieh, K

    1996-01-01

    This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. PMID:11607662

  12. The Road to Total Earthquake Safety

    NASA Astrophysics Data System (ADS)

    Frohlich, Cliff

    Cinna Lomnitz is possibly the most distinguished earthquake seismologist in all of Central and South America. Among many other credentials, Lomnitz has personally experienced the shaking and devastation that accompanied no fewer than five major earthquakes—Chile, 1939; Kern County, California, 1952; Chile, 1960; Caracas, Venezuela, 1967; and Mexico City, 1985. Thus he clearly has much to teach someone like myself, who has never even actually felt a real earthquake. What is this slim book? The Road to Total Earthquake Safety summarizes Lomnitz's May 1999 presentation at the Seventh Mallet-Milne Lecture, sponsored by the Society for Earthquake and Civil Engineering Dynamics. His arguments are motivated by the damage that occurred in three earthquakes—Mexico City, 1985; Loma Prieta, California, 1989; and Kobe, Japan, 1995. All three quakes occurred in regions where earthquakes are common. Yet in all three some of the worst damage occurred in structures located a significant distance from the epicenter and engineered specifically to resist earthquakes. Some of the damage also indicated that the structures failed because they had experienced considerable rotational or twisting motion. Clearly, Lomnitz argues, there must be fundamental flaws in the usually accepted models explaining how earthquakes generate strong motions, and how we should design resistant structures.

  13. Analysis the Source model of the 2009 Mw 7.6 Padang Earthquake in Sumatra Region using continuous GPS data

    NASA Astrophysics Data System (ADS)

    Amertha Sanjiwani, I. D. M.; En, C. K.; Anjasmara, I. M.

    2017-12-01

    A seismic gap on the interface along the Sunda subduction zone has been proposed among the 2000, 2004, 2005 and 2007 great earthquakes. This seismic gap therefore plays an important role in the earthquake risk on the Sunda trench. The Mw 7.6 Padang earthquake, an intraslab event, occurred on September 30, 2009, about 250 km east of the Sunda trench, close to the seismic gap on the interface. To understand the interaction between the seismic gap and the Padang earthquake, data from twelve continuous GPS stations of the SUGAR network are used in this study to estimate the source model of this event. The daily GPS coordinates one month before and after the earthquake were calculated with the GAMIT software, and the coseismic displacements were evaluated from the analysis of the coordinate time series in the Padang region. This geodetic network provides rather good spatial coverage for examining the seismic source along the Padang region in detail. The general pattern of coseismic horizontal displacement is motion toward the epicenter and the trench, and the coseismic vertical displacement pattern is uplift. The largest coseismic displacements, derived from the MSAI station, are 35.0 mm for the horizontal component, directed toward S32.1°W, and 21.7 mm for the vertical component. The second largest, derived from the LNNG station, are 26.6 mm for the horizontal component, directed toward N68.6°W, and 3.4 mm for the vertical component. Next, we will use a uniform stress-drop inversion of the coseismic displacement field to estimate the source model, and the relationship between the seismic gap on the interface and the intraslab Padang earthquake will then be discussed. Keywords: seismic gap, Padang earthquake, coseismic displacement.
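
    A small sketch of how a horizontal coseismic offset and its azimuth are typically derived from the east/north components of a GPS coordinate time series (difference of post- and pre-event averages) follows. The component values are illustrative, chosen only to reproduce an MSAI-like offset; they are not the station's actual data.

```python
# Convert east/north coseismic offsets into a horizontal magnitude and azimuth.
# Offsets below are illustrative, not the actual MSAI time-series values.
import math

def horizontal_offset(de_mm, dn_mm):
    """Return (magnitude in mm, azimuth in degrees clockwise from north)."""
    mag = math.hypot(de_mm, dn_mm)
    az = math.degrees(math.atan2(de_mm, dn_mm)) % 360.0
    return mag, az

de, dn = -18.6, -29.6   # illustrative east/north offsets (mm)
mag, az = horizontal_offset(de, dn)
print(f"horizontal offset: {mag:.1f} mm, azimuth: {az:.1f} deg from north")
# An azimuth of ~212 deg corresponds to roughly S32W, matching the
# MSAI direction quoted in the abstract.
```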

  14. Modeling earthquake magnitudes from injection-induced seismicity on rough faults

    NASA Astrophysics Data System (ADS)

    Maurer, J.; Dunham, E. M.; Segall, P.

    2017-12-01

    It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with high shear-to-effective normal stress ratio may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015), however, these calculations are done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable, but strongly dependent on background shear/effective normal stress. In this study, we use a 2-D elasto-dynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as inputs to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.

  15. Scale-invariant structure of energy fluctuations in real earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Chang, Zhe; Wang, Huanyu; Lu, Hong

    2017-11-01

    Earthquakes are obviously complex phenomena associated with complicated spatiotemporal correlations, and they are generally characterized by two power laws: the Gutenberg-Richter (GR) and the Omori-Utsu laws. However, an important challenge has been to explain two apparently contrasting features: the GR and Omori-Utsu laws are scale-invariant and unaffected by energy or time scales, whereas earthquakes occasionally exhibit a characteristic energy or time scale, such as with asperity events. In this paper, three high-quality datasets on earthquakes were used to calculate the earthquake energy fluctuations at various spatiotemporal scales, and the results reveal the correlations between seismic events regardless of their critical or characteristic features. The probability density functions (PDFs) of the fluctuations exhibit evidence of another scaling that behaves as a q-Gaussian rather than random process. The scaling behaviors are observed for scales spanning three orders of magnitude. Considering the spatial heterogeneities in a real earthquake fault, we propose an inhomogeneous Olami-Feder-Christensen (OFC) model to describe the statistical properties of real earthquakes. The numerical simulations show that the inhomogeneous OFC model shares the same statistical properties with real earthquakes.
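
    The q-Gaussian form invoked above can be sketched directly; it reduces to an ordinary Gaussian as q approaches 1 and develops progressively heavier tails for q > 1. The parameter values below are illustrative, not fits to the datasets analyzed in the study.

```python
# Minimal sketch of the q-Gaussian used to describe energy-fluctuation PDFs:
# f(x) proportional to [1 - (1 - q) * beta * x^2]^(1/(1-q)), Gaussian as q -> 1.
# Parameter values are illustrative.
import numpy as np

def q_gaussian(x, q, beta):
    """Unnormalized q-Gaussian (q-exponential of -beta * x^2)."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(-beta * x ** 2)
    base = 1.0 - (1.0 - q) * beta * x ** 2
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

x = np.linspace(-10, 10, 2001)
for q in (1.0, 1.5, 2.0):
    y = q_gaussian(x, q, beta=1.0)
    y /= np.sum(y) * (x[1] - x[0])           # normalize numerically
    print(f"q = {q:.1f}: tail value f(5) = {y[np.searchsorted(x, 5.0)]:.2e}")
```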

  16. Measuring the effectiveness of earthquake forecasting in insurance strategies

    NASA Astrophysics Data System (ADS)

    Mignan, A.; Muir-Wood, R.

    2009-04-01

    Given the difficulty of judging whether the skill of a particular methodology of earthquake forecasts is offset by the inevitable false alarms and missed predictions, it is important to find a means to weigh the successes and failures according to a common currency. Rather than judge subjectively the relative costs and benefits of predictions, we develop a simple method to determine if the use of earthquake forecasts can increase the profitability of active financial risk management strategies employed in standard insurance procedures. Three types of risk management transactions are employed: (1) insurance underwriting, (2) reinsurance purchasing and (3) investment in CAT bonds. For each case premiums are collected based on modelled technical risk costs and losses are modelled for the portfolio in force at the time of the earthquake. A set of predetermined actions follow from the announcement of any change in earthquake hazard, so that, for each earthquake forecaster, the financial performance of an active risk management strategy can be compared with the equivalent passive strategy in which no notice is taken of earthquake forecasts. Overall performance can be tracked through time to determine which strategy gives the best long term financial performance. This will be determined by whether the skill in forecasting the location and timing of a significant earthquake (where loss is avoided) is outweighed by false predictions (when no premium is collected). This methodology is to be tested in California, where catastrophe modeling is reasonably mature and where a number of researchers issue earthquake forecasts.

  17. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability
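
    The scoring underlying such tests can be sketched as follows: each forecast specifies expected earthquake rates per space-magnitude bin, the observed catalogue gives counts per bin, and the joint Poisson log-likelihood is compared between two forecasts. The toy rates and counts below are illustrative, not RELM submissions.

```python
# Sketch of Poisson log-likelihood scoring of gridded earthquake forecasts,
# in the spirit of the likelihood tests described above. Toy data only.
import numpy as np
from scipy.stats import poisson

def poisson_log_likelihood(expected_rates, observed_counts):
    """Sum of log Poisson probabilities over all forecast bins."""
    return float(np.sum(poisson.logpmf(observed_counts, expected_rates)))

rng = np.random.default_rng(7)
n_bins = 1000
forecast_a = rng.gamma(shape=0.5, scale=0.02, size=n_bins)   # expected events/bin/yr
forecast_b = np.full(n_bins, forecast_a.mean())              # spatially uniform rival
observed = rng.poisson(forecast_a)                           # "reality" follows A here

ll_a = poisson_log_likelihood(forecast_a, observed)
ll_b = poisson_log_likelihood(forecast_b, observed)
print(f"log-likelihood A = {ll_a:.1f}, B = {ll_b:.1f}, difference (A - B) = {ll_a - ll_b:.1f}")
```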

  18. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant for describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset for California; the other two are quantitative laws for earthquake generation on each single fault and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.

  19. Earthquakes for Kids

    MedlinePlus

    ... across a fault to learn about past earthquakes. Science Fair Projects A GPS instrument measures slow movements of the ground. Become an Earthquake Scientist Cool Earthquake Facts Today in Earthquake History A scientist stands in ...

  20. Fault failure with moderate earthquakes

    USGS Publications Warehouse

    Johnston, M.J.S.; Linde, A.T.; Gladwin, M.T.; Borcherdt, R.D.

    1987-01-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10^-8), with borehole dilatometers (resolution 10^-10) and a 3-component borehole strainmeter (resolution 10^-9). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure. © 1987.

  1. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisition. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or water of seas and lakes. The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using the SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul.
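
    A quick back-of-the-envelope check of the fringe spacing quoted above, assuming an ERS C-band radar wavelength of about 5.66 cm and a look angle of roughly 23 degrees (both assumed values, not stated in the caption):

```python
# Back-of-the-envelope check of the fringe spacing quoted above, assuming an
# ERS C-band radar wavelength of ~5.66 cm and a look angle of ~23 degrees
# (both assumed values, not stated in the caption).
import math

wavelength_mm = 56.6
look_angle = math.radians(23.0)

per_fringe_los = wavelength_mm / 2.0          # one color cycle = half a wavelength
# If the motion were purely horizontal along the radar look direction, one
# fringe would correspond to the line-of-sight value divided by sin(look angle).
per_fringe_horizontal = per_fringe_los / math.sin(look_angle)

print(f"{per_fringe_los:.0f} mm line-of-sight per fringe")      # ~28 mm
print(f"{per_fringe_horizontal:.0f} mm horizontal per fringe")  # ~70 mm
```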

  2. Multicultural Challenges in Educational Policies within a Non-Conservative Scenario: The Case of the Emerging Reforms in Higher Education in Brazil

    ERIC Educational Resources Information Center

    Canen, Ana

    2005-01-01

    The article discusses the extent to which multiculturalism has had an impact in the emerging reforms in higher education in Brazil, against the backdrop of the rise of a new non-Conservative, Labour-oriented government whose political agenda is marked by a discursive stand against conservatism, neo-liberalism and neocolonialism. Building on a…

  3. Slip reactivation model for the 2011 Mw9 Tohoku earthquake: Dynamic rupture, sea floor displacements and tsunami simulations.

    NASA Astrophysics Data System (ADS)

    Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.

    2014-12-01

    The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-Net and Kik-Net) revealed complex ground motion patterns attributed to source effects, allowing detailed information on the rupture process to be captured. The seismic stations surrounding the Miyagi region (MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong motion data by Lee et al. (2011). In this model, two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid pressurization effects. Kanamori & Heaton (2002) argued that during faulting of large earthquakes the temperature rises high enough to create melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on a slip-weakening friction law with two sudden sequential stress drops. Our model starts like an M7-8 earthquake that only weakly ruptures towards the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip that fully breaks the trench and transforms the earthquake into a megathrust event. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea floor displacement reaches 8-10 meters of
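
    The sketch below shows one way a two-stage slip-weakening friction curve of the kind described (a first strength drop at small slip and a second drop once slip becomes very large) might be written; the friction levels and weakening distances are placeholder values, not the parameters used in the SPECFEM3D simulations.

```python
# Illustrative friction law only (not the published SPECFEM3D setup): a
# slip-weakening curve with two sequential strength drops, so that a point
# weakened by the first rupture front can weaken further and slip again at
# very large slip (e.g. melting / thermal pressurization). Dc1, Dc2 and the
# friction levels are placeholder values.
import numpy as np

def two_stage_slip_weakening(slip, mu_s=0.6, mu_d1=0.4, mu_d2=0.1,
                             Dc1=1.0, Dc2=20.0):
    """Friction coefficient as a function of cumulative slip (meters)."""
    slip = np.asarray(slip, dtype=float)
    mu = np.empty_like(slip)
    stage1 = slip <= Dc1                 # first linear weakening stage
    stage2 = slip > Dc2                  # second drop at very large slip
    mu[stage1] = mu_s - (mu_s - mu_d1) * slip[stage1] / Dc1
    mu[~stage1 & ~stage2] = mu_d1        # intermediate plateau
    mu[stage2] = np.maximum(
        mu_d2, mu_d1 - (mu_d1 - mu_d2) * (slip[stage2] - Dc2) / Dc2)
    return mu

print(two_stage_slip_weakening([0.0, 0.5, 5.0, 25.0, 60.0]))
```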

  4. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes

    PubMed Central

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    Background: A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. Methods: An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results: The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found to be more vulnerable in this regard. Conclusion: The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. PMID:26959647
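
    A hedged sketch of the general idea of entering effect measures into a loss model on the log-odds scale: a baseline casualty probability (for example, from a HAZUS-style damage state) is adjusted by multiplicative odds ratios for the human-related risk factors. The odds-ratio values below are placeholders, not the estimates obtained in the study's meta-analysis.

```python
# Sketch of the general idea only: adjust a baseline casualty probability
# (e.g. from a HAZUS-style damage state) with multiplicative odds ratios for
# human-related risk factors on the log-odds scale. The odds ratios below are
# placeholders, not the values estimated in the study's meta-analysis.
def adjusted_probability(baseline_prob, odds_ratios):
    """Apply multiplicative odds ratios to a baseline probability."""
    odds = baseline_prob / (1.0 - baseline_prob)
    for odds_ratio in odds_ratios:
        odds *= odds_ratio
    return odds / (1.0 + odds)

baseline = 0.02                      # baseline probability of severe injury
risk_factors = {"age>65": 1.8, "female": 1.2, "disability": 2.5, "low_SES": 1.4}
print(f"adjusted probability: {adjusted_probability(baseline, risk_factors.values()):.3f}")
```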

  5. New insights on active fault geometries in the Mentawai region of Sumatra, Indonesia, from broadband waveform modeling of earthquake source parameters

    NASA Astrophysics Data System (ADS)

    WANG, X.; Wei, S.; Bradley, K. E.

    2017-12-01

    Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al. 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-sized earthquakes (M > 4.5) to refine their source parameters, which include locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties of the horizontal location, depth, and dip angle estimates are as small as 5 km, 2 km, and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or reactivation of an old normal fault buried beneath the forearc basin. We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along strike

  6. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block model to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow one to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different
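
    As a concrete illustration of the branching-process description of seismic occurrence, the snippet below evaluates an ETAS-type conditional intensity (a background rate plus Omori-law aftershock contributions weighted by parent magnitude); the parameter values and the toy event history are generic placeholders rather than fitted values.

```python
# Minimal sketch of an ETAS-type conditional intensity: a background rate plus
# Omori-law aftershock contributions weighted by parent magnitude. Parameter
# values and the toy event history are generic placeholders, not fitted values.
import numpy as np

def etas_rate(t, history, mu=0.2, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)^p"""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * np.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

past = [(0.0, 5.5), (1.2, 4.1), (3.0, 6.0)]   # (time in days, magnitude)
print(f"rate half a day after the M6.0: {etas_rate(3.5, past):.3f} events/day")
```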

  7. Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) Model - An Unified Concept for Earthquake Precursors Validation

    NASA Technical Reports Server (NTRS)

    Pulinets, S.; Ouzounov, D.

    2010-01-01

    The paper presents a concept for a complex multidisciplinary approach to the problem of clarifying the nature of short-term earthquake precursors observed in the atmosphere, in atmospheric electricity, and in the ionosphere and magnetosphere. Our approach is based on the most fundamental principles of tectonics, giving the understanding that an earthquake is the ultimate result of relative movement of tectonic plates and blocks of different sizes. Different kinds of gases (methane, helium, hydrogen, and carbon dioxide) leaking from the crust can serve as carrier gases for radon, including over underwater seismically active faults. The action of radon on atmospheric gases is similar to the effect of cosmic rays in the upper layers of the atmosphere: it ionizes the air, and the ions form nuclei of water condensation. Condensation of water vapor is accompanied by the release of latent heat, which is the main cause of the observed atmospheric thermal anomalies. Formation of large ion clusters changes the conductivity of the boundary layer of the atmosphere and the parameters of the global electric circuit over the active tectonic faults. Variations of atmospheric electricity are the main source of ionospheric anomalies over seismically active areas. The Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model can explain most of these events as a synergy between different ground surface, atmosphere, and ionosphere processes and anomalous variations, which are usually named short-term earthquake precursors. A newly developed approach, the Interdisciplinary Space-Terrestrial Framework (ISTF), can also provide verification of these precursory processes in seismically active regions. The main outcome of this paper is a unified concept for systematic validation of different types of earthquake precursors, united by a physical basis in one common theory.

  8. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations of our capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations in seismicity in the space-time-magnitude domain encode some information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest-neighbors that allows us to consider the overall space-time-magnitude distribution of the k earthquakes which are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k nearest neighbors. In essence, the main goal of this study is to verify whether different classes of target event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
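
    In its usual formulation the pairwise rescaled distance underlying the method reads eta_ij = t_ij * r_ij^d * 10^(-b*m_i) for a candidate parent i and child j. The sketch below computes the ordinary (k = 1) nearest-neighbor version on a toy catalog; the fractal dimension d, the b-value, and the catalog entries are illustrative.

```python
# Sketch of the ordinary (k = 1) nearest-neighbor distance of Baiesi &
# Paczuski (2004) / Zaliapin et al. (2008): eta_ij = t_ij * r_ij^d * 10^(-b*m_i)
# for candidate parent i and child j. The b-value, fractal dimension d, and
# the toy catalog are illustrative.
import numpy as np

def nearest_neighbor_parents(catalog, b=1.0, d=1.6):
    """catalog: list of (t_days, x_km, y_km, magnitude).
    Returns, for each event, (index of parent, eta) or (None, inf)."""
    parents = []
    for j, (tj, xj, yj, mj) in enumerate(catalog):
        best = (None, np.inf)
        for i, (ti, xi, yi, mi) in enumerate(catalog):
            if ti >= tj:
                continue                           # parent must precede child
            t_ij = tj - ti
            r_ij = np.hypot(xj - xi, yj - yi)
            eta = t_ij * (r_ij ** d) * 10.0 ** (-b * mi)
            if eta < best[1]:
                best = (i, eta)
        parents.append(best)
    return parents

toy = [(0.0, 0.0, 0.0, 5.0), (0.5, 2.0, 1.0, 3.0), (30.0, 80.0, 10.0, 4.0)]
print(nearest_neighbor_parents(toy))
```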

  9. Improving Models for Coseismic And Postseismic Deformation from the 2002 Denali, Alaska Earthquake

    NASA Astrophysics Data System (ADS)

    Harper, H.; Freymueller, J. T.

    2016-12-01

    Given the multi-decadal temporal scale of postseismic deformation, predictions of previous models for postseismic deformation resulting from the 2002 Denali Fault earthquake (M 7.9) do not agree with longer-term observations. In revising the past postseismic models with what is now over a decade of data, the first step is revisiting the coseismic displacements and slip distribution of the earthquake. Advances in processing allow us to better constrain coseismic displacement estimates, which affect slip distribution predictions in modeling. Additionally, updating the slip model structure from a homogeneous model to a layered model rectifies previous inconsistencies between coseismic and postseismic models. Previous studies have shown that two primary processes contribute to postseismic deformation: afterslip, which decays with a short time constant, and viscoelastic relaxation, which decays with a longer time constant. We fit continuous postseismic GPS time series with three different relaxation models: 1) logarithmic decay + exponential decay, 2) log + exp + exp, and 3) log + log + exp. A grid search is used to minimize the total model WRSS (weighted residual sum of squares), and we find optimal relaxation times of: 1) 0.125 years (log) and 21.67 years (exp); 2) 0.14 years (log), 0.68 years (exp), and 28.33 years (exp); 3) 0.055 years (log), 14.44 years (log), and 22.22 years (exp). While there is not a one-to-one correspondence between a particular decay constant and a mechanism, the optimization of these constants allows us to model the future time series and constrain the contributions of different postseismic processes.
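
    A minimal sketch of the fitting step for one GPS component, assuming relaxation model (1) above (one logarithmic plus one exponential term) with a grid search over the two time constants and linear least squares for the amplitudes; the synthetic data, noise level, and search ranges are invented for illustration.

```python
# Minimal sketch of fitting one GPS component with relaxation model (1) above
# (offset + rate + logarithmic + exponential term), grid-searching the two
# time constants and solving for amplitudes by least squares. The synthetic
# data, noise level and search ranges are invented for illustration.
import numpy as np

def design_matrix(t, tau_log, tau_exp):
    """Columns: offset, linear rate, log decay, exponential decay."""
    return np.column_stack([np.ones_like(t), t,
                            np.log(1.0 + t / tau_log),
                            1.0 - np.exp(-t / tau_exp)])

rng = np.random.default_rng(1)
t = np.linspace(0.01, 14.0, 300)                    # years after the earthquake
truth = design_matrix(t, 0.125, 21.67) @ np.array([2.0, 1.0, 40.0, 120.0])
obs = truth + rng.normal(0.0, 2.0, t.size)          # position in mm, with noise

best = (np.inf, None)
for tau_log in np.logspace(-2, 0, 30):              # 0.01 to 1 yr
    for tau_exp in np.logspace(0, 2, 30):           # 1 to 100 yr
        G = design_matrix(t, tau_log, tau_exp)
        coeffs, *_ = np.linalg.lstsq(G, obs, rcond=None)
        rss = np.sum((obs - G @ coeffs) ** 2)       # weights omitted here
        if rss < best[0]:
            best = (rss, (tau_log, tau_exp))
print("best (tau_log, tau_exp) in years:", best[1])
```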

  10. 23 October 2011 (Mw=7.2) Van Earthquake (Turkey): Revised Coseismic and Postseismic Models from New GPS Observations

    NASA Astrophysics Data System (ADS)

    Dogan, U.; Demir, D. O.; Cakir, Z.; Ergintav, S.; Cetin, S.; Ozdemir, A.; Reilinger, R. E.

    2017-12-01

    The 23 October 2011, Mw = 7.2 Van earthquake occurred in eastern Turkey on a thrust fault trending NE-SW and dipping to the north. We use GPS time series from survey and continuous stations to determine the coseismic deformation and to identify spatial and temporal changes in the near and far field due to postseismic processes (2011-2017). The coseismic deformation in the near field is derived from GPS data collected at 25 cadastral GPS survey sites. The coseismic horizontal displacements reach nearly 50 cm close to the surface trace of the fault that ruptured at depth during the earthquake. The density and distribution of the GPS sites allow us to better constrain the extent of the coseismic rupture using elastic dislocations on triangular faults embedded in a homogeneous elastic half space. Modeling studies suggest that the coseismic rupture stopped west of Erçek Lake before veering to the north. The estimated seismic moment is in good agreement with the seismologically and geodetically estimated moments from the finite-fault model. Our preferred coseismic model consists of a simple elliptical slip patch centered at around 8 km depth with a maximum slip of about 2.5 m, consistent with previous estimates based on InSAR measurements. The postseismic deformation field is derived from far-field continuous GPS observations (10.2011 - 11.2017) and near-field GPS campaigns (10.2011 - 09.2015). The postseismic time series are fit better with a logarithmic than with an exponential function, suggesting that the postseismic deformation is due to afterslip. We then updated our published postseismic model using this coseismic model and data sets extended to the end of 2017. The results show that during the 6 years following the earthquake, afterslip of up to 65 cm occurred at relatively shallow (< 10 km) depths, mostly above the deep coseismic slip that reaches depths > 15 km. New interpretations of the shallow afterslip also add further evidence that

  11. Satellite Geodetic Constraints On Earthquake Processes: Implications of the 1999 Turkish Earthquakes for Fault Mechanics and Seismic Hazards on the San Andreas Fault

    NASA Technical Reports Server (NTRS)

    Reilinger, Robert

    2005-01-01

    Our principal activities during the initial phase of this project include: 1) Continued monitoring of postseismic deformation for the 1999 Izmit and Duzce, Turkey earthquakes from repeated GPS survey measurements and expansion of the Marmara Continuous GPS Network (MAGNET); 2) Establishing three North Anatolian fault crossing profiles (10 sites/profile) at locations that experienced major surface-fault earthquakes at different times in the past, to examine strain accumulation as a function of time in the earthquake cycle (2004); 3) Repeat observations of selected sites in the fault-crossing profiles (2005); 4) Repeat surveys of the Marmara GPS network to continue to monitor postseismic deformation; 5) Refining block models for the Marmara Sea seismic gap area to better understand earthquake hazards in the Greater Istanbul area; 6) Continuing development of models for afterslip and distributed viscoelastic deformation for the earthquake cycle. We keep in close contact with MIT colleagues (Brad Hager and Eric Hetland) who are developing models for S. California and for the earthquake cycle in general (Hetland, 2006). In addition, our Turkish partners at the Marmara Research Center have undertaken repeat micro-gravity measurements at the MAGNET sites and have provided us with estimates of gravity change during the period 2003-2005.

  12. Normal forms for Hopf-Zero singularities with nonconservative nonlinear part

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh; Sanders, Jan A.

    In this paper we are concerned with the simplest normal form computation of the systems ẋ = 2x f(x, y^2 + z^2), ẏ = z + y f(x, y^2 + z^2), ż = -y + z f(x, y^2 + z^2), where f is a formal function with real coefficients and without any constant term. These are the classical normal forms of a larger family of systems with Hopf-Zero singularity. Indeed, these are defined such that this family would be a Lie subalgebra for the space of all classical normal form vector fields with Hopf-Zero singularity. The simplest normal forms and simplest orbital normal forms of this family with nonzero quadratic part are computed. We also obtain the simplest parametric normal form of any non-degenerate perturbation of this family within the Lie subalgebra. The symmetry group of the simplest normal forms is also discussed. This is a part of our results in decomposing the normal forms of Hopf-Zero singular systems into systems with a first integral and nonconservative systems.

  13. Slip Distribution of the 2011 Tohoku-oki Earthquake obtained by Geodetic and Tsunami Data and with a 3-D Finite Element Model

    NASA Astrophysics Data System (ADS)

    Romano, F.; Trasatti, E.; Lorito, S.; Ito, Y.; Piatanesi, A.; Lanucara, P.; Hirata, K.; D'Agostino, N.; Cocco, M.

    2012-12-01

    The rupture process of the Great 2011 Tohoku-oki earthquake has been particularly well studied using an unprecedented collection of geophysical data. There is a general agreement among the different source models obtained by modeling seismological, geodetic and tsunami data. A slip patch of nearly 40-50 meters has been imaged and located around and up-dip from the hypocenter by most published models, while some differences exist in the slip pattern retrieved at shallow depths near the trench, likely due to the different resolving power of distinct data sets and to the adopted fault geometry. It is well known that the modeling of great subduction earthquakes requires the use of 3-D structural models in order to properly account for the effects of topography, bathymetry and the geometrical variations of the plate interface, as well as for the effects of elastic contrasts between the subducting plate and the continental lithosphere. In this study we build a 3-D Finite Element (FE) model of the Tohoku-oki area in order to infer the slip distribution of the 2011 earthquake by performing a joint inversion of geodetic (GPS and seafloor observations) and tsunami (ocean bottom pressure sensors, DART and GPS buoys) data. The FE model is used to compute the geodetic and tsunami Green's functions. In order to understand how geometrical and elastic heterogeneities control the inferred slip distribution of the Tohoku-oki earthquake, we compare the slip patterns obtained using both homogeneous and heterogeneous structural models. The goal of this study is to better constrain the slip distribution and the maximum slip amplitudes. In particular, we aim to focus on the rupture process in the shallower part of the fault plane and near the trench, which is crucial to model the tsunami data and to assess the tsunamigenic potential of earthquakes in this region.
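
    Schematically, once Green's functions have been computed with the FE model, the joint inversion reduces to a regularized linear problem d = G s solved for non-negative slip. In the sketch below the kernels are random stand-ins and the data weighting and smoothing are placeholder choices, not the study's actual setup.

```python
# Schematic inversion step only: with Green's functions relating unit slip on
# each subfault to the geodetic and tsunami observations, solve a weighted,
# lightly regularized non-negative least-squares problem for the slip vector.
# The kernels here are random stand-ins for the FE-computed Green's functions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_subfaults, n_geodetic, n_tsunami = 50, 120, 40

G_geod = rng.normal(size=(n_geodetic, n_subfaults))   # placeholder kernels
G_tsu = rng.normal(size=(n_tsunami, n_subfaults))
true_slip = np.clip(rng.normal(10.0, 15.0, n_subfaults), 0.0, None)
d_geod = G_geod @ true_slip + rng.normal(0.0, 0.5, n_geodetic)
d_tsu = G_tsu @ true_slip + rng.normal(0.0, 0.5, n_tsunami)

w_tsu = 2.0                                           # relative data weight
smoothing = 0.1 * np.eye(n_subfaults)                 # crude regularization
G = np.vstack([G_geod, w_tsu * G_tsu, smoothing])
d = np.concatenate([d_geod, w_tsu * d_tsu, np.zeros(n_subfaults)])

slip, _ = nnls(G, d)                                  # slip constrained >= 0
print(f"max inverted slip {slip.max():.1f} (true max {true_slip.max():.1f})")
```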

  14. Earthquake Drill using the Earthquake Early Warning System at an Elementary School

    NASA Astrophysics Data System (ADS)

    Oki, Satoko; Yazaki, Yoshiaki; Koketsu, Kazuki

    2010-05-01

    Japan frequently suffers from many kinds of disasters such as earthquakes, typhoons, floods, volcanic eruptions, and landslides. On average, we have lost about 120 people a year to natural hazards in this decade. Above all, earthquakes are noteworthy, since they may kill thousands of people in a moment, as in Kobe in 1995. People know that we may have "a big one" some day as long as we live on this land, and they know what to do: retrofit houses, anchor heavy furniture to walls, add latches to kitchen cabinets, and prepare emergency packs. Yet most of them do not take action, and this results in the loss of many lives. It is only the victims who learn something from an earthquake, and that experience has never become the shared knowledge of the nation. One of the most essential ways to reduce the damage is to educate the general public to be able to make a sound decision on what to do at the moment an earthquake hits. This requires knowledge of the background of the ongoing phenomenon. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) therefore called for applications to choose several model areas in which to bring scientific education to local elementary schools. This presentation reports on the year and a half of courses that we held at the model elementary school in the Tokyo Metropolitan Area. The tectonic setting of this area is very complicated; the Pacific and Philippine Sea plates subduct beneath the North America and Eurasia plates. The subduction of the Philippine Sea plate causes mega-thrust earthquakes such as the 1923 Kanto earthquake (M 7.9), which caused 105,000 fatalities. A magnitude 7 or greater earthquake beneath this area has recently been evaluated to occur with a probability of 70% in 30 years. This is of immediate concern for the devastating loss of life and property because the Tokyo urban region now has a population of 42 million and is the center of approximately 40% of the nation's activities, which may cause great global

  15. On near-source earthquake triggering

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2009-01-01

    When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≲ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of presurface wave arrivals shows that M ≥ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.

  16. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  17. Earthquake outlook for the San Francisco Bay region 2014–2043

    USGS Publications Warehouse

    Aagaard, Brad T.; Blair, James Luke; Boatwright, John; Garcia, Susan H.; Harris, Ruth A.; Michael, Andrew J.; Schwartz, David P.; DiLeo, Jeanne S.; Jacques, Kate; Donlin, Carolyn

    2016-06-13

    Using information from recent earthquakes, improved mapping of active faults, and a new model for estimating earthquake probabilities, the 2014 Working Group on California Earthquake Probabilities updated the 30-year earthquake forecast for California. They concluded that there is a 72 percent probability (or likelihood) of at least one earthquake of magnitude 6.7 or greater striking somewhere in the San Francisco Bay region before 2043. Earthquakes this large are capable of causing widespread damage; therefore, communities in the region should take simple steps to help reduce injuries, damage, and disruption, as well as accelerate recovery from these earthquakes.

  18. Slip model of the 2015 Mw 7.8 Gorkha (Nepal) earthquake from inversions of ALOS-2 and GPS data

    NASA Astrophysics Data System (ADS)

    Wang, Kang; Fialko, Yuri

    2015-09-01

    We use surface deformation measurements including Interferometric Synthetic Aperture Radar data acquired by the ALOS-2 mission of the Japanese Aerospace Exploration Agency and Global Positioning System (GPS) data to invert for the fault geometry and coseismic slip distribution of the 2015 Mw 7.8 Gorkha earthquake in Nepal. Assuming that the ruptured fault connects to the surface trace of the Main Frontal Thrust (MFT) fault between 84.34°E and 86.19°E, the best fitting model suggests a dip angle of 7°. The moment calculated from the slip model is 6.08 × 10^20 N m, corresponding to the moment magnitude of 7.79. The rupture of the 2015 Gorkha earthquake was dominated by thrust motion that was primarily concentrated in a 150 km long zone 50 to 100 km northward from the surface trace of the Main Frontal Thrust (MFT), with maximum slip of ~5.8 m at a depth of ~8 km. Data thus indicate that the 2015 Gorkha earthquake ruptured a deep part of the seismogenic zone, in contrast to the 1934 Bihar-Nepal earthquake, which had ruptured a shallow part of the adjacent fault segment to the east.
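
    The quoted magnitude can be checked directly from the moment with the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1), M0 in N m (the additive constant differs slightly between conventions):

```python
# Consistency check of the quoted magnitude, using the standard relation
# Mw = (2/3) * (log10(M0) - 9.1) with M0 in N m (the additive constant
# differs slightly between conventions).
import math

M0 = 6.08e20                       # N m, from the slip model above
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
print(f"Mw = {Mw:.2f}")            # ~7.79
```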

  19. Slip Model of the 2015 Mw 7.8 Gorkha (Nepal) Earthquake from Inversions of ALOS-2 and GPS Data

    NASA Astrophysics Data System (ADS)

    Wang, K.; Fialko, Y. A.

    2015-12-01

    We use surface deformation measurements including Interferometric Synthetic Aperture Radar (InSAR) data acquired by the ALOS-2 mission of the Japanese Aerospace Exploration Agency (JAXA) and Global Positioning System (GPS) data to invert for the fault geometry and coseismic slip distribution of the 2015 Mw 7.8 Gorkha earthquake in Nepal. Assuming that the ruptured fault connects to the surface trace of the Main Frontal Thrust (MFT) fault between 84.34°E and 86.19°E, the best-fitting model suggests a dip angle of 7 degrees. The moment calculated from the slip model is 6.17 × 10^20 N m, corresponding to the moment magnitude of 7.79. The rupture of the 2015 Gorkha earthquake was dominated by thrust motion that was primarily concentrated in a 150-km long zone 50 to 100 km northward from the surface trace of the Main Frontal Thrust (MFT), with maximum slip of ~6 m at a depth of ~8 km. Data thus indicate that the 2015 Gorkha earthquake ruptured a deep part of the seismogenic zone, in contrast to the 1934 Bihar-Nepal earthquake, which had ruptured a shallow part of the adjacent fault segment to the east.

  20. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.
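
    For reference, the rate- and state-dependent friction law whose parameters a and b are varied in these simulations has, in its standard Dieterich (aging-law) form, the shape sketched below; the numerical values of a, b, mu0, V0, and Dc are illustrative, not the ones used in the RSQSim catalogs.

```python
# Standard Dieterich (aging-law) rate-and-state friction, the law whose
# parameters a and b are varied in the simulations described above. The
# numerical values of a, b, mu0, V0 and Dc here are illustrative, not the
# ones used in the RSQSim catalogs.
import numpy as np

def rate_state_friction(V, theta, a=0.010, b=0.015, mu0=0.6, V0=1e-6, Dc=0.01):
    """Friction coefficient at slip rate V (m/s) and state variable theta (s)."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def aging_law_rate(V, theta, Dc=0.01):
    """State evolution d(theta)/dt under the aging law."""
    return 1.0 - V * theta / Dc

# At steady state theta = Dc / V, so friction reduces to
# mu0 + (a - b) * ln(V / V0): velocity weakening (stick-slip) when a - b < 0.
V = 1e-9                            # plate-rate-like slip speed
theta_ss = 0.01 / V
print(f"steady-state friction at V = {V:g} m/s: {rate_state_friction(V, theta_ss):.3f}")
```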

  1. Stress Drop and Depth Controls on Ground Motion From Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Rubinstein, J. L.; Terra, F. M.; Hanks, T. C.; Herrmann, R. B.

    2015-12-01

    Induced earthquakes in the central United States pose a risk to local populations, but there is not yet agreement on how to portray their hazard. A large source of uncertainty in the hazard arises from ground motion prediction, which depends on the magnitude and distance of the causative earthquake. However, ground motion models for induced earthquakes may be very different than models previously developed for either the eastern or western United States. A key question is whether ground motions from induced earthquakes are similar to those from natural earthquakes, yet there is little history of natural events in the same region with which to compare the induced ground motions. To address these problems, we explore how earthquake source properties, such as stress drop or depth, affect the recorded ground motion of induced earthquakes. Typically, due to stress drop increasing with depth, ground motion prediction equations model shallower events to have smaller ground motions, when considering the same absolute hypocentral distance to the station. Induced earthquakes tend to occur at shallower depths, with respect to natural eastern US earthquakes, and may also exhibit lower stress drops, which begs the question of how these two parameters interact to control ground motion. Can the ground motions of induced earthquakes simply be understood by scaling our known source-ground motion relations to account for the shallow depth or potentially smaller stress drops of these induced earthquakes, or is there an inherently different mechanism in play for these induced earthquakes? We study peak ground-motion velocity (PGV) and acceleration (PGA) from induced earthquakes in Oklahoma and Kansas, recorded by USGS networks at source-station distances of less than 20 km, in order to model the source effects. We compare these records to those in both the NGA-West2 database (primarily from California) as well as NGA-East, which covers the central and eastern United States and Canada

  2. The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.

    2011-12-01

    Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118°E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, it is an important subject to understand the activity and hazard of active faults. The active faults in Taiwan are mainly located in the Western Foothills and the Eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults on the island of Taiwan, some of which are related to earthquakes. Many researchers have investigated these active faults and continuously update new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field work results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or earthquake relocations for a fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect research on fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the active fault earthquake recurrence interval. For the other parameters, we collect previous studies or historical references to complete our parameter table of active faults in Taiwan. WG08 performed time-dependent earthquake hazard assessment of active faults in California: they established fault models, deformation models, earthquake rate models, and probability models, and then computed the rupture probabilities of faults in California. Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards in certain
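
    One common way to carry out the "maximum magnitude from fault length or fault area" step is an empirical scaling relation such as the Wells and Coppersmith (1994) all-fault-type rupture-area regression, Mw = 4.07 + 0.98 log10(A) with A in km^2; the fault dimensions below are examples only, and the study may rely on different local scaling relations.

```python
# One common way to estimate a maximum magnitude from fault dimensions is the
# Wells & Coppersmith (1994) all-fault-type rupture-area regression,
# Mw = 4.07 + 0.98 * log10(A), A in km^2. The fault dimensions below are
# examples only; the study may rely on different (local) scaling relations.
import math

def max_magnitude_from_area(length_km, width_km):
    area_km2 = length_km * width_km
    return 4.07 + 0.98 * math.log10(area_km2)

# e.g. a 60-km-long fault rupturing a ~15 km seismogenic width
print(f"Mw ~ {max_magnitude_from_area(60.0, 15.0):.1f}")
```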

  3. Coseismic and Afterslip Model Related to 25 April 2015, Mw7.8 Gorkha, Nepal Earthquake and its Potential Future Risk Regions

    NASA Astrophysics Data System (ADS)

    Wang, S.; Xu, C.; Jiang, G.

    2016-12-01

    Evidence from geologic, geophysical, and geomorphic observations indicates that the 2015 Mw 7.8 Gorkha (Nepal) earthquake occurred on the two ramp-flat fault structure of the Main Himalayan Thrust (MHT). We approximated this more realistic fault model by a smooth curved fault surface, derived with a hybrid iterative inversion algorithm (HIIA) and additional constraints from coseismic geodetic data. The coseismic slip distribution of the 2015 Gorkha earthquake was then imaged on this curved fault model with variably sized triangular elements. The inverted maximum thrust and right-lateral slip components are 6 and 1.5 m, respectively, with the maximum slip magnitude of 6.2 m located at a depth of 15 km. The released seismic moment derived from our best slip model is 8.58 × 10^20 N m, equivalent to a moment magnitude of Mw 7.89. We find two interesting tongue-shaped slip areas, with a maximum slip of about 1.5 m, along the up-dip portion of the fault plane, tapering off at a depth of 7 km; the up-dip propagation of ruptures may have been impeded by the complicated geometric structures on the MHT interface. The Coulomb Failure Stress (CFS) change triggered by our optimal slip model indicates a potential shallower rupture in the future. Considering the distribution of historical earthquakes and the calculated strain and strain gradient before this earthquake, earthquakes are expected to occur in the areas northwest of the epicenter. The spatio-temporal afterslip model over the first 180 days following the Mw 7.8 main shock was inferred from the post-seismic GPS time series. One significant afterslip region can be observed down-dip of the regions ruptured by coseismic slip. Another afterslip region, arresting our attention, is located at around 40 km depth, with about 180 mm of slip amplitude, tapering off at a depth of 50 km. Moreover, afterslip mainly occurs within 100 days after the 2015 Gorkha earthquake. Under the assumption of a rigidity modulus μ = 30 GPa, the released seismic moment by afterslip corresponding
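
    The Coulomb Failure Stress change referred to above is commonly written as dCFS = d(tau) + mu' * d(sigma_n), with d(tau) the shear stress change in the receiver fault's slip direction, d(sigma_n) the normal stress change (unclamping positive), and mu' an effective friction coefficient; the numbers in the sketch below are illustrative.

```python
# The Coulomb Failure Stress change used above is commonly written as
# dCFS = d(tau) + mu_eff * d(sigma_n), with d(tau) the shear stress change in
# the receiver fault's slip direction, d(sigma_n) the normal stress change
# (unclamping positive), and mu_eff an effective friction coefficient.
# The numbers below are illustrative.
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Positive dCFS brings the receiver fault closer to failure."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# e.g. 0.2 MPa of shear loading plus 0.1 MPa of unclamping on a receiver fault
print(f"dCFS = {coulomb_stress_change(0.2, 0.1):.2f} MPa")
```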

  4. Automatic Earthquake Detection by Active Learning

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
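
    A generic uncertainty-sampling loop of the kind described (not the authors' code) can be sketched as follows: a classifier is trained on a small labeled set, the detections it is least certain about are sent to a human analyst for labeling, and the model is retrained. The synthetic features, labels, and sklearn classifier below are stand-ins for real waveform features and detectors.

```python
# Generic uncertainty-sampling active-learning loop of the kind described
# above (not the authors' code): train on a small labeled set, send the most
# uncertain detections to a human analyst for labels, retrain. The synthetic
# features/labels and the sklearn classifier are stand-ins for real waveform
# features and detectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 8))                       # waveform-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 2000) > 0.5).astype(int)

labeled = list(rng.choice(2000, size=50, replace=False))   # small initial set
unlabeled = [i for i in range(2000) if i not in labeled]

clf = LogisticRegression()
for _ in range(5):
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])[:, 1]
    # Query the 10 samples closest to the decision boundary (most uncertain);
    # in practice these would be shown to an expert for labeling.
    query = np.argsort(np.abs(proba - 0.5))[:10]
    for q in sorted(query, reverse=True):
        labeled.append(unlabeled.pop(q))
print(f"labeled set grew to {len(labeled)} samples")
```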

  5. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  6. Analog earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  7. Living on an Active Earth: Perspectives on Earthquake Science

    NASA Astrophysics Data System (ADS)

    Lay, Thorne

    2004-02-01

    The annualized long-term loss due to earthquakes in the United States is now estimated at $4.4 billion per year. A repeat of the 1923 Kanto earthquake, near Tokyo, could cause direct losses of $2-3 trillion. With such grim numbers, which are guaranteed to make you take its work seriously, the NRC Committee on the Science of Earthquakes begins its overview of the emerging multidisciplinary field of earthquake science. An up-to-date and forward-looking survey of scientific investigation of earthquake phenomena and engineering response to associated hazards is presented at a suitable level for a general educated audience. Perspectives from the fields of seismology, geodesy, neo-tectonics, paleo-seismology, rock mechanics, earthquake engineering, and computer modeling of complex dynamic systems are integrated into a balanced definition of earthquake science that has never before been adequately articulated.

  8. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a
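
    A one-dimensional illustration of the kind of stochastic, self-affine slip heterogeneity described above, generated here by spectral synthesis with a power-law wavenumber falloff and random phases; the corner wavenumber, spectral decay, and target mean slip are illustrative choices, not the study's source parameters.

```python
# One-dimensional illustration of a stochastic, self-affine slip profile
# generated by spectral synthesis: a power-law wavenumber falloff with random
# phases. The corner wavenumber, spectral decay and target mean slip are
# illustrative choices, not the study's source parameters.
import numpy as np

def stochastic_slip(n=256, corner_k=3.0, decay=2.0, mean_slip=5.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n) * n                       # wavenumber index
    amp = 1.0 / (1.0 + (k / corner_k) ** decay)      # ~k^-2 falloff past corner
    phase = np.exp(2j * np.pi * rng.random(k.size))  # random phases
    phase[0] = 1.0                                   # keep DC term real
    phase[-1] = 1.0                                  # keep Nyquist term real
    slip = np.fft.irfft(amp * phase, n=n)
    slip -= slip.min()                               # enforce non-negative slip
    return slip * (mean_slip / slip.mean())          # rescale to target moment

profile = stochastic_slip()
print(f"mean slip {profile.mean():.2f} m, peak slip {profile.max():.2f} m")
```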

  9. Fault Rupture Model of the 2016 Gyeongju, South Korea, Earthquake and Its Implication for the Underground Fault System

    NASA Astrophysics Data System (ADS)

    Uchide, Takahiko; Song, Seok Goo

    2018-03-01

    The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.

  10. Modeling earthquake sequences along the Manila subduction zone: Effects of three-dimensional fault geometry

    NASA Astrophysics Data System (ADS)

    Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan

    2018-05-01

    To assess the potential of catastrophic megathrust earthquakes (Mw > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N-19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behaviors. The strong along-strike curvature at the transitional segment typically leads to partial ruptures of Mw 8.3 and Mw 7.8 along the southern and northern segments respectively. The entire fault occasionally ruptures in Mw 8.8 events when the cumulative stress in the transitional segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of the seismogenic zone (W). At a constant plate convergence rate, a larger W indicates on average a lower interseismic stress loading rate and a longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated from a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state that was inferred from the interplate seismicity distribution, suggesting the Manila trench could potentially rupture in a M8+ earthquake.

  11. Modeling of a historical earthquake in Erzincan, Turkey (Ms 7.8, in 1939) using regional seismological information obtained from a recent event

    NASA Astrophysics Data System (ADS)

    Karimzadeh, Shaghayegh; Askan, Aysegul

    2018-04-01

    Located within a basin structure, at the conjunction of the North East Anatolian, North Anatolian and Ovacik Faults, the Erzincan city center (Turkey) is one of the most hazardous regions in the world. The combination of the seismotectonic and geological settings of the region has resulted in a series of significant seismic activities, including the 1939 (Ms 7.8) and the 1992 (Mw = 6.6) earthquakes. The devastating 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available. Thus, a limited number of studies exist on that earthquake. However, the 1992 event, despite the sparse local network at that time, has been studied extensively. This study aims to simulate the 1939 Erzincan earthquake using available regional seismic and geological parameters. Despite the several uncertainties involved, such an effort to quantitatively model the 1939 earthquake is promising, given the historical reports of extensive damage and fatalities in the area. The results of this study are expressed in terms of anticipated acceleration time histories at certain locations, spatial distributions of selected ground motion parameters, and felt intensity maps in the region. Simulated motions are first compared against empirical ground motion prediction equations derived with both local and global datasets. Next, anticipated intensity maps of the 1939 earthquake are obtained using local correlations between peak ground motion parameters and felt intensity values. Comparisons of the estimated intensity distributions with the corresponding observed intensities indicate a reasonable modeling of the 1939 earthquake.

  12. Wave propagation modelling of induced earthquakes at the Groningen gas production site

    NASA Astrophysics Data System (ADS)

    Paap, Bob; Kraaijpoel, Dirk; Bakker, Marcel; Gharti, Hom Nath

    2018-06-01

    Gas extraction from the Groningen natural gas field, situated in the Netherlands, frequently induces earthquakes in the reservoir that cause damage to buildings and pose a safety hazard and a nuisance to the local population. Due to the dependence of the national heating infrastructure on Groningen gas, the short-term mitigation measures are mostly limited to a combination of spatiotemporal redistribution of gas production and strengthening measures for buildings. All options become more effective with a better understanding of both source processes and seismic wave propagation. Detailed wave propagation simulations improve both the inference of source processes from observed ground motions and the forecast of ground motions as input for hazard studies and seismic network design. The velocity structure at the Groningen site is relatively complex, including both deep high-velocity and shallow low-velocity deposits showing significant thickness variations over relatively small spatial extents. We performed a detailed three-dimensional wave propagation modelling study for an induced earthquake in the Groningen natural gas field using the spectral-element method. We considered an earthquake that nucleated along a normal fault with a local magnitude of ML = 3. We created a dense mesh with element size varying from 12 to 96 m, and used a source frequency of 7 Hz, such that frequencies generated during the simulation were accurately sampled up to 10 Hz. The velocity/density model is constructed using a three-dimensional geological model of the area, including both deep high-velocity salt deposits overlying the source region and shallow low-velocity sediments present in a deep but narrow tunnel valley. The results show that the three-dimensional density/velocity model in the Groningen area clearly plays a large role in the wave propagation and resulting surface ground motions. The 3D structure results in significant lateral variations in site response. The high
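
    A quick back-of-the-envelope check of such a resolution target can be done from the mesh spacing and the slowest wave speed. The sketch below (Python) takes the 12-96 m element sizes and the 10 Hz target from the abstract, but the minimum velocities and the five-points-per-wavelength rule are assumed illustrative values, not figures from the Groningen model.

      # Rough resolution check for a spectral-element mesh.
      # Element sizes (12 m and 96 m) follow the abstract; the velocities and the
      # points-per-wavelength rule of thumb below are illustrative assumptions only.
      def max_resolved_frequency(v_min, element_size, gll_nodes=5, points_per_wavelength=5):
          """Approximate highest frequency adequately sampled by the mesh (Hz)."""
          node_spacing = element_size / (gll_nodes - 1)          # average GLL node spacing (m)
          min_wavelength = points_per_wavelength * node_spacing  # shortest resolvable wavelength (m)
          return v_min / min_wavelength

      print(max_resolved_frequency(v_min=300.0, element_size=12.0))    # shallow low-velocity sediments
      print(max_resolved_frequency(v_min=2400.0, element_size=96.0))   # deeper high-velocity units

    Under these assumed velocities both cases give roughly 20 Hz, comfortably above the 10 Hz target quoted in the abstract.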

  13. A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami

    NASA Astrophysics Data System (ADS)

    Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne

    2018-02-01

    The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution as well as the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement, guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements, improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations, apart from early afterslip effects and wedge faulting that cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth extending northward to 39.7°N indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.
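
    For readers unfamiliar with how a finite fault model maps to magnitude, the sketch below shows the standard seismic moment summation over subfaults and the Hanks-Kanamori moment magnitude. The subfault areas, slips, and rigidity are placeholder values, not those of the Tohoku model described above.

      import numpy as np

      # Minimal sketch: seismic moment and Mw from a subfault slip distribution.
      # Placeholder subfault geometry and rigidity; not the Tohoku model's values.
      def moment_magnitude(areas_m2, slips_m, rigidity_pa=40e9):
          m0 = np.sum(rigidity_pa * np.asarray(areas_m2) * np.asarray(slips_m))  # scalar moment (N m)
          mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)                                # Hanks & Kanamori (1979)
          return mw, m0

      mw, m0 = moment_magnitude(areas_m2=[25e6] * 200,              # 200 subfaults of 5 km x 5 km
                                slips_m=np.linspace(1.0, 20.0, 200))
      print(f"M0 = {m0:.2e} N m, Mw = {mw:.2f}")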

  14. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2010-12-01

    The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and

  15. Analysis of post-earthquake landslide activity and geo-environmental effects

    NASA Astrophysics Data System (ADS)

    Tang, Chenxiao; van Westen, Cees; Jetten, Victor

    2014-05-01

    Large earthquakes can cause huge losses to human society due to ground shaking, fault rupture, and the high density of co-seismic landslides that can be triggered in mountainous areas. In areas that have been affected by such large earthquakes, the threat of landslides also continues after the earthquake, as the co-seismic landslides may be reactivated by high-intensity rainfall events. Huge amounts of landslide material remain on the slopes, leading to a high frequency of landslides and debris flows after the earthquake that threaten lives and create great difficulties for post-seismic reconstruction in the affected regions. Without critical information such as the frequency and magnitude of landslides after a major earthquake, reconstruction planning and hazard mitigation works are difficult. The area hit by the Mw 7.9 Wenchuan earthquake in 2008, in Sichuan province, China, shows typical examples of poor reconstruction planning due to lack of information: huge debris flows destroyed several reconstructed settlements. This research aims to analyze the decay in post-seismic landslide activity in areas that have been hit by a major earthquake, taking the area affected by the 2008 Wenchuan earthquake as the study area. The study will analyze the factors that control post-earthquake landslide activity through quantification of landslide volume changes as well as through numerical simulation of their initiation process, to obtain a better understanding of the potential threat of post-earthquake landslides as a basis for mitigation planning. The research will make use of high-resolution stereo satellite images, UAV imagery, and Terrestrial Laser Scanning (TLS) to obtain multi-temporal DEMs to monitor the change of loose sediments and post-seismic landslide activity. A debris flow initiation model that incorporates the volume of source materials, vegetation re-growth, and intensity-duration of the triggering precipitation, and that evaluates

  16. Dynamic stress changes during earthquake rupture

    USGS Publications Warehouse

    Day, S.M.; Yu, G.; Wald, D.J.

    1998-01-01

    We assess two competing dynamic interpretations that have been proposed for the short slip durations characteristic of kinematic earthquake models derived by inversion of earthquake waveform and geodetic data. The first interpretation would require a fault constitutive relationship in which rapid dynamic restrengthening of the fault surface occurs after passage of the rupture front, a hypothesized mechanical behavior that has been referred to as "self-healing." The second interpretation would require sufficient spatial heterogeneity of stress drop to permit rapid equilibration of elastic stresses with the residual dynamic friction level, a condition we refer to as "geometrical constraint." These interpretations imply contrasting predictions for the time dependence of the fault-plane shear stresses. We compare these predictions with dynamic shear stress changes for the 1992 Landers (M 7.3), 1994 Northridge (M 6.7), and 1995 Kobe (M 6.9) earthquakes. Stress changes are computed from kinematic slip models of these earthquakes, using a finite-difference method. For each event, static stress drop is highly variable spatially, with high stress-drop patches embedded in a background of low, and largely negative, stress drop. The time histories of stress change show predominantly monotonic stress change after passage of the rupture front, settling to a residual level, without significant evidence for dynamic restrengthening. The stress change at the rupture front is usually gradual rather than abrupt, probably reflecting the limited resolution inherent in the underlying kinematic inversions. On the basis of this analysis, as well as recent similar results obtained independently for the Kobe and Morgan Hill earthquakes, we conclude that, at the present time, the self-healing hypothesis is unnecessary to explain earthquake kinematics.
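
    The stress drops discussed here are computed numerically from the kinematic slip models; as a point of reference (not the finite-difference procedure used in the paper), the classical static estimate for a circular crack of radius r is:

      \Delta\sigma = \frac{7}{16}\,\frac{M_0}{r^{3}}

    so, for example, a patch with M_0 = 10^{18} N m and r = 3 km gives \Delta\sigma \approx 1.6 \times 10^{7} Pa, i.e. roughly 16 MPa.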

  17. Numerical Modeling on Co-seismic Influence of Wenchuan 8.0 Earthquake in Sichuan-Yunnan Area, China

    NASA Astrophysics Data System (ADS)

    Chen, L.; Li, H.; Lu, Y.; Li, Y.; Ye, J.

    2009-12-01

    In this paper, a three-dimensional finite element model of the active faults in the Sichuan-Yunnan area, with the faults handled by contact friction elements, is built. Applying boundary conditions determined from GPS data, numerical simulations of the spatial patterns of stress-strain changes induced by the Wenchuan Ms 8.0 earthquake are performed. Some primary results are: a) the co-seismic displacements in the Longmen Shan fault zone caused by the initial cracking event favor not only the NE-ward expansion of the subsequent fracture process but also the conversion of focal mechanisms from thrust to right-lateral strike-slip for most of the following sub-cracking events. b) tectonic movements induced by the Wenchuan earthquake are stronger in the hanging wall of the Longmen Shan fault belt than in the footwall and are influenced remarkably by the northeast boundary faults of the rhombic block. c) the extrema of the stress changes induced by the main shock are on the order of 10^6 Pa, over a region about 400 km long and 100 km wide. The total stress level is reduced in most regions of the Longmen Shan fault zone, whereas the stress change is rather weak in its southwest segment, possibly resulting in fewer aftershocks there. d) the effects of the Wenchuan earthquake on the major active faults differ clearly from one another. e) the triggering effect of the Wenchuan earthquake on the subsequent Huili 6.1 earthquake is very weak.

  18. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatran earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is consistent with the well-known relationship between stress and the b-value. It suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
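
    The b-value referred to above is usually estimated with Aki's maximum-likelihood formula. The sketch below applies it to a synthetic catalogue; the actual catalogues and the tidal-stress binning of the study are not reproduced here.

      import numpy as np

      # Aki (1965) maximum-likelihood b-value estimate for magnitudes above a
      # completeness threshold Mc. For binned catalogues, Mc is commonly replaced
      # by Mc - dM/2 to correct for bin width (not needed for the continuous
      # synthetic magnitudes below).
      def b_value(magnitudes, mc):
          m = np.asarray(magnitudes)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - mc)

      # Synthetic Gutenberg-Richter catalogue with a true b-value of 1.0.
      rng = np.random.default_rng(seed=0)
      mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=5000)
      print(f"estimated b = {b_value(mags, mc=2.0):.2f}")   # close to 1.0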

  19. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many as possible of the felt earthquakes regardless of their magnitude. Non-seismic detections include Twitter earthquake detection, developed by the US Geological Survey, in which the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. Altogether, we estimate the number of detected felt earthquakes at around 1,000 per year, compared with the 35,000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service @LastQuake. We will present the identification process of the earthquakes that matter, the smartphone application itself (to be released in May) and its future evolutions.

  20. Solomon Islands 2007 Tsunami Near-Field Modeling and Source Earthquake Deformation

    NASA Astrophysics Data System (ADS)

    Uslu, B.; Wei, Y.; Fritz, H.; Titov, V.; Chamberlin, C.

    2008-12-01

    The earthquake of 1 April 2007 left behind striking evidence of crustal rupture and tsunami impact along the coastline of the Solomon Islands (Fritz and Kalligeris, 2008; Taylor et al., 2008; McAdoo et al., 2008; PARI, 2008), while the undisturbed tsunami signals were also recorded at nearby deep-ocean tsunameters and coastal tide stations. These multi-dimensional measurements provide valuable datasets for constraining the tsunami source directly by inversion of tsunameter records in real time (available within minutes), and for relating it to the seismic source derived either from seismometer records (available within hours or days) or from crustal rupture measurements (available within months or years). The tsunami measurements in the near field, including the complex vertical crust motion and tsunami runup, are particularly critical to help interpret the tsunami source. This study develops high-resolution inundation models for the Solomon Islands to compute the near-field tsunami impact. Using these models, this research compares the tsunameter-derived tsunami source with the seismically derived earthquake sources from multiple perspectives, including vertical uplift and subsidence, tsunami runup heights and their distributional pattern among the islands, deep-ocean tsunameter measurements, and near- and far-field tide gauge records. The present study stresses the significance of the tsunami magnitude, source location, bathymetry and topography in accurately modeling the generation, propagation and inundation of the tsunami waves. This study highlights the accuracy and efficiency of the tsunameter-derived tsunami source in modeling the near-field tsunami impact. As the high-resolution models developed in this study will become part of NOAA's tsunami forecast system, these results also suggest expanding the system for potential applications in tsunami hazard assessment, search and rescue operations

  1. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  2. 2010 Chile Earthquake Aftershock Response

    NASA Astrophysics Data System (ADS)

    Barientos, Sergio

    2010-05-01

    1906? Since the number of M>7.0 aftershocks has been low, does the distribution of large-magnitude aftershocks differ from previous events of this size? What is the origin of the extensional-type aftershocks at shallow depths within the upper plate? The international seismological community (France, Germany, U.K., U.S.A.), in collaboration with the Chilean seismological community, responded with a total of 140 portable seismic stations deployed to record aftershocks. Combined with the Chilean permanent seismic network, this results in 180 stations now in operation in the area, recording continuously at 100 cps. The seismic equipment is a mix of accelerometers, short-period and broadband seismic sensors deployed along the entire length of the aftershock zone that will record the aftershock sequence for three to six months. The collected seismic data will be merged and archived to produce an international data set open to the entire seismological community immediately after archiving. Each international group will submit their data as soon as possible in standard (miniSEED) format with accompanying metadata to the IRIS DMC, where the data will be merged into a combined data set and made available to individuals and other data centers. This will be by far the best-recorded aftershock sequence of a large megathrust earthquake. This outstanding international collaboration will provide an open data set for this important earthquake as well as a model for future aftershock deployments around the world.

  3. The Rotational and Gravitational Effect of Earthquakes

    NASA Technical Reports Server (NTRS)

    Gross, Richard

    2000-01-01

    The static displacement field generated by an earthquake has the effect of rearranging the Earth's mass distribution and will consequently cause the Earth's rotation and gravitational field to change. Although the coseismic effect of earthquakes on the Earth's rotation and gravitational field has been modeled in the past, no unambiguous observations of this effect have yet been made. However, the Gravity Recovery And Climate Experiment (GRACE) satellite, which is scheduled to be launched in 2001, will measure time variations of the Earth's gravitational field to high degree and order with unprecedented accuracy. In this presentation, the modeled coseismic effect of earthquakes upon the Earth's gravitational field to degree and order 100 will be computed and compared to the expected accuracy of the GRACE measurements. In addition, the modeled second degree changes, corresponding to changes in the Earth's rotation, will be compared to length-of-day and polar motion excitation observations.

  4. Qualitative Investigation of the Earthquake Precursors Prior to the March 14, 2012 Earthquake in Japan

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Shailesh Kumar; Gwal, Ashok Kumar

    Abstract: In this study we have used the Empirical Mode Decomposition (EMD) method in conjunction with cross-correlation analysis to analyze the ionospheric foF2 parameter prior to the Japan earthquake of magnitude M = 6.9. The data were collected from the Kokubunji (35.7°N, 139.5°E) and Yamakawa (31.2°N, 130.6°E) ionospheric stations. The EMD method was used to remove geophysical noise from the foF2 data before calculating the correlation coefficient between the two stations. It was found that the ionospheric foF2 parameter shows anomalous changes a few days before the earthquake. The results are in agreement with the theoretical model evidencing ionospheric modification prior to the Japan earthquake in a certain area around the epicenter.
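
    The EMD denoising step is not reproduced here, but the correlation step between the two stations reduces to a standard correlation coefficient. A minimal sketch with synthetic foF2-like series (station names used only as labels):

      import numpy as np

      # Simplified stand-in for the correlation step: Pearson correlation between
      # hourly foF2 series from two stations. The EMD-based denoising described
      # in the abstract is NOT reproduced; the series below are synthetic.
      rng = np.random.default_rng(1)
      hours = np.arange(24 * 30)                        # one month of hourly values
      diurnal = 6.0 + 2.0 * np.sin(2 * np.pi * hours / 24.0)
      kokubunji = diurnal + 0.3 * rng.standard_normal(hours.size)
      yamakawa = diurnal + 0.3 * rng.standard_normal(hours.size)

      r = np.corrcoef(kokubunji, yamakawa)[0, 1]
      print(f"correlation coefficient: {r:.3f}")        # a drop in r may flag an anomaly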

  5. A stochastic automata network for earthquake simulation and hazard estimation

    NASA Astrophysics Data System (ADS)

    Belubekian, Maya Ernest

    1998-11-01

    This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after the largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto
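
    As a point of comparison for the Poisson benchmark mentioned above, the sketch below simulates memoryless (Poisson) earthquake occurrence and the corresponding time-independent hazard; the rate is an arbitrary illustrative value, not a parameter of the dissertation's automata model.

      import numpy as np

      # Memoryless (Poisson) occurrence model, for comparison purposes only.
      # The annual rate below is illustrative, not taken from the model above.
      rate_per_year = 0.05           # e.g. one large event per 20 years on average
      horizon_years = 50.0

      rng = np.random.default_rng(2)
      counts = rng.poisson(lam=rate_per_year * horizon_years, size=100_000)

      p_simulated = np.mean(counts >= 1)
      p_analytic = 1.0 - np.exp(-rate_per_year * horizon_years)
      print(f"P(>=1 event in {horizon_years:.0f} yr): simulated {p_simulated:.3f}, analytic {p_analytic:.3f}")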

  6. Gravitational body forces focus North American intraplate earthquakes

    USGS Publications Warehouse

    Levandowski, William Brower; Zellman, Mark; Briggs, Richard

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard.

  7. Gravitational body forces focus North American intraplate earthquakes

    PubMed Central

    Levandowski, Will; Zellman, Mark; Briggs, Rich

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard. PMID:28211459

  8. Theory of Metastable State Relaxation for Non-Critical Binary Systems with Non-Conserved Order Parameter

    NASA Technical Reports Server (NTRS)

    Izmailov, Alexander; Myerson, Allan S.

    1993-01-01

    A new mathematical ansatz for a solution of the time-dependent Ginzburg-Landau non-linear partial differential equation is developed for non-critical systems such as non-critical binary solutions (solute + solvent) described by a non-conserved scalar order parameter. It is demonstrated that in such systems metastability initiates heterogeneous solute redistribution, which results in the formation of a non-equilibrium singly-periodic spatial solute structure. The evolution in time of the period of this structure is determined. In addition, the critical radius r(sub c) for a solute embryo of the new solute-rich phase, together with the metastable state lifetime t(sub c), are determined analytically and analyzed.
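
    For reference, the time-dependent Ginzburg-Landau equation for a non-conserved scalar order parameter \phi (Model A in the Hohenberg-Halperin classification) takes the standard form below; the specific free-energy functional and ansatz of the paper are not reproduced here:

      \frac{\partial \phi}{\partial t} = -\Gamma\,\frac{\delta F[\phi]}{\delta \phi}
                                       = -\Gamma\left( -\kappa\,\nabla^{2}\phi + \frac{\partial f}{\partial \phi} \right),
      \qquad
      F[\phi] = \int \left[ \frac{\kappa}{2}\,|\nabla\phi|^{2} + f(\phi) \right] d\mathbf{r}

    where \Gamma is a kinetic coefficient, \kappa the gradient-energy coefficient, and f(\phi) the local bulk free-energy density.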

  9. Dynamic fault rupture model of the 2008 Iwate-Miyagi Nairiku earthquake, Japan; Role of rupture velocity changes on extreme ground motions

    NASA Astrophysics Data System (ADS)

    Pulido Hernandez, N. E.; Dalguer Gudiel, L. A.; Aoi, S.

    2009-12-01

    The Iwate-Miyagi Nairiku earthquake, a reverse-faulting earthquake that occurred in the southern Iwate prefecture, Japan (2008/6/14), produced the largest peak ground acceleration recorded to date (4g) (Aoi et al. 2008) at the West Ichinoseki (IWTH25) KiK-net strong motion station of NIED. This station, which is equipped with surface and borehole accelerometers (GL-260), also recorded very high peak accelerations of up to 1g at the borehole level, despite being located at a rock site. From comparison of spectrograms of the observed surface and borehole records at IWTH25, Pulido et al. (2008) identified two high frequency (HF) ground motion events at 4.5 s and 6.3 s originating at the source, which likely gave rise to the extreme observed accelerations of 3.9g and 3.5g at IWTH25. In order to understand the generation mechanism of these HF events we performed dynamic fault rupture modeling of the Iwate-Miyagi Nairiku earthquake using the Support Operator Rupture Dynamics (SORD) code (Ely et al., 2009). SORD solves the elastodynamic equation using a generalized finite difference method that can utilize meshes of arbitrary structure and is capable of handling geometries appropriate to thrust earthquakes. Our spontaneous dynamic rupture model of the Iwate-Miyagi Nairiku earthquake is governed by a simple slip-weakening friction law. The dynamic parameters (stress drop, strength excess and critical slip-weakening distance) are estimated following the procedure described in Pulido and Dalguer (2009) [PD09]. These parameters produce an earthquake rupture consistent with the final slip obtained by kinematic source inversion of near-source strong ground motion recordings. The dislocation model of this earthquake is characterized by a patch of large slip located ~7 km south of the hypocenter (Suzuki et al. 2009). Our results for the calculation of stress drop follow a similar pattern. Using the rupture times obtained from the dynamic model of the Iwate-Miyagi Nairiku earthquake we
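
    The simple slip-weakening friction law mentioned above is usually written in the linear form below (a generic statement of the law, not the study's specific parameter values):

      \tau(D) =
      \begin{cases}
        \tau_s - (\tau_s - \tau_d)\,\dfrac{D}{D_c}, & D < D_c \\[6pt]
        \tau_d, & D \ge D_c
      \end{cases}

    where D is slip, D_c the critical slip-weakening distance, \tau_s the static (yield) strength and \tau_d the dynamic friction level; with initial shear stress \tau_0, the strength excess is \tau_s - \tau_0 and the stress drop is \tau_0 - \tau_d.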

  10. Subduction zone and crustal dynamics of western Washington; a tectonic model for earthquake hazards evaluation

    USGS Publications Warehouse

    Stanley, Dal; Villaseñor, Antonio; Benz, Harley

    1999-01-01

    The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This

  11. Laboratory generated M -6 earthquakes

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.

  12. Update earthquake risk assessment in Cairo, Egypt

    NASA Astrophysics Data System (ADS)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2017-07-01

    The Cairo earthquake (12 October 1992; mb = 5.8) remains, even after 25 years, one of the most painful events, deeply etched in the memory of Egyptians. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead; 10,000 injured; 3,000 families who lost their homes). Nowadays, the most frequent and important question that arises is "what if this earthquake were repeated today?" In this study, we simulate the ground motion shaking of an earthquake of the same size (12 October 1992) and the consequent socio-economic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Generally, the earthquake risk assessment clearly indicates that "the losses and damages may be increased twice or three times" in Cairo compared to the 1992 earthquake. The earthquake risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) lie at high seismic risk, and three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at a low seismic risk level. Moreover, the building damage estimates indicate that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk. Deteriorating buildings and infrastructure make the city particularly vulnerable to earthquake risk. For instance, more than 90% of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb). Moreover, about 75% of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management. Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  13. Surface Deformation Associated with the 1983 Borah Peak Earthquake Measured from Digital Surface Model Differencing

    NASA Astrophysics Data System (ADS)

    Reitman, N. G.; Briggs, R.; Gold, R. D.; DuRoss, C. B.

    2015-12-01

    Post-earthquake, field-based assessments of surface displacement commonly underestimate offsets observed with remote sensing techniques (e.g., InSAR, image cross-correlation) because they fail to capture the total deformation field. Modern earthquakes are readily characterized by comparing pre- and post-event remote sensing data, but historical earthquakes often lack pre-event data. To overcome this challenge, we use historical aerial photographs to derive pre-event digital surface models (DSMs), which we compare to modern, post-event DSMs. Our case study focuses on resolving on- and off-fault deformation along the Lost River fault that accompanied the 1983 M6.9 Borah Peak, Idaho, normal-faulting earthquake. We use 343 aerial images from 1952-1966 and vertical control points selected from National Geodetic Survey benchmarks measured prior to 1983 to construct a pre-event point cloud (average ~0.25 pts/m²) and corresponding DSM. The post-event point cloud (average ~1 pt/m²) and corresponding DSM are derived from WorldView 1 and 2 scenes processed with NASA's Ames Stereo Pipeline. The point clouds and DSMs are coregistered using vertical control points, an iterative closest point algorithm, and a DSM coregistration algorithm. Preliminary results of differencing the coregistered DSMs reveal a signal spanning the surface rupture that is consistent with tectonic displacement. Ongoing work is focused on quantifying the significance of this signal and error analysis. We expect this technique to yield a more complete understanding of on- and off-fault deformation patterns associated with the Borah Peak earthquake along the Lost River fault and to help improve assessments of surface deformation for other historical ruptures.
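
    Once the pre- and post-event DSMs are coregistered, the vertical-change map itself is a simple grid difference. A minimal sketch follows; the point-cloud processing, coregistration, and error analysis that dominate the real workflow are not shown, and the array values are synthetic.

      import numpy as np

      # Minimal DSM differencing sketch: vertical change = post-event minus
      # pre-event elevation on a common, coregistered grid. Synthetic arrays only.
      rng = np.random.default_rng(3)
      pre_dsm = 1500.0 + rng.normal(0.0, 0.3, size=(200, 200))   # elevations (m)
      post_dsm = pre_dsm.copy()
      post_dsm[:, 100:] -= 1.2                                   # impose 1.2 m of down-dropped hanging wall

      dz = post_dsm - pre_dsm
      print(f"median vertical change, footwall side:     {np.median(dz[:, :100]):+.2f} m")
      print(f"median vertical change, hanging-wall side: {np.median(dz[:, 100:]):+.2f} m")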

  14. Inter-Disciplinary Validation of Pre Earthquake Signals. Case Study for Major Earthquakes in Asia (2004-2010) and for 2011 Tohoku Earthquake

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Hattori, K.; Liu, J.-Y.; Yang. T. Y.; Parrot, M.; Kafatos, M.; Taylor, P.

    2012-01-01

    We carried out multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several physical and environmental parameters which we found to be associated with the earthquake process: thermal infrared radiation, temperature and concentration of electrons in the ionosphere, radon/ion activities, and air temperature/humidity in the atmosphere. We used satellite and ground observations and interpreted them with the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, one of the possible paradigms we study and support. We made two independent, continuous hind-cast investigations in Taiwan and Japan for a total of 102 earthquakes (M>6) occurring from 2004 to 2011. We analyzed: (1) ionospheric electromagnetic radiation, plasma and energetic electron measurements from DEMETER; (2) emitted long-wavelength radiation (OLR) from NOAA/AVHRR and NASA/EOS; (3) radon/ion variations (in situ data); and (4) GPS Total Electron Content (TEC) measurements collected from space- and ground-based observations. This joint analysis of ground and satellite data has shown that one to six (or more) days prior to the largest earthquakes there were anomalies in all of the analyzed physical observations. For the latest March 11, 2011 Tohoku earthquake, our analysis again shows the same relationship between several independent observations characterizing the lithosphere/atmosphere coupling. On March 7th we found a rapid increase of emitted infrared radiation observed from satellite data, and subsequently an anomaly developed near the epicenter. The GPS/TEC data indicated an increase and variation in electron density reaching a maximum value on March 8. Beginning from this day we confirmed an abnormal TEC variation over the epicenter in the lower ionosphere. These findings revealed the existence of atmospheric and ionospheric phenomena occurring prior to the 2011 Tohoku earthquake, which indicated new evidence of a distinct

  15. OMG Earthquake! Can Twitter improve earthquake response?

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 to 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information

  16. Modal-space reference-model-tracking fuzzy control of earthquake excited structures

    NASA Astrophysics Data System (ADS)

    Park, Kwan-Soon; Ok, Seung-Yong

    2015-01-01

    This paper describes an adaptive modal-space reference-model-tracking fuzzy control technique for the vibration control of earthquake-excited structures. In the proposed approach, fuzzy logic is introduced to update the optimal control force so that the controlled structural response can track the desired response of a reference model. For easy and practical implementation, the reference model is constructed by assigning the target damping ratios to the first few dominant modes in modal space. The numerical simulation results demonstrate that the proposed approach achieves not only adaptive fault-tolerant control against partial actuator failures but also robust performance against variations of the uncertain system properties by redistributing the feedback control forces to the available actuators.

  17. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single-corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5, and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as expressions of starting and stopping phases, as a pulse-like rupture, or a dynamic weakening process. Estimated stress drops and scaled energy (ratio of radiated energy over seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.

  18. Earthquakes drive focused denudation along a tectonically active mountain front

    NASA Astrophysics Data System (ADS)

    Li, Gen; West, A. Joshua; Densmore, Alexander L.; Jin, Zhangdong; Zhang, Fei; Wang, Jin; Clark, Marin; Hilton, Robert G.

    2017-08-01

    Earthquakes cause widespread landslides that can increase erosional fluxes observed over years to decades. However, the impact of earthquakes on denudation over the longer timescales relevant to orogenic evolution remains elusive. Here we assess erosion associated with earthquake-triggered landslides in the Longmen Shan range at the eastern margin of the Tibetan Plateau. We use the Mw 7.9 2008 Wenchuan and Mw 6.6 2013 Lushan earthquakes to evaluate how seismicity contributes to the erosional budget from short timescales (annual to decadal, as recorded by sediment fluxes) to long timescales (kyr to Myr, from cosmogenic nuclides and low temperature thermochronology). Over this wide range of timescales, the highest rates of denudation in the Longmen Shan coincide spatially with the region of most intense landsliding during the Wenchuan earthquake. Across sixteen gauged river catchments, sediment flux-derived denudation rates following the Wenchuan earthquake are closely correlated with seismic ground motion and the associated volume of Wenchuan-triggered landslides (r² > 0.6), and to a lesser extent with the frequency of high intensity runoff events (r² = 0.36). To assess whether earthquake-induced landsliding can contribute importantly to denudation over longer timescales, we model the total volume of landslides triggered by earthquakes of various magnitudes over multiple earthquake cycles. We combine models that predict the volumes of landslides triggered by earthquakes, calibrated against the Wenchuan and Lushan events, with an earthquake magnitude-frequency distribution. The long-term, landslide-sustained "seismic erosion rate" is similar in magnitude to regional long-term denudation rates (∼0.5-1 mm yr⁻¹). The similar magnitude and spatial coincidence suggest that earthquake-triggered landslides are a primary mechanism of long-term denudation in the frontal Longmen Shan. We propose that the location and intensity of seismogenic faulting can contribute to

  19. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Image and Video Library

    2001-03-30

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisition. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or are the water of seas and lakes. The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using the SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul. http://photojournal.jpl.nasa.gov/catalog/PIA00557
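
    The 28 mm per fringe quoted above follows directly from the radar wavelength: because the radar path is two-way, one interferometric fringe corresponds to half a wavelength of line-of-sight motion. Assuming the nominal ERS C-band wavelength of about 56.6 mm:

      \Delta r_{\text{fringe}} = \frac{\lambda}{2} \approx \frac{56.6\ \text{mm}}{2} \approx 28\ \text{mm (line of sight)}

    The quoted ~70 mm of horizontal motion per fringe is then roughly consistent with the ERS look geometry (incidence angle near 23°), since only the horizontal component projected onto the line of sight, about sin 23° ≈ 0.39 of the horizontal displacement, is measured.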

  20. A comparison among observations and earthquake simulator results for the allcal2 California fault model

    USGS Publications Warehouse

    Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak

    2012-01-01

    In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately the ∼100‐year instrumental, several 100‐year historical, and few 1000‐year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators’ realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault