Sample records for large-scale earthquakes

  1. Scaling differences between large interplate and intraplate earthquakes

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.; Aviles, C. A.; Wesnousky, S. G.

    1985-01-01

    A study of large intraplate earthquakes with well-determined source parameters shows that these earthquakes obey a scaling law similar to that of large interplate earthquakes, in which M0 varies as L^2, or u = alpha * L, where L is rupture length and u is slip. In contrast to interplate earthquakes, for which alpha is approximately 1 x 10^-5, the intraplate events give alpha approximately 6 x 10^-5, which implies that these earthquakes have stress drops about 6 times higher than interplate events. This result is independent of focal mechanism type. It implies that intraplate faults have a higher frictional strength than plate boundaries and, hence, that faults are velocity or slip weakening in their behavior. This factor may be important in producing the concentrated deformation that creates and maintains plate boundaries.
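    The quoted scaling can be checked with a few lines of arithmetic. The sketch below is illustrative only (the 100 km rupture length is an arbitrary example, not from the paper): it evaluates u = alpha * L using the interplate value alpha ~ 1e-5 and the factor-of-six contrast quoted for intraplate events, and recovers the stress-drop ratio, since stress drop scales with u/L.

```python
# Illustrative arithmetic (not from the paper): slip predicted by the
# scaling law u = alpha * L, taking the interplate value alpha ~ 1e-5
# and the factor-of-six contrast quoted for intraplate events.

ALPHA_INTERPLATE = 1e-5                  # slip/length for plate-boundary events
ALPHA_INTRAPLATE = 6 * ALPHA_INTERPLATE  # ~6x higher, hence ~6x stress drop

def slip_m(rupture_length_m, alpha):
    """Mean slip u implied by u = alpha * L."""
    return alpha * rupture_length_m

L = 100e3  # an arbitrary 100 km rupture for illustration
u_inter = slip_m(L, ALPHA_INTERPLATE)  # 1.0 m
u_intra = slip_m(L, ALPHA_INTRAPLATE)  # 6.0 m

# Stress drop scales with u/L, so the ratio of alphas is the
# stress-drop ratio between the two populations.
print(u_inter, u_intra, ALPHA_INTRAPLATE / ALPHA_INTERPLATE)
```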

  2. Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikitake, T.

    1979-08-07

    The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. The act aims at mitigating earthquake hazards by designating an area as an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency as defined by the law, the prime minister is empowered to take various actions that cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurs. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster in June 1979. An extremely dense observation network has been constructed over the area.

  3. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  4. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    Earthquake nucleation in these experiments is consistent with observations and theory of Dieterich and Kilgore (1996). Precursory strains can typically be detected after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically based models of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip rate V. Based on a least-squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
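    The quoted prediction errors follow directly from the 0.14 standard deviation in log t: a residual of 0.14 in log10(time to failure) corresponds to a multiplicative factor of 10^0.14, roughly a 38% error in t. A quick check of that arithmetic (illustrative only):

```python
sigma_log_t = 0.14                 # std of residual in log10(time to failure)
frac_error = 10**sigma_log_t - 1   # ~0.38, i.e. ~38% fractional error in t

# One year out, 38% of 365 days ~ 140 days; one day out, 38% of 24 h ~ 9 h,
# matching the figures quoted in the abstract.
err_one_year_days = 365 * frac_error   # ~139 days
err_one_day_hours = 24 * frac_error    # ~9.1 hours
print(round(err_one_year_days), round(err_one_day_hours, 1))
```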

  5. Large-scale displacement following the 2016 Kaikōura earthquake

    NASA Astrophysics Data System (ADS)

    Wang, T.; Peng, D.; Barbot, S.; Wei, S.; Shi, X.

    2017-12-01

    The 2016 Mw 7.9 Kaikōura earthquake occurred near the southern termination of the Hikurangi subduction system, where a transition from subduction to strike-slip motion dominates the pre-seismic strain accumulation. The dense spatial coverage of GPS measurements and a large number of Interferometric Synthetic Aperture Radar (InSAR) images provide valuable constraints, from the near field to the far field, for studying how slip is distributed between the subduction interface and the overlying fault system before, during, and after the earthquake. We extract time-series deformation from the New Zealand continuous GPS network and from SAR images acquired by the Japanese ALOS-2 and European Sentinel-1A/B satellites to image the surface deformation related to the 2016 Kaikōura earthquake. Both the GPS and InSAR data, which cover the entire New Zealand region, show that the co-seismic and post-seismic deformation is distributed over an extraordinarily large area, reaching as far as the northern tip of the North Island. Based on a coseismic slip model derived from seismic and geodetic observations, we calculate the stress perturbation incurred by the earthquake. We explore a range of possible friction laws and rheologies via a linear combination of strain rate in finite volumes and slip velocity on ruptured faults. We obtain the slip distribution that best explains our geodetic measurements using an outlier-insensitive hierarchical Bayesian model, to better understand the different mechanisms behind the localized shallow afterslip and the distributed deformation. Our results indicate that complex interactions between the subduction interface and the overlying fault system play an important role in causing such large-scale deformation during and after the earthquake.

  6. Types of hydrogeological response to large-scale explosions and earthquakes

    NASA Astrophysics Data System (ADS)

    Gorbunova, Ella; Vinogradov, Evgeny; Besedina, Alina; Martynov, Vasilii

    2017-04-01

    Hydrogeological responses to anthropogenic and natural impacts carry information about massif properties and deformation mode. We studied aquifers of different ages that were unsealed at the Semipalatinsk test site (Kazakhstan) and at the geophysical observatory "Mikhnevo" in the Moscow region (Russia). Data were collected during long-term groundwater monitoring carried out in 1983-1989, when large-scale underground nuclear explosions were conducted. Precise observations of the groundwater response to the passage of waves from distant earthquakes at GPO "Mikhnevo" have been conducted since 2008. One goal of the study was to identify the main types of dynamic and irreversible spatial-temporal groundwater responses to large-scale explosions and to compare them with the earthquake responses reported in the literature. Since the hydrogeological processes that occur at an earthquake source are essentially unknown, it is especially important to analyze experimental records of groundwater level variations collected close to the epicenter in the first minutes to hours after an explosion. We found that the hydrogeodynamic reaction depends strongly on the initial geological and hydrogeological conditions as well as on the parameters of the seismic impact. In the near field, post-dynamic variations can form either an excess-pressure dome or a depression cone, resulting from aquifer drainage due to rock-mass fracturing. In the far field, the effect of an explosion is comparable to that of a distant earthquake and produces dynamic water-level oscillations. Precise monitoring at the "Mikhnevo" area was conducted under platform conditions far from active faults, so we consider it a seismically quiet area far from earthquake sources. Both dynamic and irreversible water-level changes appear to follow a power-law dependence on the vertical peak ground velocity of the passing waves.
    Further research will address the transition from the near to the far field, to identify a criterion that determines either irreversible

  7. Large-scale unloading processes preceding the 2015 Mw 8.4 Illapel, Chile earthquake

    NASA Astrophysics Data System (ADS)

    Huang, H.; Meng, L.

    2017-12-01

    Foreshocks and/or slow slip are observed to accelerate before some recent large earthquakes. However, the universality of precursory signals and their value in hazard assessment or mitigation remain controversial. On 16 September 2015, the Mw 8.4 Illapel earthquake ruptured a section of the subduction thrust on the west coast of central Chile. Small earthquakes are important in resolving possible precursors but are often incomplete in routine catalogs. Here, we employ the matched-filter technique to recover undocumented small events in a 4-year period before the Illapel mainshock. We augment the template dataset from the Chilean Seismological Center (CSN) with previously found new repeating aftershocks in the study area. We detect a total of 17658 events in the 4-year period before the mainshock, 6.3 times more than in the CSN catalog. The magnitudes of detected events are determined according to magnitude-amplitude relations estimated at different stations. Within the enhanced catalog, 183 repeating earthquakes are identified before the mainshock, located on both the northern and southern sides of the principal coseismic slip zone. The seismicity and aseismic slip progressively accelerate in a small low-coupling area around the epicenter starting 140 days before the mainshock. The acceleration leads to a M 5.3 event 36 days before the mainshock, followed by a relative quiescence in both seismicity and slow slip until the mainshock. This may correspond to a slow aseismic nucleation phase after the slow-slip transient ends. In addition, to the north of the mainshock rupture area, the last aseismic-slip episode occurs within 175-95 days before the mainshock and accumulates the largest amount of slip in the observation period. The simultaneous occurrence of slow slip over a large area indicates a large-scale unloading process preceding the mainshock. In contrast, in a region 70-150 km south of the mainshock
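    The matched-filter step described above amounts to sliding a template waveform along continuous data and declaring a detection wherever the normalized cross-correlation exceeds a threshold. A minimal single-channel sketch on synthetic data (the threshold, waveform shapes, and sample indices are assumptions for illustration, not the study's actual pipeline):

```python
import numpy as np

def matched_filter(trace, template, threshold=0.8):
    """Return the indices where the normalized cross-correlation of
    `template` against `trace` meets `threshold`, plus the full CC series."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        w = trace[i:i + n]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot((w - w.mean()) / s, t) / n
    return np.where(cc >= threshold)[0], cc

# Synthetic test: bury a small sine-burst "event" in noise at sample 300.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 6 * np.pi, 60))
trace = 0.05 * rng.standard_normal(1000)
trace[300:360] += template
detections, cc = matched_filter(trace, template, threshold=0.9)
print(detections)  # detections cluster at/near sample 300
```

    A high threshold is used here because the periodic template produces correlation side lobes one cycle away from the true alignment; real catalogs lower the threshold but decide on a network-stacked correlation instead.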

  8. Large earthquake rates from geologic, geodetic, and seismological perspectives

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2017-12-01

    Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include temporal behavior of seismic and tectonic moment rates; shape of the earthquake magnitude distribution; upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; value of crustal rigidity; and relation between faults at depth and their surface fault traces, to name just a few. In this report I'll estimate the quantitative implications for estimating large earthquake rate.
    Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes

  9. Earthquakes drive large-scale submarine canyon development and sediment supply to deep-ocean basins.

    PubMed

    Mountjoy, Joshu J; Howarth, Jamie D; Orpin, Alan R; Barnes, Philip M; Bowden, David A; Rowden, Ashley A; Schimel, Alexandre C G; Holden, Caroline; Horgan, Huw J; Nodder, Scott D; Patton, Jason R; Lamarche, Geoffroy; Gerstenberger, Matthew; Micallef, Aaron; Pallentin, Arne; Kane, Tim

    2018-03-01

    Although the global flux of sediment and carbon from land to the coastal ocean is well known, the volume of material that reaches the deep ocean-the ultimate sink-and the mechanisms by which it is transferred are poorly documented. Using a globally unique data set of repeat seafloor measurements and samples, we show that the moment magnitude (Mw) 7.8 November 2016 Kaikōura earthquake (New Zealand) triggered widespread landslides in a submarine canyon, causing a powerful "canyon flushing" event and turbidity current that traveled >680 km along one of the world's longest deep-sea channels. These observations provide the first quantification of seafloor landscape change and large-scale sediment transport associated with an earthquake-triggered full canyon flushing event. The calculated interevent time of ~140 years indicates a canyon incision rate of 40 mm year^-1, substantially higher than that of most terrestrial rivers, while synchronously transferring large volumes of sediment [850 metric megatons (Mt)] and organic carbon (7 Mt) to the deep ocean. These observations demonstrate that earthquake-triggered canyon flushing is a primary driver of submarine canyon development and material transfer from active continental margins to the deep ocean.

  10. Earthquakes drive large-scale submarine canyon development and sediment supply to deep-ocean basins

    PubMed Central

    Mountjoy, Joshu J.; Howarth, Jamie D.; Orpin, Alan R.; Barnes, Philip M.; Bowden, David A.; Rowden, Ashley A.; Schimel, Alexandre C. G.; Holden, Caroline; Horgan, Huw J.; Nodder, Scott D.; Patton, Jason R.; Lamarche, Geoffroy; Gerstenberger, Matthew; Micallef, Aaron; Pallentin, Arne; Kane, Tim

    2018-01-01

    Although the global flux of sediment and carbon from land to the coastal ocean is well known, the volume of material that reaches the deep ocean—the ultimate sink—and the mechanisms by which it is transferred are poorly documented. Using a globally unique data set of repeat seafloor measurements and samples, we show that the moment magnitude (Mw) 7.8 November 2016 Kaikōura earthquake (New Zealand) triggered widespread landslides in a submarine canyon, causing a powerful “canyon flushing” event and turbidity current that traveled >680 km along one of the world’s longest deep-sea channels. These observations provide the first quantification of seafloor landscape change and large-scale sediment transport associated with an earthquake-triggered full canyon flushing event. The calculated interevent time of ~140 years indicates a canyon incision rate of 40 mm year−1, substantially higher than that of most terrestrial rivers, while synchronously transferring large volumes of sediment [850 metric megatons (Mt)] and organic carbon (7 Mt) to the deep ocean. These observations demonstrate that earthquake-triggered canyon flushing is a primary driver of submarine canyon development and material transfer from active continental margins to the deep ocean. PMID:29546245

  11. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.

  12. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ~ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models, and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
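    As a rough illustration of how moment, area, and stress drop tie together in such scaling relations, one can invert the standard Eshelby circular-crack relation M0 = (16/7) * dsigma * r^3 for the stress drop implied by a given M0-A pair. This is generic crack mechanics under textbook assumptions, not the specific model of the paper; the example event size is arbitrary.

```python
import math

def stress_drop_circular(M0_nm, area_m2):
    """Stress drop (Pa) for a circular crack of equivalent radius
    r = sqrt(A/pi), using M0 = (16/7) * dsigma * r**3 (Eshelby)."""
    r = math.sqrt(area_m2 / math.pi)
    return 7.0 * M0_nm / (16.0 * r**3)

# Self-similar scaling (constant stress drop) implies M0 ~ A^(3/2):
# quadrupling the area while raising M0 by 4**1.5 = 8 leaves the
# stress drop unchanged.
A = 100e6                             # a 10 km x 10 km rupture, in m^2
ds = stress_drop_circular(3e18, A)    # M0 = 3e18 N*m (about Mw 6.25)
print(f"{ds / 1e6:.1f} MPa")
```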

  13. Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale

    NASA Astrophysics Data System (ADS)

    Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.

    2016-05-01

    The main objective of this paper is to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the frame of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for a realistic seismic hazard assessment, allowing comparison of historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes the background and construction principles of the scale and presents case studies from different continents and tectonic settings to illustrate its benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees; in such cases, and in unpopulated areas, ESI offers a unique way to assess a reliable earthquake intensity. Finally, and importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.

  14. Induced earthquake magnitudes are as large as (statistically) expected

    USGS Publications Warehouse

    Van Der Elst, Nicholas; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas; Hosseini, S. Mehran

    2016-01-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
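    The first test above, that the largest observed magnitude grows with the log of the number of events, is a generic property of sampling a Gutenberg-Richter distribution. A quick Monte Carlo sketch (b = 1, the reference magnitude, and the sample sizes are arbitrary illustration choices, not values from the study):

```python
import numpy as np

def gr_sample(n, b=1.0, m_min=0.0, rng=None):
    """Draw n magnitudes from an unbounded Gutenberg-Richter
    distribution: P(M > m) = 10**(-b * (m - m_min))."""
    if rng is None:
        rng = np.random.default_rng()
    return m_min - np.log10(rng.random(n)) / b

rng = np.random.default_rng(42)
trials = 2000
max100 = np.array([gr_sample(100, rng=rng).max() for _ in range(trials)])
max1000 = np.array([gr_sample(1000, rng=rng).max() for _ in range(trials)])

# A tenfold increase in event count raises the expected maximum
# magnitude by ~1/b units, with no physical size limit involved.
print(max100.mean(), max1000.mean(), max1000.mean() - max100.mean())
```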

  15. Scaling and spatial complementarity of tectonic earthquake swarms

    NASA Astrophysics Data System (ADS)

    Passarelli, Luigi; Rivalta, Eleonora; Jónsson, Sigurjón; Hensch, Martin; Metzger, Sabrina; Jakobsdóttir, Steinunn S.; Maccaferri, Francesco; Corbi, Fabio; Dahm, Torsten

    2018-01-01

    Tectonic earthquake swarms (TES) often coincide with aseismic slip and sometimes precede damaging earthquakes. In spite of recent progress in understanding the significance and properties of TES at plate boundaries, their mechanics and scaling are still largely uncertain. Here we evaluate several TES that occurred during the past 20 years on a transform plate boundary in North Iceland. We show that the swarms complement each other spatially with later swarms discouraged from fault segments activated by earlier swarms, which suggests efficient strain release and aseismic slip. The fault area illuminated by earthquakes during swarms may be more representative of the total moment release than the cumulative moment of the swarm earthquakes. We use these findings and other published results from a variety of tectonic settings to discuss general scaling properties for TES. The results indicate that the importance of TES in releasing tectonic strain at plate boundaries may have been underestimated.

  16. Reconsidering earthquake scaling

    USGS Publications Warehouse

    Gomberg, Joan S.; Wech, Aaron G.; Creager, Kenneth; Obara, K.; Agnew, Duncan

    2016-01-01

    The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquakes) and slow (mostly aseismic) slip populations and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and both slip populations have a change in scaling, where the slip area growth changes from 2-D when too small to sense the boundaries to 1-D when large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population, which are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists and M0-T observations alone may not imply a fundamental difference between fast and slow slip.

  17. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lead to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures.
Useful alerts should
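    The dual thresholds quoted in the abstract map directly into a simple alert classifier. The sketch below hardcodes the stated fatality (1/100/1,000) and loss ($1M/$100M/$1B) cutoffs; taking the more severe of the two levels is my assumption for illustration, not necessarily PAGER's actual combination logic.

```python
LEVELS = ["green", "yellow", "orange", "red"]

def fatality_level(fatalities):
    cuts = [1, 100, 1000]          # yellow, orange, red fatality thresholds
    return sum(fatalities >= c for c in cuts)

def loss_level(losses_usd):
    cuts = [1e6, 1e8, 1e9]         # $1M, $100M, $1B damage thresholds
    return sum(losses_usd >= c for c in cuts)

def eis_alert(fatalities, losses_usd):
    """Return the more severe of the fatality- and loss-based alerts
    (combination rule assumed for this sketch)."""
    return LEVELS[max(fatality_level(fatalities), loss_level(losses_usd))]

print(eis_alert(0, 5e5))      # green
print(eis_alert(12, 5e5))     # yellow (fatality-driven)
print(eis_alert(0, 2.5e8))    # orange (loss-driven)
print(eis_alert(1500, 2e9))   # red
```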

  18. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning purposes, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua for large earthquakes in the subduction zone.
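    To give a feel for the scaling step, the sketch below converts Mw to seismic moment (Hanks-Kanamori) and then to fault dimensions and mean slip under a generic constant-stress-drop, uniform-rigidity scaling. The specific coefficients, the fixed 2:1 aspect ratio, and the uniform rigidity are illustrative assumptions, not the paper's depth-dependent relations.

```python
import math

def moment_from_mw(mw):
    """Seismic moment M0 in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * (mw + 6.07))

def fault_model(mw, stress_drop_pa=3e6, rigidity_pa=4e10):
    """Illustrative fault dimensions: circular-crack stress drop
    M0 = (16/7) * dsigma * r**3 mapped onto an L = 2W rectangle of
    equal area, with mean slip u = M0 / (rigidity * L * W)."""
    m0 = moment_from_mw(mw)
    r = (7 * m0 / (16 * stress_drop_pa)) ** (1 / 3)
    area = math.pi * r * r
    width = math.sqrt(area / 2)    # assumed aspect ratio L = 2W
    length = 2 * width
    slip = m0 / (rigidity_pa * length * width)
    return length, width, slip

L_, W_, u_ = fault_model(7.7)  # e.g. an Mw 7.7 event like 1992 Nicaragua
print(f"L={L_ / 1e3:.0f} km  W={W_ / 1e3:.0f} km  u={u_:.1f} m")
```

    With the paper's depth-dependent rigidity, a shallow tsunami earthquake would get a lower rigidity and hence a larger slip estimate for the same moment, which is the point of that refinement.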

  19. Earthquake Scaling Relations

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Boettcher, M.; Richardson, E.

    2002-12-01

    Using scaling relations to understand nonlinear geosystems has been an enduring theme of Don Turcotte's research. In particular, his studies of scaling in active fault systems have led to a series of insights about the underlying physics of earthquakes. This presentation will review some recent progress in developing scaling relations for several key aspects of earthquake behavior, including the inner and outer scales of dynamic fault rupture and the energetics of the rupture process. The proximate observations of mining-induced, friction-controlled events obtained from in-mine seismic networks have revealed a lower seismicity cutoff at a seismic moment Mmin near 10^9 N m and a corresponding upper frequency cutoff near 200 Hz, which we interpret in terms of a critical slip distance for frictional drop of about 10^-4 m. Above this cutoff, the apparent stress scales as M^(1/6) up to magnitudes of 4-5, consistent with other near-source studies in this magnitude range (see special session S07, this meeting). Such a relationship suggests a damage model in which apparent fracture energy scales with the stress intensity factor at the crack tip. Under the assumption of constant stress drop, this model implies an increase in rupture velocity with seismic moment, which successfully predicts the observed variation in corner frequency and maximum particle velocity. Global observations of oceanic transform faults (OTFs) allow us to investigate a situation where the outer scale of earthquake size may be controlled by dynamics (as opposed to geologic heterogeneity). The seismicity data imply that the effective area for OTF moment release, AE, depends on the thermal state of the fault but is otherwise independent of the fault's average slip rate; i.e., AE ~ AT^beta, where AT is the area above a reference isotherm. The data are consistent with beta = 1/2 below an upper cutoff moment Mmax that increases with AT and yield the interesting scaling relation Amax ~ AT^(1/2). Taken together, the OTF

  20. Long-term dynamics of Hawaiian volcanoes inferred from large-scale relative relocations of earthquakes

    NASA Astrophysics Data System (ADS)

    Got, J.-L.; Okubo, P.

    2003-04-01

    We investigated the microseismicity recorded in an active volcano to infer information about the volcano's structure and long-term dynamics, using relative relocations and focal mechanisms of microearthquakes. Between 1988 and 1999, 32,000 earthquakes at Mauna Loa and Kilauea volcanoes were recorded by more than eight stations of the Hawaiian Volcano Observatory seismic network. We studied 17,000 of these events and relocated more than 70% with an accuracy ranging from 10 to 500 meters. About 75% of these relocated events are located in the vicinity of subhorizontal decollement planes, at 8 to 11 km depth. However, the striking features revealed by these relocation results are steep southeast-dipping fault planes acting as reverse faults, clearly located below the decollement plane and intersecting it. If this decollement plane coincides with the pre-Mauna Loa seafloor, as hypothesized by numerous authors, such reverse faults rupture the pre-Mauna Loa oceanic crust. The weight of the volcano and pressure in the magma storage system are possible causes of these ruptures, fully compatible with the local stress tensor computed by Gillard et al. (1996). The reverse faults are suspected of producing scarps, revealed by kilometers-long horizontal slip-perpendicular lineations along the decollement surface, and therefore large-scale roughness, asperities and normal-stress variations. These are capable of generating stick-slip behavior, large-magnitude earthquakes, the spatial microseismic pattern observed in the south flank of Kilauea volcano, and Hilina-type instabilities. Ruptures intersecting the decollement surface and causing its large-scale roughness may be an important parameter controlling the growth of Hawaiian volcanoes. Do decollement planes of varying roughness exist near the base of other volcanoes, such as Piton de la Fournaise or Etna, and could they explain part of their deformation and seismicity?

  1. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic
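    As a rough illustration of how such rupture-area scaling is applied (not the paper's two- or three-stage relations themselves), the sketch below converts Mw to seismic moment with the standard Hanks-Kanamori relation and uses the commonly quoted self-similar approximation log10 A[km^2] ≈ Mw - 4, which assumes a constant stress drop of a few MPa.

```python
def moment_from_mw(mw: float) -> float:
    """Mw to seismic moment M0 in N*m via the standard relation
    Mw = (2/3) * (log10 M0 - 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def rough_rupture_area_km2(mw: float) -> float:
    """Rough self-similar estimate log10 A[km^2] ~ Mw - 4; an illustrative
    proxy, not the two- or three-stage relations examined in the paper."""
    return 10 ** (mw - 4.0)

m0 = moment_from_mw(7.0)            # ~4.0e19 N*m for an Mw 7.0 event
area = rough_rupture_area_km2(7.0)  # ~1000 km^2
```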

  2. The repetition of large-earthquake ruptures.

    PubMed Central

    Sieh, K

    1996-01-01

    This survey of well-documented repeated fault ruptures confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault have repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. PMID:11607662

  3. An evaluation of Health of the Nation Outcome Scales data to inform psychiatric morbidity following the Canterbury earthquakes.

    PubMed

    Beaglehole, Ben; Frampton, Chris M; Boden, Joseph M; Mulder, Roger T; Bell, Caroline J

    2017-11-01

    Following the onset of the Canterbury, New Zealand earthquakes, there were widespread concerns that mental health services were under severe strain as a result of adverse mental health consequences. We therefore examined Health of the Nation Outcome Scales data to see whether this could inform our understanding of the impact of the Canterbury earthquakes on patients attending local specialist mental health services. Health of the Nation Outcome Scales admission data were analysed for Canterbury mental health services prior to and following the Canterbury earthquakes. These findings were compared to Health of the Nation Outcome Scales admission data from seven other large district health boards to delineate local from national trends. Percentage changes in admission numbers were also calculated before and after the earthquakes for Canterbury and the seven other large district health boards. Admission Health of the Nation Outcome Scales scores in Canterbury increased after the earthquakes for adult inpatient and community services, old age inpatient and community services, and child and adolescent inpatient services compared to the seven other large district health boards. Admission scores for child and adolescent community services did not change significantly, while admission scores for alcohol and drug services in Canterbury fell compared to other large district health boards. Subscale analysis showed that the majority of Health of the Nation Outcome Scales subscales contributed to the overall increases found. Percentage changes in admission numbers for the Canterbury District Health Board and the seven other large district health boards before and after the earthquakes were largely comparable, with the exception of admissions to inpatient services for the group aged 4-17 years, which showed a large increase. The Canterbury earthquakes were followed by an increase in Health of the Nation

  4. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating its trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate classification, comparison and optimization of earthquake predictions. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  5. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M ≲ 3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance less than ~2 km) tend to have similar focal mechanisms, implying that the same fault, or similarly oriented faults, produce these earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.
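    The style of analysis described (pairwise hypocentral distance versus focal-mechanism difference, binned by separation) can be sketched as follows. The catalog is synthetic, and the scalar `angle` is a stand-in for the minimum rotation angle between double-couple mechanisms; both are hypothetical, not the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical catalog: hypocenters (km) and a scalar "mechanism angle" proxy.
xyz = rng.uniform(0.0, 50.0, size=(200, 3))
angle = rng.uniform(0.0, 120.0, size=200)

# Pairwise hypocentral distances and angular differences for every event pair.
i, j = np.triu_indices(len(xyz), k=1)
dist = np.linalg.norm(xyz[i] - xyz[j], axis=1)
dang = np.abs(angle[i] - angle[j])

# Mean mechanism difference in distance bins, mirroring the paper's analysis.
bins = np.array([0.0, 2.0, 10.0, 25.0, 50.0, 100.0])
idx = np.digitize(dist, bins)
mean_diff = [float(dang[idx == k].mean()) for k in range(1, len(bins)) if np.any(idx == k)]
```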

  6. Historical and recent large megathrust earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Madariaga, R.

    2018-05-01

    Recent earthquakes in Chile (the 2014 Mw 8.2 Iquique, 2015 Mw 8.3 Illapel and 2016 Mw 7.6 Chiloé events) have exposed problems with the straightforward application of ideas about seismic gaps, earthquake periodicity and the general forecasting of large megathrust earthquakes. In northern Chile, before the 2014 Iquique earthquake, four large earthquakes were reported in written chronicles (1877, 1786, 1615 and 1543); in north-central Chile, before the 2015 Illapel event, three large earthquakes were reported (1943, 1880, 1730); and the 2016 Chiloé earthquake occurred in the southern zone of the 1960 Valdivia megathrust rupture, where other large earthquakes occurred in 1575, 1737 and 1837. The periodicity of these events has been proposed as a basis for long-term forecasting. However, the seismological aspects of historical Chilean earthquakes were inferred mainly from old chronicles written before subduction in Chile was discovered. Here we use the original descriptions of earthquakes to re-analyze the historical archives. Our interpretation shows that a priori ideas, like seismic gaps and characteristic earthquakes, influenced the estimation of magnitude, location and rupture area of the older Chilean events. On the other hand, advances in characterizing the rheology that controls the contact between the Nazca and South American plates, together with studies of tsunami effects, provide better estimates of the location of historical earthquakes along the seismogenic plate interface. Our re-interpretation of historical earthquakes shows a large diversity of earthquake types; there is a major difference between giant earthquakes that break the entire plate interface and those of Mw ~8.0 that break only a portion of it.

  7. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault length does not saturate with earthquake magnitude, while fault width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, restricted growth of the down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture areas compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctive non-Gaussian slip
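    A minimal version of this kind of scaling regression, run on hypothetical (Mw, fault-length) pairs rather than the actual SRCMOD models, looks like this; self-similar scaling with constant stress drop predicts a slope near 0.5 in log10(L) versus Mw.

```python
import numpy as np

# Hypothetical (Mw, rupture length in km) pairs standing in for SRCMOD entries.
mw = np.array([6.0, 6.5, 7.0, 7.5, 8.0])
length_km = np.array([13.0, 24.0, 45.0, 85.0, 160.0])

# Least-squares fit of log10(L) = a + b * Mw.
b, a = np.polyfit(mw, np.log10(length_km), 1)
```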

  8. Scaling A Moment-Rate Function For Small To Large Magnitude Events

    NASA Astrophysics Data System (ADS)

    Archuleta, Ralph; Ji, Chen

    2017-04-01

    Since the 1980s seismologists have recognized that peak ground acceleration (PGA) and peak ground velocity (PGV) scale differently with magnitude for large and moderate earthquakes. In a recent paper (Archuleta and Ji, GRL 2016) we introduced an apparent moment-rate function (aMRF) that accurately predicts the scaling with magnitude of PGA, PGV, PWA (Wood-Anderson displacement) and the ratio PGA/2πPGV (dominant frequency) for earthquakes 3.3 ≤ M ≤ 5.3. This apparent moment-rate function is controlled by two temporal parameters, tp and td, which are related to the time for the moment-rate function to reach its peak amplitude and to the total duration of the earthquake, respectively. These two temporal parameters lead to a Fourier amplitude spectrum (FAS) of displacement that has two corners, between which the spectral amplitudes decay as 1/f, where f denotes frequency. At higher or lower frequencies, the FAS of the aMRF looks like a single-corner Aki-Brune omega-squared spectrum. However, in the presence of attenuation the higher corner is almost certainly masked. Attempting to correct the spectrum to an Aki-Brune omega-squared spectrum will produce an "apparent" corner frequency that falls between the two corner frequencies of the aMRF. We reason that the two corners of the aMRF are the reason that seismologists deduce a stress drop (e.g., Allmann and Shearer, JGR 2009) that is generally much smaller than the stress parameter used to produce ground motions from stochastic simulations (e.g., Boore, 2003, Pageoph). The presence of two corners for the smaller magnitude earthquakes leads to several questions. Can deconvolution be successfully used to determine scaling from small to large earthquakes? Equivalently, will large earthquakes have a double corner? If large earthquakes are the sum of many smaller magnitude earthquakes, what should the displacement FAS look like for a large magnitude earthquake? Can a combination of such a double-corner spectrum and random
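    One hypothetical functional form with the behavior described (flat below the first corner, ~1/f between the corners, ~1/f^2 above the second) is sketched below; the corner frequencies are illustrative placeholders, not values derived from the tp and td of the aMRF.

```python
import numpy as np

def double_corner_fas(f, omega0=1.0, f1=0.5, f2=5.0):
    """Illustrative displacement spectrum with two corners f1 < f2: flat
    below f1, decaying ~1/f between f1 and f2, and ~1/f^2 above f2. This
    is one generic double-corner form, not the exact aMRF spectrum."""
    return omega0 / (np.sqrt(1.0 + (f / f1) ** 2) * np.sqrt(1.0 + (f / f2) ** 2))

f = np.logspace(-2, 2, 200)  # frequency axis, 0.01-100 Hz
spec = double_corner_fas(f)
```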

  9. Sea-level changes before large earthquakes

    USGS Publications Warehouse

    Wyss, M.

    1978-01-01

    Changes in sea level have long been used as a measure of local uplift and subsidence associated with large earthquakes. For instance, in 1835 the British naturalist Charles Darwin observed that sea level dropped by 2.7 meters during the large earthquake in Concepción, Chile. From this piece of evidence and the beach terraces he saw, Darwin concluded that the Andes had grown to their present height through earthquakes. Much more recently, George Plafker and James C. Savage of the U.S. Geological Survey have shown, from barnacle lines, that the great 1960 Chile and 1964 Alaska earthquakes caused several meters of vertical displacement of the shoreline.

  10. Scale-invariant structure of energy fluctuations in real earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Chang, Zhe; Wang, Huanyu; Lu, Hong

    2017-11-01

    Earthquakes are obviously complex phenomena associated with complicated spatiotemporal correlations, and they are generally characterized by two power laws: the Gutenberg-Richter (GR) and the Omori-Utsu laws. However, an important challenge has been to explain two apparently contrasting features: the GR and Omori-Utsu laws are scale-invariant and unaffected by energy or time scales, whereas earthquakes occasionally exhibit a characteristic energy or time scale, such as with asperity events. In this paper, three high-quality earthquake datasets were used to calculate the earthquake energy fluctuations at various spatiotemporal scales, and the results reveal the correlations between seismic events regardless of their critical or characteristic features. The probability density functions (PDFs) of the fluctuations exhibit evidence of another scaling that behaves as a q-Gaussian rather than a simple random process. The scaling behaviors are observed for scales spanning three orders of magnitude. Considering the spatial heterogeneities in a real earthquake fault, we propose an inhomogeneous Olami-Feder-Christensen (OFC) model to describe the statistical properties of real earthquakes. The numerical simulations show that the inhomogeneous OFC model shares the same statistical properties with real earthquakes.
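    For reference, a q-Gaussian generalizes the Gaussian with a power-law tail controlled by the parameter q; a minimal (unnormalized) sketch, with illustrative parameter values rather than the paper's fits, is:

```python
import numpy as np

def q_gaussian(x, q=1.5, beta=1.0):
    """Unnormalized q-Gaussian, [1 - (1 - q) * beta * x^2]^(1/(1-q)),
    clipped to zero where the base is negative. As q -> 1 it tends to the
    Gaussian exp(-beta * x^2); q > 1 gives heavy power-law tails of the
    kind reported for the energy-fluctuation PDFs."""
    base = 1.0 - (1.0 - q) * beta * x * x
    return np.where(base > 0.0, base, 0.0) ** (1.0 / (1.0 - q))

vals = q_gaussian(np.array([0.0, 2.0]), q=1.5, beta=1.0)
```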

  11. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
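    The counting step described above can be sketched as follows; `n_mean` and `beta` here are hypothetical parameters, not the values calibrated for California-Nevada in the study.

```python
import math

def large_event_probability(n_small: int, n_mean: float, beta: float = 1.4) -> float:
    """Convert the count of small events since the last large one into a
    conditional probability via a Weibull law,
    P = 1 - exp(-(n / n_mean)^beta)."""
    return 1.0 - math.exp(-((n_small / n_mean) ** beta))

p = large_event_probability(150, 100.0)  # probability rises as small events accumulate
```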

  12. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
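    The idea of replacing an expensive forward model with a cheap learned surrogate can be illustrated on a toy problem: a tiny fully connected network (plain NumPy, not the authors' architecture) fit to a single hypothetical relaxation curve y = exp(-t/tau) standing in for a viscoelastic response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: one normalized relaxation curve, y = exp(-t / tau).
t = np.linspace(0.0, 5.0, 200)[:, None]
y = np.exp(-t / 1.0)

# Tiny one-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(t @ W1 + b1)      # hidden activations, shape (200, 16)
    pred = h @ W2 + b2            # network output, shape (200, 1)
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(t); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = t.T @ dh / len(t); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(t @ W1 + b1) @ W2 + b2 - y) ** 2))
```

    Once trained, evaluating the network is a handful of matrix products, which is the source of the speedup when the true forward model is expensive.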

  13. Large-scale mapping of landslides in the epicentral area of the Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; and (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  14. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high magnitude seismic events. In fact, since 1950 all large magnitude earthquakes have been followed by volcanic eruptions in the following year: 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 and 2005 Sumatra-Andaman M9.3 and M8.7, and 2011 Japan M9.0. At a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005, following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, has resulted in a number of disparities, with <50% of all volcanoes being monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has therefore provided researchers with an opportunity to quantify the timing, magnitude and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable

  15. Possible seasonality in large deep-focus earthquakes

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Shearer, Peter M.

    2015-09-01

    Large deep-focus earthquakes (magnitude > 7.0, depth > 500 km) have exhibited strong seasonality in their occurrence times since the beginning of global earthquake catalogs. Of 60 such events from 1900 to the present, 42 have occurred in the middle half of each year. The seasonality appears strongest in the northwest Pacific subduction zones and weakest in the Tonga region. Taken at face value, the surplus of northern hemisphere summer events is statistically significant, but due to the ex post facto hypothesis testing, the absence of seasonality in smaller deep earthquakes, and the lack of a known physical triggering mechanism, we cannot rule out that the observed seasonality is just random chance. However, we can make a testable prediction of seasonality in future large deep-focus earthquakes, which, given likely earthquake occurrence rates, should be verified or falsified within a few decades. If confirmed, deep earthquake seasonality would challenge our current understanding of deep earthquakes.
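    The headline count (42 of 60 events falling in one half of the year) can be checked against chance with an exact binomial tail. This is only a sketch of the significance arithmetic; it does not reproduce the paper's treatment of the ex post facto hypothesis-selection problem.

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k of n
    events falling in a fixed half of the year if occurrence times were
    uniform over the year."""
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

p_value = binom_tail(42, 60)  # one-sided probability of >= 42 of 60 by chance
```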

  16. Outline of the 2016 Kumamoto, Japan, Earthquakes and lessons for a large urban earthquake in Tokyo Metropolitan area

    NASA Astrophysics Data System (ADS)

    Hirata, N.

    2016-12-01

    A series of devastating earthquakes hit the Kumamoto districts in Kyushu, Japan, in April 2016. The M6.5 event occurred at 21:26 on April 14 (JST) and, 28 hours later, the M7.3 event occurred at 01:25 on April 16 (JST) at almost the same location, with a depth of 10 km. Both earthquakes registered a seismic intensity of 7 on the Japan Meteorological Agency (JMA) scale at Mashiki Town; an intensity of 7 is the highest level by definition. Very strong accelerations were observed: 1,580 gal at the KiK-net Mashiki station during the M6.5 event and 1,791 gal at the Ohtsu City station during the M7.3 event. As a result, more than 8,000 houses totally collapsed, 26,000 were heavily damaged, and 120,000 were partially damaged. Forty-nine people were killed directly by the quakes and 32 died indirectly. The most important lesson from the Kumamoto earthquakes is that a very strong ground motion may hit again immediately after the first large event, say within a few days; this has serious implications for houses already damaged by the first large quake. The 2016 Kumamoto sequence also included many strong aftershocks, among them four M5.8-5.9 events through April 18. At the peak, more than 180,000 people took shelter for fear of further strong aftershocks. I will discuss both the natural and human aspects of the Kumamoto earthquake disaster caused by these shallow inland large earthquakes, drawing lessons for a large metropolitan earthquake in Tokyo, Japan.

  17. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    NASA Astrophysics Data System (ADS)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several quasi-real-time tests have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM, and the body-wave FFM have been implemented in real time at the CSN; all of these algorithms run automatically, triggered by the W-phase point source inversion. Fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the 2016 Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses, and for each of them we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.

  18. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  19. Large Earthquakes Disrupt Groundwater System by Breaching Aquitards

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.; Manga, M.; Liao, X.; Wang, L. P.

    2016-12-01

    Changes to groundwater systems caused by large earthquakes are widely recognized. Some changes have been attributed to increases in the vertical permeability, but basic questions remain: How do increases in the vertical permeability occur? How frequently do they occur? How fast does the vertical permeability recover after the earthquake? Is there a quantitative measure for detecting the occurrence of aquitard breaching? Here we attempt to answer these questions by examining data accumulated in the past 15 years. Analyses of increased stream discharges and their geochemistry after large earthquakes show evidence that the excess water originates from groundwater released from high elevations by a large increase in the vertical permeability. Water-level data from a dense network of clustered wells in a sedimentary basin near the epicenter of the 1999 M7.6 Chi-Chi earthquake in western Taiwan show that, while most confined aquifers remained confined after the earthquake, about 10% of the clustered wells show evidence of coseismic breaching of aquitards and a great increase in the vertical permeability. Water levels in wells without evidence of coseismic breaching of aquitards show similar tidal responses before and after the earthquake; wells with evidence of coseismic breaching of aquitards, on the other hand, show distinctly different tidal responses before and after the earthquake, and the affected aquifers became hydraulically connected for many months thereafter. Breaching of aquitards by large earthquakes has significant implications for a number of societal issues, such as the safety of water resources, the security of underground waste repositories, and the production of oil and gas. The method demonstrated here may be used for detecting the occurrence of aquitard breaching by large earthquakes in other seismically active areas.
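
    The tidal-response comparison underlying this detection method can be illustrated with a simple harmonic fit. The sketch below extracts the amplitude and phase of the M2 tidal constituent from a water-level record by projection onto cosine and sine at the tidal frequency; the synthetic record, noise level, and sampling are invented for illustration and are not the Chi-Chi well data.

```python
import math
import random

M2_PERIOD_H = 12.4206  # lunar semidiurnal (M2) tidal period, hours

def tidal_amplitude_phase(times_h, levels, period_h=M2_PERIOD_H):
    """Amplitude and phase of one tidal constituent, estimated by
    projecting the demeaned record onto cos/sin at the tidal frequency
    (adequate for long, evenly sampled records)."""
    w = 2.0 * math.pi / period_h
    n = len(levels)
    mean = sum(levels) / n
    a = 2.0 / n * sum((y - mean) * math.cos(w * t) for t, y in zip(times_h, levels))
    b = 2.0 / n * sum((y - mean) * math.sin(w * t) for t, y in zip(times_h, levels))
    return math.hypot(a, b), math.atan2(b, a)

# Synthetic hourly water-level record over 30 days: a 5 cm M2 tide with
# a 0.6 rad phase, plus noise (all numbers invented for illustration).
rng = random.Random(5)
times = [float(i) for i in range(720)]
levels = [0.05 * math.cos(2 * math.pi / M2_PERIOD_H * t - 0.6) + rng.gauss(0, 0.002)
          for t in times]
amp, phase = tidal_amplitude_phase(times, levels)
print(f"amplitude ~ {amp:.3f} m, phase ~ {phase:.2f} rad")
```

    A coseismic aquitard breach would appear as a change in these amplitude and phase estimates between pre- and post-earthquake windows of the same well record.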

  20. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, in which only the latter is constrained by observations.
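
    The stress-drop estimate behind such compilations is compact enough to write down. A minimal sketch using the standard Eshelby circular-crack formula and the Hanks-Kanamori moment-magnitude convention; the example radius is an arbitrary illustrative choice. Note the built-in scale invariance: if the source radius grows as M0^(1/3), the stress drop is unchanged, which is the self-similar behaviour discussed in the abstract.

```python
def stress_drop(m0, radius):
    """Eshelby circular-crack stress drop (Pa): dsigma = 7*M0/(16*r^3),
    with M0 in N*m and r in m."""
    return 7.0 * m0 / (16.0 * radius ** 3)

def moment_from_mw(mw):
    """Seismic moment (N*m) from Mw, Hanks-Kanamori convention."""
    return 10.0 ** (1.5 * (mw + 6.07))

m0 = moment_from_mw(6.0)
print(f"Mw 6.0, r = 5 km: stress drop ~ {stress_drop(m0, 5000.0) / 1e6:.1f} MPa")

# Self-similarity in one line: 8x the moment at 2x the radius gives
# exactly the same stress drop.
print(stress_drop(8 * m0, 10000.0) / stress_drop(m0, 5000.0))
```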

  1. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  3. Effect of slip-area scaling on the earthquake frequency-magnitude relationship

    NASA Astrophysics Data System (ADS)

    Senatorski, Piotr

    2017-06-01

    The earthquake frequency-magnitude relationship is considered from the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area vs. slip scaling exponent is derived. The relationship enables us to explain the observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and induced (hydraulic fracturing) seismicity are explained in terms of their different triggering mechanisms: applied stress increase and fault strength reduction, respectively.
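
    The von Neumann acceptance-rejection idea can be illustrated by sampling magnitudes from a Gutenberg-Richter distribution and recovering the b-value with Aki's maximum-likelihood estimator. This is a generic sketch of the sampling technique, not the paper's constrained seismicity model; all parameter values are arbitrary.

```python
import math
import random

def sample_gr_rejection(n, b=1.0, m_min=2.0, m_max=8.0, seed=1):
    """Von Neumann acceptance-rejection sampling from a (truncated)
    Gutenberg-Richter density ~ 10^(-b*m): propose a magnitude
    uniformly, accept with probability 10^(-b*(m - m_min))."""
    rng = random.Random(seed)
    mags = []
    while len(mags) < n:
        m = rng.uniform(m_min, m_max)
        if rng.random() < 10.0 ** (-b * (m - m_min)):
            mags.append(m)
    return mags

def aki_b_value(mags, m_min):
    """Maximum-likelihood b-value (Aki, 1965): b = log10(e)/(<m> - m_min)."""
    return math.log10(math.e) / (sum(mags) / len(mags) - m_min)

mags = sample_gr_rejection(20000, b=1.0)
print(f"recovered b ~ {aki_b_value(mags, 2.0):.2f}")
```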

  4. Scaling in geology: landforms and earthquakes.

    PubMed Central

    Turcotte, D L

    1995-01-01

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior. PMID:11607562
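
    The self-organized critical behavior described here can be demonstrated with the canonical Bak-Tang-Wiesenfeld sandpile cellular automaton, used below as a stand-in for the slider-block simulations (it is not the paper's model); the lattice size and grain count are arbitrary.

```python
import random

def btw_sandpile(size=16, grains=10000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; any
    site holding >= 4 grains topples, sending one grain to each
    neighbor (grains fall off the open boundary). Returns the
    avalanche size (number of topplings) caused by each grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(size), rng.randrange(size)
        grid[i][j] += 1
        topples = 0
        stack = [(i, j)]
        while stack:
            x, y = stack.pop()
            if grid[x][y] < 4:
                continue
            grid[x][y] -= 4
            topples += 1
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < size and 0 <= ny < size:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        stack.append((nx, ny))
            if grid[x][y] >= 4:
                stack.append((x, y))
        sizes.append(topples)
    return sizes

sizes = btw_sandpile()
print(f"max avalanche: {max(sizes)} topplings; "
      f"{sum(1 for s in sizes if s > 100)} avalanches exceeded 100")
```

    After an initial loading phase, the avalanche-size statistics approach a power law, the hallmark of self-organized criticality noted in the abstract.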

  5. Local observations of the onset of a large earthquake: 28 June 1992 Landers, California

    USGS Publications Warehouse

    Abercrombie, Rachel; Mori, Jim

    1994-01-01

    The Landers earthquake (MW 7.3) of 28 June 1992 had a very emergent onset. The first large amplitude arrivals are delayed by about 3 sec with respect to the origin time, and are preceded by smaller-scale slip. Other large earthquakes have been observed to have similar emergent onsets, but the Landers event is one of the first to be well recorded on nearby stations. We used these recordings to investigate the spatial relationship between the hypocenter and the onset of the large energy release, and to determine the slip function of the 3-sec nucleation process. Relative location of the onset of the large energy release with respect to the initial hypocenter indicates its source was between 1 and 4 km north of the hypocenter and delayed by approximately 2.5 sec. Three-station array analysis of the P wave shows that the large amplitude onset arrives with a faster apparent velocity compared to the first arrivals, indicating that the large amplitude source was several kilometers deeper than the initial onset. An ML 2.8 foreshock, located close to the hypocenter, was used as an empirical Green's function to correct for path and site effects from the first 3 sec of the mainshock seismogram. The resultant deconvolution produced a slip function that showed two subevents preceding the main energy release, an MW 4.4 followed by an MW 5.6. These subevents do not appear anomalous in comparison to simple moderate-sized earthquakes, suggesting that they were normal events which just triggered or grew into a much larger earthquake. If small and moderate-sized earthquakes commonly “detonate” much larger events, this implies that the dynamic stresses during earthquake rupture are at least as important as long-term static stresses in causing earthquakes, and the prospects of reliable earthquake prediction from premonitory phenomena are not improved.

  6. Tectonically Induced Anomalies Without Large Earthquake Occurrences

    NASA Astrophysics Data System (ADS)

    Shi, Zheming; Wang, Guangcai; Liu, Chenglong; Che, Yongtai

    2017-06-01

    In this study, we documented a case involving large-scale macroscopic anomalies in the Xichang area, southwestern Sichuan Province, China, from May to June of 2002, after which no major earthquake occurred. During our field survey in 2002, we found that the timing of the high-frequency occurrence of groundwater anomalies was in good agreement with those of animal anomalies. Spatially, the groundwater and animal anomalies were distributed along the Anninghe-Zemuhe fault zone. Furthermore, the groundwater level was elevated in the northwest part of the Zemuhe fault and depressed in the southeast part of the Zemuhe fault zone, with a border somewhere between Puge and Ningnan Counties. Combined with microscopic groundwater, geodetic and seismic activity data, we infer that the anomalies in the Xichang area were the result of increasing tectonic activity in the Sichuan-Yunnan block. In addition, groundwater data may be used as a good indicator of tectonic activity. This case tells us that there is no direct relationship between an earthquake and these anomalies. In most cases, the vast majority of the anomalies, including microscopic and macroscopic anomalies, are caused by tectonic activity. That is, these anomalies could occur under the effects of tectonic activity, but they do not necessarily relate to the occurrence of earthquakes.

  7. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of Mλ ≥ 7.0 and Mλ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be Mσ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with Mλ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with Mλ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 Mλ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 Mλ ≥ 7.0 in each catalog and
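
    The natural-time analysis can be sketched end to end. In the toy catalog below, each event is "large" with a fixed probability, so the interevent counts are geometric and the fitted Weibull shape should come out near β = 1, matching the random-catalog result in the abstract. The Weibull shape is recovered by inverting the coefficient of variation rather than by the paper's fitting procedure, and all numbers are invented.

```python
import math
import random

def weibull_beta_from_cv(cv, lo=0.1, hi=10.0):
    """Invert the Weibull coefficient of variation for the shape
    parameter beta by bisection; CV = 1 corresponds to beta = 1
    (the exponential / random case)."""
    def cv_of(b):
        g1 = math.gamma(1.0 + 1.0 / b)
        g2 = math.gamma(1.0 + 2.0 / b)
        return math.sqrt(g2 / (g1 * g1) - 1.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cv_of(mid) > cv:  # CV decreases as beta increases
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic random catalog: every event is "large" with probability 1%,
# so natural-time interevent counts are geometric and beta should be ~1.
rng = random.Random(3)
counts, n = [], 0
for _ in range(200000):
    if rng.random() < 0.01:
        counts.append(n)
        n = 0
    else:
        n += 1
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
beta = weibull_beta_from_cv(math.sqrt(var) / mean)
print(f"{len(counts)} interevent counts, beta ~ {beta:.2f}")
```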

  8. Simulating Large-Scale Earthquake Dynamic Rupture Scenarios On Natural Fault Zones Using the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2014-05-01

    In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models and fault geometries. Our starting point is a large-scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well-documented and extensively studied event, we intend to understand the ground motion, including the relevant high-frequency content, generated by complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.

  9. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  10. Field Observations of Precursors to Large Earthquakes: Interpreting and Verifying Their Causes

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sacks, S. I.; Rydelek, P. A.; Smith, D. E.; Takanami, T.

    2017-12-01

    Many reports of precursory anomalies before large earthquakes exist. However, it has proven elusive to identify these signals before their actual occurrences; they often become evident only in retrospect. A probabilistic cellular automaton model (Sacks and Rydelek, 1995) explains many of the statistical and dynamic features of earthquakes, including the observed b-value decrease towards a large earthquake and the effect of a small stress perturbation on the earthquake occurrence pattern. It also reproduces the dynamic character of each earthquake rupture. This model is useful for gaining insight into the causal relationships behind such complexity. For example, some reported cases of background seismicity quiescence before a main shock, seen only for events larger than about M = 3-4 at a time scale of years, can be reproduced by this model if only a small fraction (about 2%) of the component cells are strengthened by a small amount. Such an enhancement may physically occur if a tiny and scattered portion of the seismogenic crust undergoes dilatancy hardening. Whether this process occurs will depend on fluid migration and microcrack development under tectonic loading. Eventual large earthquake faulting will be promoted by the intrusion of excess water from surrounding rocks into the zone capable of cascading slips over a large area. We propose that this process manifests itself on the surface as hydrologic, geochemical, or macroscopic anomalies, for which so many reports exist. We infer from seismicity that the eastern Nankai Trough (Tokai) area of central Japan is already in the stage of M-dependent seismic quiescence. Therefore, we advocate that new observations sensitive to detecting water migration in Tokai should be implemented. In particular, vertical-component strain, gravity, and/or electrical conductivity should be observed for verification.

  11. Systematic observations of the slip pulse properties of large earthquake ruptures

    USGS Publications Warehouse

    Melgar, Diego; Hayes, Gavin

    2017-01-01

    In earthquake dynamics there are two end-member models of rupture: propagating cracks and self-healing pulses. These arise from different fault properties and have implications for seismic hazard, since rupture mode controls near-field strong ground motions. Past studies favor the pulse-like mode of rupture; however, due to a variety of limitations, it has proven difficult to establish their kinematic properties systematically. Here we synthesize observations from a database of >150 rupture models of earthquakes spanning M7–M9, processed in a uniform manner, and show that the magnitude scaling properties of these slip pulses indicate self-similarity. Further, we find that large and very large events are statistically distinguishable relatively early (at ~15 s) in the rupture process. This suggests that with dense regional geophysical networks, strong ground motions from a large rupture can be identified before their onset across the source region.

  12. Failure of self-similarity for large (Mw > 8 1/4) earthquakes.

    USGS Publications Warehouse

    Hartzell, S.H.; Heaton, T.H.

    1988-01-01

    Compares teleseismic P-wave records for earthquakes in the magnitude range 6.0-9.5 with synthetics for a self-similar, ω² source model and concludes that the energy radiated by very large earthquakes (Mw > 8 1/4) is not self-similar to that radiated by smaller earthquakes (Mw < 8 1/4). Furthermore, in the period band from 2 sec to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of ω^-1.5. This spectral decay rate is consistent with a previously noted tendency of the ω² model to overestimate Ms for large earthquakes. -Authors

  13. Global variations of large megathrust earthquake rupture characteristics

    PubMed Central

    Kanamori, Hiroo

    2018-01-01

    Despite the surge of great earthquakes along subduction zones over the last decade and advances in observations and analysis techniques, it remains unclear whether earthquake complexity is primarily controlled by persistent fault properties or by dynamics of the failure process. We introduce the radiated energy enhancement factor (REEF), given by the ratio of an event’s directly measured radiated energy to the calculated minimum radiated energy for a source with the same seismic moment and duration, to quantify the rupture complexity. The REEF measurements for 119 large [moment magnitude (Mw) 7.0 to 9.2] megathrust earthquakes distributed globally show marked systematic regional patterns, suggesting that the rupture complexity is strongly influenced by persistent geological factors. We characterize this as the existence of smooth and rough rupture patches with varying interpatch separation, along with failure dynamics producing triggering interactions that augment the regional influences on large events. We present an improved asperity scenario incorporating both effects and categorize global subduction zones and great earthquakes based on their REEF values and slip patterns. Giant earthquakes rupturing over several hundred kilometers can occur in regions with low-REEF patches and small interpatch spacing, such as for the 1960 Chile, 1964 Alaska, and 2011 Tohoku earthquakes, or in regions with high-REEF patches and large interpatch spacing as in the case for the 2004 Sumatra and 1906 Ecuador-Colombia earthquakes. Thus, combining seismic magnitude Mw and REEF, we provide a quantitative framework to better represent the span of rupture characteristics of great earthquakes and to understand global seismicity. PMID:29750186

  14. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
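
    The iterative partitioning of observed spectra into event and station terms can be caricatured as alternating averages over log-spectra. The toy below recovers source and receiver terms (up to the inherent additive trade-off the abstract alludes to) from synthetic observations; path terms, uncertainty estimates, and the actual algorithm of the paper are omitted, and all numbers are invented.

```python
import random

def decompose(observations, n_src, n_rec, n_iter=50):
    """Alternately re-estimate source and receiver terms so that
    obs[(i, j)] ~ src[i] + rec[j]; a toy version of iterative
    spectral decomposition (one frequency bin, no path term)."""
    src = [0.0] * n_src
    rec = [0.0] * n_rec
    for _ in range(n_iter):
        for i in range(n_src):
            vals = [o - rec[j] for (a, j), o in observations.items() if a == i]
            src[i] = sum(vals) / len(vals)
        for j in range(n_rec):
            vals = [o - src[i] for (i, b), o in observations.items() if b == j]
            rec[j] = sum(vals) / len(vals)
    return src, rec

rng = random.Random(1)
true_src = [rng.uniform(-1.0, 1.0) for _ in range(30)]
true_rec = [rng.uniform(-0.5, 0.5) for _ in range(10)]
obs = {(i, j): true_src[i] + true_rec[j] + rng.gauss(0.0, 0.01)
       for i in range(30) for j in range(10)}
src, rec = decompose(obs, 30, 10)
# The decomposition is only determined up to an additive constant
# traded between the source and receiver terms, so compare the
# recovered sources to the truth after removing each set's mean.
```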

  15. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring

  16. Local Deformation Precursors of Large Earthquakes Derived from GNSS Observation Data

    NASA Astrophysics Data System (ADS)

    Kaftan, Vladimir; Melnikov, Andrey

    2017-12-01

    Research on deformation precursors of earthquakes was of immediate interest from the middle to the end of the previous century. Repeated conventional geodetic measurements, such as precise levelling and linear-angular networks, were used for such studies. Many examples of studies of strong seismic events using conventional geodetic techniques are presented in [T. Rikitake, 1976]. One of the first case studies of geodetic earthquake precursors was done by Yu. A. Meshcheryakov [1968]. Rare repetitions and the insufficient density and coverage of control geodetic networks made it difficult to predict the places and times of future earthquakes. The intensive development of Global Navigation Satellite Systems (GNSS) during recent decades has made such research more effective. The results of GNSS observations in the areas of three large earthquakes (Napa M6.1, USA, 2014; El Mayor Cucapah M7.2, USA, 2010; and Parkfield M6.0, USA, 2004) are treated and presented in the paper. The characteristics of land-surface deformation before, during, and after the earthquakes have been obtained. The results demonstrate the presence of anomalous deformations near their epicentres. The temporal character of dilatation and shear strain changes shows spatial heterogeneity of deformation of the Earth's surface from months to years before the main shock, both close to it and at some distance from it. The revealed heterogeneities can be considered deformation precursors of strong earthquakes. Based on historical data and our own research, values of critical deformation are obtained with reference to the large earthquakes mentioned above; these values are proposed for use in constructing a seismic danger scale based on continuous GNSS observations. It is shown that the approach has restrictions owing to uncertainty in the moment when deformation accumulation begins and in the place where the next seismic event is expected. Verification and clarification of the derived conclusions are proposed.

  17. The Role of Deep Creep in the Timing of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Sammis, C. G.; Smith, S. W.

    2012-12-01

    The observed temporal clustering of the world's largest earthquakes has been largely discounted for two reasons: a) it is consistent with Poisson clustering, and b) no physical mechanism leading to such clustering has been proposed. This lack of a mechanism arises primarily because the static stress transfer mechanism, commonly used to explain aftershocks and the clustering of large events on localized fault networks, does not work at global distances. However, there is recent observational evidence that the surface waves from large earthquakes trigger non-volcanic tremor at the base of distant fault zones at global distances. Based on these observations, we develop a simple non-linear coupled oscillator model that shows how the triggering of such tremor can lead to the synchronization of large earthquakes on a global scale. A basic assumption of the model is that induced tremor is a proxy for deep creep that advances the seismic cycle of the fault. We support this hypothesis by demonstrating that the 2010 Maule Chile and the 2011 Fukushima Japan earthquakes, which have been shown to induce tremor on the Parkfield segment of the San Andreas Fault, also produce changes in off-fault seismicity that are spatially and temporally consistent with episodes of deep creep on the fault. The observed spatial pattern can be simulated using an Okada dislocation model for deep creep (below 20 km) on the fault plane in which the slip rate decreases from North to South consistent with surface creep measurements and deepens south of the "Parkfield asperity" as indicated by recent tremor locations. The model predicts the off-fault events should have reverse mechanism consistent with observed topography.

  18. Nowcasting Earthquakes: A Comparison of Induced Earthquakes in Oklahoma and at the Geysers, California

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.

    2018-01-01

    Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say Mλ ≥ 4, and one small, say Mσ ≥ 2. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes. The cumulative probability distribution of these values is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake; the point where this number falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results are applicable even if the level of induced seismicity varies in time. The application of natural time to the accumulation of the seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling. The increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of Mσ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of Mλ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injections. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
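
    The EPS described here is essentially an empirical percentile of the current interevent count. A minimal sketch with hypothetical counts (the numbers below are invented, not the Oklahoma or Geysers catalogs):

```python
def earthquake_potential_score(past_counts, current_count):
    """Nowcasting EPS: the fraction of historical interevent
    small-earthquake counts that are <= the number of small events
    accumulated since the last large earthquake."""
    if not past_counts:
        raise ValueError("need a history of interevent counts")
    return sum(1 for c in past_counts if c <= current_count) / len(past_counts)

# Hypothetical interevent counts of small shocks between large events:
history = [120, 80, 200, 150, 90, 300, 60, 170, 110, 220]
print(f"EPS after 180 small events: {earthquake_potential_score(history, 180):.2f}")
```

    An EPS near 1 means more small events have accumulated since the last large earthquake than in almost any historical cycle, i.e. the region is far along in its natural-time cycle.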

  19. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe.

    PubMed

    duPont, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the 'permanent' socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual--i.e., the Kobe economy without the earthquake--we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, that is located immediately to the East of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake.

  20. Scaling of seismic memory with earthquake size

    NASA Astrophysics Data System (ADS)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene

    2012-07-01

It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
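The DFA step mentioned above can be sketched as follows (a generic textbook DFA-1 implementation run on synthetic data, not the study's processing chain; the choice of scales and the test series are illustrative):

```python
import numpy as np

def dfa_exponent(x, scales):
    """Minimal detrended fluctuation analysis (DFA-1): integrate the
    series, remove a linear trend in non-overlapping windows of each
    scale, and regress log-fluctuation on log-scale to get alpha.
    alpha ~ 0.5 indicates an uncorrelated series; alpha > 0.5,
    long-range positive correlations."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        sq = []
        t = np.arange(s)
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))     # fluctuation at this scale
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

For white noise the returned exponent should be close to 0.5, the benchmark against which the correlated interval series in the abstract would stand out.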

  1. Development of an Earthquake Impact Scale

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Marano, K. D.; Jaiswal, K. S.

    2009-12-01

With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from the systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $10M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures.
In this analysis, we make an attempt
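The dual alerting criteria quoted above reduce to simple threshold lookups; the sketch below encodes the fatality (1/100/1000) and loss ($1M/$10M/$1B) thresholds from the abstract, with the function name and interface being illustrative assumptions, not part of PAGER:

```python
def eis_alert(fatalities=None, dollars=None):
    """Hypothetical encoding of the EIS dual criteria: pass estimated
    fatalities for global events, or estimated dollar losses for
    domestic events. Returns the alert color."""
    levels = ["green", "yellow", "orange", "red"]
    if fatalities is not None:
        thresholds, value = [1, 100, 1000], fatalities
    else:
        thresholds, value = [1e6, 1e7, 1e9], dollars
    tier = sum(value >= t for t in thresholds)  # count thresholds crossed
    return levels[tier]
```

For example, an event with an estimated 150 fatalities maps to an orange (national-scale) alert under the fatality criterion.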

  2. Spectral scaling of the aftershocks of the Tocopilla 2007 earthquake in northern Chile

    NASA Astrophysics Data System (ADS)

    Lancieri, M.; Madariaga, R.; Bonilla, F.

    2012-04-01

We study the scaling of spectral properties of a set of 68 aftershocks of the 2007 November 14 Tocopilla (M 7.8) earthquake in northern Chile. These are all subduction events with similar reverse-faulting focal mechanisms that were recorded by a homogeneous network of continuously recording strong motion instruments. The seismic moment and the corner frequency are obtained assuming that the aftershocks satisfy an inverse omega-squared spectral decay; radiated energy is computed by integrating the squared velocity spectrum corrected for attenuation at high frequencies and for the finite-bandwidth effect. Using a graphical approach, we test the scaling of the seismic spectrum and the scale invariance of the apparent stress drop with earthquake size. To test whether the Tocopilla aftershocks scale with a single parameter, we introduce a non-dimensional number, Cr, that should be constant if earthquakes are self-similar. For the Tocopilla aftershocks, Cr varies by a factor of 2. More interestingly, Cr for the aftershocks is close to 2, the value that is expected for events that are approximately modelled by a circular crack. Thus, in spite of obvious differences in waveforms, the aftershocks of the Tocopilla earthquake are self-similar. The main shock is different because its records contain large near-field waves. Finally, we investigate the scaling of the energy release rate, Gc, with slip. We estimated Gc from our previous estimates of the source parameters, assuming a simple circular crack model. We find that Gc values scale with slip, in good agreement with those found by Abercrombie and Rice for the Northridge aftershocks.

  3. Multi-scale heterogeneity of the 2011 Great Tohoku-oki Earthquake from dynamic simulations

    NASA Astrophysics Data System (ADS)

    Aochi, H.; Ide, S.

    2011-12-01

In order to explain the scaling issues of earthquakes of different sizes, a multi-scale heterogeneity conception is necessary to characterize earthquake faulting properties (Ide and Aochi, JGR, 2005; Aochi and Ide, JGR, 2009). The 2011 Great Tohoku-oki earthquake (M9) is characterized by a slow initial phase of about M7, an M8-class deep rupture, and an M9 main rupture with quite large slip near the trench (e.g. Ide et al., Science, 2011), as well as by the presence of foreshocks. We dynamically model these features based on the multi-scale conception. We suppose a significantly large fracture energy (corresponding to a slip-weakening distance of 3.2 m) over most of the fault dimension to represent the M9 rupture. However, we introduce local heterogeneity through relatively small circular patches of smaller fracture energy, assuming a linear scaling relation between patch radius and fracture energy. The calculation is carried out using a 3D Boundary Integral Equation Method. We first begin only with the mainshock (Aochi and Ide, EPS, 2011), but later we find it important to take into account the series of foreshocks since 9 March (M7.4). The smaller patches including the foreshock area are necessary to launch the M9 rupture area of large fracture energy. We then simulate the ground motion at low frequencies using a Finite Difference Method. Qualitatively, the observed tendency is consistent with our simulations, in terms of the transition from the central part to the southern part at low frequencies (10-20 s). At higher frequencies (1-10 s), further small asperities are inferred from the observed signals, and this feature matches well with our multi-scale conception.

  4. Feasibility study of short-term earthquake prediction using ionospheric anomalies immediately before large earthquakes

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.

    2017-12-01

We showed that positive and negative electron density anomalies emerge above the fault immediately before it ruptures, 40/20/10 minutes before Mw 9/8/7 earthquakes (Heki, 2011 GRL; Heki and Enomoto, 2013 JGR; He and Heki, 2017 JGR). These signals are stronger for earthquakes with larger Mw and under higher background vertical TEC (total electron content) (Heki and Enomoto, 2015 JGR). The epicenter and the positive and negative anomalies align along the local geomagnetic field (He and Heki, 2016 GRL), suggesting that electric fields within the ionosphere are responsible for making the anomalies (Kuo et al., 2014 JGR; Kelley et al., 2017 JGR). Here we consider the next Nankai Trough earthquake, which may occur within a few tens of years in Southwest Japan, and discuss whether we could recognize its preseismic signatures in TEC by real-time observations with GNSS. During high geomagnetic activity, large-scale traveling ionospheric disturbances (LSTID) often propagate from the auroral ovals toward mid-latitude regions and leave signatures similar to preseismic anomalies. This is a main obstacle to using preseismic TEC changes for practical short-term earthquake prediction. In this presentation, we show that the same anomalies appeared 40 minutes before the mainshock above northern Australia, the geomagnetically conjugate point of the 2011 Tohoku-oki earthquake epicenter. This not only demonstrates that electric fields play a role in making the preseismic TEC anomalies, but also offers a possibility to discriminate preseismic anomalies from those caused by LSTID. By monitoring TEC in the conjugate areas of the two hemispheres, we can recognize anomalies with simultaneous onsets as those caused by within-ionosphere electric fields (e.g. preseismic anomalies, night-time MSTID) and anomalies without simultaneous onsets as gravity-wave-origin disturbances (e.g. LSTID, daytime MSTID).

  5. A Comparison of Geodetic and Geologic Rates Prior to Large Strike-Slip Earthquakes: A Diversity of Earthquake-Cycle Behaviors?

    NASA Astrophysics Data System (ADS)

    Dolan, James F.; Meade, Brendan J.

    2017-12-01

    Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 MW = 7.6 Izmit rupture are ˜40% faster than Holocene geologic rates. In contrast, geodetic rates of ˜6-8 mm/yr along the Denali fault prior to the 2002 MW = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ˜12 mm/yr. In the third example where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 MW = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ˜11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.

  6. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    NASA Astrophysics Data System (ADS)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

The spatial distribution of aftershocks is closely linked to the rupture extent of the mainshock that preceded them, and a rapid analysis of aftershock patterns therefore has potential for use in near real-time estimates of earthquake impact. The correlation between aftershocks and slip distribution has frequently been used to estimate the fault dimensions of large historic earthquakes for which no, or insufficient, waveform data are available. With the advent of earthquake inversions that use seismic waveforms and geodetic data to constrain the slip distribution, the study of aftershocks has recently focused largely on enhancing our understanding of the underlying mechanisms in a broader earthquake mechanics/dynamics framework. However, in a near real-time earthquake monitoring environment, in which aftershocks of large earthquakes are routinely detected and located, these data may also be effective in determining a fast estimate of the mainshock rupture area, which would aid in the rapid assessment of the impact of the earthquake. We have analyzed a considerable number of large recent earthquakes and their aftershock sequences and have developed an effective algorithm that determines the rupture extent of a mainshock from its aftershock distribution in a fully automatic manner. The algorithm automatically removes outliers by spatial binning, and subsequently determines the best-fitting “strike” of the rupture and its length by projecting the aftershock epicenters onto a set of lines that cross the mainshock epicenter with incremental azimuths. For strike-slip or large dip-slip events, for which the surface projection of the rupture is rectilinear, the calculated strike correlates well with the strike of the fault, and the corresponding length, determined from the distribution of aftershocks projected onto the line, agrees well with the rupture length. In the case of a smaller dip-slip rupture with an aspect ratio closer to 1, the procedure gives a measure
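The projection idea in the abstract can be sketched as follows (a simplified version assuming planar coordinates in km and omitting the spatial-binning outlier removal; the scoring rule, minimum transverse scatter, is an assumption, since the abstract does not specify how the best-fitting strike is selected):

```python
import numpy as np

def rupture_from_aftershocks(xy, epicenter, daz=5.0):
    """Try candidate strikes through the mainshock epicenter at daz-degree
    increments, pick the azimuth minimizing scatter perpendicular to the
    line, and read the rupture length off the along-strike projections.
    `xy` are aftershock epicenters in planar (east, north) km."""
    rel = np.asarray(xy, float) - np.asarray(epicenter, float)
    best = None
    for az in np.arange(0.0, 180.0, daz):
        theta = np.radians(az)               # azimuth clockwise from north
        u = np.array([np.sin(theta), np.cos(theta)])  # along-strike unit vector
        along = rel @ u
        perp = rel @ np.array([u[1], -u[0]])
        score = np.std(perp)                 # transverse scatter
        if best is None or score < best[0]:
            best = (score, az, along.max() - along.min())
    _, strike, length = best
    return strike, length
```

For a rectilinear (e.g. strike-slip) aftershock cloud this recovers the strike and the end-to-end extent of the projected epicenters, as described in the abstract.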

  7. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

The mega-thrust Mw 9.0 2011 Tohoku earthquake has re-opened the discussion in the scientific community about the effectiveness of Earthquake Early Warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on the real-time measurement of ground motion parameters in a few-second window after the P-wave arrival. Currently, we are using the initial Peak Displacement (Pd) and the Predominant Period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well-known problem with real-time estimation of the magnitude is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the east coast of Honshu, Japan, on the subduction boundary between the Pacific and Okhotsk plates. The high-quality KiK-net and K-NET networks provided a large quantity of strong motion records of the mainshock, with wide azimuthal coverage both along the Japan coast and inland. More than 300 three-component accelerograms have been available, with epicentral distances ranging from about 100 km up to more than 500 km. This earthquake thus presents an optimal case study for testing the physical bases of early warning and for investigating the feasibility of a real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the main shock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing time window up to 30 seconds. The aim is to verify the correlation of these parameters with Peak Ground Velocity and Magnitude

  8. S-net : Construction of large scale seafloor observatory network for tsunamis and earthquakes along the Japan Trench

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Uehira, K.; Kanazawa, T.; Shiomi, K.; Kunugi, T.; Aoi, S.; Matsumoto, T.; Sekiguchi, S.; Yamamoto, N.; Takahashi, N.; Nakamura, T.; Shinohara, M.; Yamada, T.

    2017-12-01

NIED launched the project of constructing a seafloor observatory network for tsunamis and earthquakes after the occurrence of the 2011 Tohoku Earthquake, to enhance the reliability of early warnings of tsunamis and earthquakes. The observatory network was named "S-net". The S-net project has been financially supported by MEXT. The S-net consists of 150 seafloor observatories connected in line with submarine optical cables; the total length of submarine optical cable is about 5,500 km. The S-net covers the focal region of the 2011 Tohoku Earthquake and its vicinity. Each observatory is equipped with two high-sensitivity pressure gauges serving as tsunami meters and four sets of three-component seismometers. The S-net is composed of six segment networks. Five of the six segment networks had already been installed; installation of the last segment network, covering the outer-rise area, was finished by the end of FY2016. The outer-rise segment has special features unlike the other five segments: deep water and long distance. Most of the 25 observatories on the outer-rise segment are located at depths greater than 6,000 m WD. Three observatories are set on the seafloor deeper than about 7,000 m WD, and these are equipped with pressure gauges usable even at 8,000 m WD. The total length of the submarine cables of the outer-rise segment is about two times longer than that of the other segments. The longer the cable system is, the higher the supply voltage needed, and thus the observatories on the outer-rise segment have high withstand-voltage characteristics.
For the outer-rise segment cable, we employ a low-loss dispersion-managed line formed by combining multiple optical fibers, in order to achieve long-distance, high-speed, large-capacity data transmission. Installation of the outer-rise segment was finished, and full-scale operation of the S-net has started.

  9. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe

    PubMed Central

    duPont IV, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the ‘permanent’ socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual—i.e., the Kobe economy without the earthquake—we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, that is located immediately to the East of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998

  10. Magnitude Estimation for Large Earthquakes from Borehole Recordings

    NASA Astrophysics Data System (ADS)

    Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.

    2012-12-01

We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems, based on strong ground motion prediction equations (GMPEs) in Japan. This method incorporates borehole strong motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV), we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination, and predicted magnitudes from PGA values (Mpga) and from PGV values (Mpgv) were defined. Mpga and Mpgv correlate strongly with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga relative to the estimated magnitudes and thus provides a more accurate early assessment of earthquake magnitude. We apply the new method to the 2011 Tohoku earthquake: PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that the incorporation of borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation and improves the performance of earthquake and tsunami early warning systems.
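The inversion step behind Mpga and Mpgv can be illustrated with a generic GMPE of the form log10(PGV) = a·M + b − c·log10(R), solved for M at each station and averaged (the coefficients below are placeholders for illustration, not the regression values from this study):

```python
import numpy as np

def magnitude_from_pgv(pgv_cm_s, dist_km, a=0.72, b=-2.0, c=1.4):
    """Sketch of GMPE-based magnitude determination: invert a fitted
    relation log10(PGV) = a*M + b - c*log10(R) for M at each station,
    then average the station estimates. Coefficients are hypothetical."""
    pgv = np.asarray(pgv_cm_s, float)
    r = np.asarray(dist_km, float)
    m_est = (np.log10(pgv) - b + c * np.log10(r)) / a
    return m_est.mean()
```

Averaging over many stations is what makes the estimate stabilize as records accumulate in the minutes after the origin time, as described in the abstract.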

  11. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
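A test in the spirit of this study can be sketched as a Monte Carlo comparison of observed annual counts against a constant-rate Poisson null (illustrative only: the paper uses three classes of statistical tests and explicitly accounts for aftershock sequences, which this sketch does not):

```python
import numpy as np

def poisson_dispersion_pvalue(annual_counts, nsim=10000, seed=0):
    """Simulate constant-rate Poisson catalogs and ask how often the
    simulated variance-to-mean ratio of annual counts is at least as
    large as the observed one. A large p-value means the observed
    'clustering' is compatible with random variability."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(annual_counts, float)
    obs = counts.var() / counts.mean()
    sims = rng.poisson(counts.mean(), size=(nsim, counts.size))
    sim_stat = sims.var(axis=1) / sims.mean(axis=1)
    return float((sim_stat >= obs).mean())
```

A high p-value here corresponds to the paper's conclusion: the catalog cannot reject the random-occurrence null hypothesis.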

  12. On simulating large earthquakes by Green's-function addition of smaller earthquakes

    NASA Astrophysics Data System (ADS)

    Joyner, William B.; Boore, David M.

Simulation of ground motion from large earthquakes has been attempted by a number of authors using small earthquakes (subevents) as Green's functions and summing them, generally in a random way. We present a simple model for the random summation of subevents to illustrate how seismic scaling relations can be used to constrain methods of summation. In the model η identical subevents are added together with their start times randomly distributed over the source duration T and their waveforms scaled by a factor κ. The subevents can be considered to be distributed on a fault with later start times at progressively greater distances from the focus, simulating the irregular propagation of a coherent rupture front. For simplicity the distance between source and observer is assumed large compared to the source dimensions of the simulated event. By proper choice of η and κ the spectrum of the simulated event deduced from these assumptions can be made to conform at both low- and high-frequency limits to any arbitrary seismic scaling law. For the ω-squared model with similarity (that is, with constant Mo·ƒo³ scaling, where ƒo is the corner frequency), the required values are η = (Mo/Moe)^(4/3) and κ = (Mo/Moe)^(-1/3), where Mo is the moment of the simulated event and Moe is the moment of the subevent. The spectra resulting from other choices of η and κ will not conform at both high and low frequency. If η is determined by the ratio of the rupture area of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at high frequency to the ω-squared model with similarity, but not at low frequency. Because the high-frequency part of the spectrum is generally the important part for engineering applications, however, this choice of values for η and κ may be satisfactory in many cases. If η is determined by the ratio of the moment of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at low frequency to
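The random summation described above can be sketched directly, using the η and κ required for the ω-squared model with similarity (the subevent waveform, sampling, and duration below are synthetic stand-ins):

```python
import numpy as np

def simulate_large_event(subevent, dt, T, Mo_ratio, rng=None):
    """Green's-function summation sketch: add eta = r**(4/3) copies of a
    subevent waveform with start times uniform on the source duration T,
    each scaled by kappa = r**(-1/3), where r = Mo/Moe. These choices
    give omega-squared scaling with similarity (constant Mo*fo^3)."""
    if rng is None:
        rng = np.random.default_rng(0)
    eta = int(round(Mo_ratio ** (4.0 / 3.0)))
    kappa = Mo_ratio ** (-1.0 / 3.0)
    out = np.zeros(len(subevent) + int(T / dt))
    for start in rng.uniform(0.0, T, size=eta):
        i = int(start / dt)
        out[i:i + len(subevent)] += kappa * subevent
    return out
```

A quick consistency check on the low-frequency limit: the area under the summed waveform is η·κ = (Mo/Moe) times the subevent's area, i.e. the simulated moment scales correctly.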

  13. Firebrands and spotting ignition in large-scale fires

    Treesearch

    Eunmo Koo; Patrick J. Pagni; David R. Weise; John P. Woycheese

    2010-01-01

Spotting ignition by lofted firebrands is a significant mechanism of fire spread, as observed in many large-scale fires. The role of firebrands in fire propagation and the important parameters involved in spot fire development are studied. Historical large-scale fires, including wind-driven urban and wildland conflagrations and post-earthquake fires are given as...

  14. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, a local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, horizontal amplitudes are on average larger than vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
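The vertical-channel recommendation amounts to adding log10(1.8) ≈ 0.26 to a magnitude computed from the vertical amplitude. A sketch (the distance-attenuation and station-correction terms are placeholders, not the paper's calibrated values):

```python
import math

def local_magnitude(amp_vertical_nm, dist_corr, station_corr=0.0, hv_factor=1.8):
    """Ml sketch: measure the amplitude on the vertical channel, then add
    log10(hv_factor) to emulate the mean horizontal maximum, following the
    recommendation in the abstract. `dist_corr` stands in for the
    distance-attenuation term of the calibrated scale."""
    return (math.log10(amp_vertical_nm) + math.log10(hv_factor)
            + dist_corr + station_corr)
```

Because the 1.8 factor enters as a constant log offset, using vertical channels shifts all magnitudes uniformly rather than distorting the scale.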

  15. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    NASA Astrophysics Data System (ADS)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  16. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  17. An earthquake strength scale for the media and the public

    USGS Publications Warehouse

    Johnston, A.C.

    1990-01-01

A local engineer, E. P. Hailey, pointed this problem out to me shortly after the Loma Prieta earthquake. He felt that three problems limit the usefulness of magnitude in describing an earthquake to the public: (1) most people don't understand that it is not a linear scale; (2) of those who do realize the scale is not linear, very few understand the difference of a factor of ten in ground motion and 32 in energy release between points on the scale; and (3) even those who understand the first two points have trouble putting a given magnitude value into terms they can relate to. In summary, Mr. Hailey wondered why seismologists can't come up with an earthquake scale that doesn't confuse everyone and that conveys a sense of true relative size. Here, then, is my attempt to construct such a scale.

  18. Foreshock occurrence before large earthquakes

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  19. Automated Determination of Magnitude and Source Length of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.

    2017-12-01

    Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of origin time is still a challenge. Mw is an accurate measure for large earthquakes, but calculating it requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirical relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely used at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude from P-wave displacement and source duration. We instead estimate the source duration with a back-projection technique [Wang et al., 2016] using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways: first, the source duration can be accurately determined by the seismic array; second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events, based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus

  20. Automated Determination of Magnitude and Source Extent of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun

    2017-04-01

    Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of origin time is still a challenge. Mw is an accurate measure for large earthquakes, but calculating it requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirical relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely used at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude from P-wave displacement and source duration. We instead estimate the source duration with a back-projection technique [Wang et al., 2016] using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways: first, the source duration can be accurately determined by the seismic array; second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events, based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus

  1. Large Subduction Earthquake Simulations using Finite Source Modeling and the Offshore-Onshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2016-12-01

    Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with the existing physics-based simulations to improve seismic hazard assessment.

  2. Systematic Observations of the Slip-pulse Properties of Large Earthquake Ruptures

    NASA Astrophysics Data System (ADS)

    Melgar, D.; Hayes, G. P.

    2017-12-01

    In earthquake dynamics there are two end-member models of rupture: propagating cracks and self-healing pulses. These arise from different properties of ruptures and have implications for seismic hazard; rupture mode controls near-field strong ground motions. Past studies favor the pulse-like mode of rupture; however, due to a variety of limitations, it has proven difficult to systematically establish their kinematic properties. Here we synthesize observations from a database of >150 rupture models of earthquakes spanning M7-M9, processed in a uniform manner, and show that the magnitude scaling properties (rise time, pulse width, and peak slip rate) of these slip pulses indicate self-similarity. Self-similarity suggests a weak form of rupture determinism, where early on in the source process broader, higher-amplitude slip pulses will distinguish between events of increasing magnitude. Indeed, by analyzing the moment rate functions we find that large and very large events are statistically distinguishable relatively early (at 15 seconds) in the rupture process. This suggests that with dense regional geophysical networks, strong ground motions from a large rupture can be identified before their onset across the source region.

  3. Analysis of the Seismicity Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2016-12-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether spatial-temporal variations in seismicity encode some information on the magnitude of future earthquakes. For this purpose, and to verify the universality of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Zaliapin (2013) to distinguish triggered and background earthquakes, using nearest-neighbor clustering analysis in a two-dimensional plane defined by rescaled time and space. In particular, we generalize the nearest-neighbor metric to one based on k-nearest-neighbors clustering analysis, which allows us to consider the overall space-time-magnitude distribution of the k earthquakes (k-foreshocks) that anticipate one target event (the mainshock); we then analyze the statistical properties of the clusters identified in this rescaled space. In essence, the main goal of this study is to verify whether different classes of mainshock magnitudes are characterized by distinctive k-foreshock distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.

  4. Large earthquake rupture process variations on the Middle America megathrust

    NASA Astrophysics Data System (ADS)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ~19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along the strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  5. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing, and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that it can capture the major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.

  6. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  7. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters, and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones, and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can

  8. Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling

    NASA Astrophysics Data System (ADS)

    Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea

    2017-12-01

    The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km), and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, given the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties of the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain source parameters as tightly as possible. Seismicity patterns of repeated earthquakes in space, time, and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.

  9. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations in seismicity in the space-time-magnitude domain encode some information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest-neighbors, which allows us to consider the overall space-time-magnitude distribution of the k earthquakes that are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k-nearest-neighbors. In essence, the main goal of this study is to verify whether different classes of target event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
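    The pairwise rescaled nearest-neighbor metric underlying this family of methods can be sketched in a few lines of Python. This is a hedged illustration: the b-value, fractal dimension, and units below are typical textbook choices, not necessarily those used in the study, and the function name is ours.

```python
import math

def nn_distance(t_parent, t_child, r_km, m_parent, b=1.0, d_f=1.6):
    """Space-time-magnitude distance from a candidate parent event to a
    later event, in the style of Baiesi & Paczuski (2004) / Zaliapin:

        eta = dt * r**d_f * 10**(-b * m_parent)

    with dt the interevent time (years), r the epicentral distance (km),
    d_f a fractal dimension of epicenters, and b the Gutenberg-Richter
    b-value. Small eta marks a likely triggered (parent/offspring) pair;
    large eta marks background seismicity.
    """
    dt = t_child - t_parent
    if dt <= 0:
        return math.inf  # a parent must precede its offspring
    return dt * (r_km ** d_f) * 10.0 ** (-b * m_parent)

# A nearby event shortly after an M5 is far "closer" in this metric
# than a distant event a year later, so it would be linked as triggered.
near = nn_distance(0.0, 0.01, 2.0, 5.0)
far = nn_distance(0.0, 1.0, 50.0, 5.0)
assert near < far
```

    The k-nearest-neighbors generalization described in the abstract would keep, for each target event, the k smallest such distances rather than only the minimum.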

  10. The spatial distribution of earthquake stress rotations following large subduction zone earthquakes

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2017-01-01

    Rotations of the principal stress axes due to great subduction zone earthquakes have been used to infer low differential stress and near-complete stress drop. The spatial distribution of coseismic and postseismic stress rotation as a function of depth and along-strike distance is explored for three recent M ≥ 8.8 subduction megathrust earthquakes. In the down-dip direction, the largest coseismic stress rotations are found just above the Moho depth of the overriding plate. This zone has been identified as hosting large patches of large slip in great earthquakes, based on the lack of high-frequency radiated energy. The large continuous slip patches may facilitate near-complete stress drop. There is seismological evidence for high fluid pressures in the subducted slab around the Moho depth of the overriding plate, suggesting low differential stress levels in this zone due to high fluid pressure, also facilitating stress rotations. The coseismic stress rotations have similar along-strike extent as the mainshock rupture. Postseismic stress rotations tend to occur in the same locations as the coseismic stress rotations, probably due to the very low remaining differential stress following the near-complete coseismic stress drop. The spatial complexity of the observed stress changes suggests that an analytical solution for finding the differential stress from the coseismic stress rotation may be overly simplistic, and that modeling of the full spatial distribution of the mainshock static stress changes is necessary.

  11. Numerical Study of Frictional Properties and the Role of Cohesive End-Zones in Large Strike- Slip Earthquakes

    NASA Astrophysics Data System (ADS)

    Lovely, P. J.; Mutlu, O.; Pollard, D. D.

    2007-12-01

    Cohesive end-zones (CEZs) are regions of increased frictional strength and/or cohesion near the peripheries of faults that cause slip distributions to taper toward the fault-tip. Laboratory results, field observations, and theoretical models suggest an important role for CEZs in small-scale fractures and faults; however, their role in crustal-scale faulting and associated large earthquakes is less thoroughly understood. We present a numerical study of the potential role of CEZs on slip distributions in large, multi-segmented, strike-slip earthquake ruptures including the 1992 Landers Earthquake (Mw 7.2) and 1999 Hector Mine Earthquake (Mw 7.1). Displacement discontinuity is calculated using a quasi-static, 2D plane-strain boundary element (BEM) code for a homogeneous, isotropic, linear-elastic material. Friction is implemented by enforcing principles of complementarity. Model results with and without CEZs are compared with slip distributions measured by combined inversion of geodetic, strong ground motion, and teleseismic data. Stepwise and linear distributions of increasing frictional strength within CEZs are considered. The incorporation of CEZs in our model enables an improved match to slip distributions measured by inversion, suggesting that CEZs play a role in governing slip in large, strike-slip earthquakes. Additionally, we present a parametric study highlighting the very great sensitivity of modeled slip magnitude to small variations of the coefficient of friction. This result suggests that, provided a sufficiently well-constrained stress tensor and elastic moduli for the surrounding rock, relatively simple models could provide precise estimates of the magnitude of frictional strength. These results are verified by comparison with geometrically comparable finite element (FEM) models using the commercial code ABAQUS. In FEM models, friction is implemented by use of both Lagrange multipliers and penalty methods.

  12. Scaling Relations of Earthquakes on Inland Active Mega-Fault Systems

    NASA Astrophysics Data System (ADS)

    Murotani, S.; Matsushima, S.; Azuma, T.; Irikura, K.; Kitagawa, S.

    2010-12-01

    Since 2005, the Headquarters for Earthquake Research Promotion (HERP) has been publishing the 'National Seismic Hazard Maps for Japan' to provide useful information for disaster prevention countermeasures for the country and local public agencies, as well as to promote public awareness of earthquake disaster prevention. In the course of making the 2009 version of the map, which commemorates the tenth anniversary of the establishment of the Comprehensive Basic Policy, the methods to evaluate the magnitude of earthquakes, to predict strong ground motion, and to construct underground structure were investigated by the Earthquake Research Committee and its subcommittees. In order to predict the magnitude of earthquakes occurring on mega-fault systems, we examined the scaling relations for mega-fault systems using 11 earthquakes whose source processes were analyzed by waveform inversion and whose surface information was investigated. As a result, we found that the data fall between the scaling relations of seismic moment and rupture area by Somerville et al. (1999) and Irikura and Miyake (2001). We also found that the maximum displacement of surface rupture is two to three times larger than the average slip on the seismic fault, and that the surface fault length is equal to the length of the source fault. Furthermore, the compiled data of the source faults show that displacement saturates at 10 m when the fault length (L) exceeds 100 km. By assuming the fault width (W) to be 18 km, the average for inland earthquakes in Japan, and the displacement to saturate at 10 m for lengths of more than 100 km, we derived a new scaling relation between source area and seismic moment, S [km^2] = 1.0 x 10^-17 M0 [Nm], for mega-fault systems whose seismic moment (M0) exceeds 1.8 x 10^20 Nm.
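    The quoted relation can be checked for internal consistency with a short Python sketch. This is our illustration: the rigidity value is an assumed typical crustal value, not given in the abstract.

```python
def rupture_area_km2(m0_nm):
    """Scaling relation quoted above for mega-fault systems:
    S [km^2] = 1.0e-17 * M0 [N m], valid for M0 > 1.8e20 N m."""
    if m0_nm <= 1.8e20:
        raise ValueError("relation holds only for M0 > 1.8e20 N m")
    return 1.0e-17 * m0_nm

# Internal consistency: S proportional to M0 implies a constant average
# slip D = M0 / (mu * S). With an assumed rigidity mu = 3.3e10 Pa,
# D comes out near 3 m, consistent with the abstract's saturated
# *maximum* surface displacement of 10 m being 2-3 times the average.
mu = 3.3e10                        # Pa, assumed crustal rigidity
m0 = 2.0e20                        # N m, just above the stated bound
s_m2 = rupture_area_km2(m0) * 1e6  # km^2 -> m^2
d_avg = m0 / (mu * s_m2)
print(round(d_avg, 1))             # ~3.0 m average slip
```

    Because the implied average slip is constant, area grows linearly with moment in this regime, which is exactly the saturation behavior the abstract describes for L > 100 km.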

  13. Systematic Detection of Remotely Triggered Seismicity in Africa Following Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Ayorinde, A. O.; Peng, Z.; Yao, D.; Bansal, A. R.

    2016-12-01

    It is well known that large distant earthquakes can trigger micro-earthquakes/tectonic tremor during or immediately following their surface waves. Globally, triggered earthquakes have mostly been found in active plate boundary regions. It is not clear whether they could also occur within the stable intraplate regions of Africa as well as the active East African Rift Zone. In this study we conduct a systematic study of remote triggering in Africa following recent large earthquakes, including the 2004 Mw 9.1 Sumatra and 2012 Mw 8.6 Indian Ocean earthquakes. In particular, the 2012 Indian Ocean earthquake is the largest known strike-slip earthquake and triggered a global increase of earthquakes larger than magnitude 5.5, as well as numerous micro-earthquakes/tectonic tremors around the world. The entire African region was examined for possible remotely triggered seismicity using seismic data downloaded from the Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) and the GFZ German Research Center for Geosciences. We apply a 5-Hz high-pass filter to the continuous waveforms and visually identify high-frequency signals during and immediately after the large-amplitude surface waves. Spectrograms are computed as additional tools to identify triggered seismicity, which we further confirm by statistical analysis comparing the high-frequency signals before and after the distant mainshocks. So far we have identified possible triggered seismicity in Botswana and northern Madagascar. This study could help in understanding dynamic triggering in the diverse tectonic settings of the African continent.
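    The first processing step described above, a 5-Hz high-pass filter that separates local high-frequency bursts from the long-period surface waves of a distant mainshock, can be sketched with SciPy on synthetic data. Only the 5-Hz corner comes from the abstract; the fourth-order zero-phase Butterworth design is our choice.

```python
import numpy as np
from scipy import signal

def highpass_5hz(trace, fs):
    """Zero-phase 5-Hz high-pass (4th-order Butterworth) applied to a
    continuous waveform sampled at fs Hz."""
    sos = signal.butter(4, 5.0, btype="highpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, trace)

# Synthetic record: a large 0.05-Hz "surface wave" plus a small 20-Hz
# burst at t = 30-31 s, mimicking locally triggered seismicity.
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
surface_wave = 100.0 * np.sin(2 * np.pi * 0.05 * t)
burst = np.where((t > 30) & (t < 31), np.sin(2 * np.pi * 20.0 * t), 0.0)
filtered = highpass_5hz(surface_wave + burst, fs)

# The long-period wave is suppressed by orders of magnitude, so the
# burst dominates the filtered trace.
in_burst = np.abs(filtered[(t > 30) & (t < 31)]).max()
quiet = np.abs(filtered[(t > 10) & (t < 25)]).max()
assert in_burst > 10 * quiet
```

    On real data the same filter would leave the triggered micro-earthquakes standing out against a quiet background where, in the raw record, they are buried under surface waves thousands of times larger.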

  14. The characteristic of the building damage from historical large earthquakes in Kyoto

    NASA Astrophysics Data System (ADS)

    Nishiyama, Akihito

    2016-04-01

    The Kyoto city, which is located in the northern part of the Kyoto basin in Japan, has a long history of more than 1,200 years since the city was initially constructed. The city has been a populated area with many buildings and the center of politics, economy, and culture in Japan for nearly 1,000 years. Some of these buildings are now inscribed as World Cultural Heritage sites. The Kyoto city has experienced six damaging large earthquakes during the historical period: in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three earthquakes, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes which occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings, rather than by distance from the estimated source fault. Therefore, it is necessary to consider not only the strength of ground shaking but also the condition of buildings, such as the years elapsed since construction or last repair, in order to estimate seismic intensity distributions from historical earthquakes in Kyoto more accurately and reliably. The obtained seismic intensity map would be helpful for reducing and mitigating disasters from future large earthquakes.

  15. National Earthquake Information Center Seismic Event Detections on Multiple Scales

    NASA Astrophysics Data System (ADS)

    Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.

    2017-12-01

The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real-time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new Bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local-scale), a large aftershock sequence (regional-scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P- and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early-warning, Seism. Res. Lett., 83, 531-540, doi: 10
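The kurtosis picker cited above exploits the fact that background seismic noise is close to Gaussian (excess kurtosis near zero) while a phase onset is impulsive and heavy-tailed, so a sliding-window kurtosis rises sharply at the arrival. A minimal sketch of that idea follows; the function names are illustrative and this is not the Baillard et al. implementation, which adds band filtering and pick refinement:

```python
import numpy as np

def kurtosis_cf(trace, win):
    """Sliding-window excess-kurtosis characteristic function.

    A P-wave onset turns a near-Gaussian noise window into a
    heavy-tailed one, so the CF jumps at the arrival.
    """
    cf = np.zeros(len(trace))
    for i in range(win, len(trace)):
        w = trace[i - win:i]
        m, s = w.mean(), w.std()
        if s > 0:
            cf[i] = np.mean(((w - m) / s) ** 4) - 3.0  # excess kurtosis
    return cf

def pick_onset(trace, win=100):
    """Pick the sample where the kurtosis CF increases fastest."""
    cf = kurtosis_cf(trace, win)
    return int(np.argmax(np.diff(cf)))

# Synthetic example: Gaussian noise with an impulsive arrival at sample 500.
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 1000)
trace[500:520] += 8.0 * rng.normal(0, 1, 20)
print(pick_onset(trace))  # prints an index near sample 500
```

In practice the CF would be computed per frequency band and the pick refined to the start of the kurtosis rise rather than its steepest point.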

  16. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  17. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).
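The time-dependent probabilities that OEF disseminates typically come from clustering models fit to the ongoing sequence. A toy sketch with a modified Omori (Omori-Utsu) aftershock rate — with purely illustrative parameters, not an operational model — shows how the probability of at least one event in a fixed window decays as a sequence ages:

```python
import math

def omori_rate(t_days, K=5.0, c=0.1, p=1.1):
    """Omori-Utsu aftershock rate, events/day.

    Illustrative parameters only; real OEF systems fit K, c, p
    (and ETAS-style productivity terms) to the ongoing sequence.
    """
    return K / (t_days + c) ** p

def prob_one_or_more(t0, t1, n_steps=10000):
    """P(at least one aftershock in [t0, t1] days) for a Poisson
    process with the Omori rate, via numerical integration."""
    dt = (t1 - t0) / n_steps
    lam = sum(omori_rate(t0 + (i + 0.5) * dt) * dt for i in range(n_steps))
    return 1.0 - math.exp(-lam)

# Same 1-day window, taken 1 day vs. 30 days after the mainshock:
print(round(prob_one_or_more(1, 2), 3))    # high shortly after the mainshock
print(round(prob_one_or_more(30, 31), 3))  # much lower a month later
```

The point of the sketch is the contrast between the two windows: identical window lengths, very different probabilities, which is exactly the time dependence OEF aims to communicate.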

  18. Slow Slip and Earthquake Nucleation in Meter-Scale Laboratory Experiments

    NASA Astrophysics Data System (ADS)

    Mclaskey, G.

    2017-12-01

The initiation of dynamic rupture is thought to be preceded by a quasistatic nucleation phase. Observations of recent earthquakes sometimes support this by illuminating slow slip and foreshocks in the vicinity of the eventual hypocenter. I describe laboratory earthquake experiments conducted on two large-scale loading machines at Cornell University that provide insight into the way earthquake nucleation varies with normal stress, healing time, and loading rate. The larger of the two machines accommodates a 3 m long granite sample, and when loaded to 7 MPa stress levels, we observe dynamic rupture events that are preceded by a measurable nucleation zone with dimensions on the order of 1 m. The smaller machine accommodates a 0.76 m sample that is roughly the same size as the nucleation zone. On this machine, small variations in nucleation properties result in measurable differences in slip events, and we generate both dynamic rupture events (> 0.1 m/s slip rates) and slow slip events (~0.001 to 30 mm/s slip rates). Slow events occur when instability cannot fully nucleate before reaching the sample ends. Dynamic events occur after long healing times or abrupt increases in loading rate, which suggests that these factors shrink the spatial and temporal extents of the nucleation zone. Arrays of slip, strain, and ground motion sensors installed on the sample allow us to quantify seismic coupling and study details of premonitory slip and afterslip. The slow slip events we observe are primarily aseismic (less than 1% of the seismic coupling of faster events) and produce swarms of very small M -6 to M -8 events. These mechanical and seismic interactions suggest that faults with transitional behavior—where creep, small earthquakes, and tremor are often observed—could become seismically coupled if loaded rapidly, either by a slow slip front or dynamic rupture of an earthquake that nucleated elsewhere.

  19. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
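A common way to frame the periodic-versus-Poisson question is the coefficient of variation (COV) of interevent times: a Poisson process gives COV near 1, quasi-periodic recurrence gives COV well below 1, and clustering gives COV above 1. A short sketch using hypothetical event dates (for illustration only, not the Wrightwood chronology):

```python
import statistics

def cov_interevent(dates):
    """Coefficient of variation of interevent times.

    Poisson (random) recurrence gives COV ~ 1; quasi-periodic
    recurrence gives COV < 1; clustering gives COV > 1.
    """
    gaps = [b - a for a, b in zip(dates, dates[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# Hypothetical paleoseismic dates (years), for illustration only:
quasi_periodic = [0, 95, 210, 300, 410, 505, 615]
clustered = [0, 10, 25, 300, 310, 620, 640]
print(round(cov_interevent(quasi_periodic), 2))  # well below 1
print(round(cov_interevent(clustered), 2))       # above 1
```

Formal studies like this one use likelihood tests against candidate renewal distributions rather than the COV alone, but the COV captures the basic distinction.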

  20. Rapid Characterization of Large Earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Barrientos, S. E.; Team, C.

    2015-12-01

Chile, along 3000 km of its 4200-km-long coast, is regularly affected by very large earthquakes (up to magnitude 9.5) resulting from the convergence and subduction of the Nazca plate beneath the South American plate. These megathrust earthquakes exhibit long rupture regions reaching several hundreds of km with fault displacements of several tens of meters. Minimum-delay characterization of these giant events to establish their rupture extent and slip distribution is of the utmost importance for rapid estimation of the shaking area and the corresponding evaluation of tsunamigenic potential, particularly when there are only a few minutes to warn the coastal population to take immediate action. The task of rapid evaluation of large earthquakes is accomplished in Chile through a network of sensors being implemented by the National Seismological Center of the University of Chile. The network is mainly composed of approximately one hundred broadband and strong motion instruments and 130 GNSS devices; all will be connected in real time. Forty units have an optional RTX capability, where satellite orbits and clock corrections are sent to the field device, producing a 1-Hz stream at the 4-cm level. Tests are being conducted to stream the real-time raw data to be processed later at the central facility. Hypocentral locations and magnitudes are estimated after a few minutes by automatic processing software based on wave arrivals; for magnitudes less than 7.0 the rapid estimation works within acceptable bounds. For larger events, we are currently developing automatic detectors and amplitude estimators of displacement derived from the real-time GNSS streams. This software has been tested for several cases, showing that, for plate interface events, the minimum magnitude detectability threshold reaches values between 6.2 and 6.5 (1-2 cm coastal displacement), providing an excellent tool for early earthquake characterization from a tsunamigenic perspective.

  1. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    PubMed

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  2. Landscape scale prediction of earthquake-induced landsliding based on seismological and geomorphological parameters.

    NASA Astrophysics Data System (ADS)

    Marc, O.; Hovius, N.; Meunier, P.; Rault, C.

    2017-12-01

In tectonically active areas, earthquakes are an important trigger of landslides, with significant impact on hillslope and river evolution. However, detailed prediction of landslide locations and properties for a given earthquake remains difficult. In contrast, we propose landscape-scale analytical predictions of bulk coseismic landsliding, that is, total landslide area and volume (Marc et al., 2016a) as well as the regional area within which most landslides must be distributed (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and therefore could be implemented in landscape evolution models aiming to engage with erosion dynamics at the scale of the seismic cycle. To assess the model we have compiled and normalized estimates of total landslide volume, total landslide area, and regional area affected by landslides for 40, 17, and 83 earthquakes, respectively. We found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict the landslide areas and associated volumes within a factor of 2 for about 70% of the cases in our databases. The prediction of the regional area affected does not require a calibration for landscape steepness and is within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment. Some of the outliers seem related to exceptional rock-mass strength in the epicentral area, or to shaking duration and other seismic-source complexities ignored by the model. Applications include prediction of the mass balance of earthquakes and

  3. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike slip faults, and is underlain by the currently locked west-dipping subduction interface between the down-going Pacific Plate and the over-riding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we consider a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations from historical large world subduction events translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki Japan earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground

  4. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  5. Lake deposits record evidence of large post-1505 AD earthquakes in western Nepal

    NASA Astrophysics Data System (ADS)

    Ghazoui, Z.; Bertrand, S.; Vanneste, K.; Yokoyama, Y.; Van Der Beek, P.; Nomade, J.; Gajurel, A.

    2016-12-01

According to historical records, the last large earthquake that ruptured the Main Frontal Thrust (MFT) in western Nepal occurred in 1505 AD. Since then, no evidence of other large earthquakes has been found in historical records or geological archives. In view of the catastrophic consequences to millions of inhabitants of Nepal and northern India, intense efforts currently focus on improving our understanding of past earthquake activity and on complementing the historical data on Himalayan earthquakes. Here we report a new record, based on earthquake-triggered turbidites in lakes. We use lake sediment records from Lake Rara, western Nepal, to reconstruct the occurrence of seismic events. The sediment cores were studied using a multi-proxy approach combining radiocarbon and 210Pb chronologies, physical properties (X-ray computerized axial tomography scan, Geotek multi-sensor core logger), high-resolution grain size, inorganic geochemistry (major elements by ITRAX XRF core scanning) and bulk organic geochemistry (C, N concentrations and stable isotopes). We identified several sequences of dense and layered fine sand mainly composed of mica, which we interpret as earthquake-triggered turbidites. Our results suggest the presence of a synchronous event between the two lake sites correlated with the well-known 1505 AD earthquake. In addition, our sediment records reveal five earthquake-triggered turbidites younger than the 1505 AD event. By comparison with historical archives, we relate one of those to the 1833 AD MFT rupture. The others may reflect successive ruptures of the Western Nepal Fault System. Our study sheds light on events that have not been recorded in historical chronicles. Those five MMI>7 earthquakes permit addressing the problem of missing slip on the MFT in western Nepal and reevaluating the risk of a large earthquake affecting western Nepal and North India.

  6. Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing

    2017-12-01

Earthquake source characterization has been significantly sped up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.

  7. The quest for better quality-of-life - learning from large-scale shaking table tests

    NASA Astrophysics Data System (ADS)

    Nakashima, M.; Sato, E.; Nagae, T.; Kunio, F.; Takahito, I.

    2010-12-01

Earthquake engineering has its origins in the practice of “learning from actual earthquakes and earthquake damages.” That is, we recognize serious problems by witnessing the actual damage to our structures, and then we develop and apply engineering solutions to solve these problems. This tradition in earthquake engineering, i.e., “learning from actual damage,” was an obvious engineering response to earthquakes and arose naturally as a practice in a civil and building engineering discipline that traditionally places more emphasis on experience than do other engineering disciplines. But with the rapid progress of urbanization, as society becomes denser, and as the many components that form our society interact with increasing complexity, the potential damage with which earthquakes threaten society also increases. In such an era, the approach of “learning from actual earthquake damages” becomes unacceptably dangerous and expensive. Among the practical alternatives to the old practice is to “learn from quasi-actual earthquake damages.” One tool for experiencing earthquake damages without attendant catastrophe is the large shaking table. E-Defense, the largest one we have, was developed in Japan after the 1995 Hyogoken-Nanbu (Kobe) earthquake. Since its inauguration in 2005, E-Defense has conducted over forty full-scale or large-scale shaking table tests, applied to a variety of structural systems. The tests supply detailed data on actual behavior and collapse of the tested structures, offering the earthquake engineering community opportunities to experience and assess the actual seismic performance of the structures, and to help society prepare for earthquakes. Notably, the data were obtained without having to wait for the aftermaths of actual earthquakes. Earthquake engineering has always been about life safety, but in recent years maintaining the quality of life has also become a critical issue. Quality-of-life concerns include nonstructural

  8. Observational constraints on earthquake source scaling: Understanding the limits in resolution

    USGS Publications Warehouse

    Hough, S.E.

    1996-01-01

I examine the resolution of the type of stress drop estimates that have been used to place observational constraints on the scaling of earthquake source processes. I first show that apparent stress and Brune stress drop are equivalent to within a constant given any source spectral decay between ω^-1.5 and ω^-3 (i.e., any plausible value) and so consistent scaling is expected for the two estimates. I then discuss the resolution and scaling of Brune stress drop estimates, in the context of empirical Green's function results from recent earthquake sequences, including the 1992 Joshua Tree, California, mainshock and its aftershocks. I show that no definitive scaling of stress drop with moment is revealed over the moment range 10^19-10^25; within this sequence, however, there is a tendency for moderate-sized (M 4-5) events to be characterized by high stress drops. However, well-resolved results for recent M > 6 events are inconsistent with any extrapolated stress increase with moment for the aftershocks. Focusing on corner frequency estimates for smaller (M < 3.5) events, I show that resolution is extremely limited even after empirical Green's function deconvolutions. A fundamental limitation to resolution is the paucity of good signal-to-noise at frequencies above 60 Hz, a limitation that will affect nearly all surficial recordings of ground motion in California and many other regions. Thus, while the best available observational results support a constant stress drop for moderate- to large-sized events, very little robust observational evidence exists to constrain the quantities that bear most critically on our understanding of source processes: stress drop values and stress drop scaling for small events.
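The Brune stress drop discussed above is derived from a corner-frequency estimate via the standard circular-source relations: source radius r = k·β/fc (k = 2.34/(2π) for the Brune model) and Δσ = (7/16)·M0/r³. A back-of-the-envelope sketch, assuming a shear-wave speed of 3.5 km/s:

```python
def brune_stress_drop(M0, fc, beta=3500.0, k=0.372):
    """Brune-model stress drop (Pa) from seismic moment M0 (N*m)
    and corner frequency fc (Hz).

    r = k * beta / fc with k = 2.34/(2*pi) ~ 0.372, and
    delta_sigma = (7/16) * M0 / r**3.
    """
    r = k * beta / fc  # source radius, m
    return 7.0 / 16.0 * M0 / r ** 3

# M 4 event (M0 = 10**(1.5*4 + 9.1) N*m) with a 2 Hz corner frequency:
M0 = 10 ** (1.5 * 4.0 + 9.1)
print(round(brune_stress_drop(M0, 2.0) / 1e6, 1), "MPa")  # ~2 MPa
```

The cubic dependence on fc is what makes these estimates so sensitive to corner-frequency resolution: a factor-of-2 error in fc maps to a factor-of-8 error in stress drop, which is exactly the limitation the abstract emphasizes for small events.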

  9. Aftershocks of Chile's Earthquake for an Ongoing, Large-Scale Experimental Evaluation

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea

    2011-01-01

    Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…

  10. Remotely Triggered Earthquakes Recorded by EarthScope's Transportable Array and Regional Seismic Networks: A Case Study Of Four Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Velasco, A. A.; Cerda, I.; Linville, L.; Kilb, D. L.; Pankow, K. L.

    2013-05-01

Changes in field stress required to trigger earthquakes have been classified in two basic ways: static and dynamic triggering. Static triggering occurs when an earthquake that releases accumulated strain along a fault stress-loads a nearby fault. Dynamic triggering occurs when an earthquake is induced by the passing of seismic waves from a large mainshock located at least two or more fault lengths from the epicenter of the main shock. We investigate details of dynamic triggering using data collected from EarthScope's USArray and regional seismic networks located in the United States. Triggered events are identified using an optimized automated detector based on the ratio of short-term to long-term average (Antelope software). Following the automated processing, the flagged waveforms are individually analyzed, in both the time and frequency domains, to determine if the increased detection rates correspond to local earthquakes (i.e., potentially remotely triggered aftershocks). Here, we show results using this automated schema applied to data from four large, but characteristically different, earthquakes -- Chile (Mw 8.8 2010), Tohoku-Oki (Mw 9.0 2011), Baja California (Mw 7.2 2010) and Wells, Nevada (Mw 6.0 2008). For each of our four mainshocks, the number of detections within the 10 hour time windows spans a large range (1 to over 200) and statistically >20% of the waveforms show evidence of anomalous signals following the mainshock. The results will help provide for a better understanding of the physical mechanisms involved in dynamic earthquake triggering and will help identify zones in the continental U.S. that may be more susceptible to dynamic earthquake triggering.
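The short-term-average to long-term-average (STA/LTA) ratio underlying the detector above can be sketched with trailing windows on absolute amplitudes. The window lengths and threshold below are illustrative only, not the Antelope settings used in the study:

```python
import numpy as np

def sta_lta(trace, sta_win, lta_win):
    """Trailing-window STA/LTA ratio (both windows end at the
    current sample), computed on absolute amplitudes via cumsum."""
    x = np.abs(trace)
    c = np.concatenate(([0.0], np.cumsum(x)))
    i = np.arange(lta_win, len(x) + 1)
    sta = (c[i] - c[i - sta_win]) / sta_win
    lta = (c[i] - c[i - lta_win]) / lta_win
    ratio = np.zeros(len(x))
    ratio[lta_win - 1:] = sta / np.maximum(lta, 1e-12)
    return ratio

# Synthetic trace: noise with a burst of "triggered" energy at sample 3000.
rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 6000)
trace[3000:3200] += rng.normal(0, 6, 200)
ratio = sta_lta(trace, sta_win=50, lta_win=1000)
trigger = np.flatnonzero(ratio > 3.0)
print(trigger[0])  # first sample above threshold, shortly after 3000
```

A short STA window reacts quickly to the incoming energy while the long LTA window tracks the slowly varying noise level; the declared detections are then screened manually, as the abstract describes.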

  11. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  12. Laboratory generated M -6 earthquakes

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
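To put the M -7 to M -5.5 sizes above in perspective, the standard moment-magnitude relation and the circular-crack stress-drop formula connect a magnitude, a seismic moment, and a source dimension. Assuming a mid-range 3 MPa stress drop (within the 1-10 MPa reported) recovers the mm-scale patch size; the helper names are illustrative:

```python
import math

def moment_to_mw(M0):
    """Moment magnitude from seismic moment M0 in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

def mw_to_moment(Mw):
    """Inverse relation: seismic moment in N*m for a given Mw."""
    return 10 ** (1.5 * Mw + 9.1)

def patch_radius(M0, stress_drop):
    """Circular-crack radius (m) implied by M0 = (16/7) * stress_drop * r**3."""
    return (7.0 * M0 / (16.0 * stress_drop)) ** (1.0 / 3.0)

# An M -6.5 lab event carries a fraction of a newton-meter of moment...
M0 = mw_to_moment(-6.5)
print("%.2e N*m" % M0)  # ~0.22 N*m
# ...and, at an assumed 3 MPa stress drop, implies a mm-scale slip patch:
print(round(patch_radius(M0, 3e6) * 1000, 1), "mm")
```

That the same relations hold from M -7 laboratory events to natural earthquakes is precisely the scaling consistency the abstract reports.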

  13. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

The geometry of faults is subject to a large degree of uncertainty. Because buried structures are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models, based on observable large-scale features. Yet, real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging, requiring optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol allows solving the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale in terms of rupture inherent length scales below which the rupture

  14. Understanding earthquake from the granular physics point of view — Causes of earthquake, earthquake precursors and predictions

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    We treat the Earth's crust and mantle as large-scale discrete matter, based on the principles of granular physics and existing experimental observations. The main outcomes are: a granular model of the structure and movement of the crust and mantle is established; the formation mechanism of the tectonic forces that cause earthquakes, and a model for the propagation of precursory information, are proposed; the properties of seismic precursory information and its relevance to earthquake occurrence are illustrated, and principles for detecting effective seismic precursors are elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena that were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Owing to the discrete nature of the crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors, and prediction, and obtain a new understanding that differs from the traditional seismological viewpoint.

  15. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth's surface, landslides, liquefaction (soils becoming liquid-like during shaking), subsequent fault slip, known as afterslip, and subsequent earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines earthquake hazards to help provide the crucial scientific information that the region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of
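    The 33-percent / 30-year likelihood quoted above can be converted to an equivalent annual rate if one assumes a Poisson (memoryless) occurrence process; this is a simplification for illustration only, as the Working Group's forecast models are more elaborate.

```python
import math

# Assume P(at least one event in 30 yr) = 0.33 under a Poisson process:
# P = 1 - exp(-rate * T)  =>  rate = -ln(1 - P) / T
p30 = 0.33
rate = -math.log(1.0 - p30) / 30.0      # equivalent annual rate
p10 = 1.0 - math.exp(-rate * 10.0)      # implied 10-year probability
print(f"annual rate = {rate:.4f}/yr, 10-yr probability = {p10:.3f}")
```

    The same two-line conversion works for any forecast horizon, which is useful when comparing probabilities quoted over different time windows.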

  16. Insights in Low Frequency Earthquake Source Processes from Observations of Their Size-Duration Scaling

    NASA Astrophysics Data System (ADS)

    Farge, G.; Shapiro, N.; Frank, W.; Mercury, N.; Vilotte, J. P.

    2017-12-01

    Low frequency earthquakes (LFE) are detected in association with volcanic and tectonic tremor signals as impulsive, repeated, low frequency (1-5 Hz) events originating from localized sources. While the mechanism causing this depletion of the high-frequency content of their signal is still unknown, this feature may indicate that the source processes at the origin of LFE differ from those of regular earthquakes. Tectonic LFE are often associated with slip instabilities in the brittle-ductile transition zones of active faults, and volcanic LFE with fluid transport in magmatic and hydrothermal systems. Key constraints on the LFE-generating physical mechanisms can be obtained by establishing scaling laws between their sizes and durations. We apply a simple spectral analysis method to the S-waveforms of each LFE to retrieve its seismic moment and corner frequency. The former characterizes the earthquake's size, while the latter is inversely proportional to its duration. First, we analyze a selection of tectonic LFE from the Mexican "Sweet Spot" (Guerrero, Mexico). We find characteristic values of M ˜ 10^13 N·m (Mw ˜ 2.6) and fc ˜ 2 Hz. The moment-corner frequency distribution, compared with values reported in previous studies in tectonic contexts, is consistent with the scaling law suggested by Bostock et al. (2015): fc ˜ M^(-1/10). We then apply the same source-parameter determination method to deep volcanic LFE detected in the Klyuchevskoy volcanic group in Kamchatka, Russia. While the seismic moments for these earthquakes are slightly smaller, they still approximately follow the fc ˜ M^(-1/10) scaling. This size-duration scaling observed for LFE is very different from the one established for regular earthquakes (fc ˜ M^(-1/3)) and from the scaling more recently suggested by Ide et al. (2007) for the broad class of "slow earthquakes". The scaling observed for LFE suggests that they are generated by sources of nearly constant size with strongly varying intensities
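    The two scaling laws contrasted above can be compared numerically. This sketch anchors both power laws (as an assumption) at the characteristic LFE values M ˜ 10^13 N·m and fc ˜ 2 Hz quoted in the abstract.

```python
import numpy as np

M0_ref, fc_ref = 1e13, 2.0                       # anchor values from the abstract
M0 = np.logspace(11, 15, 5)                      # seismic moments around the anchor
fc_lfe = fc_ref * (M0 / M0_ref) ** (-1 / 10)     # LFE scaling (Bostock et al., 2015)
fc_reg = fc_ref * (M0 / M0_ref) ** (-1 / 3)      # regular-earthquake scaling
for m, a, b in zip(M0, fc_lfe, fc_reg):
    print(f"M0={m:.0e} N·m  fc(LFE)={a:5.2f} Hz  fc(regular)={b:6.2f} Hz")
```

    Over four decades of moment, the LFE corner frequency changes by less than a factor of three, while the regular-earthquake scaling changes it by over an order of magnitude; this is the "nearly constant source size" observation.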

  17. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes vary significantly in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions over the long and short term. The preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach to earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings.
The approach described is different from the usual

  18. Simulating Earthquakes for Science and Society: New Earthquake Visualizations Ideal for Use in Science Communication

    NASA Astrophysics Data System (ADS)

    de Groot, R. M.; Benthien, M. L.

    2006-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Such visualizations are becoming pervasive in the teaching and learning of concepts related to Earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7 -- the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), visualization should in theory make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  19. Earthquakes in the Laboratory: Continuum-Granular Interactions

    NASA Astrophysics Data System (ADS)

    Ecke, Robert; Geller, Drew; Ward, Carl; Backhaus, Scott

    2013-03-01

    Earthquakes in nature feature large tectonic plate motion at scales of 10-100 km and local properties of the earth at the scale of the rupture width, of the order of meters. Fault gouge often fills the gap between the large slipping plates and may play an important role in the nature and dynamics of earthquake events. We have constructed a laboratory-scale experiment that represents a similitude-scale model of this general earthquake description. Two photo-elastic plates (50 cm x 25 cm x 1 cm) confine approximately 3000 bi-disperse nylon rods (diameters 0.12 and 0.16 cm, height 1 cm) in a gap of approximately 1 cm. The plates are held rigidly along their outer edges, with one edge held fixed while the other is driven at constant speed over a range of about 5 cm. The local stresses exerted on the plates are measured using their photo-elastic response; the local relative motions of the plates, i.e., the local strains, are determined from the relative motion of small ball bearings attached to the top surface; and the configurations of the nylon rods are investigated using particle-tracking tools. We find that this system has properties similar to real earthquakes, and we are exploring these ``lab-quake'' events with the quantitative tools we have developed.

  20. Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales

    NASA Astrophysics Data System (ADS)

    Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni

    2015-04-01

    If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this makes possible the concept of 'retrospective seismology', whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. With the benefit of hindsight about earthquake locations or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km [1]. Here we present results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that the spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to approximately those receivers that provide a stationary-phase contribution to the interferometric integrals.
We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer

  1. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change |Δτ| ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
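    The Omori-law rate decay invoked above can be sketched with the modified Omori form n(t) = K / (t + c)^p. The parameters K, c, and p below are illustrative, not values from the study (which reports only that the decay lasts ~7-11 years).

```python
import numpy as np

def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Modified Omori law: triggered-event rate at time t (years)."""
    return K / (t + c) ** p

t = np.linspace(0.0, 10.0, 1001)            # years after the main shock
rate = omori_rate(t)
# Trapezoidal integral; for p = 1 the analytic count is K * ln((t + c) / c)
N = float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t)))
print(f"events in 10 yr: numeric {N:.1f}, analytic {100 * np.log(10.1 / 0.1):.1f}")
```

    The heavy-tailed 1/t decay is why triggered events remain above background rates for years: half of the 10-year total here arrives after only a few months.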

  2. Relationship between large slip area and static stress drop of aftershocks of inland earthquake :Example of the 2007 Noto Hanto earthquake

    NASA Astrophysics Data System (ADS)

    Urano, S.; Hiramatsu, Y.; Yamada, T.

    2013-12-01

    The 2007 Noto Hanto earthquake (MJMA 6.9; hereafter referred to as the main shock) occurred at 0:41 (UTC) on March 25, 2007 at a depth of 11 km beneath the west coast of Noto Peninsula, central Japan. The dominant slip of the main shock was on a reverse fault with a right-lateral component, and the large slip area extended from the hypocenter to the shallow part of the fault plane (Horikawa, 2008). The aftershocks are distributed not only in the small slip area but also in the large slip area (Hiramatsu et al., 2011). In this study, we estimate the static stress drops of aftershocks on the fault plane of the main shock. We discuss the relationship between the static stress drops of the aftershocks and the large slip area of the main shock by investigating the spatial pattern of the static stress drop values. We use the waveform data obtained by the group for the joint aftershock observations of the 2007 Noto Hanto Earthquake (Sakai et al., 2007). The sampling frequency of the waveform data is 100 Hz or 200 Hz. Focusing on similar aftershocks reported by Hiramatsu et al. (2011), we analyze static stress drops using the empirical Green's function (EGF) method (Hough, 1997) as follows. The smallest earthquake (MJMA ≥ 2.0) of each group of similar earthquakes is taken as the EGF earthquake, and the largest earthquake (MJMA ≥ 2.5) as the target earthquake. We then deconvolve the waveform of the target earthquake with that of the EGF earthquake at each station and obtain the spectral ratio of the sources, which cancels the propagation effects (path and site effects). Following the procedure of Yamada et al. (2010), we finally estimate static stress drops for P- and S-waves from the corner frequencies of the spectral ratio, using the model of Madariaga (1976). The estimated average static stress drop is 8.2 ± 1.3 MPa (8.6 ± 2.2 MPa for P-waves and 7.8 ± 1.3 MPa for S-waves). These values are approximately consistent with the static stress drops of aftershocks of other
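    The final step of the procedure above, turning a corner frequency into a static stress drop, can be sketched with a circular-crack model in the spirit of Madariaga (1976): Δσ = (7/16) M0 / r³ with source radius r = k β / fc. The moment, corner frequency, shear-wave speed β, and constant k below are illustrative assumptions, not values from the study.

```python
def stress_drop(M0, fc, beta=3500.0, k=0.21):
    """Static stress drop (Pa) from moment (N·m) and S-wave corner
    frequency (Hz), circular-crack model with Madariaga-style k ~ 0.21."""
    r = k * beta / fc                   # source radius (m)
    return (7.0 / 16.0) * M0 / r ** 3   # static stress drop (Pa)

M0 = 2.0e13        # N·m, roughly an Mw ~ 2.8 aftershock (assumed)
fc = 7.0           # Hz, S-wave corner frequency (assumed)
print(f"stress drop = {stress_drop(M0, fc) / 1e6:.2f} MPa")
```

    Because the radius enters cubed, a 10% error in fc or in the assumed rupture-velocity constant k propagates to a ~30% error in stress drop, which is why such estimates carry sizeable uncertainties.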

  3. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, Peter J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  4. Global Omori law decay of triggered earthquakes: large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, Tom

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7–11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  5. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    NASA Astrophysics Data System (ADS)

    Parsons, Tom

    2002-09-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ˜39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ˜7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  6. The Quanzhou large earthquake: environment impact and deep process

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Gao*, R.; Ye, Z.; Wang, C.

    2017-12-01

    The Quanzhou earthquake is the largest historical earthquake on China's southeast coast. The ancient city of Quanzhou and its adjacent areas suffered serious damage. Analysis of the impact of the Quanzhou earthquake on human activities, the ecological environment, and social development provides an example for research on environment-human interaction. According to historical records, on the night of December 29, 1604, a Ms 8.0 earthquake occurred in the sea east of Quanzhou (25.0°N, 119.5°E) with a focal depth of 25 kilometers. Its effects extended up to 220 kilometers from the epicenter and caused serious damage. Quanzhou, known as one of the world's largest trade ports during the Song and Yuan periods, was heavily damaged by this earthquake. The destruction of the ancient city was severe and widespread: city walls collapsed in Putian, Nanan, Tongan, and other places, and the East and West Towers of Kaiyuan Temple, famous in history for their magnificent architecture, were seriously damaged. An enormous earthquake can therefore exert devastating effects on human activities and social development. It is estimated that an earthquake of more than Ms 5.0 in the economically developed coastal areas of China can directly cause economic losses of more than one hundred million yuan. This devastating large earthquake that severely destroyed the city of Quanzhou was triggered in a tectonically extensional setting. In this coastal area of Fujian Province, the crust gradually thins eastward from inland to coast (less than 29 km thick beneath the coast), the lithosphere is also rather thin (60-70 km), and the Poisson's ratio of the crust appears relatively high. The historical Quanzhou earthquake was probably associated with the NE-striking Littoral Fault Zone, which is characterized by right-lateral slip and exhibits the most active seismicity in the coastal area of Fujian. Meanwhile, tectonic

  7. Global Instrumental Seismic Catalog: earthquake relocations for 1900-present

    NASA Astrophysics Data System (ADS)

    Villasenor, A.; Engdahl, E.; Storchak, D. A.; Bondar, I.

    2010-12-01

    We present the current status of our efforts to produce a set of homogeneous earthquake locations and improved focal depths towards the compilation of a Global Catalog of instrumentally recorded earthquakes that will be complete down to the lowest magnitude threshold possible on a global scale and for the time period considered. This project is currently being carried out under the auspices of GEM (Global Earthquake Model). The resulting earthquake catalog will be a fundamental dataset not only for earthquake risk modeling and assessment on a global scale, but also for a large number of studies such as global and regional seismotectonics; the rupture zones and return time of large, damaging earthquakes; the spatial-temporal pattern of moment release along seismic zones and faults etc. Our current goal is to re-locate all earthquakes with available station arrival data using the following magnitude thresholds: M5.5 for 1964-present, M6.25 for 1918-1963, M7.5 (complemented with significant events in continental regions) for 1900-1917. Phase arrival time data for earthquakes after 1963 are available in digital form from the International Seismological Centre (ISC). For earthquakes in the time period 1918-1963, phase data is obtained by scanning the printed International Seismological Summary (ISS) bulletins and applying optical character recognition routines. For earlier earthquakes we will collect phase data from individual station bulletins. We will illustrate some of the most significant results of this relocation effort, including aftershock distributions for large earthquakes, systematic differences in epicenter and depth with respect to previous location, examples of grossly mislocated events, etc.

  8. Introduction and Overview: Counseling Psychologists' Roles, Training, and Research Contributions to Large-Scale Disasters

    ERIC Educational Resources Information Center

    Jacobs, Sue C.; Leach, Mark M.; Gerstein, Lawrence H.

    2011-01-01

    Counseling psychologists have responded to many disasters, including the Haiti earthquake, the 2001 terrorist attacks in the United States, and Hurricane Katrina. However, as a profession, their responses have been localized and nonsystematic. In this first of four articles in this contribution, "Counseling Psychology and Large-Scale Disasters,…

  9. Foreshock patterns preceding large earthquakes in the subduction zone of Chile

    NASA Astrophysics Data System (ADS)

    Minadakis, George; Papadopoulos, Gerassimos A.

    2016-04-01

    Some of the largest earthquakes in the world occur in the subduction zone of Chile. It is therefore of particular interest to investigate the foreshock patterns preceding such earthquakes. Foreshocks in Chile were recognized as early as 1960. In fact, the giant (Mw 9.5) earthquake of 22 May 1960, the largest ever instrumentally recorded, was preceded by 45 foreshocks in the 33 h before the mainshock, while 250 aftershocks were recorded in the 33 h after it. Four foreshocks were bigger than magnitude 7.0, including a magnitude 7.9 on May 21 that caused severe damage in the Concepcion area. More recently, Brodsky and Lay (2014) and Bedford et al. (2015) reported on foreshock activity before the 1 April 2014 large earthquake (Mw 8.2). However, 3-D foreshock patterns in space, time, and size have not so far been studied in depth. Since such studies require good seismic catalogues, we have investigated 3-D foreshock patterns only before the recent, very large mainshocks of 27 February 2010 (Mw 8.8), 1 April 2014 (Mw 8.2), and 16 September 2015 (Mw 8.4). Although our analysis does not depend on an a priori definition of short-term foreshocks, our interest focuses on the short-term time frame, that is, the last 5-6 months before the mainshock. The analysis of the 2014 event showed an excellent foreshock sequence consisting of an early, weak foreshock stage lasting about 1.8 months and a main, strong precursory foreshock stage that evolved in the last 18 days before the mainshock. During the strong foreshock period the seismicity concentrated around the mainshock epicenter in a critical area of about 65 km, mainly along the trench domain to the south of the mainshock epicenter. At the same time, the activity rate increased dramatically, the b-value dropped, and the mean magnitude increased significantly, while the level of seismic energy released also increased.
In view of these highly significant seismicity

  10. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    NASA Astrophysics Data System (ADS)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue, these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, i.e. the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events that cross a specified threshold value. The Pareto, Truncated Pareto, and Tapered Pareto are special cases of the generalised Pareto family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study we consider the Himalaya, whose orogeny generates large earthquakes and which is one of the most seismically active zones of the world. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. The estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution describes seismicity in the seismic source zones better than the other distributions considered in the present study.
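    The tapered Pareto distribution named above can be sketched via its survival (exceedance) function for seismic moment, S(M) = (Mt / M)^β · exp((Mt − M) / Mc): a power law tapered by an exponential roll-off at a corner moment Mc. The threshold Mt, index β, and corner Mc below are illustrative values, not the study's Himalayan estimates.

```python
import math

def tapered_pareto_sf(M, Mt=10 ** 17.0, beta=0.65, Mc=10 ** 21.0):
    """P(moment > M) for a tapered Pareto law above threshold Mt (N·m)."""
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

for Mw in (6.0, 7.0, 8.0, 9.0):
    M = 10 ** (1.5 * Mw + 9.1)          # moment (N·m) from moment magnitude
    print(f"Mw {Mw}: P(exceed) = {tapered_pareto_sf(M):.3e}")
```

    Below the corner moment the taper is negligible and the law behaves like a plain Pareto; far above it, the exponential term suppresses the probability of extreme events, which is exactly the tail behaviour the abstract says classical distributions fail to capture.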

  11. Testing for scale-invariance in extreme events, with application to earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.

    2009-04-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes, for example, ‘characteristic'? Do they ‘know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case, what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in the Earth and environmental sciences. Using frequency data, however, introduces a number of problems in data analysis. The inevitably small number of data points for extreme events, and more generally the non-Gaussian statistical properties, strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias into the best-fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converges to a central limit only very slowly, due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of the convergence properties to be mapped non-linearly onto a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample-bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic'-looking extreme-event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball' fits unconsciously (but wrongly in
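    The least-squares bias discussed above is commonly avoided with a maximum-likelihood b-value estimate in the style of Aki (1965), b = log10(e) / (mean(m) − m_min). A minimal sketch on synthetic Gutenberg-Richter magnitudes (the true b-value, threshold, and sample size are assumed for illustration):

```python
import math
import random

random.seed(42)
b_true, m_min = 1.0, 3.0
beta = b_true * math.log(10)                     # GR law as exponential in (m - m_min)
mags = [m_min + random.expovariate(beta) for _ in range(10000)]

# Aki (1965) maximum-likelihood estimator
b_hat = math.log10(math.e) / (sum(mags) / len(mags) - m_min)
print(f"estimated b = {b_hat:.3f}")
```

    Unlike a least-squares fit to binned frequencies, this estimator weights every event equally and is unbiased for large samples, though it remains sensitive to the choice of completeness threshold m_min.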

  12. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    NASA Astrophysics Data System (ADS)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further reinforced through both hands-on lab experiments and computer-based exercises. Lab experiments include building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations using a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  13. S-net project: Construction of large scale seafloor observatory network for tsunamis and earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Kanazawa, T.; Uehira, K.; Shimbo, T.; Shiomi, K.; Kunugi, T.; Aoi, S.; Matsumoto, T.; Sekiguchi, S.; Yamamoto, N.; Takahashi, N.; Shinohara, M.; Yamada, T.

    2016-12-01

    National Research Institute for Earth Science and Disaster Resilience (NIED) has launched a project to construct an observatory network for tsunamis and earthquakes on the seafloor. The observatory network was named "S-net, Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench". The S-net consists of 150 seafloor observatories connected in line by submarine optical cables, with a total cable length of about 5,700 km. The S-net system extends along the Kuril and Japan trenches around the Japanese islands from north to south, covering the area between southeast off the island of Hokkaido and off the Boso Peninsula, Chiba Prefecture. The project has been financially supported by MEXT Japan. An observatory package is 34 cm in diameter and 226 cm long. Each observatory is equipped with two high-sensitivity water-depth sensors serving as tsunami meters and four sets of three-component seismometers. The water-depth sensors have sub-centimeter measurement resolution. The combination of multiple seismometers secures the wide dynamic range and robustness of observation needed for early earthquake warning. The S-net is composed of six segment networks, each consisting of about 25 observatories and 800-1,600 km of submarine optical cable. Five of the six segment networks, all except the one covering the outer-rise area of the Japan Trench, have already been installed. The data from the observatories on those five segment networks are being transferred to the data center at NIED in real time, and verification of data integrity is currently being carried out. Installation of the last segment network of the S-net, the outer-rise one, is scheduled to be finished within FY2016. Full-scale operation of the S-net will start in FY2017. We will report on the construction and operation of the S-net submarine cable system as well as an outline of the obtained data in this presentation.

  14. What Googling Trends Tell Us About Public Interest in Earthquakes

    NASA Astrophysics Data System (ADS)

    Tan, Y. J.; Maharjan, R.

    2017-12-01

    Previous studies have shown that immediately after large earthquakes, there is a period of increased public interest. This represents a window of opportunity for science communication and disaster relief fundraising efforts to reach more people. However, how public interest varies for different earthquakes has not been quantified systematically on a global scale. We analyze how global search interest for the term "earthquake" on Google varies following earthquakes of magnitude ≥ 5.5 from 2004 to 2016. We find that there is a spike in search interest after large earthquakes followed by an exponential temporal decay. Preliminary results suggest that the period of increased search interest scales with death toll and correlates with the period of increased media coverage. This suggests that the relationship between the period of increased public interest in earthquakes and death toll might be an effect of differences in media coverage. However, public interest never remains elevated for more than three weeks. Therefore, to take advantage of this short period of increased public interest, science communication and disaster relief fundraising efforts have to act promptly following devastating earthquakes.

  15. Why and Where do Large Shallow Slab Earthquakes Occur?

    NASA Astrophysics Data System (ADS)

    Seno, T.; Yoshida, M.

    2001-12-01

    Within a shallow portion (20-60 km depth) of subducting slabs, it has been believed that large earthquakes seldom occur because the differential stress is generally expected to be low between bending at the trench-outer rise and unbending at intermediate depth. However, there are several regions in which large (M >= 7.0) earthquakes, including three events early in this year, have occurred in this portion. Searching for such events in published individual studies and the Harvard University centroid moment tensor catalogue, we find nineteen events in eastern Hokkaido, Kyushu-SW Japan, Mariana, Manila, Sumatra, Vanuatu, Chile, Peru, El Salvador, Mexico, and Cascadia. Slab stresses revealed by the mechanism solutions of those large events and smaller events are tensional in the slab dip direction. However, the ages of the subducting oceanic plates are generally young, which rules out slab pull as a cause. Except for Manila and Sumatra, the stresses in the overriding plates are characterized by a change in σHmax direction from arc-parallel in the back-arc to arc-perpendicular in the fore-arc, which implies that a horizontal stress gradient exists in the across-arc direction. Peru and Chile, where the back-arc is compressional, can be categorized into this type, because a horizontal stress gradient exists over the continent from tension in the east to compression in the west. In these regions, it is expected that mantle drag forces operating beneath the upper plates drive the upper plates trenchward, overriding the subducting oceanic plates. Assuming that the mantle drag forces beneath the upper plates originate from mantle convection currents or upwelling plumes, we infer that the upper plates driven by the convection suck the oceanic plates, putting the shallow portion of the slabs in extra tension and thus resulting in the large shallow slab earthquakes in this tectonic regime.

  16. A systematic investigation into b values prior to coming large earthquakes

    NASA Astrophysics Data System (ADS)

    Nanjo, K.; Yoshida, A.

    2017-12-01

    The Gutenberg-Richter law for the frequency-magnitude distribution of earthquakes is now well established in seismology. The b value, the slope of the distribution, is supposed to reflect heterogeneity of the seismogenic region (e.g. Mogi 1962) and the development of interplate coupling in subduction zones (e.g. Nanjo et al., 2012; Tormann et al. 2015). In the laboratory as well as in the Earth's crust, the b value is known to be inversely dependent on differential stress (Scholz 1968, 2015). In this context, the b value could serve as a stress meter to help locate asperities, the highly stressed patches in fault planes where large rupture energy is released (e.g. Schorlemmer & Wiemer 2005). However, it still remains uncertain whether the b values of events prior to coming large earthquakes are always significantly low. To clarify this issue, we conducted a systematic investigation into b values prior to large earthquakes in the Japanese Mainland. Since no physical definition of mainshock, foreshock, and aftershock is known, we simply investigated b values of the events with magnitudes larger than the lower-cutoff magnitude, Mc, prior to earthquakes equal to or larger than a threshold magnitude, Mth, where Mth>Mc. Schorlemmer et al. (2005) showed that the b value for different fault types differs significantly, which is supposed to reflect the feature that the fracture stress depends on fault type. Therefore, we classified fault motions into normal, strike-slip, and thrust types based on the mechanism solutions of earthquakes, and computed b values of events associated with each fault motion separately. We found that the target events (M≥Mth) and the events that occurred prior to the target events both show a common systematic change in b: normal faulting events have the highest b values, thrust events the lowest, and strike-slip events intermediate values. Moreover, we found that the b values for the prior events (M≥Mc) are significantly lower than the b values for the
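Analyses like this one commonly estimate b with Aki's (1965) maximum-likelihood formula, with Utsu's correction for binned magnitudes. A sketch with an invented catalog grouped by mechanism; the magnitude values are hypothetical, chosen only to mimic the reported ordering (normal highest, thrust lowest):

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b value, with Utsu's correction dm/2
    for magnitudes reported in bins of width dm."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Hypothetical catalog grouped by mechanism (illustrative values only)
catalog = {
    "normal":      [3.1, 3.4, 3.2, 4.0, 3.6, 3.3, 3.8, 3.5],
    "strike-slip": [3.2, 3.9, 4.4, 3.5, 4.8, 3.7, 4.1, 3.6],
    "thrust":      [3.3, 4.6, 5.2, 3.8, 5.9, 4.3, 4.9, 3.7],
}
for mech, mags in catalog.items():
    print(f"{mech:11s} b = {b_value(mags, mc=3.0):.2f}")
```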

  17. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    USGS Publications Warehouse

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

    The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes that have occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large earthquake rates would be grossly underestimated by extrapolating small earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes.
Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude
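The underestimation from extrapolating small-earthquake rates across the kink can be illustrated with a bilinear Gutenberg-Richter model. The a value and kink magnitude below are illustrative placeholders, not the paper's fitted values; only the two b slopes come from the abstract:

```python
import numpy as np

def log_rate_kinked(m, a=4.0, m_kink=5.0, b_lo=1.0, b_hi=0.6):
    """log10 annual rate of events >= m for a bilinear ("kinked")
    Gutenberg-Richter distribution; a and m_kink are illustrative."""
    m = np.asarray(m, dtype=float)
    below = a - b_lo * m
    above = a - b_lo * m_kink - b_hi * (m - m_kink)  # continuous at the kink
    return np.where(m <= m_kink, below, above)

m = 7.5
kinked = float(10 ** log_rate_kinked(m))
extrap = 10 ** (4.0 - 1.0 * m)  # extrapolating the small-earthquake slope
print(f"M>={m}: kinked rate {kinked:.2e}/yr vs extrapolated {extrap:.2e}/yr "
      f"(factor {kinked / extrap:.0f})")
```

With these numbers the extrapolated slope undercounts M ≥ 7.5 events by a factor of ten, which is the abstract's point about needing long catalogs to pin down large-earthquake rates.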

  18. Evidence for earthquake triggering of large landslides in coastal Oregon, USA

    USGS Publications Warehouse

    Schulz, W.H.; Galloway, S.L.; Higgins, J.D.

    2012-01-01

    Landslides are ubiquitous along the Oregon coast. Many are large, deep slides in sedimentary rock and are dormant or active only during the rainy season. Morphology, observed movement rates, and total movement suggest that many are at least several hundreds of years old. The offshore Cascadia subduction zone produces great earthquakes every 300–500 years that generate tsunami that inundate the coast within minutes. Many slides and slide-prone areas underlie tsunami evacuation and emergency response routes. We evaluated the likelihood of existing and future large rockslides being triggered by pore-water pressure increase or earthquake-induced ground motion using field observations and modeling of three typical slides. Monitoring for 2–9 years indicated that the rockslides reactivate when pore pressures exceed readily identifiable levels. Measurements of total movement and observed movement rates suggest that two of the rockslides are 296–336 years old (the third could not be dated). The most recent great Cascadia earthquake was M 9.0 and occurred during January 1700, while regional climatological conditions have been stable for at least the past 600 years. Hence, the estimated ages of the slides support earthquake ground motion as their triggering mechanism. Limit-equilibrium slope-stability modeling suggests that increased pore-water pressures could not trigger formation of the observed slides, even when accompanied by progressive strength loss. Modeling suggests that ground accelerations comparable to those recorded at geologically similar sites during the M 9.0, 11 March 2011 Japan Trench subduction-zone earthquake would trigger formation of the rockslides. Displacement modeling following the Newmark approach suggests that the rockslides would move only centimeters upon coseismic formation; however, coseismic reactivation of existing rockslides would involve meters of displacement. Our findings provide better understanding of the dynamic coastal bluff
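The Newmark approach mentioned above treats the landslide as a rigid block that slides only while ground acceleration exceeds a critical (yield) acceleration, with sliding continuing until the relative velocity returns to zero. A simplified one-directional sketch using a synthetic ground motion, not the study's records or parameters:

```python
import numpy as np

def newmark_displacement(acc, dt, a_crit):
    """Rigid-block Newmark sliding: integrate (a - a_crit) while the
    block is moving or the ground acceleration exceeds the yield level.
    One-directional sketch (no back-sliding)."""
    v = 0.0  # relative (sliding) velocity, m/s
    d = 0.0  # cumulative displacement, m
    for a in acc:
        if v > 0.0 or a > a_crit:
            v += (a - a_crit) * dt
            v = max(v, 0.0)  # block cannot slide backward in this sketch
            d += v * dt
    return d

# Synthetic ground motion: 5 s of a decaying 2 Hz sinusoid (illustrative)
dt = 0.005
t = np.arange(0.0, 5.0, dt)
acc = 4.0 * np.exp(-0.5 * t) * np.sin(2 * np.pi * 2.0 * t)  # m/s^2

print(f"Permanent displacement: {newmark_displacement(acc, dt, a_crit=1.0):.3f} m")
```

Raising the critical acceleration (a stronger slope) shortens each sliding episode and reduces the permanent displacement, which is the trade-off the displacement modeling in the study exploits.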

  19. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    NASA Astrophysics Data System (ADS)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground because of the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
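The assemble-then-filter idea can be sketched with a hard spectral mask in the 2-D Fourier domain. The authors' actual mosaicking and filter design are more involved; everything below (grid, pixel size, cutoff) is illustrative:

```python
import numpy as np

def lowpass_2d(img, dx_km, cutoff_km):
    """Keep only horizontal wavelengths longer than cutoff_km using a
    hard circular mask in the 2-D Fourier domain (illustrative sketch)."""
    kx = np.fft.fftfreq(img.shape[1], d=dx_km)  # cycles per km
    ky = np.fft.fftfreq(img.shape[0], d=dx_km)
    k = np.hypot(*np.meshgrid(kx, ky))
    mask = k <= 1.0 / cutoff_km                 # pass wavelengths > cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# Synthetic "assembled mosaic": 300-km waves plus 30-km small-scale ripples
dx = 5.0                                        # km per pixel (assumed)
y, x = np.mgrid[0:256, 0:256] * dx
img = np.sin(2 * np.pi * x / 300.0) + 0.5 * np.sin(2 * np.pi * x / 30.0)

filtered = lowpass_2d(img, dx, cutoff_km=100.0)  # 30-km ripples removed
```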

  20. The Richter scale: its development and use for determining earthquake source parameters

    USGS Publications Warehouse

    Boore, D.M.

    1989-01-01

    The ML scale, introduced by Richter in 1935, is the antecedent of every magnitude scale in use today. The scale is defined such that a magnitude-3 earthquake recorded on a Wood-Anderson torsion seismometer at a distance of 100 km would write a record with a peak excursion of 1 mm. To be useful, some means are needed to correct recordings to the standard distance of 100 km. Richter provides a table of correction values, which he terms -log Ao, the latest of which is contained in his 1958 textbook. A new analysis of over 9000 readings from almost 1000 earthquakes in the southern California region was recently completed to redetermine the -log Ao values. Although some systematic differences were found between this analysis and Richter's values (such that using Richter's values would lead to under- and overestimates of ML at distances less than 40 km and greater than 200 km, respectively), the accuracy of his values is remarkable in view of the small number of data used in their determination. Richter's corrections for the distance attenuation of the peak amplitudes on Wood-Anderson seismographs apply only to the southern California region, of course, and should not be used in other areas without first checking to make sure that they are applicable. Often in the past this has not been done, but recently a number of papers have been published determining the corrections for other areas. If there are significant differences in the attenuation within 100 km between regions, then the definition of the magnitude at 100 km could lead to difficulty in comparing the sizes of earthquakes in various parts of the world. To alleviate this, it is proposed that the scale be defined such that a magnitude 3 corresponds to 10 mm of motion at 17 km. This is consistent both with Richter's definition of ML at 100 km and with the newly determined distance corrections in the southern California region. 
Aside from the obvious (and original) use as a means of cataloguing earthquakes according
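Operationally, ML reduces to adding a distance correction -log A0(r) to the logarithm of the Wood-Anderson peak amplitude. A sketch with a small correction table: only the 100 km entry (1 mm gives ML 3) and the proposed 17 km anchor (10 mm gives ML 3) are taken from the abstract; the other table values are placeholders, and real -log A0 tables are region-specific, as the abstract stresses:

```python
import numpy as np

# Hypothetical -log A0 distance corrections (only the 17 km and 100 km
# entries are anchored by the abstract; the rest are placeholders)
neg_log_a0 = {10: 1.5, 17: 2.0, 50: 2.6, 100: 3.0, 200: 3.5}

def local_magnitude(amp_mm, r_km):
    """ML = log10(A) + (-log A0(r)) for a Wood-Anderson peak amplitude
    A (mm) at distance r; linearly interpolates the sample table."""
    r = sorted(neg_log_a0)
    corr = np.interp(r_km, r, [neg_log_a0[k] for k in r])
    return np.log10(amp_mm) + corr

# Check against the scale's anchor: 1 mm at 100 km should give ML 3
print(local_magnitude(1.0, 100.0))  # prints 3.0
```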

  1. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (ratio of radiated energy over seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
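The spectral shape described here, flat below f1, f^-1 between the corners, and f^-2 above f2, can be written as a product of two single-corner terms. This is a sketch of the shape only, not the authors' fitted parameterization, and the corner frequencies below are invented:

```python
import numpy as np

def double_corner_spectrum(f, m0, f1, f2):
    """Moment-rate spectrum with flat level m0 below f1, f^-1 falloff
    between f1 and f2, and f^-2 above f2 (shape sketch only)."""
    return m0 / ((1.0 + f / f1) * (1.0 + f / f2))

f = np.logspace(-3, 1, 500)          # Hz
spec = double_corner_spectrum(f, m0=1.0, f1=0.02, f2=0.5)

# Local log-log slope: ~0 below f1, ~-1 between corners, ~-2 above f2
slope = np.gradient(np.log10(spec), np.log10(f))
```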

  2. Seismicity in the source areas of the 1896 and 1933 Sanriku earthquakes and implications for large near-trench earthquake faults

    NASA Astrophysics Data System (ADS)

    Obana, Koichiro; Nakamura, Yasuyuki; Fujie, Gou; Kodaira, Shuichi; Kaiho, Yuka; Yamamoto, Yojiro; Miura, Seiichi

    2018-03-01

    characterized by an aseismic region landward of the trench axis. Spatial heterogeneity of seismicity and crustal structure might indicate the near-trench faults that could lead to future hazardous events such as the 1896 and 1933 Sanriku earthquakes, and should be taken into account in assessment of tsunami hazards related to large near-trench earthquakes.

  3. Spatiotemporal seismic velocity change in the Earth's subsurface associated with large earthquake: contribution of strong ground motion and crustal deformation

    NASA Astrophysics Data System (ADS)

    Sawazaki, K.

    2016-12-01

    It is well known that the seismic velocity of the subsurface medium changes after a large earthquake. The cause of the velocity change is roughly attributed to strong ground motion (dynamic strain change), crustal deformation (static strain change), and fracturing around the fault zone. Several studies have revealed that the velocity reduction, down to several percent, concentrates at depths shallower than several hundred meters. The amount of velocity reduction correlates well with the intensity of strong ground motion, which indicates that the strong motion is the primary cause of the velocity reduction. Although some studies have proposed contributions of coseismic static strain change and fracturing around the fault zone to the velocity change, separating their contributions from the site-related velocity change is usually difficult. Velocity recovery after a large earthquake is also widely observed. The recovery process is generally proportional to the logarithm of the lapse time, which is similar to the behavior of "slow dynamics" recognized in laboratory experiments. The time scale of the recovery is usually months to years in field observations, while it is several hours in laboratory experiments. Although the factors that control the recovery speed are not well understood, cumulative strain change due to post-seismic deformation, migration of underground water, and mechanical and chemical reactions on crack surfaces could be candidates. In this study, I summarize several observations that revealed the spatiotemporal distribution of seismic velocity change due to large earthquakes; in particular, I focus on the case of the M9.0 2011 Tohoku earthquake. Combining seismograms of Hi-net (high-sensitivity) and KiK-net (strong motion), geodetic records of GEONET, and seafloor GPS/acoustic ranging, I investigate the contributions of strong ground motion and crustal deformation to the velocity change associated with the Tohoku earthquake, and propose a gross view of

  4. Modeling Seismic Cycles of Great Megathrust Earthquakes Across the Scales With Focus at Postseismic Phase

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan V.; Muldashev, Iskander A.

    2017-12-01

    Subduction is a substantially multiscale process in which stresses are built by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by multiple subsequent relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process from a 1 min to a million-year time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time-step algorithm, recreates the deformation process as observed naturally during single and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or a day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench, and not by afterslip localized at the fault as is currently believed. Our model replicates centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, like the Great Chile 1960 Earthquake.
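Rate-and-state friction, one ingredient of the model, can be sketched with the Dieterich aging law in a classic velocity-step test. The parameter values are illustrative textbook numbers, not the paper's:

```python
import numpy as np

def aging_law_step(theta, v, dc, dt):
    """Dieterich aging law, dtheta/dt = 1 - v*theta/dc, integrated
    exactly over dt for constant slip rate v."""
    ss = dc / v                            # steady-state theta
    return ss + (theta - ss) * np.exp(-v * dt / dc)

def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=0.01):
    """Rate-and-state friction coefficient (illustrative parameters)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

# Velocity-step experiment: steady sliding at v0, then a 10x step in rate
v1, dc = 1e-5, 0.01
theta = dc / 1e-6                          # steady state at v0 = 1e-6 m/s
mu = [friction(1e-6, theta)]
for _ in range(20000):
    theta = aging_law_step(theta, v1, dc, dt=0.5)
    mu.append(friction(v1, theta))
# Direct effect: friction jumps up at the step, then, because b > a
# (velocity weakening), relaxes to a lower steady-state value
```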

  5. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
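A schematic version of the departure-time measurement: find when an onset deviates from a reference "similar growth" curve beyond a relative tolerance. The study's actual criterion (band-pass-filtered absolute displacement, binned data) differs; the curves and tolerance below are invented for illustration:

```python
import numpy as np

def departure_time(disp, ref, t, tol=0.10):
    """Time at which disp departs from a reference "similar growth"
    curve by more than a relative tolerance (schematic definition)."""
    rel = np.abs(disp - ref) / np.maximum(np.abs(ref), 1e-12)
    idx = np.argmax(rel > tol)
    return t[idx] if rel[idx] > tol else None

# Two synthetic onsets sharing early ~t^2 growth, the larger event
# departing upward after t = 1 s (illustrative)
t = np.linspace(0.01, 5.0, 500)
ref = t ** 2
big = t ** 2 * (1.0 + 0.5 * np.clip(t - 1.0, 0.0, None))
print(departure_time(big, ref, t))
```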

  6. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    PubMed

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models.

  8. Principles for selecting earthquake motions in engineering design of large dams

    USGS Publications Warehouse

    Krinitzsky, E.L.; Marcuson, William F.

    1983-01-01

    This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and earthquakes having magnitudes above and below 7. The material for this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable to other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions depends on which of two types of engineering analysis is performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map, whereas a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong-motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as an enormous spread in the available data. Limited data must be projected and their spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives best results. Each part of the site investigation requires a number of decisions. In some cases, the decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful. 
For example, peak motions at

  9. Foreshock occurrence rates before large earthquakes worldwide

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to those measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, registered a high foreshock rate, while a minority, located in continental thrust belts, measured a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical of continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.

  10. Unusually large earthquakes inferred from tsunami deposits along the Kuril trench

    USGS Publications Warehouse

    Nanayama, F.; Satake, K.; Furukawa, R.; Shimokawa, K.; Atwater, B.F.; Shigeno, K.; Yamaki, S.

    2003-01-01

    The Pacific plate converges with northeastern Eurasia at a rate of 8-9 m per century along the Kamchatka, Kuril and Japan trenches. Along the southern Kuril trench, which faces the Japanese island of Hokkaido, this fast subduction has recurrently generated earthquakes with magnitudes of up to ~8 over the past two centuries. These historical events, on rupture segments 100-200 km long, have been considered characteristic of Hokkaido's plate-boundary earthquakes. But here we use deposits of prehistoric tsunamis to infer the infrequent occurrence of larger earthquakes generated from longer ruptures. Many of these tsunami deposits form sheets of sand that extend kilometres inland from the deposits of historical tsunamis. Stratigraphic series of extensive sand sheets, intercalated with dated volcanic-ash layers, show that such unusually large tsunamis occurred about every 500 years on average over the past 2,000-7,000 years, most recently ~350 years ago. Numerical simulations of these tsunamis are best explained by earthquakes that individually rupture multiple segments along the southern Kuril trench. We infer that such multi-segment earthquakes persistently recur among a larger number of single-segment events.

  11. Infrasound associated with 2004-2005 large Sumatra earthquakes and tsunami

    NASA Astrophysics Data System (ADS)

    Le Pichon, A.; Herry, P.; Mialle, P.; Vergoz, J.; Brachet, N.; Garcés, M.; Drob, D.; Ceranna, L.

    2005-10-01

    Large earthquakes that occurred in the Sumatra region in 2004 and 2005 generated acoustic waves recorded by the Diego Garcia infrasound array. Progressive Multi-Channel Correlation (PMCC) analysis was performed to detect the seismic and infrasound signals associated with these events. The study is complemented by an inverse location procedure that permitted reconstruction of the source location of the infrasonic waves. The results show that ground motion near the epicenter and vibrations of nearby land masses efficiently produced infrasound. The analysis also reveals unique evidence of long-period pressure waves from the tsunami earthquake (M9.0) of December 26, 2004.

  12. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  13. Seismic Strong Motion Array Project (SSMAP) to Record Future Large Earthquakes in the Nicoya Peninsula area, Costa Rica

    NASA Astrophysics Data System (ADS)

    Simila, G.; Lafromboise, E.; McNally, K.; Quintereo, R.; Segura, J.

    2007-12-01

    The seismic strong motion array project (SSMAP) for the Nicoya Peninsula in northwestern Costa Rica is composed of 10-13 sites including Geotech A900/A800 accelerographs (three-component), Ref-Teks (three-component velocity), and Kinemetrics EpiSensors. The main objectives of the array are to: 1) record and locate strong subduction zone mainshocks [and foreshocks, "early aftershocks", and preshocks] in the Nicoya Peninsula, at the entrance of the Nicoya Gulf, and in the Papagayo Gulf regions of Costa Rica, and 2) record and locate any moderate to strong upper plate earthquakes triggered by a large subduction zone earthquake in the above regions. Our digital accelerograph array has been deployed as part of our ongoing research on large earthquakes in conjunction with the Earthquake and Volcano Observatory (OVSICORI) at the Universidad Nacional in Costa Rica. The countrywide seismographic network has been operating continuously since the 1980's, with the first earthquake bulletin published more than 20 years ago, in 1984. The recording of seismicity and strong motion data for large earthquakes along the Middle America Trench (MAT) has been a major research priority over these years, and this network spans nearly half the time of a "repeat cycle" (~50 years) for large (Ms ~ 7.5-7.7) earthquakes beneath the Nicoya Peninsula, with the last event in 1950. Our longtime co-collaborators include the seismology group at OVSICORI, with coordination for this project by Dr. Ronnie Quintero and Mr. Juan Segura. The major goal of our project is to contribute unique scientific information pertaining to a large subduction zone earthquake and its related seismic activity when the next large earthquake occurs in Nicoya. We are now collecting a database of strong motion records for moderate-sized events to document this last stage prior to the next large earthquake.
A recent event (08/18/06; M=4.3) located 20 km northwest of Samara was recorded by two stations (Playa Carrillo

  14. Fault Branching and Long-Term Earthquake Rupture Scenario for Strike-Slip Earthquake

    NASA Astrophysics Data System (ADS)

    Klinger, Y.; CHOI, J. H.; Vallage, A.

    2017-12-01

    Careful examination of surface rupture for large continental strike-slip earthquakes reveals that for the majority of earthquakes, at least one major branch is involved in the rupture pattern. Often, branching might be either related to the location of the epicenter or located toward the end of the rupture, and possibly related to the stopping of the rupture. In this work, we examine large continental earthquakes that show significant branches at different scales and for which the ground surface rupture has been mapped in great detail. In each case, rupture conditions are described, including dynamic parameters, past earthquake history, and regional stress orientation, to see if the dynamic stress field would a priori favor branching. In one case we show that rupture propagation and branching are directly impacted by preexisting geological structures. These structures serve as pathways for the rupture attempting to propagate out of its shear plane. At larger scale, we show that in some cases, rupturing a branch might be systematic, hampering possibilities for the development of a larger seismic rupture. Long-term geomorphology hints at the existence of a strong asperity in the zone where the rupture branched off the main fault. There, no evidence of throughgoing rupture could be seen along the main fault, while the branch is well connected to the main fault. This set of observations suggests that for specific configurations, some rupture scenarios involving systematic branching are more likely than others.

  15. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The information on the fault rupture location is a group of points where the stations are located. However, for practical purposes, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the methodology of NS/FS classification to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the
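    The logistic discriminant quoted in the abstract can be sketched directly. A minimal Python version, assuming Za and Hv carry the units used in the original Yamada et al. (2007) study:

    ```python
    import math

    def near_source_probability(za: float, hv: float) -> float:
        """Probability that a station lies in the near-source region
        (fault rupture distance < 10 km), using the empirical logistic
        discriminant quoted in the abstract.

        za: peak vertical acceleration, hv: peak horizontal velocity
        (units as defined in the original study).
        """
        f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
        return 1.0 / (1.0 + math.exp(-f))
    ```

    Each station contributes one such probability, so strong shaking (large Za and Hv) drives P toward 1, and the mapped set of high-probability stations outlines the rupture extent.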

  16. Detection of large prehistoric earthquakes in the pacific northwest by microfossil analysis.

    PubMed

    Mathewes, R W; Clague, J J

    1994-04-29

    Geologic and palynological evidence for rapid sea level change approximately 3400 and approximately 2000 carbon-14 years ago (3600 and 1900 calendar years ago) has been found at sites up to 110 kilometers apart in southwestern British Columbia. Submergence on southern Vancouver Island and slight emergence on the mainland during the older event are consistent with a great (magnitude M ≥ 8) earthquake on the Cascadia subduction zone. The younger event is characterized by submergence throughout the region and may also record a plate-boundary earthquake or a very large crustal or intraplate earthquake. Microfossil analysis can detect small amounts of coseismic uplift and subsidence that leave little or no lithostratigraphic signature.

  17. Using earthquake-triggered landslides as a hillslope-scale shear strength test: Insights into rock strength properties at geomorphically relevant spatial scales in high-relief, tectonically active settings

    NASA Astrophysics Data System (ADS)

    Gallen, Sean; Clark, Marin; Godt, Jonathan; Lowe, Katherine

    2016-04-01

    The material strength of rock is known to be a fundamental property in setting landscape form and geomorphic process rates, as it acts to modulate feedbacks between earth surface processes, tectonics, and climate. Despite the long recognition of its importance in landscape evolution, a quantitative understanding of the role of rock strength in geomorphic processes lags our knowledge of the influence of tectonics and climate. This gap stems largely from the fact that it remains challenging to quantify rock strength at the hillslope scale. Rock strength is strongly scale dependent because the number, size, spacing, and aperture of fractures set the upper limit on rock strength, making it difficult to extrapolate laboratory measurements to landscape-scale interpretations. Here we present a method to determine near-surface rock strength at the hillslope scale, relying on earthquake-triggered landslides as a regional-scale "shear strength" test. We define near-surface strength as the average strength of the rock sampled by the landslides, which is typically < 10 m deep. Based on a Newmark sliding block model, which approximates slope stability during an earthquake assuming a material with frictional and cohesive strength, we developed a coseismic landslide model that is capable of reproducing statistical characteristics of the distribution of earthquake-triggered landslides. We present results from two well-documented case studies of earthquakes that caused widespread mass wasting: the 2008 Mw 7.9 Wenchuan Earthquake, Sichuan Province, China, and the 1994 Mw 6.8 Northridge Earthquake, CA, USA. We show how this model can be used to determine near-surface rock strength and reproduce mapped landslide patterns provided the spatial distributions of local hillslope gradient, earthquake peak ground acceleration (PGA), and coseismic landsliding are well constrained. Results suggest that near-surface rock strength in these tectonically active settings is much lower than that

  18. Earthquake Hazard and Risk Assessment based on Unified Scaling Law for Earthquakes: Altai-Sayan Region

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.; Nekrasova, A.

    2017-12-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship by making use of the naturally fractal distribution of earthquake sources of different size in a seismic region. The USLE is the empirical relationship log10 N(M, L) = A + B·(5 - M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use parameters A, B, and C of the USLE to estimate, first, the expected maximum credible magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macro-seismic intensity). After rigorous testing against the available seismic evidence from the past (usually, the observed instrumental PGA or the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructures (e.g., those based on a census of population, buildings inventory, etc.). This USLE-based methodology of seismic hazard and risk assessment is applied to the territory of the Altai-Sayan region of Russia. The study was supported by the Russian Science Foundation, Grant No. 15-17-30020.
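    Exponentiating the USLE gives the expected annual rate directly. A minimal sketch; the parameter values in the test below are illustrative placeholders, not the regional estimates from the study:

    ```python
    import math

    def usle_annual_rate(M: float, L: float, A: float, B: float, C: float) -> float:
        """Expected annual number of earthquakes of magnitude M within a
        seismically prone area of linear dimension L, from the USLE:
        log10 N(M, L) = A + B*(5 - M) + C*log10(L)."""
        return 10.0 ** (A + B * (5.0 - M) + C * math.log10(L))
    ```

    With B near 1 the relation reduces to the familiar Gutenberg-Richter tenfold drop in rate per magnitude unit, while C captures how the count of fractal-distributed sources grows with the linear size L of the area considered.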

  19. Rupture process of large earthquakes in the northern Mexico subduction zone

    NASA Astrophysics Data System (ADS)

    Ruff, Larry J.; Miller, Angus D.

    1994-03-01

    The Cocos plate subducts beneath North America at the Mexico trench. The northernmost segment of this trench, between the Orozco and Rivera fracture zones, has ruptured in a sequence of five large earthquakes from 1973 to 1985; the Jan. 30, 1973 Colima event ( M s 7.5) at the northern end of the segment near Rivera fracture zone; the Mar. 14, 1979 Petatlan event ( M s 7.6) at the southern end of the segment on the Orozco fracture zone; the Oct. 25, 1981 Playa Azul event ( M s 7.3) in the middle of the Michoacan “gap”; the Sept. 19, 1985 Michoacan mainshock ( M s 8.1); and the Sept. 21, 1985 Michoacan aftershock ( M s 7.6) that reruptured part of the Petatlan zone. Body wave inversion for the rupture process of these earthquakes finds the best: earthquake depth; focal mechanism; overall source time function; and seismic moment, for each earthquake. In addition, we have determined spatial concentrations of seismic moment release for the Colima earthquake, and the Michoacan mainshock and aftershock. These spatial concentrations of slip are interpreted as asperities; and the resultant asperity distribution for Mexico is compared to other subduction zones. The body wave inversion technique also determines the Moment Tensor Rate Functions; but there is no evidence for statistically significant changes in the moment tensor during rupture for any of the five earthquakes. An appendix describes the Moment Tensor Rate Functions methodology in detail. The systematic bias between global and regional determinations of epicentral locations in Mexico must be resolved to enable plotting of asperities with aftershocks and geographic features. We have spatially “shifted” all of our results to regional determinations of epicenters. The best point source depths for the five earthquakes are all above 30 km, consistent with the idea that the down-dip edge of the seismogenic plate interface in Mexico is shallow compared to other subduction zones. Consideration of uncertainties in

  20. Estimating Source Duration for Moderate and Large Earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei

    2017-04-01

    Constructing a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard assessment in Taiwan, where earthquakes are quite active. In this study, we used an inversion of teleseismic P-waves to derive the M0-t relationship for the Taiwan region for the first time. Fifteen earthquakes with MW 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process could simultaneously determine source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, by which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 sec, varied as the one-third power of M0. That is, M0 is proportional to t**3, and the relationship between them is M0 = 0.76*10**23 (t)**3, where M0 is in dyne-cm and t in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further test the validity of the derived relationship, we used it to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake with M0 = 2-5*10**27 dyne-cm (corresponding to Mw = 7.5-7.7) to be approximately 29-40 sec, in agreement with many previous studies of source duration (28-42 sec).
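    Inverting the reported M0-t relation for duration reproduces the Chi-Chi check quoted in the abstract. A minimal sketch:

    ```python
    def source_duration(m0_dyne_cm: float) -> float:
        """Source duration t (seconds) from seismic moment M0 (dyne-cm),
        inverting the reported relation M0 = 0.76e23 * t**3."""
        return (m0_dyne_cm / 0.76e23) ** (1.0 / 3.0)
    ```

    For the 1999 Chi-Chi earthquake, plugging in M0 = 2-5 x 10**27 dyne-cm yields durations of roughly 30-40 s, consistent with the 29-40 s range stated above.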

  1. Disaster Metrics: Evaluation of de Boer's Disaster Severity Scale (DSS) Applied to Earthquakes.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; McCord, Caitlin M; Sherak, Raphael A G; Hsu, Edberdt B; Kelen, Gabor D

    2015-02-01

    Quantitative measurement of the medical severity following multiple-casualty events (MCEs) is an important goal in disaster medicine. In 1990, de Boer proposed a 13-point, 7-parameter scale called the Disaster Severity Scale (DSS). Parameters include cause, duration, radius, number of casualties, nature of injuries, rescue time, and effect on the surrounding community. This study aimed to examine the reliability and dimensionality (number of salient themes) of de Boer's DSS through its application to 144 discrete earthquake events. A search for earthquake events was conducted via the National Oceanic and Atmospheric Administration (NOAA) and US Geological Survey (USGS) databases. Two experts in the field of disaster medicine independently reviewed and assigned scores for parameters that had no data readily available (nature of injuries, rescue time, and effect on surrounding community), and differences were reconciled via consensus. Principal Component Analysis was performed using SPSS Statistics for Windows Version 22.0 (IBM Corp; Armonk, New York USA) to evaluate the reliability and dimensionality of the DSS. A total of 144 individual earthquakes from 2003 through 2013 were identified and scored. Of 13 points possible, the mean score was 6.04, the mode = 5, minimum = 4, maximum = 11, and standard deviation = 2.23. Three parameters in the DSS had zero variance (ie, the parameter received the same score in all 144 earthquakes). Because of their zero contribution to variance, these three parameters (cause, duration, and radius) were removed before running the statistical analysis. Cronbach's alpha, a coefficient of internal consistency, for the remaining four parameters was found to be robust at 0.89. Principal Component Analysis showed uni-dimensional characteristics, with only one component having an eigenvalue greater than one, at 3.17. The 4-parameter DSS, however, suffered from restriction of scoring range on both parameter and scale levels. Jan de Boer
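    Cronbach's alpha, the internal-consistency coefficient reported in the abstract, is simple to compute from a score matrix. A minimal sketch with made-up scores (not the study's data, which were processed in SPSS):

    ```python
    def cronbach_alpha(scores):
        """Cronbach's alpha for a list of respondents (here: scored events),
        each a list of k item (parameter) scores."""
        k = len(scores[0])

        def pvar(xs):
            # population variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        # sum of per-item variances vs. variance of the total scores
        item_var_sum = sum(pvar(col) for col in zip(*scores))
        total_var = pvar([sum(row) for row in scores])
        return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
    ```

    Perfectly consistent items (every parameter ranking each event identically) give alpha = 1; the four retained DSS parameters reached 0.89. Note that zero-variance items, like the three dropped parameters, contribute nothing to either variance term, which is why they had to be removed.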

  2. Reactivity of seismicity rate to static Coulomb stress changes of two consecutive large earthquakes in the central Philippines

    NASA Astrophysics Data System (ADS)

    Dianala, J. D. B.; Aurelio, M.; Rimando, J. M.; Taguibao, K.

    2015-12-01

    In a region with little understanding of its active faults and seismicity, two large-magnitude reverse-fault-related earthquakes occurred within 100 km of each other on separate islands of the central Philippines: the Mw=6.7 February 2012 Negros earthquake and the Mw=7.2 October 2013 Bohol earthquake. Based on source faults that were defined using onshore, offshore seismic reflection, and seismicity data, stress transfer models for both earthquakes were calculated using the software Coulomb. Coulomb stress triggering between the two main shocks is unlikely, as the stress change caused by the Negros earthquake on the Bohol fault was -0.03 bar. Correlating the stress changes on optimally oriented reverse faults with seismicity rate changes shows that areas that decreased in both static stress and seismicity rate after the first earthquake were then areas with increased static stress and increased seismicity rate caused by the second earthquake. These areas with now increased stress, especially those where seismicity reacted to the static stress changes caused by the two earthquakes, indicate the presence of active structures on the island of Cebu. Comparing the history of instrumentally recorded seismicity and the recent large earthquakes of Negros and Bohol, these structures in Cebu have the potential to generate large earthquakes. Given that the Philippines' second-largest metropolitan area (Metro Cebu) is in close proximity, detailed analysis of the earthquake potential and seismic hazards in these areas should be undertaken.

  3. Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake

    PubMed Central

    Tsuji, Takeshi; Ishibashi, Jun’ichiro; Ishitsuka, Kazuya; Kamata, Ryuichi

    2017-01-01

    We report horizontal sliding of the kilometre-scale geologic block under the Aso hot springs (Uchinomaki area) caused by vibrations from the 2016 Kumamoto earthquake (Mw 7.0). Direct borehole observations demonstrate the sliding along the horizontal geological formation at ~50 m depth, which is where the shallowest hydrothermal reservoir developed. Owing to >1 m northwest movement of the geologic block, as shown by differential interferometric synthetic aperture radar (DInSAR), extensional open fissures were generated at the southeastern edge of the horizontal sliding block, and compressional deformation and spontaneous fluid emission from wells were observed at the northwestern edge of the block. The temporal and spatial variation of the hot spring supply during the earthquake can be explained by the horizontal sliding and borehole failures. Because there was no strain accumulation around the hot spring area prior to the earthquake and gravitational instability could be ignored, the horizontal sliding along the low-frictional formation was likely caused by seismic forces from the remote earthquake. The insights derived from our field-scale observations may assist further research into geologic block sliding in horizontal geological formations. PMID:28218298

  4. Earthquake hazard and risk assessment based on Unified Scaling Law for Earthquakes: Greater Caucasus and Crimea

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.; Nekrasova, Anastasia K.

    2018-05-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship by making use of the naturally fractal distribution of earthquake sources of different size in a seismic region. The USLE is the empirical relationship log10 N(M, L) = A + B·(5 - M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use parameters A, B, and C of the USLE to estimate, first, the expected maximum magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macro-seismic intensity). After rigorous verification against the available seismic evidence from the past (usually, the observed instrumental PGA or the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructures (e.g., those based on census of population, buildings inventory). The methodology of seismic hazard and risk assessment is illustrated by application to the territory of the Greater Caucasus and Crimea.

  5. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  6. Aftershocks, earthquake effects, and the location of the large 14 December 1872 earthquake near Entiat, central Washington

    USGS Publications Warehouse

    Brocher, Thomas M.; Hopper, Margaret G.; Algermissen, S.T. Ted; Perkins, David M.; Brockman, Stanley R.; Arnold, Edouard P.

    2017-01-01

    Reported aftershock durations, earthquake effects, and other observations from the large 14 December 1872 earthquake in central Washington are consistent with an epicenter near Entiat, Washington. Aftershocks were reported for more than 3 months only near Entiat. Modal intensity data described in this article are consistent with an Entiat area epicenter, where the largest modified Mercalli intensities, VIII, were assigned between Lake Chelan and Wenatchee. Although ground failures and water effects were widespread, there is a concentration of these features along the Columbia River and its tributaries in the Entiat area. Assuming linear ray paths, misfits from 23 reports of the directions of horizontal shaking have a local minimum at Entiat, assuming the reports describe surface waves, but the region having comparable misfit is large. Broadband seismograms recorded for comparable ray paths provide insight into why possible S–P times estimated from felt reports at two locations are several seconds too small to be consistent with an Entiat area epicenter.

  7. Underestimation of Microearthquake Size by the Magnitude Scale of the Japan Meteorological Agency: Influence on Earthquake Statistics

    NASA Astrophysics Data System (ADS)

    Uchide, Takahiko; Imanishi, Kazutoshi

    2018-01-01

    Magnitude scales based on the amplitude of seismic waves, including the Japan Meteorological Agency magnitude scale (Mj), are commonly used in routine processes. The moment magnitude scale (Mw), however, is more physics based and is able to evaluate any type and size of earthquake. This paper addresses the relation between Mj and Mw for microearthquakes. The relative moment magnitudes among earthquakes are well constrained by multiple spectral ratio analyses. The results for the events in the Fukushima Hamadori and northern Ibaraki prefecture areas of Japan imply that Mj is significantly and systematically smaller than Mw for microearthquakes. The Mj-Mw curve has slopes of 1/2 and 1 for small and large values of Mj, respectively; for example, Mj = 1.0 corresponds to Mw = 2.0. A simple numerical simulation implies that this is due to anelastic attenuation and the recording using a finite sampling interval. The underestimation affects earthquake statistics. The completeness magnitude, Mc, for magnitudes lower than which the magnitude-frequency distribution deviates from the Gutenberg-Richter law, is effectively lower for Mw than that for Mj, by taking into account the systematic difference between Mj and Mw. The b values of the Gutenberg-Richter law are larger for Mw than for Mj. As the b values for Mj and Mw are well correlated, qualitative argument using b values is not affected. While the estimated b values for Mj are below 1.5, those for Mw often exceed 1.5. This may affect the physical implication of the seismicity.

  8. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the

  9. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas Fault

    USGS Publications Warehouse

    Shelly, David R.

    2010-01-01

    Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between ~3 and ~6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.

  10. Effects of the March 1964 Alaska earthquake on glaciers: Chapter D in The Alaska earthquake, March 27, 1964: effects on hydrologic regimen

    USGS Publications Warehouse

    Post, Austin

    1967-01-01

    The 1964 Alaska earthquake occurred in a region where there are many hundreds of glaciers, large and small. Aerial photographic investigations indicate that no snow and ice avalanches of large size occurred on glaciers despite the violent shaking. Rockslide avalanches extended onto the glaciers in many localities, seven very large ones occurring in the Copper River region 160 kilometers east of the epicenter. Some of these avalanches traveled several kilometers at low gradients; compressed air may have provided a lubricating layer. If long-term changes in glaciers due to tectonic changes in altitude and slope occur, they will probably be very small. No evidence of large-scale dynamic response of any glacier to earthquake shaking or avalanche loading was found in either the Chugach or Kenai Mountains 16 months after the 1964 earthquake, nor was there any evidence of surges (rapid advances) as postulated by the Earthquake-Advance Theory of Tarr and Martin.

  11. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of

  12. Seismic Strong Motion Array Project (SSMAP) to Record Future Large Earthquakes in the Nicoya Peninsula area, Costa Rica

    NASA Astrophysics Data System (ADS)

    Simila, G.; McNally, K.; Quintero, R.; Segura, J.

    2006-12-01

    The seismic strong motion array project (SSMAP) for the Nicoya Peninsula in northwestern Costa Rica is composed of 10-13 sites including Geotech A900/A800 accelerographs (three-component), Ref-Teks (three-component velocity), and Kinemetric Episensors. The main objectives of the array are to: 1) record and locate strong subduction zone mainshocks [and foreshocks, "early aftershocks", and preshocks] in Nicoya Peninsula, at the entrance of the Nicoya Gulf, and in the Papagayo Gulf regions of Costa Rica, and 2) record and locate any moderate to strong upper plate earthquakes triggered by a large subduction zone earthquake in the above regions. Our digital accelerograph array has been deployed as part of our ongoing research on large earthquakes in conjunction with the Earthquake and Volcano Observatory (OVSICORI) at the Universidad Nacional in Costa Rica. The country-wide seismographic network has been operating continuously since the 1980s, with the first earthquake bulletin published more than 20 years ago, in 1984. The recording of seismicity and strong motion data for large earthquakes along the Middle America Trench (MAT) has been a major research priority over these years, and this network spans nearly half the time of a "repeat cycle" (50 years) for large (Ms 7.5-7.7) earthquakes beneath the Nicoya Peninsula, with the last event in 1950. Our long-time co-collaborators include the seismology group at OVSICORI, with coordination for this project by Dr. Ronnie Quintero and Mr. Juan Segura. Numerous international investigators are also studying this region with GPS and seismic stations (US, Japan, Germany, Switzerland, etc.). Also, various strong motion instruments are operated by local engineers for building purposes, mainly concentrated in the population centers of the Central Valley. 
The major goal of our project is to contribute unique scientific information pertaining to a large subduction zone earthquake and its related seismic activity when

  13. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep postseismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study because a similarly shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.
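The modified loading function described above (uniform loading plus a transient whose rate decays exponentially after the previous large event) can be sketched numerically. The function names and parameter values below are illustrative stand-ins, not the paper's calibrated model:

```python
import math

def loading(t, v=1.0, a=0.3, tau=10.0):
    """Fault loading state: uniform tectonic loading v*t plus a transient
    whose loading rate decays exponentially with time constant tau after
    the last large event (a stand-in for viscoelastic relaxation or deep
    postseismic slip)."""
    return v * t + a * (1.0 - math.exp(-t / tau))

def time_to_failure(threshold, dt=0.001, **kw):
    """First time at which the loading curve reaches the failure threshold."""
    t = 0.0
    while loading(t, **kw) < threshold:
        t += dt
    return t
```

Setting a=0.0 recovers purely uniform loading; with the transient on, the failure threshold is reached earlier in the cycle, which is how postseismic processes raise the effective loading rate after a large event.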

  14. Repetition of large stress drop earthquakes on Wairarapa fault, New Zealand, revealed by LiDAR data

    NASA Astrophysics Data System (ADS)

    Delor, E.; Manighetti, I.; Garambois, S.; Beaupretre, S.; Vitard, C.

    2013-12-01

    We have acquired high-resolution LiDAR topographic data over most of the onland trace of the 120 km-long Wairarapa strike-slip fault, New Zealand. The Wairarapa fault broke in a large earthquake in 1855, and this historical earthquake is suggested to have produced up to 18 m of lateral slip at the ground surface. This would make it a remarkable event, with a stress drop much higher than commonly observed for earthquakes worldwide. The LiDAR data allowed us to examine the ground surface morphology along the fault at < 50 cm resolution, including in the many places covered with vegetation. In doing so, we identified more than 900 alluvial features of various natures and sizes that are clearly laterally offset by the fault. We measured the ~670 clearest lateral offsets, along with their uncertainties. Most offsets are lower than 100 m. Each measurement was weighted by a quality factor that quantifies the confidence level in the correlation of the paired markers. Since the slips are expected to vary along the fault, we analyzed the measurements in short, 3-5 km-long fault segments. The PDF statistical analysis of the cumulative offsets per segment reveals that the alluvial morphology has recorded, at every step along the fault, no more than a few (3-6) distinct cumulative slip values, all lower than 80 m. Plotted along the entire fault, the statistically defined cumulative slip values document four fairly continuous slip profiles that we attribute to the four most recent large earthquakes on the Wairarapa fault. The four slip profiles have a roughly triangular and asymmetric envelope shape that is similar to the coseismic slip distributions described for most large earthquakes worldwide. The four slip profiles have their maximum slip at the same place, in the northeastern third of the fault trace. 
The maximum slips vary from one event to another in the range 7-15 m; the most recent 1855 earthquake produced a maximum coseismic slip

  15. Low frequency (<1Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

    We present the analysis of simulations at low frequency (<1Hz) of historical and hypothetical earthquakes in Central Mexico, by using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding of the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters that Mexico has faced in the last decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates of the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g. Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, and noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006), and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a

  16. A large silent earthquake and the future rupture of the Guerrero seismic gap

    NASA Astrophysics Data System (ADS)

    Kostoglodov, V.; Lowry, A.; Singh, S.; Larson, K.; Santiago, J.; Franco, S.; Bilham, R.

    2003-04-01

    The largest global earthquakes typically occur at subduction zones, at the seismogenic boundary between two colliding tectonic plates. These earthquakes release elastic strains accumulated over many decades of plate motion. Forecasts of these events have large errors resulting from poor knowledge of the seismic cycle. The discovery of slow slip events or "silent earthquakes" in Japan, Alaska, Cascadia and Mexico provides a new glimmer of hope. In these subduction zones, the seismogenic part of the plate interface is loaded not steadily, as hitherto believed, but incrementally, with the slow slip events partitioning the stress buildup. If slow aseismic slip is limited to the region downdip of the future rupture zone, slip events may increase the stress at the base of the seismogenic region, incrementing it closer to failure. However, if some aseismic slip occurs on the future rupture zone, the partitioning may significantly reduce the stress buildup rate (SBR) and delay a future large earthquake. Here we report characteristics of the largest slow earthquake observed to date (Mw 7.5), and its implications for future failure of the Guerrero seismic gap, Mexico. The silent earthquake began in October 2001 and lasted for 6-7 months. Slow slip produced measurable displacements over an area of 550 x 250 km². Average slip on the interface was about 10 cm and the equivalent magnitude, Mw, was 7.5. A shallow subhorizontal configuration of the plate interface in Guerrero is a controlling factor for the physical conditions favorable for such extensive slow slip. The total coupled zone in Guerrero is 120-170 km wide while the seismogenic, shallowest portion is only 50 km. This future rupture zone may slip contemporaneously with the deeper aseismic slip, thereby reducing SBR. The slip partitioning between the seismogenic and transition coupled zones may diminish SBR by up to 50%. 
These two factors are probably responsible for a long (at least since 1911) quiet on the Guerrero seismic gap

  17. Analog earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  18. Methodology to determine the parameters of historical earthquakes in China

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Lin, Guoliang; Zhang, Zhe

    2017-12-01

    China has one of the world's longest cultural traditions and has also suffered very heavy earthquake disasters, so abundant historical earthquake records exist. In this paper, we sketch out historical earthquake sources and research achievements in China. We introduce basic information about the collections of historical earthquake sources, the establishment of an intensity scale, and the editions of historical earthquake catalogues. Spatial-temporal and magnitude distributions of historical earthquakes are analyzed briefly. Besides traditional methods, we also illustrate a new approach to amend the parameters of historical earthquakes or even identify candidate zones for large historical or palaeo-earthquakes. In the new method, a relationship between instrumentally recorded small earthquakes and strong historical earthquakes is built up. The abundant historical earthquake sources and the achievements of historical earthquake research in China are a valuable part of the world's cultural heritage.

  19. Characterization of Aftershock Sequences from Large Strike-Slip Earthquakes Along Geometrically Complex Faults

    NASA Astrophysics Data System (ADS)

    Sexton, E.; Thomas, A.; Delbridge, B. G.

    2017-12-01

    Large earthquakes often exhibit complex slip distributions and occur along non-planar fault geometries, resulting in variable stress changes throughout the region of the fault hosting aftershocks. To better discern the role of geometric discontinuities on aftershock sequences, we compare areas of enhanced and reduced Coulomb failure stress and mean stress for systematic differences in the time dependence and productivity of these aftershock sequences. In strike-slip faults, releasing structures, including stepovers and bends, experience an increase in both Coulomb failure stress and mean stress during an earthquake, promoting fluid diffusion into the region and further failure. Conversely, Coulomb failure stress and mean stress decrease in restraining bends and stepovers in strike-slip faults, and fluids diffuse away from these areas, discouraging failure. We examine spatial differences in seismicity patterns along structurally complex strike-slip faults which have hosted large earthquakes, such as the 1992 Mw 7.3 Landers, the 2010 Mw 7.2 El-Mayor Cucapah, the 2014 Mw 6.0 South Napa, and the 2016 Mw 7.0 Kumamoto events. We characterize the behavior of these aftershock sequences with the Epidemic Type Aftershock-Sequence Model (ETAS). In this statistical model, the total occurrence rate of aftershocks induced by an earthquake is λ(t) = λ_0 + \sum_{i:t_i<t} K_i (t - t_i + c)^{-p}, where λ_0 is the background rate and each prior event i at time t_i contributes a modified-Omori aftershock term.
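The temporal ETAS intensity named above can be evaluated numerically; in the sketch below the parameter values (and the magnitude-dependent productivity used for K_i) are illustrative assumptions, not values fitted to any of these sequences:

```python
import math

def etas_rate(t, events, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.2, m_ref=3.0):
    """ETAS conditional intensity at time t (events per day, illustrative).

    events: list of (t_i, m_i) pairs for earthquakes occurring before t.
    """
    rate = mu  # background rate (lambda_0)
    for t_i, m_i in events:
        if t_i < t:
            # productivity grows exponentially with magnitude;
            # the time decay follows the modified Omori law
            rate += K * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return rate

# an M7 mainshock at t=0 days and an M4.5 aftershock at t=1 day
catalog = [(0.0, 7.0), (1.0, 4.5)]
rate_now = etas_rate(2.0, catalog)
```

As time since the triggering events grows, the total rate decays back toward the background level mu.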

  20. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary. Following past simulator and forecast-model verification methods, we address the central challenge of spatial forecast verification for simulators: simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
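The smoothing idea, spreading each simulated event's rate over the test region with a power-law distance decay, can be sketched as follows. The kernel form and the constants d and q are illustrative assumptions, not the study's calibrated ETAS parameters:

```python
import math

def spatial_weight(r, d=2.0, q=1.8):
    """Power-law decay of an event's rate contribution with epicentral
    distance r (km); d and q are assumed constants, not fitted values."""
    return (1.0 + (r / d) ** 2) ** (-q)

def rate_map(events, grid):
    """Sum the smoothed contribution of every simulated event at each node.

    events: list of (x, y, rate) for simulated earthquakes;
    grid: list of (x, y) test-region nodes; coordinates in km.
    """
    out = []
    for gx, gy in grid:
        total = 0.0
        for ex, ey, rate in events:
            total += rate * spatial_weight(math.hypot(gx - ex, gy - ey))
        out.append(total)
    return out
```

Unlike nearest-neighbor mapping, every grid cell receives a nonzero rate, so observed off-fault epicenters no longer fall in zero-probability cells.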

  1. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with the tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra-Andaman earthquake, the 2010 Maule earthquake in Chile and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is consistent with the well-known relationship between stress and the b-value, and suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
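The b-value of the Gutenberg-Richter relation discussed here is commonly estimated from a catalogue with Aki's maximum-likelihood formula; a minimal sketch, including Utsu's dm/2 correction for magnitude binning (the catalogue values below are made up for illustration):

```python
import math

def b_value(mags, m_c, dm=0.1):
    """Aki maximum-likelihood b-value estimate for magnitudes >= the
    completeness magnitude m_c, with Utsu's dm/2 binning correction:
    b = log10(e) / (mean(M) - m_c + dm/2)."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - m_c + dm / 2.0)

# illustrative catalogue above a completeness magnitude of 3.0
b = b_value([3.1, 3.3, 3.5, 3.9, 4.4], m_c=3.0)
```

A lower b-value means a larger fraction of big events, which is the signature the authors associate with periods of high tidal shear stress.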

  2. The AD 365 earthquake: high resolution tsunami inundation for Crete and full scale simulation exercise

    NASA Astrophysics Data System (ADS)

    Kalligeris, N.; Flouri, E.; Okal, E.; Synolakis, C.

    2012-04-01

    In the eastern Mediterranean, historical and archaeological records document major earthquake and tsunami events in the past 2000 years (Ambraseys and Synolakis, 2010). The 1200 km-long Hellenic Arc has allegedly caused the strongest reported earthquakes and tsunamis in the region. Among them, the AD 365 and AD 1303 tsunamis have been extensively documented. They are likely due to ruptures of the Central and Eastern segments of the Hellenic Arc, respectively. Both events had widespread impact due to ground shaking, and triggered tsunami waves that reportedly affected the entire eastern Mediterranean. The seismic mechanism of the AD 365 earthquake, located in western Crete, has recently been assigned a magnitude ranging from 8.3 to 8.5 by Shaw et al. (2008), using historical, sedimentological, geomorphic and archaeological evidence. Shaw et al. (2008) have inferred that such large earthquakes occur in the Arc every 600 to 800 years, the last known being the AD 1303 event. We report on a full-scale simulation exercise that took place in Crete on 24-25 October 2011, based on a scenario sufficiently large to overwhelm the emergency response capability of Greece, necessitating the invocation of the Monitoring and Information Centre (MIC) of the EU and triggering help from other nations. A repeat of the AD 365 earthquake would likely overwhelm the civil defense capacities of Greece. Immediately following the rupture initiation, it would cause substantial damage even to well-designed reinforced concrete structures in Crete. Minutes after initiation, the tsunami generated by the rapid displacement of the ocean floor would strike nearby coastal areas, inundating great distances in areas of low topography. The objective of the exercise was to help managers plan search and rescue operations, and identify measures useful for inclusion in the coastal resiliency index of Ewing and Synolakis (2011). 
For the scenario design, the tsunami hazard for the AD 365 event was assessed for

  3. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    We present a global survey of the low-frequency (1-21 mHz) source characteristics of large events. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time function of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  4. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. 
In some portions of the simulated earthquake history, events would
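A minimal simulation of the Long-Term Fault Memory idea, in which the failure probability grows with stored strain and each event releases only part of that strain, so memory persists across multiple cycles. All function names, the probability law, and the parameter values are illustrative, not the authors' calibrated model:

```python
import random

def simulate_ltfm(n_steps=100000, rate=1.0, threshold=100.0,
                  release_frac=0.7, seed=1):
    """LTFM sketch: strain accumulates steadily; the chance of an event
    rises steeply with stored strain; each event removes only a fraction
    of the strain (partial, not total, release)."""
    rng = random.Random(seed)
    strain = 0.0
    times = []
    for step in range(n_steps):
        strain += rate
        # failure probability grows steeply with stored strain
        if rng.random() < 0.01 * (strain / threshold) ** 4:
            times.append(step)
            strain *= 1.0 - release_frac  # partial strain release
    return times

times = simulate_ltfm()
intervals = [b - a for a, b in zip(times, times[1:])]
```

Because strain left behind by one event shortens the path to the next, the simulated history develops interval variability beyond what a reset-to-zero cycle model produces.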

  5. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  6. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
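The PDF-stacking step described above can be sketched as follows: each offset measurement contributes a probability density (here an illustrative Gaussian rather than the paper's uniquely shaped cross-correlation PDFs), and peaks in the stacked curve mark displacement values shared by many landforms:

```python
import math

def copd(offsets, sigmas, x_min=0.0, x_max=20.0, dx=0.05):
    """Stack one Gaussian PDF per offset measurement and return the
    summed curve as (x values, stacked density)."""
    n = int((x_max - x_min) / dx) + 1
    xs = [x_min + i * dx for i in range(n)]
    stack = []
    for x in xs:
        s = 0.0
        for mu, sig in zip(offsets, sigmas):
            s += math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2.0 * math.pi))
        stack.append(s)
    return xs, stack

# made-up sample: three offsets near 3.3 m, two near 7.1 m (value, 1-sigma)
xs, stack = copd([3.2, 3.4, 3.3, 7.0, 7.2], [0.3, 0.4, 0.3, 0.5, 0.6])
peak = xs[stack.index(max(stack))]  # single-event displacement estimate
```

With these sample values the highest peak falls near 3.3 m (the single-event cluster), with a secondary peak near 7.1 m, mirroring how successive COPD peaks are read as cumulative displacements from earlier ruptures.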

  7. Surface slip during large Owens Valley earthquakes

    USGS Publications Warehouse

    Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  8. Empirical Scaling Relations of Source Parameters For The Earthquake Swarm 2000 At Novy Kostel (vogtland/nw-bohemia)

    NASA Astrophysics Data System (ADS)

    Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.

    Investigations of the interdependence of different source parameters are an important task for gaining more insight into the mechanics and dynamics of earthquake rupture, for modeling source processes, and for making predictions of ground motion at the surface. These interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm earthquakes, especially for those of the Vogtland/NW-Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc, and rise time, and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ≤ ML ≤ 3.7) and there are also near-field observations at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based principal co-ordinate system, which we determine from the covariance matrices of the P- and S-displacements, respectively. Data preparation, determination of the distinct source parameters, and statistical interpretation of the results are presented. The results are discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.
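    Source radius and stress drop are commonly derived from M0 and fc through the Brune (1970) spectral model; a minimal sketch of that standard approach (not necessarily the authors' specific calibration), assuming an S-wave speed:

```python
def brune_source(M0, fc, beta=3500.0):
    """Brune (1970) source radius [m] and stress drop [Pa] from seismic
    moment M0 [N·m] and corner frequency fc [Hz]; beta = S-wave speed [m/s]."""
    r = 0.37 * beta / fc                       # source radius
    stress_drop = 7.0 * M0 / (16.0 * r ** 3)   # circular-crack stress drop
    return r, stress_drop

# Illustrative ML ~3 swarm event: M0 ~ 4e13 N·m, fc ~ 4 Hz (assumed values)
r, ds = brune_source(4e13, 4.0)
```

For these assumed values the radius is a few hundred meters and the stress drop a fraction of a MPa, typical of small swarm events.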

  9. Constant Stress Drop Fits Earthquake Surface Slip-Length Data

    NASA Astrophysics Data System (ADS)

    Shaw, B. E.

    2011-12-01

    Slip at the surface of the Earth provides a direct window into the earthquake source. A longstanding controversy surrounds the scaling of average surface slip with rupture length, which shows the puzzling feature of continuing to increase with rupture length for lengths many times the seismogenic width. Here we show that a more careful treatment of how ruptures transition from small circular ruptures to large rectangular ruptures combined with an assumption of constant stress drop provides a new scaling law for slip versus length which (1) does an excellent job fitting the data, (2) gives an explanation for the large crossover lengthscale at which slip begins to saturate, and (3) supports constant stress drop scaling which matches that seen for small earthquakes. We additionally discuss how the new scaling can be usefully applied to seismic hazard estimates.
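    The small-rupture end of constant stress-drop scaling, where average slip grows linearly with rupture dimension, can be illustrated with the classical Eshelby circular-crack relation. The paper's full crossover scaling law is not reproduced here; stress drop and rigidity values are illustrative.

```python
import math

def slip_circular(L, stress_drop=3e6, mu=3e10):
    """Average slip [m] for a circular rupture of diameter L [m] under
    constant stress drop [Pa] (Eshelby): u = 16*dsigma*a / (7*pi*mu)."""
    a = L / 2.0  # rupture radius
    return 16.0 * stress_drop * a / (7.0 * math.pi * mu)

# Constant stress drop implies slip proportional to length below the
# crossover scale: doubling L doubles the average slip.
u10 = slip_circular(10e3)   # 10 km rupture
u20 = slip_circular(20e3)   # 20 km rupture
```

Above the crossover lengthscale the abstract's point is precisely that this simple proportionality must be modified by the transition to rectangular rupture geometry.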

  10. Strong Scaling and a Scarcity of Small Earthquakes Point to an Important Role for Thermal Runaway in Intermediate-Depth Earthquake Mechanics

    NASA Astrophysics Data System (ADS)

    Barrett, S. A.; Prieto, G. A.; Beroza, G. C.

    2015-12-01

    There is strong evidence that metamorphic reactions play a role in enabling the rupture of intermediate-depth earthquakes; however, recent studies of the Bucaramanga Nest at a depth of 135-165 km under Colombia indicate that intermediate-depth seismicity shows low radiation efficiency and strong scaling of stress drop with slip/size, which suggests a dramatic weakening process, as proposed in the thermal shear instability model. Decreasing stress drop with slip and low seismic efficiency could have a measurable effect on the magnitude-frequency distribution of small earthquakes by causing them to become undetectable at substantially larger seismic moment than would be the case if stress drop were constant. We explore the population of small earthquakes in the Bucaramanga Nest using an empirical subspace detector to push the detection limit to lower magnitude. Using this approach, we find ~30,000 small, previously uncatalogued earthquakes during a 6-month period in 2013. We calculate magnitudes for these events using their relative amplitudes. Despite the additional detections, we observe a sharp deviation from a Gutenberg-Richter magnitude frequency distribution with a marked deficiency of events at the smallest magnitudes. This scarcity of small earthquakes is not easily ascribed to the detectability threshold; tests of our ability to recover small-magnitude waveforms of Bucaramanga Nest earthquakes in the continuous data indicate that we should be able to detect events reliably at magnitudes that are nearly a full magnitude unit smaller than the smallest earthquakes we observe. The implication is that nearly 100,000 events expected for a Gutenberg-Richter MFD are "missing," and that this scarcity of small earthquakes may provide new support for the thermal runaway mechanism in intermediate-depth earthquake mechanics.
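    The detectability argument rests on Gutenberg-Richter extrapolation: each magnitude unit of additional detection depth should multiply cumulative counts by 10^b. A minimal sketch (the b-value and counts here are illustrative, not the Bucaramanga Nest's):

```python
def gr_expected(n_above, b, dM):
    """Expected cumulative count dM magnitude units below a reference
    threshold, for a Gutenberg-Richter distribution N(>=M) = 10**(a - b*M)."""
    return n_above * 10 ** (b * dM)

# Illustrative: 30,000 events above some cutoff and b = 1.0 would predict
# ten times as many events one magnitude unit lower:
expected = gr_expected(30000, 1.0, 1.0)
```

A sharp shortfall of observed counts relative to this extrapolation, at magnitudes the detector demonstrably resolves, is the "missing events" signal described above.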

  11. Nonlinear ionospheric responses to large-amplitude infrasonic-acoustic waves generated by undersea earthquakes

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.; Komjathy, A.; Verkhoglyadova, O. P.

    2017-02-01

    Numerical models of ionospheric coupling with the neutral atmosphere are used to investigate perturbations of plasma density, vertically integrated total electron content (TEC), neutral velocity, and neutral temperature associated with large-amplitude acoustic waves generated by the initial ocean surface displacements from strong undersea earthquakes. A simplified source model for the 2011 Tohoku earthquake is constructed from estimates of initial ocean surface responses to approximate the vertical motions over realistic spatial and temporal scales. Resulting TEC perturbations from modeling case studies appear consistent with observational data, reproducing pronounced TEC depletions which are shown to be a consequence of the impacts of nonlinear, dissipating acoustic waves. Thermospheric acoustic compressional velocities are ˜±250-300 m/s, superposed with downward flows of similar amplitudes, and temperature perturbations are ˜300 K, while the dominant wave periodicity in the thermosphere is ˜3-4 min. Results capture acoustic wave processes including reflection, onset of resonance, and nonlinear steepening and dissipation—ultimately leading to the formation of ionospheric TEC depletions "holes"—that are consistent with reported observations. Three additional simulations illustrate the dependence of atmospheric acoustic wave and subsequent ionospheric responses on the surface displacement amplitude, which is varied from the Tohoku case study by factors of 1/100, 1/10, and 2. Collectively, results suggest that TEC depletions may only accompany very-large amplitude thermospheric acoustic waves necessary to induce a nonlinear response, here with saturated compressional velocities ˜200-250 m/s generated by sea surface displacements exceeding ˜1 m occurring over a 3 min time period.

  12. Bibliographical search for reliable seismic moments of large earthquakes during 1900-1979 to compute MW in the ISC-GEM Global Instrumental Reference Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Lee, William H. K.; Engdahl, E. Robert

    2015-02-01

    Moment magnitude (MW) determinations from the online GCMT Catalogue of seismic moment tensor solutions (GCMT Catalog, 2011) have provided the bulk of MW values in the ISC-GEM Global Instrumental Reference Earthquake Catalogue (1900-2009) for almost all moderate-to-large earthquakes occurring after 1975. This paper describes an effort to determine MW of large earthquakes that occurred prior to the start of the digital seismograph era, based on credible assessments of thousands of seismic moment (M0) values published in the scientific literature by hundreds of individual authors. MW computed from the published M0 values (for a time period more than twice that of the digital era) are preferable to proxy MW values, especially for earthquakes with MW greater than about 8.5, for which MS is known to be underestimated or "saturated". After examining 1,123 papers, we compile a database of seismic moments and related information for 1,003 earthquakes with published M0 values, of which 967 were included in the ISC-GEM Catalogue. The remaining 36 earthquakes were not included in the Catalogue due to difficulties in their relocation because of inadequate arrival time information. However, 5 of these earthquakes with bibliographic M0 (and thus MW) are included in the Catalogue's Appendix. A search for reliable seismic moments was not successful for earthquakes prior to 1904. For each of the 967 earthquakes a "preferred" seismic moment value (if there is more than one) was selected and its uncertainty was estimated according to the data and method used. We used the IASPEI formula (IASPEI, 2005) to compute direct moment magnitudes (MW[M0]) based on the seismic moments (M0), and assigned their errors based on the uncertainties of M0. From 1900 to 1979, there are 129 great or near great earthquakes (MW ⩾ 7.75) - the bibliographic search provided direct MW values for 86 of these events (or 67%), the GCMT Catalog provided direct MW values for 8 events (or 6%), and the remaining 35
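    The IASPEI (2005) conversion from seismic moment to moment magnitude used above is, with M0 in N·m:

```python
import math

def mw_from_m0(M0):
    """IASPEI (2005) moment magnitude from seismic moment M0 [N·m]:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

# Example: a great earthquake with M0 = 1.0e21 N·m
mw = mw_from_m0(1.0e21)
```

Because Mw is computed directly from M0, it does not saturate the way MS does above Mw ~8.5, which is the motivation for the bibliographic search.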

  13. Building Inventory Database on the Urban Scale Using GIS for Earthquake Risk Assessment

    NASA Astrophysics Data System (ADS)

    Kaplan, O.; Avdan, U.; Guney, Y.; Helvaci, C.

    2016-12-01

    The majority of existing buildings in most developing countries are not safe against earthquakes. Before a devastating earthquake strikes, existing buildings need to be assessed and the vulnerable ones identified. Determining the seismic performance of existing buildings, which usually involves collecting the attributes of the buildings, performing the analyses and necessary queries, and producing result maps, is a hard and complicated procedure that can be simplified with a Geographic Information System (GIS). The aim of this study is to produce a building inventory database using GIS for assessing the earthquake risk of existing buildings. In this paper, a building inventory database for 310 buildings located in Eskisehir, Turkey, was produced in order to assess the earthquake risk of the buildings. The results from this study show that 26% of the buildings have high earthquake risk, 33% have medium earthquake risk, and 41% have low earthquake risk. The produced building inventory database can be very useful, especially for governments, in dealing with the problem of identifying seismically vulnerable buildings in large existing building stocks. With the help of such methods, identifying the buildings that may collapse and cause loss of life and property during a possible future earthquake will be quick, cheap, and reliable.

  14. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
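    An empirical fatality model of the general kind described, a country-specific fatality rate expressed as a lognormal function of shaking intensity and summed over the exposed population, can be sketched as follows. The functional form follows published PAGER-related work; the parameter values and exposure table are hypothetical, not calibrated rates.

```python
from math import log, erf, sqrt

def fatality_rate(S, theta, beta):
    """Lognormal fatality-rate function of shaking intensity S:
    rate = Phi(ln(S/theta) / beta), with Phi the standard normal CDF.
    theta and beta are country- or region-specific parameters."""
    return 0.5 * (1.0 + erf(log(S / theta) / (beta * sqrt(2.0))))

def estimated_fatalities(exposure, theta=14.0, beta=0.25):
    """Sum population exposed at each intensity times the fatality rate.
    Default theta/beta are hypothetical placeholders."""
    return sum(pop * fatality_rate(S, theta, beta) for S, pop in exposure)

# exposure: (intensity, population) pairs, e.g. binned from a ShakeMap
deaths = estimated_fatalities([(6.0, 1e5), (7.0, 5e4), (8.0, 1e4)])
```

Updating the model for a country then amounts to re-fitting theta and beta against its catalog of fatal earthquakes.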

  15. Preliminary investigation of some large landslides triggered by the 2008 Wenchuan earthquake, Sichuan Province, China

    USGS Publications Warehouse

    Wang, F.; Cheng, Q.; Highland, L.; Miyajima, M.; Wang, Hongfang; Yan, C.

    2009-01-01

    The Ms 8.0 Wenchuan earthquake, or "Great Sichuan Earthquake", occurred at 14:28 local time on 12 May 2008 in Sichuan Province, China. Damage by earthquake-induced landslides was an important part of the total earthquake damage. This report presents preliminary observations on the Hongyan Resort slide located southwest of the main epicenter, shallow mountain-surface failures in Xuankou village of Yingxiu Town, the Jiufengchun slide near Longmenshan Town, the Hongsong Hydro-power Station slide near Hongbai Town, the Xiaojiaqiao slide in Chaping Town, two landslides in Beichuan County-town which destroyed a large part of the town, and the Donghekou and Shibangou slides in Qingchuan County, which formed the second-biggest landslide lake of this earthquake. The influences of seismic, topographic, geologic, and hydro-geologic conditions are discussed. © 2009 Springer-Verlag.

  16. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    NASA Astrophysics Data System (ADS)

    Pasten, D.

    2017-12-01

    Earthquake complex networks have been shown to reveal specific features in seismic data sets. In space, these networks show scale-free behavior in the probability distribution of connectivity for directed networks, and small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake in northern Chile (near Iquique) in April 2014. An earthquake complex network is built by dividing the three-dimensional space into cubic cells; if a cell contains a hypocenter, we call that cell a node. Connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. This yields two different networks, one directed and one undirected. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: the connectivity ki of the i-th node is the number of connections going out of node i plus the self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network. For undirected networks, we consider only whether two nodes are connected or not. We have built a directed complex network and an undirected complex network before and after the large Iquique earthquake, using magnitudes greater than Mw = 1.0 and Mw = 3.0. We find that this method can recognize the influence of these small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake in this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
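    The cell-based network construction described above can be sketched directly; the cell size and the toy catalog below are illustrative.

```python
from collections import defaultdict

def build_networks(hypocenters, cell=5.0):
    """Directed earthquake network: map each time-ordered hypocenter
    (x, y, z in km) to a cubic cell; link successive events' cells.
    k_i = out-degree of node i, including self-connections. The
    undirected network drops directions and self-connections."""
    nodes = [tuple(int(c // cell) for c in h) for h in hypocenters]
    k = defaultdict(int)   # directed connectivity per node
    undirected = set()     # undirected edges (self-links removed)
    for a, b in zip(nodes, nodes[1:]):
        k[a] += 1
        if a != b:
            undirected.add(frozenset((a, b)))
    return dict(k), undirected

# Tiny illustrative catalog (km coordinates, time-ordered):
events = [(1, 1, 10), (2, 2, 11), (21, 1, 12), (1, 2, 12)]
k, edges = build_networks(events)
```

The first two events share a cell, producing a self-connection that counts toward the directed connectivity but not toward the undirected edge set.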

  17. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are an integral part of better understanding earthquake source physics and improving seismic hazard assessment. It is therefore timely to conduct a large-scale validation exercise comparing the methods, parameterization, and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally, we present new blind-test models with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and
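    One simple (dis-)similarity measure for comparing an inverted rupture model against the input model is a normalized correlation of the slip distributions on a common fault grid. This is a minimal sketch, not the blind test's actual misfit criteria, and the toy slip models are illustrative.

```python
import numpy as np

def slip_correlation(model_a, model_b):
    """Zero-lag normalized cross-correlation between two slip
    distributions sampled on the same fault grid (1.0 = identical pattern)."""
    a = np.asarray(model_a, float).ravel()
    b = np.asarray(model_b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

input_model    = np.array([[0, 1, 2], [1, 3, 1], [0, 1, 0]])  # "true" slip
inverted_model = np.array([[0, 1, 2], [1, 2, 1], [0, 1, 1]])  # recovered slip
r_self = slip_correlation(input_model, input_model)    # perfect recovery
r_inv  = slip_correlation(input_model, inverted_model)
```

Pattern correlation is insensitive to an overall amplitude scaling, so in practice it would be paired with a moment or amplitude misfit.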

  18. Large Earthquake Potential in the Southeast Caribbean

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Mora-Paez, H.; Bilham, R. G.; Lafemina, P.; Mattioli, G. S.; Molnar, P. H.; Audemard, F. A.; Perez, O. J.

    2015-12-01

    The axis of rotation describing relative motion of the Caribbean plate with respect to South America lies in Canada near Hudson Bay, such that the Caribbean plate moves nearly due east relative to South America [DeMets et al. 2010]. The plate motion is absorbed largely by pure strike-slip motion along the El Pilar Fault in northeastern Venezuela, but in northwestern Venezuela and northeastern Colombia, the relative motion is distributed over a wide zone that extends from offshore to the northeasterly trending Mérida Andes, with the resolved component of convergence between the Caribbean and South American plates estimated at ~10 mm/yr. Recent densification of GPS networks through COLOVEN and COCONet, including access to private GPS data maintained by Colombia and Venezuela, allowed the development of a new GPS velocity field. The velocity field, processed with JPL's GOA 6.2, JPL non-fiducial final orbit and clock products, and VMF tropospheric products, includes over 120 continuous and campaign stations. This new velocity field, along with enhanced seismic reflection profiles and earthquake location analysis, strongly suggests the existence of an active oblique subduction zone. We have also been able to use broadband data from Venezuela to search for slow-slip events as an indicator of an active subduction zone. There are caveats to this hypothesis, however, including the absence of volcanism that typically accompanies active subduction zones and a weak historical record of great earthquakes. A single tsunami deposit dated at 1500 years before present has been identified on the southeast Yucatan peninsula. Our simulations indicate its probable origin is within our study area. We present a new GPS-derived velocity field, which has been used to improve a regional block model [based on Mora and LaFemina, 2009-2012], and discuss the earthquake and tsunami hazards implied by this model. Based on the new geodetic constraints and our updated block model, if part of the

  19. How large is the fault slip at trench in the M=9 Tohoku-oki earthquake?

    NASA Astrophysics Data System (ADS)

    Wang, Kelin; Sun, Tianhaozhe; Fujiwara, Toshiya; Kodaira, Shuichi; He, Jiangheng

    2015-04-01

    It is widely known that coseismic slip breached the trench during the 2011 Mw=9 Tohoku-oki earthquake, responsible for generating a devastating tsunami. For understanding both the mechanics of megathrust rupture and the mechanism of tsunami generation, it is important to know how much fault slip actually occurred at the trench. But the answer has remained elusive because most of the data from this earthquake do not provide adequate near-trench resolution. Seafloor GPS sites were located > 30 km from the trench. Near-trench seafloor pressure records suffered from complex vertical deformation at local scales. Seismic inversion does not have adequate accuracy at the trench. Inversion of tsunami data is highly dependent on the parameterization of the fault near the trench. The severity of the issue is demonstrated by our compilation of rupture models for this earthquake published by ~40 research groups using multiple sets of coseismic observations. In the peak slip area, fault slip at the trench depicted by these models ranges from zero to >90 m. The faults in many models do not reach the trench because of simplification of fault geometry. In this study, we use high-resolution differential bathymetry, that is, bathymetric differences before and after the earthquake, to constrain coseismic slip at and near the trench along a corridor in the area of largest moment release. We use a 3D elastic finite element model including real fault geometry and surface topography to produce Synthetic Differential Bathymetry (SDB) and compare it with the observed differential bathymetry. Earthquakes induce bathymetric changes by shifting the sloping seafloor seaward and by warping the seafloor through internal deformation of rocks. These effects are simulated by our SDB modeling, except for the permanent deformation of the upper plate, which is likely to be limited and localized. Bathymetry data were collected by JAMSTEC in 1999, 2004, and in 2011 right after the M=9 earthquake. Our SDB
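    The first-order kinematics behind SDB modeling, where a sloping seafloor shifted seaward and displaced vertically changes the depth observed at a fixed location, can be sketched as follows; the profile and displacement values are hypothetical.

```python
import numpy as np

def synthetic_dbathy(x, h, ux, uz):
    """First-order synthetic differential bathymetry: elevation change at
    a fixed point from horizontal seaward shift ux and vertical
    displacement uz of the seafloor h(x):  dh(x) ~ uz - ux * dh/dx.
    x, h in m; ux, uz scalars or arrays on the same grid."""
    slope = np.gradient(h, x)
    return uz - ux * slope

# Hypothetical trench-slope profile, x positive seaward, deepening at a
# 10% grade, with 50 m of seaward coseismic slip and 5 m of uplift:
x = np.linspace(0.0, 10e3, 101)
h = -6000.0 - 0.1 * x            # seafloor elevation [m]
dh = synthetic_dbathy(x, h, ux=50.0, uz=5.0)
```

On a uniformly sloping seafloor the horizontal shift alone produces an apparent depth change, which is why differential bathymetry is sensitive to near-trench slip.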

  20. Ring-Shaped Seismicity Structures in Southern California: Possible Preparation for Large Earthquake in the Los Angeles Basin

    NASA Astrophysics Data System (ADS)

    Kopnichev, Yu. F.; Sokolova, I. N.

    2017-12-01

    Some characteristics of seismicity in Southern California are studied. It is found that ring-shaped seismicity structures with threshold magnitudes Mth of 4.1, 4.1, and 3.8 formed prior to three large (Mw > 7.0) earthquakes in 1992, 1999, and 2010, respectively. The sizes of these structures are several times smaller than for intracontinental strike-slip events with similar magnitudes. Two ring-shaped structures are identified in areas east of the city of Los Angeles, where relatively large earthquakes have not occurred for at least 150 years. The magnitudes of large events which can occur in the areas of these structures are estimated on the basis of the previously obtained correlation of ring sizes with magnitudes of strike-slip earthquakes. Large events with magnitudes of Mw = 6.9 ± 0.2 and Mw = 8.6 ± 0.2 can occur in the area to the east of the city of Los Angeles and in the rupture zone of the 1857 great Fort Tejon earthquake, respectively. We believe that ring-structure formation, as in other regions, is connected with deep-seated fluid migration.

  1. Potential utilization of the NASA/George C. Marshall Space Flight Center in earthquake engineering research

    NASA Technical Reports Server (NTRS)

    Scholl, R. E. (Editor)

    1979-01-01

    Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.

  2. Large Occurrence Patterns of New Zealand Deep Earthquakes: Characterization by Use of a Switching Poisson Model

    NASA Astrophysics Data System (ADS)

    Shaochuan, Lu; Vere-Jones, David

    2011-10-01

    The paper studies the statistical properties of deep earthquakes around North Island, New Zealand. We first evaluate the catalogue coverage and completeness of deep events according to cusum (cumulative sum) statistics and earlier literature. The epicentral, depth, and magnitude distributions of deep earthquakes are then discussed. It is worth noting that strong grouping effects are observed in the epicentral distribution of these deep earthquakes. Also, although the spatial distribution of deep earthquakes does not change, their occurrence frequencies vary from time to time, active in one period, relatively quiescent in another. The depth distribution of deep earthquakes also hardly changes except for events with focal depth less than 100 km. On the basis of spatial concentration we partition deep earthquakes into several groups—the Taupo-Bay of Plenty group, the Taranaki group, and the Cook Strait group. Second-order moment analysis via the two-point correlation function reveals only very small-scale clustering of deep earthquakes, presumably limited to some hot spots only. We also suggest that some models usually used for shallow earthquakes fit deep earthquakes unsatisfactorily. Instead, we propose a switching Poisson model for the occurrence patterns of deep earthquakes. The goodness-of-fit test suggests that the time-varying activity is well characterized by a switching Poisson model. Furthermore, detailed analysis carried out on each deep group by use of switching Poisson models reveals similar time-varying behavior in occurrence frequencies in each group.
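    A switching Poisson process of the kind proposed, with a constant occurrence rate within each regime alternating between active and quiescent levels, can be simulated as follows; the rates and regime durations are illustrative.

```python
import random

def switching_poisson(rates, seg_durations, seed=1):
    """Simulate event times from a switching Poisson process: the rate
    stays at rates[i % len(rates)] for seg_durations[i], then switches."""
    rng = random.Random(seed)
    times, t0 = [], 0.0
    for i, dur in enumerate(seg_durations):
        lam, t = rates[i % len(rates)], t0
        while True:
            t += rng.expovariate(lam)   # exponential inter-event times
            if t >= t0 + dur:
                break
            times.append(t)
        t0 += dur
    return times

# Active (2 events/day) alternating with quiescent (0.2 events/day),
# in 100-day regimes:
times = switching_poisson([2.0, 0.2], [100, 100, 100, 100])
```

A goodness-of-fit test for the real catalog then compares observed inter-event statistics against this two-rate alternative rather than a single homogeneous Poisson rate.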

  3. Rapid Earthquake Magnitude Estimation for Early Warning Applications

    NASA Astrophysics Data System (ADS)

    Goldberg, Dara; Bock, Yehuda; Melgar, Diego

    2017-04-01

    Earthquake magnitude is a concise metric that provides invaluable information about the destructive potential of a seismic event. Rapid estimation of magnitude for earthquake and tsunami early warning purposes requires reliance on near-field instrumentation. For large magnitude events, ground motions can exceed the dynamic range of near-field broadband seismic instrumentation (clipping). Strong motion accelerometers are designed with low gains to better capture strong shaking. Estimating earthquake magnitude rapidly from near-source strong-motion data requires integration of acceleration waveforms to displacement. However, integration amplifies small errors, creating unphysical drift that must be eliminated with a high pass filter. The loss of the long period information due to filtering is an impediment to magnitude estimation in real-time; the relation between ground motion measured with strong-motion instrumentation and magnitude saturates, leading to underestimation of earthquake magnitude. Using station displacements from Global Navigation Satellite System (GNSS) observations, we can supplement the high frequency information recorded by traditional seismic systems with long-period observations to better inform rapid response. Unlike seismic-only instrumentation, ground motions measured with GNSS scale with magnitude without saturation [Crowell et al., 2013; Melgar et al., 2015]. We refine the current magnitude scaling relations using peak ground displacement (PGD) by adding a large GNSS dataset of earthquakes in Japan. Because it does not suffer from saturation, GNSS alone has significant advantages over seismic-only instrumentation for rapid magnitude estimation of large events. The earthquake's magnitude can be estimated within 2-3 minutes of earthquake onset time [Melgar et al., 2013]. We demonstrate that seismogeodesy, the optimal combination of GNSS and seismic data at collocated stations, provides the added benefit of improving the sensitivity of
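    A PGD-magnitude relation of the functional form used in this line of work, log10(PGD) = A + B·M + C·M·log10(R), can be inverted for magnitude as sketched below. The coefficients are illustrative placeholders, not the refined values this study derives.

```python
import math

# Illustrative placeholder coefficients (PGD in cm, hypocentral R in km);
# the study re-fits these against a large GNSS dataset.
A, B, C = -5.0, 1.2, -0.17

def mw_from_pgd(pgd_cm, R_km):
    """Invert log10(PGD) = A + B*M + C*M*log10(R) for magnitude M."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(R_km))

# Example: 100 cm of peak ground displacement observed 100 km away
m = mw_from_pgd(100.0, 100.0)
```

Because PGD scales with magnitude without saturating, the same relation remains usable for the largest events, unlike filtered strong-motion proxies.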

  4. Aftereffects of Subduction-Zone Earthquakes: Potential Tsunami Hazards along the Japan Sea Coast.

    PubMed

    Minoura, Koji; Sugawara, Daisuke; Yamanoi, Tohru; Yamada, Tsutomu

    2015-10-01

    The 2011 Tohoku-Oki Earthquake is a typical subduction-zone earthquake and is the 4th largest earthquake since the beginning of instrumental observation of earthquakes in the 19th century. The 2011 Tohoku-Oki Earthquake displaced the northeast Japan island arc horizontally and vertically, and the displacement largely changed the tectonic situation of the arc from compressive to tensile. The 9th century in Japan was a period of natural hazards caused by frequent large-scale earthquakes. The aseismic tsunamis that inflicted damage on the Japan Sea coast in the 11th century were related to the occurrence of massive earthquakes that represented the final stage of a period of high seismic activity. Anti-compressive tectonics triggered by the subduction-zone earthquakes induced gravitational instability, which resulted in the generation of tsunamis caused by slope failure at the arc-back-arc boundary. The crustal displacement after the 2011 earthquake implies an increased risk of unexpected local tsunami flooding in the Japan Sea coastal areas.

  5. Integrated Geophysical and Geological Study of Earthquakes in Normally Aseismic Areas

    DTIC Science & Technology

    1976-01-01

    maximum Modified Mercalli Intensity X, Smith, 1962), the 1811-1812 series of earthquakes near New Madrid, Missouri (maximum intensity XII, Fuller, 1912)... sediments during the New Madrid earthquakes. Secondly, there are no known major faults with evidence of large-scale movements since the Triassic. In... 1970, Seismic geology of the eastern United States: Assoc. Eng. Geologists Bull., v. 7, p. 21-43. Fuller, M.L., 1912, The New Madrid earthquake: U.S

  6. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among the recurrence intervals of large earthquakes in pre- and post-seismic estimates based on slip rates and paleoseismologic results. Post-seismic trenching showed that the central Longmen Shan fault zone has probably experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake, based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data, and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and that a recurrence interval of 3900 ± 400 yrs is necessary for the accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region.
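
    The moment-balance logic of the abstract can be sketched as follows. The paper's preferred 3900 ± 400 yr figure relies on its characteristic-moment model; this simpler version, the standard Hanks-Kanamori moment for Mw 7.9 divided by the geodetic moment rate, only illustrates the order of magnitude:

```python
# Characteristic-earthquake recurrence: time to re-accumulate the coseismic moment.
# Hanks & Kanamori (1979): Mw = (2/3)*(log10(M0) - 9.05), with M0 in N*m.
def seismic_moment(mw):
    return 10 ** (1.5 * mw + 9.05)

moment_rate = 2.7e17                 # N*m/yr, GPS/InSAR-constrained rate from the abstract
m0_wenchuan = seismic_moment(7.9)    # ~8e20 N*m
recurrence_yr = m0_wenchuan / moment_rate   # ~3000 yr, same order as the paper's 3900 yr
```

The gap between this back-of-envelope value and the paper's preferred interval reflects the difference between a generic moment-magnitude conversion and the study's characteristic moment model.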

  7. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among the recurrence intervals of large earthquakes in pre- and post-seismic estimates based on slip rates and paleoseismologic results. Post-seismic trenching showed that the central Longmen Shan fault zone has probably experienced events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake, based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data, and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and that a recurrence interval of 3900 ± 400 yrs is necessary for the accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  8. Search for Anisotropy Changes Associated with Two Large Earthquakes in Japan and New Zealand

    NASA Astrophysics Data System (ADS)

    Savage, M. K.; Graham, K.; Aoki, Y.; Arnold, R.

    2017-12-01

    Seismic anisotropy is often considered to be an indicator of stress in the crust, because the closure of cracks under differential stress leads to waves polarized parallel to the cracks travelling faster than those polarized in the orthogonal direction. Changes in shear wave splitting have been suggested to result from stress changes at volcanoes and earthquakes. However, the effects of mineral or structural alignment, and the difficulty of distinguishing changes in anisotropy along an earthquake-station path from changes in the path itself, have made such findings controversial. Two large earthquakes in 2016 provide unique datasets to test the use of shear wave splitting for measuring variations in stress, because clusters of closely-spaced earthquakes occurred both before and after a mainshock. We use the automatic, objective splitting analysis code MFAST to speed processing and minimize unwitting observer bias when determining time variations. The sequence of earthquakes related to the M=7.2 Japanese Kumamoto earthquake of 14 April 2016 includes foreshocks, mainshocks, and aftershocks. The sequence was recorded by the NIED permanent network, which already contributed background seismic anisotropy measurements in a previous study of anisotropy and stress in Kyushu. Preliminary measurements of shear wave splitting from earthquakes that occurred in 2016 show results at some stations that clearly differ from those of the earlier study. They also change between earthquakes recorded before and after the mainshock. Further work is under way to determine whether the changes are more likely due to changes in stress during the observation time, or due to spatial changes in anisotropy combined with changes in earthquake locations. Likewise, background seismicity and also foreshocks and aftershocks in the 2013 Cook Strait earthquake sequence including two M=6.5 earthquakes in 2013 in New Zealand were in the same general region as aftershocks of the M=7.8 Kaikoura

  9. Memory effect in M ≥ 6 earthquakes of South-North Seismic Belt, Mainland China

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2013-07-01

    The M ≥ 6 earthquakes that occurred in the South-North Seismic Belt, Mainland China, during 1901-2008 are taken to study the possible existence of a memory effect in large earthquakes. The fluctuation analysis technique is applied to analyze the sequences of earthquake magnitude and inter-event time represented in the natural time domain. Calculated results show that the exponents of the scaling law of fluctuation versus window length are less than 0.5 for the sequences of both earthquake magnitude and inter-event time. The migration of the earthquakes in this study is taken to discuss the possible correlation between events. The phase portraits of two sequent magnitudes and two sequent inter-event times are also applied to explore whether large (or small) earthquakes are followed by large (or small) events. Together with all the available information, we conclude that the earthquakes in this study are short-term correlated and thus a short-term memory effect would be operative.
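
    A minimal sketch of the fluctuation analysis technique (no detrending; the window sizes and the synthetic test series are assumptions): an exponent near 0.5 indicates an uncorrelated sequence, while exponents below 0.5, as reported in the abstract, indicate anti-persistent, short-term correlated behavior.

```python
import math
import random

def fluctuation_exponent(series, windows=(4, 8, 16, 32, 64)):
    """Plain fluctuation analysis: build the mean-removed profile, measure the
    rms of profile increments F(n) over each lag n, and return the log-log
    slope of F(n) vs n (0.5 = uncorrelated, < 0.5 = anti-persistent)."""
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for x in series:
        s += x - mean
        profile.append(s)
    logs_n, logs_f = [], []
    for n in windows:
        incs = [profile[k + n] - profile[k] for k in range(len(profile) - n)]
        f = math.sqrt(sum(d * d for d in incs) / len(incs))
        logs_n.append(math.log(n))
        logs_f.append(math.log(f))
    # least-squares slope of log F(n) against log n
    mn = sum(logs_n) / len(logs_n)
    mf = sum(logs_f) / len(logs_f)
    num = sum((a - mn) * (b - mf) for a, b in zip(logs_n, logs_f))
    den = sum((a - mn) ** 2 for a in logs_n)
    return num / den

# Sanity check on an uncorrelated synthetic sequence: exponent near 0.5.
random.seed(0)
iid = [random.gauss(0.0, 1.0) for _ in range(2000)]
h = fluctuation_exponent(iid)
```

Applied to a real magnitude or inter-event-time sequence in the natural time domain, an exponent clearly below 0.5 is the signature the abstract describes.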

  10. Analysis of post-earthquake reconstruction for Wenchuan earthquake based on night-time light data from DMSP/OLS

    NASA Astrophysics Data System (ADS)

    Cao, Yang; Zhang, Jing; Yang, Mingxiang; Lei, Xiaohui

    2017-07-01

    At present, most Defense Meteorological Satellite Program Operational Linescan System (DMSP/OLS) night-time light data are applied to large-scale regional development assessment, while little has been done for the study of earthquakes and other disasters. This study extracted night-time light information before and after the earthquake within Wenchuan county using DMSP/OLS night-time light data. The analysis results show that the night-time light index and average intensity of Wenchuan county decreased by about 76% and 50%, respectively, from 2007 to 2008. From 2008 to 2011, the two indicators increased by about 200% and 556%, respectively. These results show that night-time light data can be used to extract earthquake information and to evaluate the occurrence of earthquakes and other disasters.

  11. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide

    USGS Publications Warehouse

    Pollitz, Fred F.; Stein, Ross S.; Sevilgen, Volkan; Burgmann, Roland

    2012-01-01

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days [1-10], but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified [11], with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away [12]. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≥ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10⁻⁷ for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.
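
    The 10⁻⁷ strain criterion can be checked with the standard plane-wave approximation, in which dynamic shear strain is peak ground velocity divided by phase velocity; the ~4 km/s Love-wave speed and the sample PGV below are assumptions of this sketch:

```python
# Plane-wave approximation: dynamic shear strain ~ PGV / phase velocity.
# A Love-wave phase velocity of ~4 km/s is assumed for illustration.
def dynamic_shear_strain(pgv_m_s, phase_velocity_m_s=4000.0):
    return pgv_m_s / phase_velocity_m_s

TRIGGER_STRAIN = 1e-7   # threshold cited in the abstract

pgv = 0.001                           # 1 mm/s peak ground velocity at a remote site
strain = dynamic_shear_strain(pgv)    # 2.5e-7
exceeds = strain >= TRIGGER_STRAIN    # True: above the triggering threshold
```

Even millimetre-per-second ground velocities at teleseismic distances can therefore exceed the 10⁻⁷ strain level associated with the remote triggering described above.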

  12. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time scale, including a narrow subduction channel with low static friction, for two settings: one similar to southern Chile in the region of the great Chile earthquake of 1960, and one similar to Japan in the region of the Tohoku earthquake of 2011. We then introduce into the same models the classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models generate spontaneous earthquake sequences, and model parameters are set to closely replicate the coseismic deformations of the Chile and Japan earthquakes. To follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the integration step from 40 s during the earthquake to between minutes and 5 years during the postseismic and interseismic periods. We show that for the Chile earthquake, visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations over the day-to-four-year time range. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can best be distinguished.
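
    A sketch of the adaptive time-stepping idea (the 40 s and 5 yr bounds come from the abstract; the slip-per-step target is an assumed control parameter, not the authors' scheme):

```python
# Adaptive time-step selection: shrink the step when slip velocity is high
# (coseismic) and grow it during slow interseismic creep, clamped to the
# bounds quoted in the abstract. The target slip per step is an assumption.
SECONDS_PER_YEAR = 3.15576e7
DT_MIN = 40.0                      # s, coseismic resolution
DT_MAX = 5.0 * SECONDS_PER_YEAR   # interseismic cap
TARGET_SLIP_PER_STEP = 0.01       # m of fault slip resolved per step (assumed)

def next_timestep(max_slip_velocity_m_s):
    dt = TARGET_SLIP_PER_STEP / max_slip_velocity_m_s
    return min(max(dt, DT_MIN), DT_MAX)

dt_coseismic = next_timestep(1.0)       # ~1 m/s rupture -> clamped to 40 s
dt_interseismic = next_timestep(1e-11)  # essentially locked -> clamped to 5 yr
```

The ~7 orders of magnitude between the two returned steps is what makes simulating whole seismic cycles with coseismic resolution feasible.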

  13. Memory effect in M ≥ 7 earthquakes of Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2014-07-01

    The M ≥ 7 earthquakes that occurred in the Taiwan region during 1906-2006 are taken to study the possibility of a memory effect existing in the sequence of those large earthquakes. Those events are all mainshocks. The fluctuation analysis technique is applied to analyze two sequences, of earthquake magnitude and inter-event time, represented in the natural time domain. For both magnitude and inter-event time, the calculations are made for three data sets: the original-order data, the reverse-order data, and the mean values. Calculated results show that the exponents of the scaling law of fluctuation versus window length are less than 0.5 for the sequences of both magnitude and inter-event time data. In addition, the phase portraits of two sequent magnitudes and two sequent inter-event times are also applied to explore whether large (or small) earthquakes are followed by large (or small) events. The results lead to a negative answer. Together with all the information in this study, we conclude that the earthquake sequence in question is short-term correlated and thus a short-term memory effect would be operative.

  14. Characteristics of a Sensitive Well Showing Pre-Earthquake Water-Level Changes

    NASA Astrophysics Data System (ADS)

    King, Chi-Yu

    2018-04-01

    Water-level data recorded at a sensitive well next to a fault in central Japan between 1989 and 1998 showed many coseismic water-level drops and a large (60 cm) and long (6-month) pre-earthquake drop before a rare local earthquake of magnitude 5.8 on 17 March 1997, as well as 5 smaller pre-earthquake drops during a 7-year period prior to this earthquake. The pre-earthquake changes were previously attributed to leakage through the fault-gouge zone caused by small but broad-scale crustal-stress increments. These increments now seem to be induced by some large slow-slip events. The coseismic changes are attributed to seismic shaking-induced fissures in the adjacent aquitards, in addition to leakage through the fault. The well's high sensitivity is attributed to its tapping a highly permeable aquifer, which is connected to the fractured side of the fault, and to its near-critical condition for leakage, especially during the 7 years before the magnitude 5.8 earthquake.

  15. Increasing critical sensitivity of the Load/Unload Response Ratio before large earthquakes with identified stress accumulation pattern

    NASA Astrophysics Data System (ADS)

    Yu, Huai-zhong; Shen, Zheng-kang; Wan, Yong-ge; Zhu, Qing-yong; Yin, Xiang-chu

    2006-12-01

    The Load/Unload Response Ratio (LURR) method is proposed for short-to-intermediate-term earthquake prediction [Yin, X.C., Chen, X.Z., Song, Z.P., Yin, C., 1995. A New Approach to Earthquake Prediction — The Load/Unload Response Ratio (LURR) Theory, Pure Appl. Geophys., 145, 701-715]. This method is based on measuring the ratio between Benioff strains released during the time periods of loading and unloading, corresponding to the Coulomb Failure Stress change induced by Earth tides on optimally oriented faults. According to the method, the LURR time series usually climbs to an anomalously high peak prior to the occurrence of a large earthquake. Previous studies have indicated that the size of the critical seismogenic region selected for LURR measurements has a great influence on the evaluation of LURR. In this study, we replace the circular region usually adopted in LURR practice with an area within which the tectonic stress change would mostly affect the Coulomb stress on a potential seismogenic fault of a future event. The Coulomb stress change before a hypothetical earthquake is calculated based on a simple back-slip dislocation model of the event. This new algorithm, by combining the LURR method with our choice of identified area with increased Coulomb stress, is devised to improve the sensitivity of LURR to measure the criticality of stress accumulation before a large earthquake. Retrospective tests of this algorithm on four large earthquakes that occurred in California over the last two decades show remarkable enhancement of the LURR precursory anomalies. For some strong events of lesser magnitude that occurred in the same neighborhoods and during the same time periods, significant anomalies are found if circular areas are used, and are not found if increased Coulomb stress areas are used for LURR data selection. The unique feature of this algorithm may provide stronger constraints on forecasts of the size and location of future large events.
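
    The core LURR measurement can be sketched as the ratio of Benioff strain (the square root of seismic energy) released during loading versus unloading periods. The Gutenberg-Richter energy relation used below is standard, but the toy catalog is hypothetical:

```python
import math

# Benioff strain of one event: sqrt of its seismic energy.
# Gutenberg-Richter energy relation (E in joules): log10(E) = 1.5*M + 4.8.
def benioff_strain(magnitude):
    return math.sqrt(10 ** (1.5 * magnitude + 4.8))

def lurr(events):
    """events: list of (magnitude, is_loading) tuples, where is_loading marks
    whether the event fell in a tidally loading period for the target fault.
    Returns Y = (Benioff strain during loading) / (during unloading)."""
    load = sum(benioff_strain(m) for m, loading in events if loading)
    unload = sum(benioff_strain(m) for m, loading in events if not loading)
    return load / unload

# Hypothetical catalog: more and larger events during loading -> Y > 1,
# the anomalously high ratio the method looks for before large earthquakes.
catalog = [(3.2, True), (2.8, True), (3.5, True), (2.9, False), (2.5, False)]
y = lurr(catalog)
```

The paper's contribution is the choice of the spatial region feeding this catalog (increased-Coulomb-stress area rather than a circle), not the ratio itself.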

  16. The relationship between earthquake exposure and posttraumatic stress disorder in 2013 Lushan earthquake

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Lu, Yi

    2018-01-01

    The objective of this study is to explore the relationship between earthquake exposure and the incidence of PTSD. A stratified random sample survey was conducted to collect data in the Longmenshan thrust fault area three years after the Lushan earthquake. We used the Children's Revised Impact of Event Scale (CRIES-13) and the Earthquake Experience Scale. Subjects in this study included 3944 student survivors in eleven local schools. The prevalence of probable PTSD was relatively higher among those who were trapped in the earthquake, were injured in the earthquake, or had relatives who died in the earthquake. We conclude that researchers need to pay more attention to children and adolescents, and that the government should pay more attention to these people and provide more economic support.

  17. Two grave issues concerning the expected Tokai Earthquake

    NASA Astrophysics Data System (ADS)

    Mogi, K.

    2004-08-01

    The possibility of a great shallow earthquake (M 8) in the Tokai region, central Honshu, in the near future was pointed out by Mogi in 1969 and by the Coordinating Committee for Earthquake Prediction (CCEP), Japan (1970). In 1978, the government enacted the Large-Scale Earthquake Countermeasures Law and began to set up intensified observations in this region for short-term prediction of the expected Tokai earthquake. In this paper, two serious issues are pointed out, which may contribute to catastrophic effects in connection with the Tokai earthquake: 1. The danger of black-and-white predictions: According to the scenario based on the Large-Scale Earthquake Countermeasures Law, if abnormal crustal changes are observed, the Earthquake Assessment Committee (EAC) will determine whether or not there is an imminent danger. The findings are reported to the Prime Minister who decides whether to issue an official warning statement. Administrative policy clearly stipulates the measures to be taken in response to such a warning, and because the law presupposes the ability to predict a large earthquake accurately, there are drastic measures appropriate to the situation. The Tokai region is a densely populated region with high social and economic activity, and it is traversed by several vital transportation arteries. When a warning statement is issued, all transportation is to be halted. The Tokyo capital region would be cut off from the Nagoya and Osaka regions, and there would be a great impact on all of Japan. I (the former chairman of EAC) maintained that in view of the variety and complexity of precursory phenomena, it was inadvisable to attempt a black-and-white judgment as the basis for a "warning statement". I urged that the government adopt a "soft warning" system that acknowledges the uncertainty factor and that countermeasures be designed with that uncertainty in mind. 2. The danger of nuclear power plants in the focal region: Although the possibility of the

  18. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record to understand temporal trends in earthquake activity. It is important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on lessons from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large volume turbidites. The frequency distribution form of turbidite recurrence indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to proximity and availability of sediment. While mean turbidite recurrence is similar to that of the seismogenic El Asnam fault in Algeria, geochemical analysis reveals that not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 ya. Given its similar appearance across a widespread area and correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment

  19. Far-Field Effects of Large Earthquakes on South Florida's Confined Aquifer

    NASA Astrophysics Data System (ADS)

    Voss, N. K.; Wdowinski, S.

    2012-12-01

    The similarity between a seismometer record and a well hydraulic-head record during the passage of a seismic wave has long been documented. This is true even at large distances from earthquake epicenters. South Florida lacks a dense seismic array but does contain a comparably dense network of monitoring wells. The large spatial distribution of deep monitoring wells in South Florida provides an opportunity to study the variance of aquifer response to the passage of seismic waves. We conducted a preliminary study of hydraulic head data, provided by the South Florida Water Management District, from 9 deep wells in South Florida's confined Floridan Aquifer in response to 27 main shock events (January 2010-April 2012) with magnitude 6.9 or greater. Coseismic hydraulic head response was observed in 7 of the 27 events. In order to determine what governs aquifer response to seismic events, earthquake parameters were compared for the 7 positive events. Seismic energy density (SED), an empirical function of distance and magnitude, was also used to compare the relative energy of the events at each well site. SED is commonly used as a parameter for establishing thresholds for hydrologic events in the near and intermediate fields. Our analysis yielded a threshold SED for well response in South Florida of 8 × 10⁻³ J m⁻³, which is consistent with other studies. Deep earthquakes, with SED above this threshold, did not appear to trigger hydraulic head oscillations. The amplitude of hydraulic head oscillations had no discernible relationship to SED levels. Preliminary results indicate a need for a modification of the SED equation to better accommodate depth in order to be of use in the study of hydrologic response in the far field. We plan to conduct a more comprehensive study incorporating a larger subset (~60) of wells in South Florida in order to further examine the spatial variance of aquifers to the passing of seismic waves as well as better confine the relationship
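
    One widely used empirical SED relation is log10(r) = 0.48 M − 0.33 log10(e) − 1.4, with r in km and e in J/m³ (Wang, 2007); treat the coefficients as assumptions of this sketch, which checks hypothetical events against the 8 × 10⁻³ J m⁻³ well-response threshold reported above:

```python
import math

# Seismic energy density e (J/m^3) from magnitude and distance, solving the
# empirical relation log10(r) = 0.48*M - 0.33*log10(e) - 1.4 (r in km) for e.
# Coefficients are assumptions of this sketch (after Wang, 2007).
def seismic_energy_density(magnitude, distance_km):
    log_e = (0.48 * magnitude - 1.4 - math.log10(distance_km)) / 0.33
    return 10 ** log_e

SED_THRESHOLD = 8e-3   # J/m^3, the South Florida well-response threshold

# A hypothetical M7.5 event at 500 km vs the same event at 5000 km:
near = seismic_energy_density(7.5, 500.0)
far = seismic_energy_density(7.5, 5000.0)
responds_near = near >= SED_THRESHOLD   # expected well response
responds_far = far >= SED_THRESHOLD     # below threshold, no response expected
```

Under this relation, SED falls off steeply with distance, which is why only a minority of the M ≥ 6.9 teleseisms in the study period produced an observable hydraulic-head response.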

  20. New ideas about the physics of earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Klein, William

    1995-07-01

    It may be no exaggeration to claim that this most recent quadrennium has seen more controversy, and thus more progress, in understanding the physics of earthquakes than any in recent memory. The most interesting development has clearly been the emergence of a large community of condensed matter physicists around the world who have begun working on the problem of earthquake physics. These scientists bring to the study of earthquakes an entirely new viewpoint, grounded in the physics of nucleation and critical phenomena in thermal, magnetic, and other systems. Moreover, a surprising technology transfer from geophysics to other fields has been made possible by the realization that models originally proposed to explain self-organization in earthquakes can also be used to explain similar processes in problems as disparate as brain dynamics in neurobiology (Hopfield, 1994) and charge density waves in solids (Brown and Gruner, 1994). An entirely new sub-discipline is emerging that is focused around the development and analysis of large scale numerical simulations of the dynamics of faults. At the same time, intriguing new laboratory and field data, together with insightful physical reasoning, have led to significant advances in our understanding of earthquake source physics. As a consequence, we can anticipate substantial improvement in our ability to understand the nature of earthquake occurrence. Moreover, while much research in the area of earthquake physics is fundamental in character, the results have many potential applications (Cornell et al., 1993) in the areas of earthquake risk and hazard analysis, and seismic zonation.

  1. Multi-Scale Structure and Earthquake Properties in the San Jacinto Fault Zone Area

    NASA Astrophysics Data System (ADS)

    Ben-Zion, Y.

    2014-12-01

    I review multi-scale multi-signal seismological results on structure and earthquake properties within and around the San Jacinto Fault Zone (SJFZ) in southern California. The results are based on data of the southern California and ANZA networks covering scales from a few km to over 100 km, additional near-fault seismometers and linear arrays with instrument spacing 25-50 m that cross the SJFZ at several locations, and a dense rectangular array with >1100 vertical-component nodes separated by 10-30 m centered on the fault. The structural studies utilize earthquake data to image the seismogenic sections and ambient noise to image the shallower structures. The earthquake studies use waveform inversions and additional time domain and spectral methods. We observe pronounced damage regions with low seismic velocities and anomalous Vp/Vs ratios around the fault, and clear velocity contrasts across various sections. The damage zones and velocity contrasts produce fault zone trapped and head waves at various locations, along with time delays, anisotropy and other signals. The damage zones follow a flower-shape with depth; in places with velocity contrast they are offset to the stiffer side at depth as expected for bimaterial ruptures with persistent propagation direction. Analysis of PGV and PGA indicates clear persistent directivity at given fault sections and overall motion amplification within several km around the fault. Clear temporal changes of velocities, probably involving primarily the shallow material, are observed in response to seasonal, earthquake and other loadings. Full source tensor properties of M>4 earthquakes in the complex trifurcation area include statistically-robust small isotropic component, likely reflecting dynamic generation of rock damage in the source volumes. The dense fault zone instruments record seismic "noise" at frequencies >200 Hz that can be used for imaging and monitoring the shallow material with high space and time details, and

  2. New insights into Kilauea's volcano dynamics brought by large-scale relative relocation of microearthquakes

    USGS Publications Warehouse

    Got, J.-L.; Okubo, P.

    2003-01-01

    We investigated the microseismicity recorded in an active volcano to infer information concerning the volcano structure and long-term dynamics, by using relative relocations and focal mechanisms of microearthquakes. There were 32,000 earthquakes of the Mauna Loa and Kilauea volcanoes recorded by more than eight stations of the Hawaiian Volcano Observatory seismic network between 1988 and 1999. We studied 17,000 of these events and relocated more than 70%, with an accuracy ranging from 10 to 500 m. About 75% of these relocated events are located in the vicinity of subhorizontal decollement planes, at a depth of 8-11 km. However, the striking features revealed by these relocation results are steep southeast dipping fault planes working as reverse faults, clearly located below the decollement plane and which intersect it. If this decollement plane coincides with the pre-Mauna Loa seafloor, as hypothesized by numerous authors, such reverse faults rupture the pre-Mauna Loa oceanic crust. The weight of the volcano and pressure in the magma storage system are possible causes of these ruptures, fully compatible with the local stress tensor computed by Gillard et al. [1996]. Reverse faults are suspected of producing scarps revealed by kilometer-long horizontal slip-perpendicular lineations along the decollement surface and therefore large-scale roughness, asperities, and normal stress variations. These are capable of generating stick-slip, large-magnitude earthquakes, the spatial microseismic pattern observed in the south flank of Kilauea volcano, and Hilina-type instabilities. Rupture intersecting the decollement surface, causing its large-scale roughness, may be an important parameter controlling the growth of Hawaiian volcanoes.

  3. Earthquakes in southern Dalmatia and coastal Montenegro before the large 6 April 1667 event

    NASA Astrophysics Data System (ADS)

    Albini, Paola; Rovida, Andrea

    2018-05-01

    The fourteenth to seventeenth century seismicity of southern Dalmatia (Croatia) and coastal Montenegro deserved to be fully reappraised because of the ascertained imperfect knowledge offered by modern seismological studies and the awareness of the smokescreen effect due to the large 6 April 1667 M 6.4 earthquake that impacted exactly the area of study. The investigation consisted of (i) a reconsideration of earthquake records made available by previous studies and (ii) a systematic analysis of historical sources contemporary to the earthquakes, especially those not yet taken into account in seismological studies. The 168 contemporary and independent records collected cast a different light on more than 300 years of seismicity of this area. Records are unevenly distributed among the 39 studied earthquakes, of which 15 still rely upon a single testimony. Each record has been reevaluated with respect to its content and attributed a level of reliability, which for the records reporting 14 other events was so low as to prevent us from confirming their real occurrence. Completely unreliable records have been identified and discussed, to conclude that they are at the root of five fake earthquakes. Altogether, 34 intensity values in EMS-98 were assessed, related to 15 moderate and five damaging earthquakes. Existing and newly obtained data contributed to putting the pre-1667 seismicity of southern Dalmatia and coastal Montenegro into a substantially different perspective.

  5. Geodetic characteristic of the postseismic deformation following the interplate large earthquake along the Japan Trench (Invited)

    NASA Astrophysics Data System (ADS)

    Ohta, Y.; Hino, R.; Ariyoshi, K.; Matsuzawa, T.; Mishina, M.; Sato, T.; Inazu, D.; Ito, Y.; Tachibana, K.; Demachi, T.; Miura, S.

    2013-12-01

    On March 9, 2011 at 2:45 (UTC), an M7.3 interplate earthquake (hereafter the foreshock) occurred ~45 km northeast of the epicenter of the M9.0 2011 Tohoku earthquake, preceding it by 51 hours. Ohta et al. (2012, GRL) estimated the coseismic slip and postseismic afterslip distributions based on a dense GPS network and ocean bottom pressure gauge sites. They found that the afterslip was mainly concentrated in the up-dip extension of the coseismic slip. The coseismic slip and afterslip of the foreshock were also located in the slip deficit region (20-40 m of slip) of the coseismic slip of the M9.0 mainshock. The slip amount for the afterslip is roughly consistent with that determined by the repeating earthquake analysis of a previous study (Kato et al., 2012, Science). The estimated moment release of the afterslip reached magnitude 6.8, even within the short time period of 51 hours. They also pointed out that a volumetric strainmeter time series suggests that this afterslip decayed with a rapid time constant (4.8 h) compared with other typical large earthquakes. The decay time constant of the afterslip may reflect the frictional properties of the plate interface, especially the effective normal stress controlled by fluid. To verify the short decay time constant of the foreshock, we investigated the postseismic deformation following the 1989 and 1992 Sanriku-Oki earthquakes (M7.1 and M6.9), the 2003 and 2005 Miyagi-Oki earthquakes (M6.8 and M7.2), and the 2008 Fukushima-Oki earthquake (M6.9). We used a four-component extensometer at Miyako (39.59N, 141.98E) on the Sanriku coast for the 1989 and 1992 events. For the 2003, 2005, and 2008 events, we used volumetric strainmeters at Kinka-zan (38.27N, 141.58E) and Enoshima (38.27N, 141.60E). To extract the characteristics of the postseismic deformation, we fitted a logarithmic function.
The estimated decay time constants for each earthquake fell within a similar range (1
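A logarithmic fit of the kind applied above can be sketched on synthetic data. This is an illustrative reconstruction, not the authors' code; the model form u(t) = a + b·ln(1 + t/τ) and all parameter values below are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def postseismic(t, a, b, tau):
    """Logarithmic afterslip model: displacement vs. time t (days)."""
    return a + b * np.log(1.0 + t / tau)

# Synthetic postseismic record with a decay time constant of 0.2 days (4.8 h)
t = np.linspace(0.01, 10.0, 400)
rng = np.random.default_rng(0)
obs = postseismic(t, 0.0, 1.0, 0.2) + rng.normal(0.0, 0.02, t.size)

# Least-squares estimate of (a, b, tau) from the noisy series
(a_fit, b_fit, tau_fit), _ = curve_fit(postseismic, t, obs, p0=[0.0, 1.0, 1.0])
```

With a reasonable starting guess, the decay constant is recovered closely even from a noisy record, which is what makes comparing time constants across events meaningful.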

  6. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To describe this phenomenon, seismologists coined three terms: foreshock, mainshock, and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks, while those that occur after the mainshock are called aftershocks. A mainshock is redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock, and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be
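In standard practice, the "well-constrained equations" mentioned above include the modified Omori law for the decay of aftershock rate with time. A minimal sketch follows; the parameter values k, c, p are purely illustrative assumptions, not values from this article:

```python
def omori_rate(t, k=100.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events/day) t days after the mainshock."""
    return k / (t + c) ** p

def expected_count(t1, t2, n=10000, **kw):
    """Expected number of aftershocks between t1 and t2 days (midpoint rule)."""
    dt = (t2 - t1) / n
    return sum(omori_rate(t1 + (i + 0.5) * dt, **kw) * dt for i in range(n))

day1 = expected_count(0.0, 1.0)   # expected events in the first day
week1 = expected_count(0.0, 7.0)  # expected events in the first week
```

Because the rate decays roughly as 1/t, most aftershocks in a sequence occur early: with these parameters, more than half of the first week's expected aftershocks fall on day one.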

  7. Potential for a large earthquake near Los Angeles inferred from the 2014 La Habra earthquake.

    PubMed

    Donnellan, Andrea; Grant Ludwig, Lisa; Parker, Jay W; Rundle, John B; Wang, Jun; Pierce, Marlon; Blewitt, Geoffrey; Hensley, Scott

    2015-09-01

    Tectonic motion across the Los Angeles region is distributed across an intricate network of strike-slip and thrust faults that will be released in destructive earthquakes similar to or larger than the 1933 M 6.4 Long Beach and 1994 M 6.7 Northridge events. Here we show that Los Angeles regional thrust, strike-slip, and oblique faults are connected and move concurrently with measurable surface deformation, even in moderate magnitude earthquakes, as part of a fault system that accommodates north-south shortening and westerly tectonic escape of northern Los Angeles. The 28 March 2014 M 5.1 La Habra earthquake occurred on a northeast striking, northwest dipping left-lateral oblique thrust fault northeast of Los Angeles. We present crustal deformation observations spanning the earthquake showing that concurrent deformation occurred on several structures in the shallow crust. The seismic moment of the earthquake is 82% of the total geodetic moment released. Slip within the unconsolidated upper sedimentary layer may reflect shallow release of accumulated strain on still-locked deeper structures. A future M 6.1-6.3 earthquake would account for the accumulated strain. Such an event could occur on any one or several of these faults, which may not have been identified by geologic surface mapping.

  8. Potential for a large earthquake near Los Angeles inferred from the 2014 La Habra earthquake

    PubMed Central

    Grant Ludwig, Lisa; Parker, Jay W.; Rundle, John B.; Wang, Jun; Pierce, Marlon; Blewitt, Geoffrey; Hensley, Scott

    2015-01-01

    Tectonic motion across the Los Angeles region is distributed across an intricate network of strike‐slip and thrust faults that will be released in destructive earthquakes similar to or larger than the 1933 M6.4 Long Beach and 1994 M6.7 Northridge events. Here we show that Los Angeles regional thrust, strike‐slip, and oblique faults are connected and move concurrently with measurable surface deformation, even in moderate magnitude earthquakes, as part of a fault system that accommodates north‐south shortening and westerly tectonic escape of northern Los Angeles. The 28 March 2014 M5.1 La Habra earthquake occurred on a northeast striking, northwest dipping left‐lateral oblique thrust fault northeast of Los Angeles. We present crustal deformation observations spanning the earthquake showing that concurrent deformation occurred on several structures in the shallow crust. The seismic moment of the earthquake is 82% of the total geodetic moment released. Slip within the unconsolidated upper sedimentary layer may reflect shallow release of accumulated strain on still‐locked deeper structures. A future M6.1–6.3 earthquake would account for the accumulated strain. Such an event could occur on any one or several of these faults, which may not have been identified by geologic surface mapping. PMID:27981074

  9. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
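A magnitude-harmonization step of the kind described can be sketched as a regression over hypothetical paired magnitudes for events common to two bulletins. The numbers below are invented for illustration, and in practice an orthogonal regression is often preferred because both scales carry measurement error:

```python
import numpy as np

# Hypothetical paired magnitudes for common events found in two bulletins:
# a native scale (e.g. a local mb) and the target moment magnitude Mw.
mb = np.array([4.1, 4.5, 4.8, 5.0, 5.3, 5.6, 5.9, 6.2])
mw = np.array([4.3, 4.6, 5.0, 5.1, 5.5, 5.9, 6.1, 6.5])

# Ordinary least-squares conversion model: Mw = slope * mb + intercept
slope, intercept = np.polyfit(mb, mw, 1)

def to_mw(m):
    """Map a native-scale magnitude onto the harmonized Mw scale."""
    return slope * m + intercept
```

Once such a model is built for each catalogue, every native-scale entry can be mapped into the common target scale, which is the core of assembling a magnitude-homogeneous catalogue.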

  10. Novel doorways and resonances in large-scale classical systems

    NASA Astrophysics Data System (ADS)

    Franco-Villafañe, J. A.; Flores, J.; Mateos, J. L.; Méndez-Sánchez, R. A.; Novaro, O.; Seligman, T. H.

    2011-05-01

    We show how the concept of doorway states carries beyond its typical applications and usual concepts. The scale on which it may occur is increased to large classical wave systems. Specifically, we analyze the seismic response of sedimentary basins covered by water-logged clays, a rather common situation for urban sites. A model is introduced in which the doorway state is a plane wave propagating in the interface between the sediments and the clay. This wave is produced by the coupling of a Rayleigh and an evanescent SP-wave. This in turn leads to a strong resonant response in the soft clays near the surface of the basin. Our model calculations are compared with measurements from Mexico City earthquakes, showing quite good agreement. This not only provides a transparent explanation of catastrophic resonant seismic response in certain basins but also constitutes, to date, the largest-scale example of the doorway state mechanism in wave scattering. Furthermore, the doorway state itself has interesting and rather unusual characteristics.

  11. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.

  12. Implications of the Mw9.0 Tohoku-Oki earthquake for ground motion scaling with source, path, and site parameters

    USGS Publications Warehouse

    Stewart, Jonathan P.; Midorikawa, Saburoh; Graves, Robert W.; Khodaverdi, Khatareh; Kishida, Tadahiro; Miura, Hiroyuki; Bozorgnia, Yousef; Campbell, Kenneth W.

    2013-01-01

    The Mw9.0 Tohoku-oki Japan earthquake produced approximately 2,000 ground motion recordings. We consider 1,238 three-component accelerograms corrected with component-specific low-cut filters. The recordings have rupture distances between 44 km and 1,000 km, time-averaged shear wave velocities of VS30 = 90 m/s to 1,900 m/s, and usable response spectral periods of 0.01 sec to >10 sec. The data support the notion that the increase of ground motions with magnitude saturates at large magnitudes. High-frequency ground motions demonstrate faster attenuation with distance in backarc than in forearc regions, which is only captured by one of the four considered ground motion prediction equations for subduction earthquakes. Recordings within 100 km of the fault are used to estimate event terms, which are generally positive (indicating model underprediction) at short periods and zero or negative (overprediction) at long periods. We find site amplification to scale minimally with VS30 at high frequencies, in contrast with other active tectonic regions, but to scale strongly with VS30 at low frequencies.

  13. Large Earthquakes at the Ibero-Maghrebian Region: Basis for an EEWS

    NASA Astrophysics Data System (ADS)

    Buforn, Elisa; Udías, Agustín; Pro, Carmen

    2015-09-01

    Large earthquakes (Mw > 6, Imax > VIII) occur at the Ibero-Maghrebian region, extending from a point (12ºW) southwest of Cape St. Vincent to Tunisia, with different characteristics depending on their location, which cause considerable damage and casualties. Seismic activity at this region is associated with the boundary between the lithospheric plates of Eurasia and Africa, which extends from the Azores Islands to Tunisia. The boundary at Cape St. Vincent, which has a clear oceanic nature in the westernmost part, experiences a transition from an oceanic to a continental boundary, with the interaction of the southern border of the Iberian Peninsula, the northern border of Africa, and the Alboran basin between them, corresponding to a wide area of deformation. Further to the east, the plate boundary recovers its oceanic nature following the northern coast of Algeria and Tunisia. The region has been divided into four zones with different seismic characteristics. From west to east, large earthquake occurrence, focal depth, total seismic moment tensor, and average seismic slip velocities for each zone along the region show the differences in seismic release of deformation. This must be taken into account in developing an EEWS for the region.

  14. Dynamical links between small- and large-scale mantle heterogeneity: Seismological evidence

    NASA Astrophysics Data System (ADS)

    Frost, Daniel A.; Garnero, Edward J.; Rost, Sebastian

    2018-01-01

    We identify PKP•PKP scattered waves (also known as P′•P′) from earthquakes recorded at small-aperture seismic arrays at distances less than 65°. P′•P′ energy travels as a PKP wave through the core, up into the mantle, then scatters back down through the core to the receiver as a second PKP. P′•P′ waves are unique in that they allow scattering heterogeneities throughout the mantle to be imaged. We use array-processing methods to amplify low amplitude, coherent scattered energy signals and resolve their incoming direction. We deterministically map scattering heterogeneity locations from the core-mantle boundary to the surface. We use an extensive dataset with sensitivity to a large volume of the mantle and a location method allowing us to resolve and map more heterogeneities than has previously been possible, representing a significant increase in our understanding of small-scale structure within the mantle. Our results demonstrate that the distribution of scattering heterogeneities varies both radially and laterally. Scattering is most abundant in the uppermost and lowermost mantle and reaches a minimum in the mid-mantle, resembling the radial distribution of tomographically derived whole-mantle velocity heterogeneity. We investigate the spatial correlation of scattering heterogeneities with large-scale tomographic velocities, lateral velocity gradients, the locations of deep-seated hotspots, and subducted slabs. In the lowermost 1500 km of the mantle, small-scale heterogeneities correlate with regions of low seismic velocity, high lateral seismic gradient, and proximity to hotspots. In the upper 1000 km of the mantle there is no significant correlation between scattering heterogeneity location and subducted slabs. Between 600 and 900 km depth, scattering heterogeneities are more common in the regions most remote from slabs, and close to hotspots. Scattering heterogeneities show an affinity for regions close to slabs within the upper 200 km of the

  15. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources, and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations on the heights and extension of past tsunamis and damage in coastal zones. We performed the modelling of wave propagation using the NAMI-DANCE code and tested different fault sources from synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both northern and southern coasts of the western Mediterranean.

  16. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit decrease in Mw), we find a b value of 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
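The magnitudes quoted above follow from the standard Hanks-Kanamori moment magnitude relation, Mw = (2/3)(log10 M0 − 9.1) with M0 in N·m; a quick check reproduces the stated range:

```python
import math

def moment_to_mw(m0):
    """Moment magnitude from seismic moment M0 in N·m (Hanks & Kanamori relation)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

mw_small = moment_to_mw(1.4e10)  # smallest LFE moment quoted -> ~0.7
mw_large = moment_to_mw(1.9e12)  # largest LFE moment quoted  -> ~2.1
mw_char = moment_to_mw(2.0e11)   # characteristic (mean) moment -> ~1.47
```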

  17. Continuous, Large-Scale Processing of Seismic Archives for High-Resolution Monitoring of Seismic Activity and Seismogenic Properties

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2012-12-01

    Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades helped by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, form the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA based detector employed at the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. 
On a global scale, we are currently building
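A correlation detector of the kind described, which lowers the detection threshold by matching a known waveform against continuous data, can be sketched as a sliding normalized cross-correlation. The template and trace below are synthetic stand-ins, not network data:

```python
import numpy as np

def ncc(template, trace):
    """Normalized cross-correlation of a template slid along a longer trace."""
    nt = len(template)
    tn = (template - template.mean()) / template.std()
    out = np.empty(len(trace) - nt + 1)
    for i in range(out.size):
        w = trace[i:i + nt]
        out[i] = float(tn @ ((w - w.mean()) / w.std())) / nt
    return out

# Bury a scaled copy of the template in noise and recover its position
rng = np.random.default_rng(1)
template = rng.normal(size=200)
trace = rng.normal(scale=0.5, size=2000)
trace[700:900] += 0.8 * template

cc = ncc(template, trace)
detection = int(np.argmax(cc))  # sample index of the best match
```

Even when the buried signal is weaker than the noise on a per-sample basis, the correlation peak stands far above the background, which is why such detectors can see events below the STA/LTA threshold.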

  18. Near real-time finite fault source inversion for moderate-large earthquakes in Taiwan using teleseismic P waveform

    NASA Astrophysics Data System (ADS)

    Wong, T. P.; Lee, S. J.; Gung, Y.

    2017-12-01

    Taiwan is located in one of the most active tectonic regions in the world. Rapid estimation of the spatial slip distribution of moderate-large earthquakes (Mw ≥ 6.0) is important for emergency response, so a real-time system that provides such a report immediately after an earthquake is necessary. Earthquake activity in the vicinity of Taiwan is monitored by the Real-Time Moment Tensor Monitoring System (RMT), which provides rapid focal mechanisms and source parameters. In this study, we build on the RMT system to develop a near real-time finite fault source inversion system for moderate-large earthquakes in Taiwan. The system is triggered by the RMT system when an event with Mw ≥ 6.0 is detected. According to the RMT report, our system automatically determines the fault dimension, record length, and rise time. We adopted a single-segment fault plane with variable rake angle. Generalized ray theory was applied to calculate the Green's function for each subfault. The primary objective of the system is to provide a first-order image of the coseismic slip pattern and identify the centroid location on the fault plane. The performance of this system has been successfully demonstrated on 23 large earthquakes in Taiwan. The results show excellent data fits and are consistent with the solutions of other studies. The preliminary spatial slip distribution is provided within 25 minutes of an earthquake's occurrence.

  19. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with about one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure, and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurred building losses in Istanbul after a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that would need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification of the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in premium and deductible rates, purchase of larger re-insurance covers, and development of a claim processing system. Also, to avoid adverse selection, penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground motion levels and damages. 
The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  20. Intermediate-term earthquake prediction

    USGS Publications Warehouse

    Knopoff, L.

    1990-01-01

    The problems in predicting earthquakes have been attacked by phenomenological methods from pre-historic times to the present. The associations of presumed precursors with large earthquakes have often been remarked upon. The difficulty in identifying whether such correlations are due to chance coincidence or are real precursors is that the associations are usually noted only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or is in fact absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.

  1. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computation resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models of future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally carried out on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions, and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong-scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak-scaling test in which the model sizes (numbers of grids) are increased in proportion to the degree of parallelism (number of GPUs). 
The result showed almost perfect linearity up to the simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
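The strong-scaling figures quoted above translate directly into parallel efficiency (achieved speed-up divided by the ideal linear speed-up); a small worked check:

```python
# Reported strong-scaling speed-ups of the GPU code relative to the baseline
speedups = {4: 3.2, 16: 7.3}

def efficiency(n_gpus, speedup):
    """Parallel efficiency: achieved speed-up over the ideal (linear) speed-up."""
    return speedup / n_gpus

eff = {n: efficiency(n, s) for n, s in speedups.items()}
# 4 GPUs run at 80% efficiency; 16 GPUs drop to about 46%, a typical
# strong-scaling falloff as per-GPU work shrinks and communication dominates.
```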

  2. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

    Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. Because previous studies have generally used different ionospheric baselines, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here: the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model, and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI, and the TIE-GCM generally underestimating, approximately describing, and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, in which a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of known sources of ionospheric variability beyond solar and geomagnetic activity, such as thermospheric coupling.
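The 30-day running-median baseline discussed above is straightforward to compute. The sketch below is illustrative, not the authors' code; the trailing-window convention and the fractional-deviation measure are assumptions.

```python
from statistics import median

def running_median_baseline(values, window=30):
    """Trailing running-median baseline: for each sample, the median of
    the most recent `window` values (here, daily foF2 observations)."""
    return [median(values[max(0, i - window + 1):i + 1])
            for i in range(len(values))]

def relative_deviation(values, baseline):
    """Fractional departure of each observation from its baseline value."""
    return [(v - b) / b for v, b in zip(values, baseline)]
```

For example, `running_median_baseline([1, 2, 3], window=2)` returns `[1, 1.5, 2.5]`. A median baseline is preferred over a mean in this context because it is insensitive to the occasional large foF2 excursions caused by geomagnetic storms.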

  3. Schoolteachers' Traumatic Experiences and Responses in the Context of a Large-Scale Earthquake in China

    ERIC Educational Resources Information Center

    Lei, B.

    2017-01-01

    This article investigates the traumatic experience of teachers who experienced the 2008 earthquake in Sichuan, China. A survey measuring participants' personal experiences, professional demands, and psychological responses was distributed to 241 teachers in five selected schools. Although the status of schoolteachers' trauma in a postdisaster…

  4. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searches over a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accuracy gained from a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and thereby reduce the processing time of nearest-neighbor searches for prediction. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the method feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
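The speed-up over exhaustive search comes from the KD tree's ability to prune entire subtrees during a query. The following is a minimal sketch of KD-tree construction and nearest-neighbour search, not the Gutenberg Algorithm itself; in the EEW application the points would be high-dimensional filter-bank feature vectors rather than 2-D coordinates.

```python
import math

def build_kdtree(points, depth=0):
    """Build a KD tree over equal-length tuples of coordinates,
    splitting on the median and cycling through the axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Nearest-neighbour search; a subtree is pruned whenever its
    splitting plane is farther away than the best match found so far."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or math.dist(query, point) < math.dist(query, best):
        best = point
    diff = query[axis] - point[axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if abs(diff) < math.dist(query, best):  # far side may still hold a closer point
        best = nearest(far, query, best)
    return best
```

For example, `nearest(build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]), (9, 2))` returns `(8, 1)`. The pruning makes the average query cost grow logarithmically rather than linearly with database size, which is what makes large databases viable under real-time latency constraints.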

  5. Critical behavior in earthquake energy dissipation

    NASA Astrophysics Data System (ADS)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes bounded by latitudes 29° S and 35.5° S and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, the time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for the probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place in the largest bursts of activity, such as major earthquakes, whereas smaller activations contribute less significantly despite their greater relative frequency of occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.

  6. Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Kubo, H.

    2013-12-01

    Constructing source models of huge subduction earthquakes is a very important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships with seismic moment of the large slip areas of heterogeneous slip models and of total SMGA size for subduction earthquakes, and found a systematic change in the ratio of SMGA size to large slip area with seismic moment. They concluded that this tendency is caused by the difference in the period ranges used in source modeling analyses. In this paper, we develop a methodology for constructing source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype source model for huge subduction earthquakes and validate it by strong ground motion modeling.

  7. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that the potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicates that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  8. A post-Tohoku earthquake review of earthquake probabilities in the Southern Kanto District, Japan

    NASA Astrophysics Data System (ADS)

    Somerville, Paul G.

    2014-12-01

    The 2011 Mw 9.0 Tohoku earthquake generated an aftershock sequence that affected a large part of northern Honshu and has given rise to widely divergent forecasts of changes in earthquake occurrence probabilities there. The objective of this review is to assess these forecasts as they relate to potential changes in the occurrence probabilities of damaging earthquakes in the Kanto Region. It is generally agreed that the 2011 Mw 9.0 Tohoku earthquake increased the stress on faults in the southern Kanto district. Toda and Stein (Geophys Res Lett 686, 40: doi:10.1002, 2013) further conclude that the probability of earthquakes in the Kanto Corridor has increased by a factor of 2.5 for the time period 11 March 2013 to 10 March 2018. Estimates of earthquake probabilities in a wider region of the Southern Kanto District by Nanjo et al. (Geophys J Int, doi:10.1093, 2013) indicate that any increase in the probability of earthquakes is insignificant in this larger region. Uchida et al. (Earth Planet Sci Lett 374: 81-91, 2013) conclude that the Philippine Sea plate extends well north of the northern margin of Tokyo Bay, inconsistent with the Kanto Fragment hypothesis of Toda et al. (Nat Geosci, 1:1-6, 2008), which attributes deep earthquakes in this region, which they term the Kanto Corridor, to a broken fragment of the Pacific plate. The results of Uchida and Matsuzawa (J Geophys Res 115:B07309, 2013) support the conclusion that fault creep in southern Kanto may be slowly relaxing the stress increase caused by the Tohoku earthquake without causing more large earthquakes. Stress transfer calculations indicate a large stress transfer to the Off Boso Segment as a result of the 2011 Tohoku earthquake. However, Ozawa et al. (J Geophys Res 117:B07404, 2012) used onshore GPS measurements to infer large post-Tohoku creep on the plate interface in the Off-Boso region, and Uchida and Matsuzawa (ibid.) measured similar large creep off the Boso

  9. The Validity and Reliability Work of the Scale That Determines the Level of the Trauma after the Earthquake

    ERIC Educational Resources Information Center

    Tanhan, Fuat; Kayri, Murat

    2013-01-01

    In this study, the aim was to develop a short, comprehensible, easily applicable scale, appropriate to cultural characteristics, for evaluating mental trauma related to earthquakes. The universe of the research consisted of all individuals living under the effects of the earthquakes which occurred in Tabanli Village on 23.10.2011 and…

  10. Large landslides induced by the 2008 Wenchuan earthquake and their precursory gravitational slope deformation

    NASA Astrophysics Data System (ADS)

    Chigira, Masahiro; Wu, Xiyong; Wang, Gonghui; Uchida, Osamu

    2010-05-01

    The 2008 Wenchuan earthquake induced numerous large landslides, many of which had been preceded by gravitational slope deformation. The deformation could be detected as linear depressions and convex slopes on satellite images taken before the earthquake. Ground truth surveys after the earthquake also found gravitational deformation of rocks that predated the earthquake. The Daguanbao landslide, the largest landslide induced by this earthquake, occurred on a slope of bedded carbonate rocks. The area of the landslide, based on measurements made from ALOS/PRISM images, is 7.353 km2. Its volume is estimated to be 0.837 km3 based on a comparison of the PRISM data and the SRTM DEM. It had an open V-shaped main scarp, of which one linear part was along a high-angle fault and the other was approximately parallel to the bedding strike. The upslope edge of the V-shaped main scarp was visible as a 2-km-long line of depressions along the ridge top on satellite images taken before the landslide. This indicates that the slope had already been destabilized and that small movements had occurred along the bedding planes and the fault before the event; the Wenchuan earthquake pulled the final trigger of this landslide. The major sliding surface was along the bedding plane, which was observed to dip 35° or slightly less. It was warped convex upward and the beds were fractured, which suggests that the beds had been slightly buckled before the landslide. This deformation may correspond to the formation of the linear depressions. The Tangjiashan landslide in Beichuan, which produced the largest landslide dam of the earthquake, occurred on a dip slope of shale and slate. The geologic structures of the landslide were observed on its side flanks, indicating that the beds had been buckled gravitationally beforehand and that the sliding surface formed along the bedding plane and a joint parallel to the slope surface. The buckling

  11. An energy dependent earthquake frequency-magnitude distribution

    NASA Astrophysics Data System (ADS)

    Spassiani, I.; Marzocchi, W.

    2017-12-01

    The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been experimentally well validated in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish, because it takes time to recover the same level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amounts of available energy. Our study therefore aims to explore a possible modification of the classical G-R distribution that includes the dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is, over infinite available energy, the G-R law must be recovered. Finally, we apply the proposed generalization of the G-R law to different seismic catalogs to show how it works and how it differs from the classical G-R law.
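For context, the classical, energy-independent G-R law and the standard maximum-likelihood b-value estimate it implies can be sketched as follows; the energy-dependent generalization proposed in the abstract is not reproduced here.

```python
import math
import random

def gr_count(m, a=4.0, b=1.0):
    """Classical Gutenberg-Richter law: log10 N(>= m) = a - b*m,
    i.e. the expected number of events at or above magnitude m."""
    return 10 ** (a - b * m)

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood estimate of the b-value for a
    catalog complete above magnitude m_c."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Under G-R, magnitudes above m_c are exponentially distributed with
# rate b*ln(10); draw a synthetic catalog and recover b.
random.seed(0)
catalog = [3.0 + random.expovariate(1.0 * math.log(10)) for _ in range(50000)]
print(b_value_mle(catalog, 3.0))  # close to the true b = 1.0
```

Any generalized distribution satisfying the invariance condition described above must reduce to this exponential form when the available-energy parameter is integrated out over a large region.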

  12. Challenges to communicate risks of human-caused earthquakes

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2014-12-01

    Awareness of natural hazards has been trending upward in recent years. In particular, this is true for earthquakes, which are increasing in frequency and magnitude in regions that normally do not experience seismic activity. In fact, one of the major concerns for many communities and businesses is that humans today seem to cause earthquakes through large-scale shale gas production, dewatering and flooding of mines, and deep geothermal power production. Accordingly, without opposing any of these technologies, it should be a priority for earth scientists researching natural hazards to communicate earthquake risks. This presentation discusses the challenges that earth scientists face in properly communicating earthquake risks, in light of the fact that human-caused earthquakes are an environmental change affecting only some communities and businesses. Communication channels may range from research papers, books and classroom lectures to outreach events and programs, popular media events or even social media networks.

  13. Why are earthquakes nudging the pole towards 140°E?

    NASA Astrophysics Data System (ADS)

    Spada, Giorgio

    Earthquakes collectively have the tendency to displace the Earth's pole of rotation towards a preferred direction (∼140°E). This trend, which is still unexplained on quantitative grounds, has been revealed by computations of earthquake-induced inertia variations on both secular and decadal time scales. The purpose of this letter is to show that the above trend results from the combined effects of the geographical distribution of hypocenters and of the prevailing dip-slip nature of large earthquakes in this century. Our findings are based on static dislocation theory and on simple geometrical arguments.

  14. Understanding and responding to earthquake hazards

    NASA Technical Reports Server (NTRS)

    Raymond, C. A.; Lundgren, P. R.; Madsen, S. N.; Rundle, J. B.

    2002-01-01

    Advances in understanding the earthquake cycle and in assessing earthquake hazards are of great importance. Dynamic earthquake hazard assessments resolved for a range of spatial and temporal scales will allow a more systematic approach to prioritizing the retrofitting of vulnerable structures, relocating populations at risk, protecting lifelines, preparing for disasters, and educating the public.

  15. The interaction between active normal faulting and large scale gravitational mass movements revealed by paleoseismological techniques: A case study from central Italy

    NASA Astrophysics Data System (ADS)

    Moro, M.; Saroli, M.; Gori, S.; Falcucci, E.; Galadini, F.; Messina, P.

    2012-05-01

    Paleoseismological techniques have been applied to characterize the kinematic behaviour of large-scale gravitational phenomena located in proximity to the seismogenic fault responsible for the Mw 7.0, 1915 Avezzano earthquake, and to identify evidence of possible coseismic reactivation. These techniques were applied to the surface expression of the main sliding planes of the Mt. Serrone gravitational deformation, located on the southeastern border of the Fucino basin (central Italy). The approach allows us to detect instantaneous events of deformation along the uphill-facing scarp, attested by the presence of faulted deposits and colluvial wedges. The identified and chronologically constrained episodes of rapid displacement can probably be correlated with seismic events determined by the activation of the Fucino seismogenic fault, which affects the toe of the gravitationally unstable rock mass. Indeed, this fault can produce strong, short-term dynamic stresses able to trigger the release of local gravitational stress accumulated by Mt. Serrone's large-scale gravitational phenomena. The applied methodology could allow us to better understand the geometric and kinematic relationships between active tectonic structures and large-scale gravitational phenomena. This is particularly important in seismically active regions, since deep-seated gravitational slope deformations can evolve into catastrophic collapse and can strongly increase the level of earthquake-induced hazard.

  16. Energy Partition and Variability of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kanamori, H.

    2003-12-01

    During an earthquake the potential energy (strain energy + gravitational energy + rotational energy) is released, and the released potential energy (ΔW) is partitioned into radiated energy (ER), fracture energy (EG), and thermal energy (EH). How ΔW is partitioned into these energies controls the behavior of an earthquake. The merit of the slip-weakening concept is that only ER and EG control the dynamics, and EH can be treated separately to discuss the thermal characteristics of an earthquake. In general, if EG/ER is small, the event is "brittle"; if EG/ER is large, the event is "quasi-static" or, in more common terms, a "slow earthquake" or "creep". If EH is very large, the event may well be called a thermal runaway rather than an earthquake. The difference in energy partition has important implications for the rupture initiation, evolution, and excitation of long-period ground motions from very large earthquakes. We review the current state of knowledge on this problem in light of seismological observations and the basic physics of fracture. With seismological methods, we can measure only ER and the lower bound of ΔW, ΔW0, and estimation of the other energies involves many assumptions. ER: Although ER can be directly measured from the radiated waves, its determination is difficult because a large fraction of the energy radiated at the source is attenuated during propagation. With the commonly used teleseismic and regional methods, only for events with MW > 7 and MW > 4, respectively, can we directly measure more than 10% of the total radiated energy. The rest must be estimated after correction for attenuation. Thus, large uncertainties are involved, especially for small earthquakes. ΔW0: To estimate ΔW0, estimation of the source dimension is required. Again, only for large earthquakes can the source dimension be estimated reliably. With the source dimension, the static stress drop, ΔσS, and ΔW0 can be estimated. EG: Seismologically, EG is the energy

  17. The California Post-Earthquake Information Clearinghouse: A Plan to Learn From the Next Large California Earthquake

    NASA Astrophysics Data System (ADS)

    Loyd, R.; Walter, S.; Fenton, J.; Tubbesing, S.; Greene, M.

    2008-12-01

    In the rush to remove debris after a damaging earthquake, perishable data related to a wide range of impacts on the physical, built and social environments can be lost. The California Post-Earthquake Information Clearinghouse is intended to prevent this data loss by supporting the earth scientists, engineers, and social and policy researchers who will conduct fieldwork in the affected areas in the hours and days following the earthquake to study these effects. First called for by Governor Ronald Reagan following the destructive M6.5 San Fernando earthquake in 1971, the concept of the Clearinghouse has since been incorporated into the response plans of the National Earthquake Hazard Reduction Program (USGS Circular 1242). This presentation is intended to acquaint scientists with the purpose, functions, and services of the Clearinghouse. Typically, the Clearinghouse is set up in the vicinity of the earthquake within 24 hours of the mainshock and is maintained for several days to several weeks. It provides a location where field researchers can assemble to share and discuss their observations, plan and coordinate subsequent field work, and communicate significant findings directly to the emergency responders and to the public through press conferences. As the immediate response effort winds down, the Clearinghouse will ensure that collected data are archived and made available through "lessons learned" reports and publications that follow significant earthquakes. Participants in the quarterly meetings of the Clearinghouse include representatives from state and federal agencies, universities, NGOs and other private groups. Overall management of the Clearinghouse is delegated to the agencies represented by the authors above.

  18. Numerical Investigation of Earthquake Nucleation on a Laboratory-Scale Heterogeneous Fault with Rate-and-State Friction

    NASA Astrophysics Data System (ADS)

    Higgins, N.; Lapusta, N.

    2014-12-01

    Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have

  19. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    USGS Publications Warehouse

    Hayes, Gavin

    2017-01-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called "moment deficit," calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of "earthquake super-cycles" observed in some global subduction zones.

  20. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    NASA Astrophysics Data System (ADS)

    Hayes, Gavin P.

    2017-06-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called "moment deficit," calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of "earthquake super-cycles" observed in some global subduction zones.

  1. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since other-than-Weibull interevent time distributions are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution for the case in which the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] I. Eliazar and J. Klafter, 2006
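The Weibull hypothesis and the Kolmogorov-Smirnov comparison described above can be sketched with standard-library code. This is an illustrative one-sample KS statistic against a fully specified Weibull CDF, not the author's analysis; note that when the Weibull parameters are themselves estimated from the same data, the critical values of the KS test must be adjusted.

```python
import math

def weibull_cdf(t, shape, scale):
    """CDF of the Weibull distribution: F(t) = 1 - exp(-(t/scale)**shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def ks_statistic(samples, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDF of the samples and the hypothesised CDF."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Empirical CDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d
```

A small statistic means the hypothesised distribution is not rejected at the chosen significance level; computing it for several candidate distributions (Weibull, gamma, lognormal, ...) over the same catalog of interevent times identifies which of them the data cannot exclude.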

  2. Systematic deficiency of aftershocks in areas of high coseismic slip for large subduction zone earthquakes

    PubMed Central

    Wetzler, Nadav; Lay, Thorne; Brodsky, Emily E.; Kanamori, Hiroo

    2018-01-01

    Fault slip during plate boundary earthquakes releases a portion of the shear stress accumulated due to frictional resistance to relative plate motions. Investigation of 101 large [moment magnitude (Mw) ≥ 7] subduction zone plate boundary mainshocks with consistently determined coseismic slip distributions establishes that 15 to 55% of all master event–relocated aftershocks with Mw ≥ 5.2 are located within the slip regions of the mainshock ruptures and few are located in peak slip regions, allowing for uncertainty in the slip models. For the preferred models, cumulative deficiency of aftershocks within the central three-quarters of the scaled slip regions ranges from 15 to 45%, increasing with the total number of observed aftershocks. The spatial gradients of the mainshock coseismic slip concentrate residual shear stress near the slip zone margins and increase stress outside the slip zone, driving both interplate and intraplate aftershock occurrence near the periphery of the mainshock slip. The shear stress reduction in large-slip regions during the mainshock is generally sufficient to preclude further significant rupture during the aftershock sequence, consistent with large-slip areas relocking and not rupturing again for a substantial time. PMID:29487902

  3. Source and Aftershock Analysis of a Large Deep Earthquake in the Tonga Flat Slab

    NASA Astrophysics Data System (ADS)

    Cai, C.; Wiens, D. A.; Warren, L. M.

    2013-12-01

    The 9 November 2009 (Mw 7.3) deep-focus earthquake (depth = 591 km) occurred in the Tonga flat slab region, which is characterized by limited seismicity but has been imaged as a flat slab in tomographic studies. In addition, this earthquake occurred immediately beneath the largest of the Fiji Islands and was well recorded by a temporary array of 16 broadband seismographs installed in Fiji and Tonga, providing an excellent opportunity to study the source mechanism of a deep earthquake in a partially aseismic flat slab region. We determine the positions of the mainshock hypocenter, its aftershocks, and moment-release subevents relative to the background seismicity using a hypocentroidal decomposition relative relocation method. We also investigate the rupture directivity by measuring the variation of rupture duration with azimuth [e.g., Warren and Silver, 2006]. Arrival times picked from the local seismic stations, together with teleseismic arrival times from the International Seismological Centre (ISC), are used for the relocation; teleseismic waveforms are used for the directivity study. Preliminary results show this entire region is relatively aseismic, with diffuse background seismicity distributed between 550 and 670 km depth. The mainshock occurred in a previously aseismic region, with only one small earthquake within 50 km during 1980-2012. The 11 aftershocks large enough for good locations all occurred within the first 24 hours following the earthquake. The aftershock zone extends about 80 km from NW to SE, covering a much larger area than the mainshock rupture. The aftershock distribution does not correspond to the mainshock fault plane, unlike the 1994 March 9 (Mw 7.6) Fiji-Tonga earthquake in the steeply dipping, highly seismic part of the Tonga slab. Mainshock subevent locations suggest a sub-horizontal SE-NW rupture direction. However, the directivity study shows a complicated rupture process that cannot be explained with a simple rupture assumption. We will

  4. 3-D structure of ionospheric anomalies immediately before large earthquakes: the 2015 Illapel (Mw8.3) and 2016 Kumamoto (Mw7.0) cases

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.; Muafiry, I. N.

    2016-12-01

    We developed a simple program to perform three-dimensional (3-D) tomography of ionospheric anomalies observed using Global Navigation Satellite Systems (GNSS), and applied it to ionospheric anomalies prior to two recent earthquakes: (1) positive and negative TEC anomalies starting 20 minutes before the 2015 September Illapel earthquake, Central Chile, and (2) a stagnant MSTID that appeared 20-30 minutes before the 2016 April Kumamoto earthquake (mainshock), Kyushu, SW Japan, and stayed there until the earthquake occurred. Regarding (1), we analyzed GNSS data before and after three large earthquakes in Chile, and reported that both positive and negative anomalies of ionospheric Total Electron Content (TEC) started 40 minutes (2010 Maule) and 20 minutes (2014 Iquique and 2015 Illapel) before the earthquakes (He and Heki, 2016 GRL). For the 2015 event, we further suggested that the positive and negative anomalies occurred at altitudes of 200 and 400 km, respectively. This makes the epicenter, the positive anomaly, and the negative anomaly line up along the local geomagnetic field, consistent with the structure expected to occur in response to surface positive charges (e.g. Kuo et al., 2014 JGR). As for (2), we looked for ionospheric anomalies before the foreshock (Mw6.2) and the mainshock (Mw7.0) of the 2016 Kumamoto earthquakes, shallow inland earthquakes, using TEC derived from the Japanese dense GNSS network. Although we did not find anomalies of the kind often seen before larger earthquakes (e.g. Heki and Enomoto, 2015 JGR), we found that a stationary linear positive TEC anomaly, with a shape similar to a night-time medium-scale traveling ionospheric disturbance (MSTID), emerged just above the epicenter 20 minutes before the mainshock. Unlike typical night-time MSTIDs, it did not propagate southwestward; instead, its positive crest stayed above the epicenter for 30 min (see attached figure).
This unusual behavior might be linked to crust-origin electric fields.

  5. Incredibly distant ionospheric responses to earthquake

    NASA Astrophysics Data System (ADS)

    Yusupov, Kamil; Akchurin, Adel

    2015-04-01

    area of medium-scale wave (387 km), whose ionograms showed F-spread rather than MCS. This is evidently due to the vertical structure of the disturbance in the near zone. Another interesting feature associated with the vertical structure is a 1-2 minute advance of the appearance of MCS in ionograms relative to the arrival of the large-scale TEC disturbance. Naturally, such a comparison of appearance times is possible only at distances where large-scale TEC disturbances exist (<1000-1200 km); only MCS and Doppler shifts are observed at larger distances. A look-back analysis of Japanese ionograms showed only eight cases of ionogram MCS observation from the 43 strongest earthquakes (magnitude > 8) during the period 1957-2011. This indirectly explains why it took 50 years to recognize the MCS as a response to earthquakes. Previously performed statistical analyses showed that the MCS appear mainly from 9 to 15 LT and at epicentral distances in the range 800-6000 km. MCS signatures at distances of more than 6000 km from the earthquake epicentre were seen by ionosondes in Kazan, Kaliningrad and Sodankyla. These MCS in Kazan (as well as in Kaliningrad and Sodankyla) were observed during the daytime from 9 to 15 LT. At this time, the vertical gradient of electron concentration is significantly reduced in the F1-layer, so that a small disturbance of this gradient distorts part of the electron density profile and reduces the local gradient to zero (or even negative) values. Observations with our ionosonde showed for the first time that the ionospheric response to strong earthquakes (magnitude more than 8) can be observed at distances of more than 15,000 km. In the daytime such responses distort the shape of the electron density profile of the F-layer, which appears in ionograms as a multiple-trace stratification of the F1-layer.

  6. The evolution of hillslope strength following large earthquakes

    NASA Astrophysics Data System (ADS)

    Brain, Matthew; Rosser, Nick; Tunstall, Neil

    2017-04-01

    Earthquake-induced landslides play an important role in the evolution of mountain landscapes. Earthquake ground shaking triggers near-instantaneous landsliding, but has also been shown to weaken hillslopes, preconditioning them for failure during subsequent seismicity and/or precipitation events. The temporal evolution of hillslope strength during and following primary seismicity, and if and how this ultimately results in failure, is poorly constrained due to the rarity of high-magnitude earthquakes and limited availability of suitable field datasets. We present results obtained from novel geotechnical laboratory tests to better constrain the mechanisms that control strength evolution in Earth materials of differing rheology. We consider how the strength of hillslope materials responds to ground-shaking events of different magnitude and if and how this persists to influence landslide activity during interseismic periods. We demonstrate the role of stress path and stress history, strain rate and foreshock and aftershock sequences in controlling the evolution of hillslope strength and stability. Critically, we show how hillslopes can be strengthened rather than weakened in some settings, challenging conventional assumptions. On the basis of our laboratory data, we consider the implications for earthquake-induced geomorphic perturbations in mountain landscapes over multiple timescales and in different seismogenic settings.

  7. Large-scale fault interactions at the termination of a subduction margin

    NASA Astrophysics Data System (ADS)

    Mouslopoulou, V.; Nicol, A.; Moreno, M.; Oncken, O.; Begg, J.; Kufner, S. K.

    2017-12-01

    Active subduction margins terminate against, and transfer their slip onto, plate-boundary transform faults. The manner in which plate motion is accommodated and partitioned across such kinematic transitions from thrust to strike-slip faulting over earthquake timescales, is poorly documented. The 2016 November 14th, Mw 7.8 Kaikoura Earthquake provides a rare snapshot of how seismic-slip may be accommodated at the tip of an active subduction margin. Analysis of uplift data collected using a range of techniques (field measurements, GPS, LiDAR) and published mapping coupled with 3D dislocation modelling indicates that earthquake-slip ruptured multiple faults with various orientations and slip mechanisms. Modelled and measured uplift patterns indicate that slip on the plate-interface was minor. Instead, a large offshore thrust fault, modelled to splay-off the plate-interface and to extend to the seafloor up to 15 km east of the South Island, appears to have released subduction-related strain and to have facilitated slip on numerous, strike-slip and oblique-slip faults on its hanging-wall. The Kaikoura earthquake suggests that these large splay-thrust faults provide a key mechanism in the transfer of plate motion at the termination of a subduction margin and represent an important seismic hazard.

  8. Long-Period Ground Motion due to Near-Shear Earthquake Ruptures

    NASA Astrophysics Data System (ADS)

    Koketsu, K.; Yokota, Y.; Hikima, K.

    2010-12-01

    Long-period ground motion has become an increasingly important consideration because of the recent rapid increase in the number of large-scale structures, such as high-rise buildings and large oil storage tanks. Large subduction-zone earthquakes and moderate to large crustal earthquakes can generate far-source long-period ground motions in distant sedimentary basins with the help of path effects. Near-fault long-period ground motions are generated, for the most part, by the source effects of forward rupture directivity (Koketsu and Miyake, 2008). This rupture directivity effect is maximal in the direction of fault rupture when the rupture velocity is nearly equal to the shear-wave velocity around the source fault (Dunham and Archuleta, 2005). Near-shear rupture was found to occur during the 2008 Mw 7.9 Wenchuan earthquake at the eastern edge of the Tibetan plateau (Koketsu et al., 2010). The variance of waveform residuals in a joint inversion of teleseismic and strong motion data was minimized when we adopted a rupture velocity of 2.8 km/s, which is close to the shear-wave velocity of 2.6 km/s around the hypocenter. We also found near-shear rupture during the 2010 Mw 6.9 Yushu earthquake (Yokota et al., 2010). The optimum rupture velocity for an inversion of teleseismic data is 3.5 km/s, which is almost equal to the shear-wave velocity around the hypocenter. Since, in addition, supershear rupture was found during the 2001 Mw 7.8 Central Kunlun earthquake (Bouchon and Vallee, 2003), such fast earthquake rupture may be a characteristic of the eastern Tibetan plateau. The huge damage in Yingxiu and Beichuan from the 2008 Wenchuan earthquake, and damage heavier than expected in the county seat of Yushu from the medium-sized Yushu earthquake, can be attributed to the maximum rupture directivity effect in the rupture direction due to near-shear earthquake ruptures.

  9. Earthquake behavior along the Levant fault from paleoseismology (Invited)

    NASA Astrophysics Data System (ADS)

    Klinger, Y.; Le Beon, M.; Wechsler, N.; Rockwell, T. K.

    2013-12-01

    The Levant fault is a major continental structure, 1200 km long, that bounds the Arabian plate to the west. The finite offset of this left-lateral strike-slip fault is estimated to be 105 km for the section located south of the restraining bend corresponding roughly to Lebanon. Along this southern section the slip-rate has been estimated over a large range of time scales, from a few years to a few hundred thousand years. Over these different time scales, studies agree on a slip-rate of 5 ± 2 mm/yr. The southern section of the Levant fault is particularly attractive for studying earthquake behavior through time for several reasons: 1/ The fault geometry is simple and well constrained. 2/ The fault system is isolated and does not interact with obvious neighboring fault systems. 3/ The Middle East, where the Levant fault is located, is the region of the world with the longest and most complete historical record of past earthquakes. About 30 km north of the city of Aqaba, we opened a trench in the southern part of the Yotvata playa, along the Wadi Araba fault segment. The stratigraphy presents silty sand playa units alternating with coarser sand sediments from alluvial fans flowing westwards from the Jordan plateau. Two fault zones can be recognized in the trench and a minimum of 8 earthquakes can be identified, based on upward terminations of ground ruptures. Dense 14C dating through the entire exposure allows matching the 4 most recent events with historical events in AD1458, AD1212, AD1068 and AD748. The size of the ground ruptures suggests a bi-modal distribution of earthquakes, with earthquakes rupturing the entire Wadi Araba segment and earthquakes ending in the extensional jog forming the playa. The timing of earthquakes shows that no earthquake has occurred at this site for about 600 years, suggesting earthquake clustering along this section of the fault and potential for a large earthquake in the near future. 3D paleoseismological trenches at the Beteiha

  10. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake

    USGS Publications Warehouse

    Hayes, Gavin P.; Herman, Matthew W.; Barnhart, William D.; Furlong, Kevin P.; Riquelme, Sebástian; Benz, Harley M.; Bergman, Eric; Barrientos, Sergio; Earle, Paul S.; Samsonov, Sergey

    2014-01-01

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile which had not ruptured in a megathrust earthquake since a M ~8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March–April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  11. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake.

    PubMed

    Hayes, Gavin P; Herman, Matthew W; Barnhart, William D; Furlong, Kevin P; Riquelme, Sebástian; Benz, Harley M; Bergman, Eric; Barrientos, Sergio; Earle, Paul S; Samsonov, Sergey

    2014-08-21

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which had not ruptured in a megathrust earthquake since a M ∼8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March-April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  12. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities, which has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests.
We show how the Reliability and
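    The counting idea behind the Natural Time Weibull model described above can be sketched in a few lines. This is a hedged reading of the approach, not the authors' implementation: the count n of small earthquakes since the last large one plays the role of time in a Weibull hazard, and the parameters below (n_bar, beta) are invented for illustration rather than fitted values.

```python
# Hedged sketch of a "natural time" Weibull forecast: probability of a large
# earthquake expressed as a function of the number of small earthquakes that
# have occurred since the last large one. Parameters are illustrative only.
import math

def ntw_probability(n_small, n_bar=1000.0, beta=1.4):
    """Weibull CDF in natural time: P(large event by small-event count n)."""
    return 1.0 - math.exp(-((n_small / n_bar) ** beta))

def conditional_probability(n_now, dn, n_bar=1000.0, beta=1.4):
    """Probability of a large event within the next dn small events, given
    that none has occurred during the first n_now (conditional survival)."""
    survive = lambda n: math.exp(-((n / n_bar) ** beta))
    return 1.0 - survive(n_now + dn) / survive(n_now)

# With beta > 1 (increasing hazard), the conditional probability rises as
# small-earthquake counts accumulate in a region.
for n in (100, 500, 1000):
    print(n, round(conditional_probability(n, dn=100), 3))
```

    In an operational setting n would be tallied per 0.1° grid cell from a catalog such as ANSS, and the parameters calibrated retrospectively, with skill measured by the ROC tests mentioned above.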

  13. Energy transfers in large-scale and small-scale dynamos

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in a small-scale dynamo (SSD) and a large-scale dynamo (LSD), using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For the SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic field at large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For the LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to the SSD.

  14. Contrasting styles of large-scale displacement of unconsolidated sand: examples from the early Jurassic Navajo Sandstone on the Colorado Plateau, USA

    NASA Astrophysics Data System (ADS)

    Bryant, Gerald

    2015-04-01

    Large-scale soft-sediment deformation features in the Navajo Sandstone have been a topic of interest for nearly 40 years, ever since they were first explored as a criterion for discriminating between marine and continental processes in the depositional environment. For much of this time, evidence for large-scale sediment displacements was commonly attributed to processes of mass wasting, that is, gravity-driven movement of surficial sand. These slope failures were attributed to the inherent susceptibility of dune sand responding to environmental triggers such as earthquakes, floods, impacts, and the differential loading associated with dune topography. During the last decade, a new wave of research has focused on the event significance of deformation features in more detail, revealing a broad diversity of large-scale deformation morphologies. This research has led to a better appreciation of subsurface dynamics in the early Jurassic deformation events recorded in the Navajo Sandstone, including the important role of intrastratal sediment flow. This report documents two illustrative examples of large-scale sediment displacement represented in extensive outcrops of the Navajo Sandstone along the Utah/Arizona border. Architectural relationships in these outcrops provide definitive constraints that enable the recognition of a large-scale sediment outflow at one location and an equally large-scale subsurface flow at the other. At both sites, evidence for associated processes of liquefaction appears at depths of at least 40 m below the original depositional surface, nearly an order of magnitude greater than has commonly been reported from modern settings. The surficial mass-flow feature displays attributes that are consistent with much smaller-scale sediment eruptions (sand volcanoes) often documented from modern earthquake zones, including the development of hydraulic pressure from localized subsurface liquefaction and the subsequent escape of

  15. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A = 537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
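    The magnitude-area conversion described above can be sketched as follows. The coefficients are the Ellsworth-B and Hanks-Bakun relations as commonly published; treat them as assumptions to be verified against the original papers before any real use.

```python
# Sketch of the Working Group's magnitude-area step: convert fault area A
# (L x W, in km^2) to moment-magnitude Mw using two equally weighted
# scaling relations. Coefficients assumed from the published relations.
import math

def ellsworth_b(area_km2):
    """Ellsworth-B (WGCEP, 2003): single log-linear relation."""
    return 4.2 + math.log10(area_km2)

def hanks_bakun(area_km2):
    """Hanks & Bakun: bilinear relation, slope change at A = 537 km^2."""
    if area_km2 <= 537.0:
        return 3.98 + math.log10(area_km2)
    return 3.07 + (4.0 / 3.0) * math.log10(area_km2)

def mean_magnitude(length_km, width_km):
    """Equal weight to both relations, as adopted by the Working Group."""
    area = length_km * width_km
    return 0.5 * (ellsworth_b(area) + hanks_bakun(area))

# Example: a 100-km-long fault with a 12-km down-dip width (A = 1200 km^2).
print(round(mean_magnitude(100.0, 12.0), 2))
```

    Note that the two branches of the bilinear relation meet continuously at A = 537 km², which is what makes the piecewise form usable in a hazard model.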

  16. Extending earthquakes' reach through cascading.

    PubMed

    Marsan, David; Lengliné, Olivier

    2008-02-22

    Earthquakes, whatever their size, can trigger other earthquakes. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-lasting difficulty is to determine which earthquakes are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori model or parameterization. Large regional earthquakes are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small earthquakes collectively have a greater effect on triggering. Hence, cascade triggering is a key component in earthquake interactions.

  17. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  18. Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.

    2017-09-01

    We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures has unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source- and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is greatest. The basis of a "physics-based" approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper, and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description rather than prediction.

  19. Precursory enhancement of EIA in the morning sector: Contribution from mid-latitude large earthquakes in the north-east Asian region

    NASA Astrophysics Data System (ADS)

    Ryu, Kwangsun; Oyama, Koh-Ichiro; Bankov, Ludmil; Chen, Chia-Hung; Devi, Minakshi; Liu, Huixin; Liu, Jann-Yenq

    2016-01-01

    To investigate whether the link between seismic activity and EIA (equatorial ionization anomaly) enhancement is valid for mid-latitude seismic activity, DEMETER observations around seven large (M ⩾ 6.8) earthquakes in the north-east Asian region were analyzed in full. In addition, statistical analysis was performed for 35 large earthquakes (M ⩾ 6.0) that occurred during the DEMETER observation period. The results suggest that mid-latitude earthquakes do contribute to EIA enhancement, represented as normalized equatorial Ne, and that the ionospheric change precedes seismic events, as has been reported in previous studies. According to the statistical studies, the normalized equatorial density enhancement is sensitive and proportional to both the magnitude and the hypocenter depth of an earthquake. The mechanisms that can explain the contribution of mid-latitude seismic activity to EIA variation are briefly discussed based on current explanations of the geochemical and ionospheric processes involved in lithosphere-ionosphere interaction.

  20. How fault geometry controls earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.
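    The curvature calculation described in this record can be illustrated with a simple finite-difference sketch (not the authors' code; the slab-top profiles below are hypothetical, in km):

```python
def curvature(xs, zs):
    """Curvature kappa = z'' / (1 + z'^2)^(3/2) at interior points of a
    uniformly spaced depth profile z(x), via central finite differences."""
    ks = []
    for i in range(1, len(xs) - 1):
        h = xs[i] - xs[i - 1]                                # uniform spacing assumed
        dz = (zs[i + 1] - zs[i - 1]) / (2 * h)               # first derivative
        d2z = (zs[i + 1] - 2 * zs[i] + zs[i - 1]) / h ** 2   # second derivative
        ks.append(d2z / (1 + dz * dz) ** 1.5)
    return ks

xs = [0, 50, 100, 150, 200]       # along-dip distance (km), hypothetical
flat = [0, -5, -10, -15, -20]     # constant dip: zero curvature everywhere
curved = [0, -2, -8, -18, -32]    # steepening dip: nonzero curvature
```

A planar megathrust gives near-zero curvature everywhere; in the paper's argument, such flat interfaces load more homogeneously and can therefore host the largest ruptures.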

  1. Global assessment of human losses due to earthquakes

    USGS Publications Warehouse

    Silva, Vitor; Jaiswal, Kishor; Weatherill, Graeme; Crowley, Helen

    2014-01-01

    Current studies have demonstrated a sharp increase in human losses due to earthquakes. These alarming levels of casualties suggest the need for large-scale investment in seismic risk mitigation, which, in turn, requires an adequate understanding of the extent of the losses, and location of the most affected regions. Recent developments in global and uniform datasets such as instrumental and historical earthquake catalogues, population spatial distribution and country-based vulnerability functions, have opened an unprecedented possibility for a reliable assessment of earthquake consequences at a global scale. In this study, a uniform probabilistic seismic hazard assessment (PSHA) model was employed to derive a set of global seismic hazard curves, using the open-source software OpenQuake for seismic hazard and risk analysis. These results were combined with a collection of empirical fatality vulnerability functions and a population dataset to calculate average annual human losses at the country level. The results from this study highlight the regions/countries in the world with a higher seismic risk, and thus where risk reduction measures should be prioritized.

  2. Lisbon 1755, a multiple-rupture earthquake

    NASA Astrophysics Data System (ADS)

    Fonseca, J. F. B. D.

    2017-12-01

    The Lisbon earthquake of 1755 poses a challenge to seismic hazard assessment. Reports pointing to MMI 8 or above at distances of the order of 500 km led to magnitude estimates near M9 in classic studies. A refined analysis of the coeval sources lowered the estimates to 8.7 (Johnston, 1998) and 8.5 (Martinez-Solares, 2004). I posit that even these lower magnitude values reflect the combined effect of multiple ruptures. Attempts to identify a single source capable of explaining the damage reports with published ground motion models did not gather consensus and, compounding the challenge, the analysis of tsunami traveltimes has led to disparate source models, sometimes separated by a few hundred kilometers. From this viewpoint, the most credible source would combine a subset of the multiple active structures identifiable in SW Iberia. No individual moment magnitude needs to be above M8.1, thus rendering the search for candidate structures less challenging. The possible combinations of active structures should be ranked as a function of their explaining power, for macroseismic intensities and tsunami traveltimes taken together. I argue that the Lisbon 1755 earthquake is an example of a previously unrecognized, distinct class of intraplate earthquake, of which the Indian Ocean earthquake of 2012 is the first instrumentally recorded example, showing space and time correlation over scales of the order of a few hundred km and a few minutes. Other examples may exist in the historical record, such as the M8 1556 Shaanxi earthquake, with an unusually large damage footprint (MMI equal to or above 6 in 10 provinces; 830,000 fatalities). The ability to trigger seismicity globally, observed after the 2012 Indian Ocean earthquake, may be a characteristic of this type of event: occurrences in Massachusetts (M5.9 Cape Ann earthquake on 18/11/1755), Morocco (M6.5 Fez earthquake on 27/11/1755) and Germany (M6.1 Duren earthquake on 18/02/1756) had in all likelihood a causal link to the

  3. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results of the earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed by the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
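    A minimal sketch of such a magnitude-correlation test (the declustered sequences below are invented for illustration, not the study's catalog): correlate each mainshock magnitude with the mean magnitude of its aftershock sequence.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# hypothetical (mainshock magnitude, mean aftershock magnitude) pairs,
# one per ETAS-declustered sequence
sequences = [(6.1, 4.2), (5.4, 3.9), (7.0, 4.8), (5.9, 4.1), (6.6, 4.5)]
r = pearson([m for m, _ in sequences], [a for _, a in sequences])
```

A value of r significantly above zero would indicate magnitude correlations; the standard independence assumption predicts r near zero.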

  4. FORESHOCK AND AFTERSHOCK SEQUENCES OF SOME LARGE EARTHQUAKES IN THE REGION OF GREECE,

    DTIC Science & Technology

    or more foreshocks of magnitude larger than 3.8 occurred in forty per cent of the cases. The probability for an earthquake to be preceded by a large... foreshock not much smaller than the main shock is 10%. It is shown that some properties of the earth’s material in the aftershock region can be

  5. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk
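    The nowcast "earthquake potential score" described above can be sketched as the empirical percentile of the current small-earthquake count among the counts observed in past large-earthquake cycles of the larger region (a sketch in the spirit of Rundle et al.; the counts below are invented):

```python
def nowcast_score(past_counts, current_count):
    """Fraction of completed large-earthquake cycles whose small-event count
    was <= the count accumulated so far in the current, open cycle."""
    return sum(c <= current_count for c in past_counts) / len(past_counts)

# hypothetical counts of "small" (e.g. M>=3.3) events between successive
# "large" (e.g. M>=6.0) events in the surrounding large region
past = [120, 250, 90, 400, 310, 180, 220, 150]
score = nowcast_score(past, 260)   # 6 of 8 past cycles had fewer events -> 0.75
```

A score near 1 means the region has already accumulated more small events than almost every past cycle, i.e. it is far along the cycle of large recurring earthquakes.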

  6. A New Correlation of Large Earthquakes Along the Southern San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Scharer, K. M.; Weldon, R. J.; Biasi, G. P.

    2010-12-01

    There are now three sites on the southern San Andreas fault (SSAF) with records of 10 or more dated ground rupturing earthquakes (Frazier Mountain, Wrightwood and Pallett Creek) and at least seven other sites with 3-5 dated events. Numerous sites have related information including geomorphic offsets caused by 1 to a few earthquakes, a known amount of slip spanning a specific interval of time or number of earthquakes, or the number (but not necessarily the exact ages) of earthquakes in an interval of time. We use this information to construct a record of recent large earthquakes on the SSAF. Strongly overlapping C-14 age ranges, especially between closely spaced sites like Pallett Creek and Wrightwood on the Mojave segment and Thousand Palms, Indio, Coachella and Salt Creek on the southernmost 100 km of the fault, and overlap between the more distant Frazier Mountain and Bidart Fan sites on the northernmost part of the fault, suggest that the paleoseismic data are robust and can be explained by a relatively small number of events that span substantial portions of the fault. This is consistent with the extent of rupture of the two historic events (1857 was ~300 km long and 1812 was 100-200 km long); slip per event data that average 3-5 m per event at most sites; and the long historical hiatus since 1857. While some sites have smaller offsets for individual events, correlation between sites suggests that many small offsets occur near the ends of long ruptures. While the long event series on the Mojave segment are quasi-periodic, individual intervals range over about an order of magnitude, from a few decades up to ~200 years. This wide range of intervals and the apparently anti-slip-predictable behavior of ruptures (short intervals are not followed by small events) suggest weak clustering, or periods of time spanning multiple intervals when strain release is higher or lower than average. These properties defy the application of simple hazard analysis but need to be understood to

  7. Study of the Seismic Cycle of large Earthquakes in central Peru: Lima Region

    NASA Astrophysics Data System (ADS)

    Norabuena, E. O.; Quiroz, W.; Dixon, T. H.

    2009-12-01

    Since historical times, the Peruvian subduction zone has been the source of large and destructive earthquakes. The most damaging one occurred on 30 May 1970 offshore Peru's northern city of Chimbote, with a death toll of 70,000 people and several hundred million US dollars in property damage. More recently, three contiguous plate interface segments in southern Peru completed their seismic cycle, generating the 1996 Nazca (Mw 7.1), the 2001 Atico-Arequipa (Mw 8.4) and the 2007 Pisco (Mw 7.9) earthquakes. GPS measurements obtained between 1994-2001 by IGP-CIW and University of Miami-RSMAS in the central Andes of Peru and Bolivia were used to estimate their coseismic displacements and the late stage of interseismic strain accumulation. Here we focus on the central Peru-Lima region, which, with about 9,000,000 inhabitants, is located over a locked plate interface that has not broken in magnitude Mw ~8 earthquakes since May 1940, September 1966 and October 1974. We use a network of 11 GPS monuments to estimate the interseismic velocity field, infer spatial variations of interplate coupling, and examine its relation with the background seismicity of the region.

  8. Megathrust earthquakes in Central Chile: What is next after the Maule 2010 earthquake?

    NASA Astrophysics Data System (ADS)

    Madariaga, R.

    2013-05-01

    The 27 February 2010 Maule earthquake occurred in a well identified gap in the Chilean subduction zone. The event has now been studied in detail using far-field and near-field seismic data as well as geodetic data; we review the information gathered so far. The event broke a region that was much longer along strike than the gap left over from the 1835 Concepcion earthquake, sometimes called the Darwin earthquake because Darwin was in the area when it occurred and made many observations. Recent studies of contemporary documents by Udias et al. indicate that the area broken by the Maule earthquake in 2010 had previously been broken by a similar earthquake in 1751, but several events in the magnitude 8 range also occurred in the area: principally the 1835 event already mentioned and, more recently, events on 1 December 1928 to the north and on 21 May 1960 (1 1/2 days before the great Chilean earthquake of 1960). Currently the area of the 2010 earthquake and the region immediately to the north is undergoing a very large increase in seismicity, with numerous clusters of seismicity that move along the plate interface. Examination of the seismicity of Chile in the 18th and 19th centuries shows that the region immediately to the north of the 2010 earthquake broke in a very large megathrust event in July 1730. This is the largest known earthquake in central Chile. The region where this event occurred has broken on many occasions with M 8 range earthquakes, in 1822, 1880, 1906, 1971 and 1985. Is it preparing for a new very large megathrust event? The 1906 earthquake of Mw 8.3 filled the central part of the gap, but the area has broken again on several occasions, in 1971, 1973 and 1985. The main question is whether the 1906 earthquake relieved enough stress from the 1730 rupture zone. 
Geodetic data shows that most of the region that broke in 1730 is currently almost fully locked from the northern end of the Maule earthquake at 34.5°S to 30°S, near the southern end of the of the Mw 8.5 Atacama earthquake of 11

  9. Earthquake hazards on the cascadia subduction zone.

    PubMed

    Heaton, T H; Hartzell, S H

    1987-04-10

    Large subduction earthquakes on the Cascadia subduction zone pose a potential seismic hazard. Very young oceanic lithosphere (10 million years old) is being subducted beneath North America at a rate of approximately 4 centimeters per year. The Cascadia subduction zone shares many characteristics with subduction zones in southern Chile, southwestern Japan, and Colombia, where comparably young oceanic lithosphere is also subducting. Very large subduction earthquakes, ranging in energy magnitude (M(w)) between 8 and 9.5, have occurred along these other subduction zones. If the Cascadia subduction zone is also storing elastic energy, a sequence of several great earthquakes (M(w) 8) or a giant earthquake (M(w) 9) would be necessary to fill this 1200-kilometer gap. The nature of strong ground motions recorded during subduction earthquakes of M(w) less than 8.2 is discussed. Strong ground motions from even larger earthquakes (M(w) up to 9.5) are estimated by simple simulations. If large subduction earthquakes occur in the Pacific Northwest, relatively strong shaking can be expected over a large region. Such earthquakes may also be accompanied by large local tsunamis.
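    The trade-off stated above, several great M(w) 8 events versus one giant M(w) 9 event to fill the 1200-kilometer gap, follows from the logarithmic moment scale. A back-of-the-envelope check using the standard Hanks-Kanamori relation:

```python
def seismic_moment(mw):
    """Scalar seismic moment in N*m from moment magnitude (Hanks-Kanamori):
    M0 = 10**(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

# one magnitude unit corresponds to 10**1.5 ~ 31.6x the moment, so roughly
# thirty Mw 8 events release as much moment as a single Mw 9 event
ratio = seismic_moment(9.0) / seismic_moment(8.0)
```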

  10. Issues on the Japanese Earthquake Hazard Evaluation

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Fukushima, Y.; Sagiya, T.

    2013-12-01

    The 2011 Great East Japan Earthquake forced Japan to change its policy on earthquake disaster countermeasures, including earthquake hazard evaluation. Before March 11, Japanese earthquake hazard evaluation was based on the history of earthquakes that repeatedly occur and on the characteristic earthquake model. The source region of an earthquake was identified and its occurrence history was revealed; the conditional probability was then estimated using the renewal model. After the megathrust earthquake in 2011, however, the Japanese authorities changed the policy so that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. According to this policy, three important reports were issued during these two years. First, the Central Disaster Management Council issued a new estimate of damages by a hypothetical Mw9 earthquake along the Nankai trough during 2011 and 2012. The model predicts a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most areas of Southwest Japan as the maximum. Next, the Earthquake Research Council revised the long-term earthquake hazard evaluation of earthquakes along the Nankai trough in May 2013, which discarded the characteristic earthquake model and put much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, with current knowledge, it is hard to predict the occurrence of large earthquakes along the Nankai trough using present techniques, given the diversity of earthquake phenomena. These reports created sensations throughout the country, and local governments are struggling to prepare countermeasures. The reports noted large uncertainty in their evaluations near their conclusions, but are these messages transmitted properly to the public? Earthquake scientists, including the authors, are involved in

  11. Understanding continental megathrust earthquake potential through geological mountain building processes: an example in Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Zhang, Zhen; Wang, Liangshu; Leroy, Yves; shi, Yaolin

    2017-04-01

    How to reconcile continental megathrust earthquake characteristics with geological mountain building processes, for instance by mapping large-great earthquake sequences onto the mountain building history, or by partitioning seismic and aseismic slip, is a fundamental and unresolved question. Here, we address these issues by focusing on a typical continental collisional belt, the great Nepal Himalaya. We first show that refined Nepal Himalaya thrusting sequences, with an accurate definition of the large-earthquake cycle scale, provide new geodynamical hints on long-term earthquake potential, in association with either the seismic-aseismic slip partition (up to the interpretation of the binary interseismic coupling pattern on the Main Himalayan Thrust, MHT) or the large-great earthquake classification via seismic cycle patterns on the MHT. Subsequently, sequential limit analysis is adopted to retrieve the detailed thrusting sequences of the Nepal Himalaya mountain wedge. Our model results exhibit an apparent thrusting concentration phenomenon, with four thrusting clusters, termed thrusting 'families', that facilitate the development of the respective sub-structural regions. Within the hinterland thrusting family, the total aseismic shortening and the corresponding spatio-temporal release pattern are revealed by mapping projection. In the other three families, mapping projection delivers long-term large (M < 8) and great (M > 8) earthquake recurrence information, including total lifespans, frequencies and large-great earthquake alternation information, by identifying rupture distances along the MHT. This partition is general for continental-continental collisional orogenic belts with an identified interseismic coupling pattern, but it is not applicable in a continental-oceanic megathrust context.

  12. Prototype operational earthquake prediction system

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

  13. Width of surface rupture zone for thrust earthquakes: implications for earthquake fault zoning

    NASA Astrophysics Data System (ADS)

    Boncio, Paolo; Liberi, Francesca; Caldarella, Martina; Nurminen, Fiia-Charlotta

    2018-01-01

    The criteria for zoning the surface fault rupture hazard (SFRH) along thrust faults are defined by analysing the characteristics of the areas of coseismic surface faulting in thrust earthquakes. Normal and strike-slip faults have been studied in depth with respect to the SFRH by other authors, while thrust faults have not been studied with comparable attention. Surface faulting data were compiled for 11 well-studied historic thrust earthquakes that occurred globally (5.4 ≤ M ≤ 7.9). Several different types of coseismic fault scarps characterize the analysed earthquakes, depending on the topography, fault geometry and near-surface materials (simple and hanging wall collapse scarps, pressure ridges, fold scarps and thrust or pressure ridges with bending-moment or flexural-slip fault ruptures due to large-scale folding). For all the earthquakes, the distance of distributed ruptures from the principal fault rupture (r) and the width of the rupture zone (WRZ) were compiled directly from the literature or measured systematically in GIS-georeferenced published maps. Overall, surface ruptures can occur up to large distances from the main fault ( ˜ 2150 m on the footwall and ˜ 3100 m on the hanging wall). Most of the ruptures occur on the hanging wall, preferentially in the vicinity of the principal fault trace ( > ˜ 50 % at distances < ˜ 250 m). The widest WRZ are recorded where sympathetic slip (Sy) on distant faults occurs, and/or where bending-moment (B-M) or flexural-slip (F-S) fault ruptures, associated with large-scale folds (hundreds of metres to kilometres in wavelength), are present. A positive relation between the earthquake magnitude and the total WRZ is evident, while a clear correlation between the vertical displacement on the principal fault and the total WRZ is not found. The distribution of surface ruptures is fitted with probability density functions, in order to define a criterion to remove outliers (e.g. 90 % probability of the cumulative distribution
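    The outlier-removal criterion (e.g. keeping ruptures within 90% of the cumulative distribution) can be sketched directly on the empirical distribution; the distances below are hypothetical, not the compiled dataset:

```python
def quantile(data, q):
    """Linear-interpolation sample quantile, 0 <= q <= 1."""
    s = sorted(data)
    pos = q * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    return s[lo] if frac == 0 else s[lo] + frac * (s[lo + 1] - s[lo])

# hypothetical distances (m) of distributed ruptures from the principal fault
r = [10, 25, 40, 60, 90, 130, 200, 310, 480, 900]
cutoff = quantile(r, 0.90)   # ruptures beyond this distance treated as outliers
```

A fitted probability density function (as in the paper) plays the same role, but smooths the empirical tail before the 90% cutoff is read off.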

  14. Frog Swarms: Earthquake Precursors or False Alarms?

    PubMed Central

    Grant, Rachel A.; Conlan, Hilary

    2013-01-01

    Simple Summary: Media reports linking unusual animal behaviour with earthquakes can potentially create false alarms and unnecessary anxiety among people that live in earthquake risk zones. Recently large frog swarms in China and elsewhere have been reported as earthquake precursors in the media. By examining international media reports of frog swarms since 1850 in comparison to earthquake data, it was concluded that frog swarms are naturally occurring dispersal behaviour of juveniles and are not associated with earthquakes. However, the media in seismic risk areas may be more likely to report frog swarms, and more likely to disseminate reports on frog swarms after earthquakes have occurred, leading to an apparent link between frog swarms and earthquakes. Abstract: In short-term earthquake risk forecasting, the avoidance of false alarms is of utmost importance to preclude the possibility of unnecessary panic among populations in seismic hazard areas. Unusual animal behaviour prior to earthquakes has been reported for millennia but has rarely been scientifically documented. Recently large migrations or unusual behaviour of amphibians have been linked to large earthquakes, and media reports of large frog and toad migrations in areas of high seismic risk such as Greece and China have led to fears of a subsequent large earthquake. However, at certain times of year large migrations are part of the normal behavioural repertoire of amphibians. News reports of “frog swarms” from 1850 to the present day were examined for evidence that this behaviour is a precursor to large earthquakes. It was found that only two of 28 reported frog swarms preceded large earthquakes (Sichuan province, China in 2008 and 2010). All of the reported mass migrations of amphibians occurred in late spring, summer and autumn and appeared to relate to small juvenile anurans (frogs and toads). It was concluded that most reported “frog swarms” are actually normal behaviour, probably caused by

  15. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  16. Evaluating spatial and temporal relationships between an earthquake cluster near Entiat, central Washington, and the large December 1872 Entiat earthquake

    USGS Publications Warehouse

    Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian

    2017-01-01

    We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1-2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long-lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long-lived aftershock sequence can account for these observations.
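    That an aftershock sequence can persist for a century is consistent with the slow power-law decay of the modified Omori (Omori-Utsu) law, rate(t) = K / (c + t)^p; the parameter values below are illustrative, not fitted to the Entiat or Nobi data:

```python
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Modified Omori (Omori-Utsu) aftershock rate in events/day.
    K, c, p are hypothetical sequence parameters."""
    return K / (c + t_days) ** p

r_day1 = omori_rate(1.0)            # rate one day after the mainshock
r_100yr = omori_rate(100 * 365.25)  # still nonzero a century later
```

With p near 1 the rate decays so slowly that a prolific mainshock can leave a detectable aftershock rate above the regional background for many decades.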

  17. Heterogeneous slip distribution on faults responsible for large earthquakes: characterization and implications for tsunami modelling

    NASA Astrophysics Data System (ADS)

    Baglione, Enrico; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano

    2017-04-01

    The fact that ruptures on the generating faults of large earthquakes are strongly heterogeneous has been demonstrated over the last few decades by a large number of studies. The effort to retrieve reliable finite-fault models (FFMs) for large earthquakes that occurred worldwide, mainly by means of the inversion of different kinds of geophysical data, has been accompanied in recent years by the systematic collection and format homogenisation of the published/proposed FFMs for different earthquakes into specifically conceived databases, such as SRCMOD. The main aim of this study is to explore characteristic patterns in the slip distributions of large earthquakes, using a subset of the FFMs contained in SRCMOD covering events with moment magnitude equal to or larger than 6 that occurred worldwide over the last 25 years. We focus on those FFMs that exhibit a single and clear region of high slip (i.e. a single asperity), which are found to represent the majority of the events. For these FFMs, it is reasonable to fit the slip model with a 2D Gaussian distribution. Two different methods are used (least-squares and highest-similarity) and correspondingly two "best-fit" indexes are introduced. As a result, two distinct 2D Gaussian distributions for each FFM are obtained. To quantify how well these distributions mimic the original slip heterogeneity, we calculate and compare the vertical displacements at the Earth's surface in the near field induced by the original FFM slip, by an equivalent uniform-slip model, by a depth-dependent slip model, and by the two "best" Gaussian slip models. The coseismic vertical surface displacement is used as the metric for comparison. Results show that, on average, the best results are obtained with 2D Gaussian distributions based on similarity-index fitting. Finally, we restrict our attention to those single-asperity FFMs associated with earthquakes which generated tsunamis. 
We choose a few events for which tsunami
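    One way to seed such a Gaussian fit (a sketch, not the authors' procedure) is to take slip-weighted first and second moments of the gridded FFM, which directly give the centre and spread of an equivalent 2D Gaussian:

```python
def gaussian_moments(cells):
    """Slip-weighted centroid and along-axis variances of a gridded slip model.
    cells: iterable of (x, y, slip); returns (xc, yc, var_x, var_y)."""
    cells = list(cells)
    w = sum(s for _, _, s in cells)
    xc = sum(x * s for x, _, s in cells) / w
    yc = sum(y * s for _, y, s in cells) / w
    vx = sum(s * (x - xc) ** 2 for x, _, s in cells) / w
    vy = sum(s * (y - yc) ** 2 for _, y, s in cells) / w
    return xc, yc, vx, vy

# hypothetical single-asperity model: slip (m) peaking at the middle cell,
# cell coordinates (km) along strike (x) and down dip (y)
grid = [(0, 0, 1.0), (10, 0, 4.0), (20, 0, 1.0)]
xc, yc, vx, vy = gaussian_moments(grid)
```

These moments can then initialize a least-squares or similarity-based optimization of the full 2D Gaussian parameters.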

  18. Geospatial cross-correlation analysis of Oklahoma earthquakes and saltwater disposal volume 2011 - 2016

    NASA Astrophysics Data System (ADS)

    Pollyea, R.; Mohammadi, N.; Taylor, J. E.

    2017-12-01

    The annual earthquake rate in Oklahoma increased dramatically between 2009 and 2016, owing in large part to the rapid proliferation of salt water disposal wells associated with unconventional oil and gas recovery. This study presents a geospatial analysis of earthquake occurrence and SWD injection volume within a 68,420 km2 area in north-central Oklahoma between 2011 and 2016. The spatial co-variability of earthquake occurrence and SWD injection volume is analyzed for each year of the study by calculating the geographic centroid for both earthquake epicenter and volume-weighted well location. In addition, the spatial cross correlation between earthquake occurrence and SWD volume is quantified by calculating the cross semivariogram annually for a 9.6 km × 9.6 km (6 mi × 6 mi) grid over the study area. Results from these analyses suggest that the relationship between volume-weighted well centroids and earthquake centroids generally follow pressure diffusion space-time scaling, and the volume-weighted well centroid predicts the geographic earthquake centroid within a 1σ radius of gyration. The cross semivariogram calculations show that SWD injection volume and earthquake occurrence are spatially cross correlated between 2014 and 2016. These results also show that the strength of cross correlation decreased from 2015 to 2016; however, the cross correlation length scale remains unchanged at 125 km. This suggests that earthquake mitigation efforts have been moderately successful in decreasing the strength of cross correlation between SWD volume and earthquake occurrence near-field, but the far-field contribution of SWD injection volume to earthquake occurrence remains unaffected.
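    The sample cross semivariogram used here has the form gamma_xy(h) = (1 / 2N(h)) * sum of [x(s) - x(s+h)] * [y(s) - y(s+h)]; a 1-D sketch on co-located gridded series (invented values, not the Oklahoma data):

```python
def cross_semivariogram(x, y, lag):
    """Sample cross semivariogram of two co-located series at an integer lag:
    (1 / 2N) * sum over pairs of (x_i - x_{i+lag}) * (y_i - y_{i+lag})."""
    n = len(x) - lag
    return sum((x[i] - x[i + lag]) * (y[i] - y[i + lag]) for i in range(n)) / (2 * n)

# e.g. gridded SWD injection volume and earthquake counts along one transect
swd = [0.0, 1.0, 2.0, 3.0, 4.0]
eq = [0.0, 0.5, 1.0, 1.5, 2.0]
g1 = cross_semivariogram(swd, eq, 1)
```

With x = y this reduces to the ordinary semivariogram; repeating the calculation over a range of lags gives the cross-correlation length scale reported in the study.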

  19. Association between earthquake events and cholera outbreaks: a cross-country 15-year longitudinal analysis.

    PubMed

    Sumner, Steven A; Turner, Elizabeth L; Thielman, Nathan M

    2013-12-01

    Large earthquakes can cause population displacement, critical sanitation infrastructure damage, and increased threats to water resources, potentially predisposing populations to waterborne disease epidemics such as cholera. Problem: The risk of cholera outbreaks after earthquake disasters remains uncertain. A cross-country analysis of World Health Organization (WHO) cholera data that would contribute to this discussion has yet to be published. A cross-country longitudinal analysis was conducted among 63 low- and middle-income countries from 1995-2009. The association between earthquake disasters of various effect sizes and a relative spike in cholera rates for a given country was assessed utilizing fixed-effects logistic regression and adjusting for gross domestic product per capita, water and sanitation level, flooding events, percent urbanization, and under-five child mortality. Also, the association between large earthquakes and cholera rate increases of various degrees was assessed. Forty-eight of the 63 countries had at least one year with reported cholera infections during the 15-year study period. Thirty-six of these 48 countries had at least one earthquake disaster. In adjusted analyses, country-years with ≥10,000 persons affected by an earthquake had 2.26 times increased odds (95% CI, 0.89-5.72; P = .08) of having a greater than average cholera rate that year compared to country-years having <10,000 individuals affected by an earthquake. The association between large earthquake disasters and cholera infections appeared to weaken as higher levels of cholera rate increases were tested. A trend of increased risk of greater than average cholera rates when more people were affected by an earthquake in a country-year was noted. However, these findings did not reach statistical significance at traditional levels and may be due to chance. Frequent large-scale cholera outbreaks after earthquake disasters appeared to be relatively uncommon.

  20. Imaging the distribution of transient viscosity after the 2016 Mw 7.1 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Moore, James D. P.; Yu, Hang; Tang, Chi-Hsien; Wang, Teng; Barbot, Sylvain; Peng, Dongju; Masuti, Sagar; Dauwels, Justin; Hsu, Ya-Ju; Lambert, Valère; Nanjundiah, Priyamvada; Wei, Shengji; Lindsey, Eric; Feng, Lujia; Shibazaki, Bunichiro

    2017-04-01

    The deformation of mantle and crustal rocks in response to stress plays a crucial role in the distribution of seismic and volcanic hazards, controlling tectonic processes ranging from continental drift to earthquake triggering. However, the spatial variation of these dynamic properties is poorly understood as they are difficult to measure. We exploited the large stress perturbation incurred by the 2016 earthquake sequence in Kumamoto, Japan, to directly image localized and distributed deformation. The earthquakes illuminated distinct regions of low effective viscosity in the lower crust, notably beneath the Mount Aso and Mount Kuju volcanoes, surrounded by larger-scale variations of viscosity across the back-arc. This study demonstrates a new potential for geodesy to directly probe rock rheology in situ across many spatial and temporal scales.

  1. Flexible kinematic earthquake rupture inversion of tele-seismic waveforms: Application to the 2013 Balochistan, Pakistan earthquake

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.

    2017-12-01

    Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, the kinematic rupture models published for the same earthquake often differ from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduced the uncertainty of Green's function into tele-seismic waveform inversion, and showed that a stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the remaining problems in such inversions arises from the modeling error originating in uncertainty of the fault-model setting. Green's functions near the nodal plane of the focal mechanism are known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by the modeling error originating in the uncertainty of the fault model. We propose a new method that accounts for complexity in the fault geometry by additionally solving for the focal mechanism at each space knot. Since the solution of a finite-source inversion becomes unstable as the flexibility of the model increases, we estimate a stable spatiotemporal distribution of focal mechanisms within the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. On the other hand, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane. These results show that the modeling error caused by simplifying the

  2. Managing large-scale workflow execution from resource provisioning to provenance tracking: The CyberShake example

    USGS Publications Warehouse

    Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.

    2006-01-01

    This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis, which included over 250,000 jobs, using resources available through SCEC and the TeraGrid. © 2006 IEEE.

  3. Scientific, Engineering, and Financial Factors of the 1989 Human-Triggered Newcastle Earthquake in Australia

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2006-12-01

    This presentation emphasizes the dualism of natural resources exploitation and economic growth versus geomechanical pollution and risks of human-triggered earthquakes. Large-scale geoengineering activities, e.g., mining, reservoir impoundment, oil/gas production, water exploitation, or fluid injection, alter pre-existing lithostatic stress states in the earth's crust and are anticipated to trigger earthquakes. Such processes of in-situ stress alteration are termed geomechanical pollution. Moreover, since the 19th century more than 200 earthquakes with a seismic moment magnitude of 4.5 or above have been documented worldwide, and the rate of such human-triggered earthquakes increased rapidly. An example of a human-triggered earthquake is the 1989 Newcastle event in Australia, which was a result of almost 200 years of coal mining and water over-exploitation. This earthquake, an Mw=5.6 event, caused more than 3.5 billion U.S. dollars in damage (1989 value) and was responsible for Australia's first and, to date, only earthquake fatalities. It is therefore thought that the Newcastle region tends to develop unsustainably when economic growth due to mining is weighed against the financial losses of triggered earthquakes. A hazard assessment, based on a geomechanical crust model, shows that only four deep coal mines were responsible for triggering this severe earthquake. A small-scale economic risk assessment finds that the financial loss due to earthquake damage has reduced the mining profits that have been re-invested in the Newcastle region for over two centuries, beginning in 1801. Furthermore, large-scale economic risk assessment reveals that the financial loss is equivalent to 26% of the Australian Gross Domestic Product (GDP) growth in 1988/89. These costs account for 13% of the total costs of all natural disasters (e.g., flooding, drought, wild fires) and 94% of the costs of all

  4. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1 %, with a kappa coefficient of 0.813. The new method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and has good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions, and the optimal observation scale of damaged buildings can then be sought through accuracy evaluation. The optimal observation scale of damaged buildings is estimated to be between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  5. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  6. The large earthquake on 29 June 1170 (Syria, Lebanon, and central southern Turkey)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Bernardini, Filippo; Comastri, Alberto; Boschi, Enzo

    2004-07-01

    On 29 June 1170 a large earthquake hit a vast area in the Near Eastern Mediterranean, comprising the present-day territories of western Syria, central southern Turkey, and Lebanon. Although this was one of the strongest seismic events ever to hit Syria, no in-depth or specific studies have been available so far. Furthermore, the seismological literature (from 1979 until 2000) offered only a partial summary of it, based mainly on Arabic sources; the resulting picture of the area of major effects was very incomplete, making the derived seismic parameters unreliable. This earthquake is in fact one of the most highly documented events of the medieval Mediterranean, owing both to the particular historical period in which it occurred (between the second and the third Crusades) and to the presence of the Latin states in the territory of Syria. Some 50 historical sources, written in eight different languages, have been analyzed: Latin (the major contributions), Arabic, Syriac, Armenian, Greek, Hebrew, Vulgar French, and Italian. A critical analysis of this extraordinary body of historical information has allowed us to obtain data on the effects of the earthquake at 29 locations, 16 of which were unknown in the previous scientific literature. As regards the seismic dynamics, this study has addressed the question of whether there was just one strong earthquake or more than one. In the former case, the parameters (Me 7.7 ± 0.22, epicenter, and fault length 126.2 km) were calculated. Some hypotheses are outlined concerning the seismogenic zones involved.

  7. Continuing Megathrust Earthquake Potential in northern Chile after the 2014 Iquique Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Herman, M. W.; Barnhart, W. D.; Furlong, K. P.; Riquelme, S.; Benz, H.; Bergman, E.; Barrientos, S. E.; Earle, P. S.; Samsonov, S. V.

    2014-12-01

    The seismic gap theory, which identifies regions of elevated hazard based on a lack of recent seismicity in comparison to other portions of a fault, has successfully explained past earthquakes and is useful for qualitatively describing where future large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which until recently had not ruptured in a megathrust earthquake since an M~8.8 event in 1877. On April 1, 2014, an M 8.2 earthquake occurred within this northern Chile seismic gap, offshore of the city of Iquique; the size and spatial extent of the rupture indicate it was not the earthquake that had been anticipated. Here, we present a rapid assessment of the seismotectonics of the March-April 2014 seismic sequence offshore northern Chile, including analyses of earthquake (fore- and aftershock) relocations, moment tensors, finite fault models, moment deficit calculations, and cumulative Coulomb stress transfer calculations over the duration of the sequence. This ensemble of information allows us to place the current sequence within the context of historic seismicity in the region, and to assess areas of remaining and/or elevated hazard. Our results indicate that while accumulated strain has been released for a portion of the northern Chile seismic gap, significant sections have not ruptured in almost 150 years. These observations suggest that large-to-great sized megathrust earthquakes will occur north and south of the 2014 Iquique sequence sooner than might be expected had the 2014 events ruptured the entire seismic gap.

  8. Hazard Assessment and Early Warning of Tsunamis: Lessons from the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2012-12-01

    Tsunami hazard assessments and long-term earthquake forecasts had not considered such triggering or the simultaneous occurrence of different types of earthquakes. The large tsunami at the Fukushima nuclear power station was due to the combination of the deep and shallow slip. Disaster prevention for low-frequency but large-scale hazards must be considered. The Japanese government has established a general policy for two levels of tsunami countermeasures: L1 and L2. The L2 tsunamis are the largest possible tsunamis, with a low frequency of occurrence, but they cause devastating disasters once they occur. For such events, saving people's lives is the first priority, and soft measures such as tsunami hazard maps, evacuation facilities, or disaster education will be prepared. The L1 tsunamis are expected to occur more frequently, typically once in a few decades, and for these, hard countermeasures such as breakwaters must be prepared to protect the lives and properties of residents as well as economic and industrial activities.

  9. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
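The classroom comparison of interevent-time histograms against a Poisson model can be sketched numerically. The snippet below simulates "blockquake"-style event times rather than using the AS1 record, and the event rate is an arbitrary choice; for a Poisson process, interevent times follow an exponential distribution.

```python
import numpy as np

# Simulate event times of a Poisson process: interevent times are exponential
rng = np.random.default_rng(42)
rate = 2.0                                     # events per unit "crank time"
times = np.cumsum(rng.exponential(1 / rate, size=5000))
dt = np.diff(times)                            # observed interevent times

# Maximum-likelihood rate estimate is the reciprocal of the mean interevent time
rate_hat = 1.0 / dt.mean()

# Compare the empirical survival function P(T > t) with the model exp(-rate * t)
t = np.linspace(0, 3, 50)
empirical = np.array([(dt > ti).mean() for ti in t])
model = np.exp(-rate_hat * t)
max_err = np.max(np.abs(empirical - model))    # small if the data are Poissonian
```

Running the same comparison on real catalog interevent times is how one shows students that global earthquake occurrence is well described by the exponential distribution.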

  10. Far-field pressurization likely caused one of the largest injection induced earthquakes by reactivating a large pre-existing basement fault structure

    USGS Publications Warehouse

    Yeck, William; Weingarten, Matthew; Benz, Harley M.; McNamara, Daniel E.; Bergman, E.; Herrmann, R.B; Rubinstein, Justin L.; Earle, Paul

    2016-01-01

    The Mw 5.1 Fairview, Oklahoma, earthquake on 13 February 2016 and its associated seismicity produced the largest moment release in the central and eastern United States since the 2011 Mw 5.7 Prague, Oklahoma, earthquake sequence and is one of the largest earthquakes potentially linked to wastewater injection. This energetic sequence has produced five earthquakes with Mw 4.4 or larger. Almost all of these earthquakes occur in Precambrian basement on a partially unmapped 14 km long fault. Regional injection into the Arbuckle Group increased approximately sevenfold in the 36 months prior to the start of the sequence (January 2015). We suggest far-field pressurization from clustered, high-rate wells greater than 12 km from this sequence induced these earthquakes. As compared to the Fairview sequence, seismicity is diffuse near high-rate wells, where pressure changes are expected to be largest. This points to the critical role that preexisting faults play in the occurrence of large induced earthquakes.

  11. Mechanical and Statistical Evidence of Human-Caused Earthquakes - A Global Data Analysis

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2012-12-01

    The causality of large-scale geoengineering activities and the occurrence of earthquakes with magnitudes of up to M=8 is discussed, and mechanical and statistical evidence is provided. The earthquakes were caused by artificial water reservoir impoundments, underground and open-pit mining, coastal management, hydrocarbon production, and fluid injections/extractions. The global earthquake catalog presented here was recently published in the Journal of Seismology and is available to the public at www.cdklose.com. The data show evidence that geomechanical relationships exist with statistical significance between a) the seismic moment magnitudes of observed earthquakes, b) anthropogenic mass shifts on the Earth's crust, and c) the lateral distances of the earthquake hypocenters to the locations of the mass shifts. Research findings depend on uncertainties, in particular in the source parameter estimations of seismic events before instrumental recording. First analyses, however, indicate that small- to medium-size earthquakes (<M6) tend to be induced, while large-size events (>M6) tend to be triggered. The rupture propagation of triggered events might be dominated by pre-existing tectonic stress conditions. Besides event-specific evidence, large earthquakes such as China's 2008 M7.9 Wenchuan earthquake fall into a global pattern and cannot be considered outliers or simply seen as an act of god. Observations also indicate that every second seismic event tends to occur after a decade, while pore pressure diffusion seems to play a role only when injecting fluids deep underground. The chance of an earthquake nucleating after two or 20 years near an area with a significant mass shift is 25% or 75%, respectively. Moreover, the causative effects of seismic activities highly depend on the tectonic stress regime in the Earth's crust in which the geoengineering takes place.

  12. Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.

    2013-12-01

    Earthquake forecasts have been computed by a variety of countries world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (JBR et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ('seismicity-based models') to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. The method computes large-earthquake probabilities from the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space so that global forecasts can be computed in real time presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving these datasets over the web on platforms such as www.quakesim.org, www.e-decider.org, and www.openhazards.com.
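The counting idea behind seismicity-based forecasting can be illustrated with a toy "natural time" calculation. This is not the published NTW code: the magnitude thresholds, Gutenberg-Richter b-value, and Weibull shape parameter below are all assumed values chosen for illustration.

```python
import numpy as np

def large_quake_probability(n_small, m_small=3.0, m_large=6.0, b=1.0, beta=1.5):
    """Toy natural-time forecast: probability of a large event conditioned on
    n_small, the count of small earthquakes since the last large one."""
    # Gutenberg-Richter: roughly 10**(b * dM) events above m_small occur
    # per event above m_large, so n_small / n_expected acts as a clock.
    n_expected = 10 ** (b * (m_large - m_small))
    natural_time = n_small / n_expected
    # Weibull hazard in natural time (shape beta is an assumed parameter)
    return 1.0 - np.exp(-natural_time ** beta)

# The conditional probability rises as small events accumulate in a region
p_early = large_quake_probability(100)     # few small events since last M >= 6
p_late = large_quake_probability(2000)     # many small events since last M >= 6
```

With b = 1 and a 3-unit magnitude gap, about 1,000 small events are expected per large one, so the forecast probability climbs as the observed count approaches and exceeds that number.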

  13. Evidence for large earthquakes on the San Andreas fault at the Wrightwood, California paleoseismic site: A.D. 500 to present

    USGS Publications Warehouse

    Fumal, T.E.; Weldon, R.J.; Biasi, G.P.; Dawson, T.E.; Seitz, G.G.; Frost, W.T.; Schwartz, D.P.

    2002-01-01

    We present structural and stratigraphic evidence from a paleoseismic site near Wrightwood, California, for 14 large earthquakes that occurred on the southern San Andreas fault during the past 1500 years. In a network of 38 trenches and creek-bank exposures, we have exposed a composite section of interbedded debris flow deposits and thin peat layers more than 24 m thick; fluvial deposits occur along the northern margin of the site. The site is a 150-m-wide zone of deformation bounded on the surface by a main fault zone along the northwest margin and a secondary fault zone to the southwest. Evidence for most of the 14 earthquakes occurs along structures within both zones. We identify paleoearthquake horizons using infilled fissures, scarps, multiple rupture terminations, and widespread folding and tilting of beds. Ages of stratigraphic units and earthquakes are constrained by historic data and 72 14C ages, mostly from samples of peat and some from plant fibers, wood, pine cones, and charcoal. Comparison of the long, well-resolved paleoseismic record at Wrightwood with records at other sites along the fault indicates that rupture lengths of past earthquakes were at least 100 km. Paleoseismic records at sites in the Coachella Valley suggest that each of the past five large earthquakes recorded there ruptured the fault at least as far northwest as Wrightwood. Comparisons with event chronologies at Pallett Creek and sites to the northwest suggest that approximately the same part of the fault that ruptured in 1857 may also have failed in the early to mid-sixteenth century and several other times during the past 1200 years. Records at Pallett Creek and Pitman Canyon suggest that, in addition to the 14 earthquakes we document, one and possibly two other large earthquakes ruptured the part of the fault including Wrightwood since about A.D. 500. These observations and elapsed times that are significantly longer than mean recurrence intervals at Wrightwood and sites to

  14. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    NASA Astrophysics Data System (ADS)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  15. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, historical records reveal that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate the potential seismic hazard, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM; 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of seismograms recorded on broadband seismometers shows less scatter than the peak velocity, while the scatter of the peak displacement and that of the peak velocity of accelerograms are similar. The peak displacement from seismograms differs from that from accelerograms, which means that separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave is estimated after applying low-pass filters at 3 Hz and 10 Hz; the 10 Hz low-pass filter yields better estimates than the 3 Hz filter. It is found that most of the peak amplitudes and maximum predominant periods can be estimated within 1 sec after triggering.

  16. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    NASA Astrophysics Data System (ADS)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m3/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time over which the data ranges for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and its inverse, the return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number distributions per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = Cx^(-β), where p(x) is the cumulative number of events with size equal to and greater than size x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years
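Fitting the exceedance relation p(x) = Cx^(-β) reduces to a line fit in log-log space, from which per-year event rates and return times follow directly. The loss values and record length below are invented for illustration and are not the study's data.

```python
import numpy as np

# Toy catalogue of event losses (billions USD) over an assumed 50-year record
losses = np.sort(np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.5, 2.0, 4.0, 9.0, 20.0]))
years = 50.0

# Cumulative number of events with size >= x, normalized to events per year
n_exceed = np.arange(len(losses), 0, -1) / years

# p(x) = C * x**(-beta)  becomes  log10 p = log10 C - beta * log10 x
slope, logC = np.polyfit(np.log10(losses), np.log10(n_exceed), 1)
beta = -slope                                   # scaling exponent

def events_per_year(x):
    # Forecast rate of events with size >= x under the fitted power law
    return 10 ** logC * x ** (-beta)

return_time_10B = 1.0 / events_per_year(10.0)   # years between >= $10B losses
```

The return time is simply the reciprocal of the fitted exceedance rate, which is how forecasts for 1- to 1000-year windows are derived from β and C.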

  17. What caused a large number of fatalities in the Tohoku earthquake?

    NASA Astrophysics Data System (ADS)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw 9.0 earthquake caused about 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these construction and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area, about 560 km^2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 min or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect. Expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings. The first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  18. Holocene behavior of the Brigham City segment: implications for forecasting the next large-magnitude earthquake on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    Personius, Stephen F.; DuRoss, Christopher B.; Crone, Anthony J.

    2012-01-01

    The Brigham City segment (BCS), the northernmost Holocene-active segment of the Wasatch fault zone (WFZ), is considered a likely location for the next big earthquake in northern Utah. We refine the timing of the last four surface-rupturing (~Mw 7) earthquakes at several sites near Brigham City (BE1, 2430±250; BE2, 3490±180; BE3, 4510±530; and BE4, 5610±650 cal yr B.P.) and calculate mean recurrence intervals (1060–1500 yr) that are greatly exceeded by the elapsed time (~2500 yr) since the most recent surface-rupturing earthquake (MRE). An additional rupture observed at the Pearsons Canyon site (PC1, 1240±50 cal yr B.P.) near the southern segment boundary is probably spillover rupture from a large earthquake on the adjacent Weber segment. Our seismic moment calculations show that the PC1 rupture reduced accumulated moment on the BCS by about 22%, a value that may have been enough to postpone the next large earthquake. However, our calculations suggest that the segment currently has accumulated more than twice the moment accumulated in the three previous earthquake cycles, so we suspect that additional interactions with the adjacent Weber segment contributed to the long elapsed time since the MRE on the BCS. Our moment calculations indicate that the next earthquake is not only overdue, but could be larger than the previous four earthquakes. Displacement data show higher rates of latest Quaternary slip (~1.3 mm/yr) along the southern two-thirds of the segment. The northern third likely has experienced fewer or smaller ruptures, which suggests to us that most earthquakes initiate at the southern segment boundary.
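    The moment-budget reasoning in this abstract can be sketched numerically, assuming the standard relations M0 = rigidity × rupture area × slip and the Hanks-Kanamori moment magnitude. The fault dimensions and rigidity below are illustrative assumptions, not values taken from this paper:

```python
import math

MU = 3.0e10  # assumed crustal rigidity, Pa

def seismic_moment(area_m2, slip_m, mu=MU):
    """Seismic moment M0 = mu * A * D, in N*m."""
    return mu * area_m2 * slip_m

def moment_magnitude(m0):
    """Hanks-Kanamori: Mw = (2/3) * log10(M0) - 6.07, with M0 in N*m."""
    return (2.0 / 3.0) * math.log10(m0) - 6.07

# hypothetical segment: 36 km x 15 km fault, loaded at 1.3 mm/yr for 2500 yr
m0 = seismic_moment(36e3 * 15e3, 1.3e-3 * 2500.0)
mw = moment_magnitude(m0)  # comes out near Mw 7.1 for these illustrative numbers
```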

  19. Impact of a Large San Andreas Fault Earthquake on Tall Buildings in Southern California

    NASA Astrophysics Data System (ADS)

    Krishnan, S.; Ji, C.; Komatitsch, D.; Tromp, J.

    2004-12-01

    In 1857, an earthquake of magnitude 7.9 occurred on the San Andreas fault, starting at Parkfield and rupturing in a southeasterly direction for more than 300 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. The strong shaking in the basins due to this earthquake would have had a significant long-period content (2-8 s). If such motions were to happen today, they could have a serious impact on tall buildings in Southern California. In order to study the effects of large San Andreas fault earthquakes on tall buildings in Southern California, we use the finite source of the magnitude 7.9 2001 Denali fault earthquake in Alaska and map it onto the San Andreas fault, with the rupture originating at Parkfield and proceeding southward over a distance of 290 km. Using the SPECFEM3D spectral element seismic wave propagation code, we simulate a Denali-like earthquake on the San Andreas fault and compute ground motions at sites located on a grid with a 2.5-5.0 km spacing in the greater Southern California region. We subsequently analyze 3D structural models of an existing tall steel building designed in 1984 as well as one designed according to the current building code (Uniform Building Code, 1997) subjected to the computed ground motion. We use a sophisticated nonlinear building analysis program, FRAME3D, that has the ability to simulate damage in buildings due to three-component ground motion. We summarize the performance of these structural models on contour maps of carefully selected structural performance indices. This study could benefit the city in laying out emergency response strategies in the event of an earthquake on the San Andreas fault, in undertaking appropriate retrofit measures for tall buildings, and in formulating zoning regulations for new construction. In addition, the study would provide risk data associated with existing and new construction to insurance companies, real estate developers, and

  20. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  1. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666

  2. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  3. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support for the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
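    The recurrence statistics quoted here (mean RI, its standard deviation, and the coefficient of variation) can be reproduced from any dated event chronology. A sketch with made-up ages, not the Tyson Lagoon dates:

```python
import statistics

def recurrence_stats(ages_bp):
    """Mean recurrence interval, its sample standard deviation, and the
    coefficient of variation, from event ages sorted oldest to youngest
    (years before present)."""
    intervals = [older - younger for older, younger in zip(ages_bp, ages_bp[1:])]
    mean_ri = statistics.mean(intervals)
    sd = statistics.stdev(intervals)
    return mean_ri, sd, sd / mean_ri
```

A coefficient of variation well below 1 (as the 0.40 reported above) indicates quasi-periodic recurrence, which is what motivates a time-dependent renewal model over a Poisson one.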

  4. Gradual decay of elevated landslide rates after a large earthquake in the Finisterre Mountains, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Hovius, N.; Marc, O.

    2013-12-01

    Large earthquakes can cause widespread mass wasting, and landslide rates can stay high after a seismic event. The rate of decay of seismically enhanced mass wasting determines the total erosional effect of an earthquake. It is also an important term in the post-seismic redevelopment of epicentral areas. Using a time series of Landsat images spanning 1990-2010, we have determined the evolution of landslide rates in the western Finisterre Mountains, Papua New Guinea. There, two earthquakes with Mw 6.7 and 6.9 occurred at depths of about 20 km on the range-bounding Ramu-Markam fault in 1993. These earthquakes triggered landslides with a total volume of about 0.15 km3. Landslide rates were up to four orders of magnitude higher after the earthquakes than in preceding years, decaying to background values over a period of 2-3 years. Due to this short decay time, seismically induced landslides added only 5% to the volume of co-seismic landslides. This contrasts with another well-documented example, the 1999 Chi-Chi earthquake in Taiwan, where post-seismic landsliding may have increased the total eroded volume by a factor of 3-5. In the Finisterre case, landslide rates may have been slightly less than normal for up to a decade after the decay period, but this effect is partially obscured by the impact of a smaller earthquake in 1997. Regardless, the rate of decay of landslide incidence was unrelated to both the seismic moment release in aftershocks and local precipitation. A control on this decay rate has not yet been identified.

  5. Tectonic context of moderate to large historical earthquakes in the Lesser Antilles and mechanical coupling with volcanoes

    NASA Astrophysics Data System (ADS)

    Feuillet, Nathalie; Beauducel, François; Tapponnier, Paul

    2011-10-01

    The oblique convergence between the North American and Caribbean plates is accommodated in a bookshelf-faulting manner by active, oblique-normal faults in the northern part of the Lesser Antilles arc. In the last 20 years, two M > 6 earthquakes occurred along a large, arc-parallel, en echelon fault system: on 16 March 1985 in Redonda and on 21 November 2004 in Les Saintes. A better understanding of active faulting in this region permits us to review the location and magnitude of historical earthquakes by using a regional seismic attenuation law. Several other moderate earthquakes may have occurred along the en echelon fault system, implying a strong seismic hazard along the arc. These faults control the effusion of volcanic products, and some earthquakes seem to be correlated in time with volcanic unrest. Shallow earthquakes on intraplate faults induced normal stress and pressure changes around neighboring volcanoes and may have triggered volcanic activity. The Redonda earthquake could have initiated the 1995 eruption of Montserrat's Soufrière Hills by compressing its plumbing system. Conversely, pressure changes under the volcano increased Coulomb stress changes and brought some faults closer to failure, promoting seismicity. We also discuss the magnitude of the largest megathrust interplate earthquakes of 11 January 1839 and 8 February 1843. We calculate that they increased the stress on some overriding intraplate faults and the extensional strain beneath several volcanoes. This may explain an increase of volcanic and seismic activity in the second half of the 19th century, culminating with the devastating 1902 Mount Pelée eruption.
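    The Coulomb stress argument used here reduces to one line: a receiver fault is brought closer to failure when the change in Coulomb failure stress, ΔCFS = Δτ + μ′Δσn (shear stress change plus effective friction times the tension-positive normal stress change), is positive. A minimal sketch with an assumed effective friction coefficient:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Delta CFS in Pa; d_normal is tension-positive (unclamping > 0).
    A positive result brings the receiver fault closer to failure."""
    return d_shear + mu_eff * d_normal
```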

  6. Particle precipitation prior to large earthquakes of both the Sumatra and Philippine Regions: A statistical analysis

    NASA Astrophysics Data System (ADS)

    Fidani, Cristiano

    2015-12-01

    A study of statistical correlation between low L-shell electrons precipitating into the atmosphere and strong earthquakes is presented. More than 11 years of Medium Energy Protons Electrons Detector data from the NOAA-15 Sun-synchronous polar orbiting satellite were analysed. Electron fluxes were analysed using a set of adiabatic coordinates. From this, significant electron counting rate fluctuations were evidenced during geomagnetically quiet periods. Electron counting rates were compared to earthquakes by defining a seismic event L-shell, obtained by radially projecting the epicentre geographical positions to a given altitude towards the zenith. Counting rates were grouped in every satellite semi-orbit together with strong seismic events, chosen so that the L-shell coordinates were close to each other. NOAA-15 electron data from July 1998 to December 2011 were compared for nearly 1800 earthquakes with magnitudes larger than or equal to 6, occurring worldwide. When considering 30-100 keV precipitating electrons detected by the vertical NOAA-15 telescope and earthquake epicentre projections at altitudes greater than 1300 km, a significant correlation appeared, with electron precipitation detected 2-3 h prior to large events in the Sumatra and Philippine Regions. This was in physical agreement with the different correlation times obtained from past studies that considered particles with greater energies. The discussion of satellite orbits and detectors below is useful for future satellite missions for earthquake mitigation.

  7. Acoustic Emission Patterns and the Transition to Ductility in Sub-Micron Scale Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Ghaffari, H.; Xia, K.; Young, R.

    2013-12-01

    We report observation of a transition from the brittle to ductile regime in precursor events from different rock materials (Granite, Sandstone, Basalt, and Gypsum) and Polymers (PMMA, PTFE and CR-39). Acoustic emission patterns associated with sub-micron scale laboratory earthquakes are mapped into network parameter spaces (functional damage networks). The sub-classes hold nearly constant timescales, indicating dependency of the sub-phases on the mechanism governing the previous evolutionary phase, i.e., deformation and failure of asperities. Based on our findings, we propose that the signature of the non-linear elastic zone around a crack tip is mapped into the details of the evolutionary phases, supporting the formation of a strongly weak zone in the vicinity of crack tips. Moreover, we recognize sub-micron to micron ruptures with signatures of 'stiffening' in the deformation phase of acoustic-waveforms. We propose that the latter rupture fronts carry critical rupture extensions, including possible dislocations faster than the shear wave speed. Using 'template super-shear waveforms' and their network characteristics, we show that the acoustic emission signals are possible super-shear or intersonic events. Ref. [1] Ghaffari, H. O., and R. P. Young. "Acoustic-Friction Networks and the Evolution of Precursor Rupture Fronts in Laboratory Earthquakes." Nature Scientific reports 3 (2013). [2] Xia, Kaiwen, Ares J. Rosakis, and Hiroo Kanamori. "Laboratory earthquakes: The sub-Rayleigh-to-supershear rupture transition." Science 303.5665 (2004): 1859-1861. [3] Mello, M., et al. "Identifying the unique ground motion signatures of supershear earthquakes: Theory and experiments." Tectonophysics 493.3 (2010): 297-326. [4] Gumbsch, Peter, and Huajian Gao. "Dislocations faster than the speed of sound." Science 283.5404 (1999): 965-968. [5] Livne, Ariel, et al. "The near-tip fields of fast cracks." Science 327.5971 (2010): 1359-1363. [6] Rycroft, Chris H., and Eran Bouchbinder

  8. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  9. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
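    A standard way to quantify "significant correlation" between small earthquakes and a periodic stress is the Schuster test: assign each event the tidal phase at its origin time and test the phase distribution for nonuniformity. A sketch of the test itself (the phase values would come from a tidal stress model, which is not included here):

```python
import math

def schuster_p(phases):
    """Schuster test p-value: probability that the observed clustering of
    event phases (radians) would arise from a uniform phase distribution.
    A small p suggests tidal modulation of the event times."""
    n = len(phases)
    c = sum(math.cos(p) for p in phases)
    s = sum(math.sin(p) for p in phases)
    return math.exp(-(c * c + s * s) / n)
```

Strongly clustered phases give a vanishing p-value, while phases spread evenly over the tidal cycle give p near 1.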

  10. Generalized statistical mechanics approaches to earthquakes and tectonics.

    PubMed

    Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios

    2016-12-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.
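    In NESM, the exponential of Boltzmann-Gibbs statistics is replaced by the Tsallis q-exponential, which recovers the ordinary exponential as q → 1 and gives the power-law tails observed in earthquake and fault populations for q > 1. A minimal sketch of the function itself:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]^(1/(1-q)) where the bracket is
    positive, 0 otherwise; reduces to exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0
```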

  11. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  12. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a
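    The stochastic source idea, random-phase spectral synthesis with a power-law spectral falloff, can be sketched in one dimension. This is a generic illustration of the technique, not the authors' model (which is two-dimensional and tied to observed seismic displacement spectra):

```python
import cmath
import math
import random

def self_affine_slip(n=256, falloff=2.0, seed=0):
    """1-D slip profile whose spectral amplitudes decay as k**(-falloff),
    with random phases; a naive O(n^2) inverse DFT keeps it stdlib-only."""
    rng = random.Random(seed)
    coeffs = [0j] * n
    for k in range(1, n // 2):
        amp = k ** (-falloff)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        coeffs[k] = amp * cmath.exp(1j * phase)
        coeffs[n - k] = coeffs[k].conjugate()  # Hermitian symmetry -> real profile
    slip = []
    for x in range(n):
        s = sum(coeffs[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n))
        slip.append(s.real)
    lo = min(slip)
    return [v - lo for v in slip]  # shift so slip is everywhere non-negative
```

Re-running with different seeds gives an ensemble of slip distributions sharing one spectral falloff, the same sense in which the study's N = 100 patterns share one seismic moment.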

  13. Where and why do large shallow intraslab earthquakes occur?

    NASA Astrophysics Data System (ADS)

    Seno, Tetsuzo; Yoshida, Masaki

    2004-03-01

    We try to find how often, and in what regions, large earthquakes (M ≥ 7.0) occur within the shallow portion (20-60 km depth) of a subducting slab. Searching for events in published individual studies and the Harvard University centroid moment tensor catalogue, we find twenty such events in E. Hokkaido, Kyushu-SW, Japan, S. Mariana, Manila, Sumatra, Vanuatu, N. Chile, C. Peru, El Salvador, Mexico, N. Cascadia and Alaska. Slab stresses revealed from the mechanism solutions of these large intraslab events and nearby smaller events are almost always down-dip tensional. Except for E. Hokkaido, Manila, and Sumatra, the upper plate shows a horizontal stress gradient in the arc-perpendicular direction. We infer that shear tractions are operating at the base of the upper plate in this direction to produce the observed gradient and compression in the outer fore-arc, balancing the down-dip tensional stress of the slab. This tectonic situation in the subduction zone might be realized as part of the convection system under some conditions, as shown by previous numerical simulations.

  14. Overestimation of the earthquake hazard along the Himalaya: constraints in bracketing of medieval earthquakes from paleoseismic studies

    NASA Astrophysics Data System (ADS)

    Arora, Shreya; Malik, Javed N.

    2017-12-01

    The Himalaya is one of the most seismically active regions of the world. The occurrence of several large magnitude earthquakes, viz. the 1905 Kangra earthquake (Mw 7.8), 1934 Bihar-Nepal earthquake (Mw 8.2), 1950 Assam earthquake (Mw 8.4), 2005 Kashmir earthquake (Mw 7.6), and 2015 Gorkha earthquake (Mw 7.8), is testimony to ongoing tectonic activity. In the last few decades, tremendous efforts have been made along the Himalayan arc to understand the patterns of earthquake occurrence, size, extent, and return periods. Some of the large magnitude earthquakes produced surface rupture, while some remained blind. Furthermore, due to the incompleteness of the earthquake catalogue, very few events can be correlated with medieval earthquakes. Based on the existing paleoseismic data, it is difficult to precisely determine the extent of surface rupture of these earthquakes, as well as of those events that occurred during historic times. In this paper, we have compiled the paleoseismological data and recalibrated the radiocarbon ages from the trenches excavated by previous workers along the entire Himalaya, and compared the earthquake scenario with the past. Our studies suggest that there were multiple earthquake events with overlapping surface ruptures in small patches, with an average rupture length of 300 km limiting Mw 7.8-8.0 for the Himalayan arc, rather than two or three giant earthquakes rupturing the whole front. The large magnitude Himalayan earthquakes of 1905 Kangra, 1934 Bihar-Nepal, and 1950 Assam occurred within a time frame of 45 years. If such events were dated paleoseismically, with typical uncertainties of ±50 years they could be mistaken for the remnants of one giant earthquake rupturing the entire Himalayan arc, leading to an overestimation of the seismic hazard scenario in the Himalaya.
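    The magnitude bound quoted for a ~300 km rupture follows from empirical rupture-length scaling. As a sketch, using coefficients of the form of the Wells and Coppersmith (1994) all-slip-type surface-rupture-length regression (the default a and b below are assumptions quoted from memory; treat them as illustrative):

```python
import math

def mw_from_rupture_length(srl_km, a=5.08, b=1.16):
    """Mw = a + b * log10(SRL), with surface rupture length SRL in km."""
    return a + b * math.log10(srl_km)

# with these coefficients a 300 km rupture lands in the Mw 7.8-8.0 range
```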

  15. Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations

    USGS Publications Warehouse

    Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.

    2017-01-01

    Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.

  16. Observing Triggered Earthquakes Across Iran with Calibrated Earthquake Locations

    NASA Astrophysics Data System (ADS)

    Karasozen, E.; Bergman, E.; Ghods, A.; Nissen, E.

    2016-12-01

    We investigate earthquake triggering phenomena in Iran by analyzing patterns of aftershock activity around mapped surface ruptures. Iran has an intense level of seismicity (>40,000 events listed in the ISC Bulletin since 1960) because it accommodates a significant portion of the continental collision between Arabia and Eurasia. There are nearly thirty mapped surface ruptures associated with earthquakes of M 6-7.5, mostly in eastern and northwestern Iran, offering rich potential for studying the kinematics of earthquake nucleation, rupture propagation, and subsequent triggering. However, catalog earthquake locations are subject to up to 50 km of location bias from the combination of unknown Earth structure and unbalanced station coverage, making it challenging to assess both the rupture directivity of larger events and the spatial patterns of their aftershocks. To overcome this limitation, we developed a new two-tiered multiple-event relocation approach to obtain hypocentral parameters that are minimally biased and have realistic uncertainties. In the first stage, locations of small clusters of well-recorded earthquakes at local spatial scales (100s of events across 100 km length scales) are calibrated either with near-source arrival times or with independent location constraints (e.g., local aftershock studies, InSAR solutions), using an implementation of the Hypocentroidal Decomposition relocation technique called MLOC. Epicentral uncertainties are typically less than 5 km. These events are then used as prior constraints in the code BayesLoc, a Bayesian relocation technique that can handle larger datasets, to yield region-wide calibrated hypocenters (1000s of events over 1000 km length scales). 
With locations and errors both calibrated, the pattern of aftershock activity can reveal the type of the earthquake triggering: dynamic stress changes promote an increase in the seismicity rate in the direction of unilateral propagation, whereas static stress changes should

  17. Detection of change points in underlying earthquake rates, with application to global mega-earthquakes

    NASA Astrophysics Data System (ADS)

    Touati, Sarah; Naylor, Mark; Main, Ian

    2016-02-01

    The recent spate of mega-earthquakes since 2004 has led to speculation of an underlying change in the global `background' rate of large events. At a regional scale, detecting changes in background rate is also an important practical problem for operational forecasting and risk calculation, for example due to volcanic processes, seismicity induced by fluid injection or withdrawal, or due to redistribution of Coulomb stress after natural large events. Here we examine the general problem of detecting changes in background rate in earthquake catalogues with and without correlated events, for the first time using the Bayes factor as a discriminant for models of varying complexity. First we use synthetic Poisson (purely random) and Epidemic-Type Aftershock Sequence (ETAS) models (which also allow for earthquake triggering) to test the effectiveness of many standard methods of addressing this question. These fall into two classes: those that evaluate the relative likelihood of different models, for example using Information Criteria or the Bayes factor; and those that evaluate the probability of the observations (including extreme events or clusters of events) under a single null hypothesis, for example by applying the Kolmogorov-Smirnov and `runs' tests, and a variety of Z-score tests. The results demonstrate that effectiveness varies widely among these tests. Information Criteria worked at least as well as the more computationally expensive Bayes factor method, whereas the Kolmogorov-Smirnov and runs tests proved to be relatively ineffective in reliably detecting a change point. We then apply the methods tested to events at different thresholds above magnitude M ≥ 7 in the global earthquake catalogue since 1918, after first declustering the catalogue. This is most effectively done by removing likely correlated events using a much lower magnitude threshold (M ≥ 5), where triggering is much more obvious. We find no strong evidence that the background rate of large
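
    The Information Criteria comparison described above can be sketched for the simplest case: a homogeneous Poisson model versus a Poisson model with a single change point in rate, compared by AIC. This is an illustrative toy on a synthetic catalogue, not the authors' code; all names and the test setup are assumptions:

```python
import math
import random

def poisson_loglik(n, duration, rate):
    # Log-likelihood of n events from a homogeneous Poisson process
    # observed for `duration` (dropping terms constant across models).
    return n * math.log(rate) - rate * duration

def best_change_point(times, duration):
    """Compare a one-rate model against a two-rate model with a single
    change point (scanned over event times) using AIC."""
    n = len(times)
    aic_one = 2 * 1 - 2 * poisson_loglik(n, duration, n / duration)
    aic_two, tau_hat = math.inf, None
    for i in range(1, n):
        tau = times[i]  # candidate change point
        ll = (poisson_loglik(i, tau, i / tau)
              + poisson_loglik(n - i, duration - tau, (n - i) / (duration - tau)))
        aic = 2 * 3 - 2 * ll  # two rates + one change point = 3 parameters
        if aic < aic_two:
            aic_two, tau_hat = aic, tau
    return aic_one, aic_two, tau_hat

# Synthetic catalogue: the background rate doubles at t = 50.
random.seed(0)
times = []
for start, end, rate in ((0.0, 50.0, 1.0), (50.0, 100.0, 2.0)):
    t = start
    while True:
        t += random.expovariate(rate)
        if t >= end:
            break
        times.append(t)

aic_one, aic_two, tau_hat = best_change_point(times, 100.0)
print(aic_two < aic_one)  # the change-point model should be preferred
```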

  18. Self-similar rupture implied by scaling properties of volcanic earthquakes occurring during the 2004-2008 eruption of Mount St. Helens, Washington

    USGS Publications Warehouse

    Harrington, Rebecca M.; Kwiatek, Grzegorz; Moran, Seth C.

    2015-01-01

    We analyze a group of 6073 low-frequency earthquakes recorded during a week-long temporary deployment of broadband seismometers at distances of less than 3 km from the crater at Mount St. Helens in September of 2006. We estimate the seismic moment (M0) and spectral corner frequency (f0) using a spectral ratio approach for events with a high signal-to-noise ratio (SNR) that have a cross-correlation coefficient of 0.8 or greater with at least five other events. A cluster analysis of cross-correlation values indicates that the group of 421 events meeting the SNR and cross-correlation criteria forms eight event families that exhibit largely self-similar scaling. We estimate the M0 and f0 values of the 421 events and calculate their static stress drop and scaled energy (ER/M0) values. The estimated values suggest self-similar scaling within families, as well as between five of eight families (i.e., Δσ and ER/M0 approximately constant). We speculate that differences in scaled energy values for the two families with variable scaling may result from a lack of resolution in the velocity model. The observation of self-similar scaling is the first of its kind for such a large group of low-frequency volcanic tectonic events occurring during a single active dome extrusion eruption.
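
    The static stress drops discussed here follow the usual corner-frequency route. A minimal sketch, assuming a circular-crack (Brune-type) source with Madariaga's k ≈ 0.37 and a nominal shear-wave speed, neither of which is specified in the abstract:

```python
def brune_source_radius(f0_hz, beta_m_s=3500.0, k=0.37):
    """Source radius from corner frequency: r = k * beta / f0.
    k ~ 0.37 for S waves in a Madariaga-type circular rupture model."""
    return k * beta_m_s / f0_hz

def static_stress_drop(m0_nm, radius_m):
    """Circular-crack static stress drop: d_sigma = (7/16) * M0 / r**3, in Pa."""
    return 7.0 * m0_nm / (16.0 * radius_m ** 3)

# Hypothetical event: M0 = 1e13 N*m (about Mw 2.6) with f0 = 5 Hz.
# Self-similar scaling means d_sigma stays roughly constant as M0 grows,
# i.e. f0 scales as M0**(-1/3).
r = brune_source_radius(5.0)
print(f"r = {r:.0f} m, stress drop = {static_stress_drop(1e13, r)/1e6:.2f} MPa")
```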

  19. ViscoSim Earthquake Simulator

    USGS Publications Warehouse

    Pollitz, Fred

    2012-01-01

    Synthetic seismicity simulations have been explored by the Southern California Earthquake Center (SCEC) Earthquake Simulators Group in order to guide long‐term forecasting efforts related to the Unified California Earthquake Rupture Forecast (Tullis et al., 2012a). In this study I describe the viscoelastic earthquake simulator (ViscoSim) of Pollitz (2009). Recapitulating to a large extent material previously presented by Pollitz (2009, 2011), I describe its implementation of synthetic ruptures and how it differs from other simulators being used by the group.

  20. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  1. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  2. Investigation of ionospheric precursors related to deep and intermediate earthquakes based on spectral and statistical analysis

    NASA Astrophysics Data System (ADS)

    Oikonomou, Christina; Haralambous, Haris; Muslim, Buldan

    2017-01-01

    Ionospheric TEC (Total Electron Content) variations prior to the deep (≈600 km) earthquake doublet close to the magnetic equator in Peru (M = 7.6) and to the intermediate-depth (≈200 km) earthquake in Afghanistan (M = 7.5) during 2015 were investigated using measurements from a Global Navigation Satellite System (GNSS) network, with the aim of detecting possible ionospheric precursors of these events. To this end we applied both statistical and spectral analysis. Ionospheric anomalies related to both earthquakes were observed a few hours and a few days prior to the earthquakes during daytime, localized mainly near the epicenters. These were large-scale positive TEC anomalies and small-scale TEC oscillations with periods of 20 min and durations of around 2-4 h, appearing at the same local time each day. Several days prior to the earthquake in Peru, a significant phenomenon related to the modification of the Equatorial Ionization Anomaly (EIA) structure was observed during afternoon hours. During nighttime, however, it was not possible to identify any ionospheric earthquake precursor, owing to the concurrence of various phenomena that could induce similar ionospheric anomalies: Equatorial Plasma Bubbles and pre- and post-midnight TEC peaks prior to the Peru earthquake, and the solar terminator transition prior to both earthquakes.

  3. The 2008 M7.9 Wenchuan earthquake - a human-caused event

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2013-12-01

    A catalog of global human-caused earthquakes shows statistical evidence that the triggering of earthquakes by large-scale geoengineering activities depends on geological and tectonic constraints (Klose 2013). Such geoengineering activities also include the filling of water reservoirs. This presentation examines mechanical and statistical aspects of the 2008 M7.9 Wenchuan earthquake in light of the null hypothesis that it was NOT human-caused. The available data, however, suggest that the Wenchuan earthquake was triggered by the filling of the Zipingpu water reservoir 30 months prior to the mainshock. The reservoir extended parallel and close to the main Beichuan fault zone in a highly stressed reverse-fault regime. It is mechanically evident that reverse faults tend to be very sensitive to triggering by mass shifts (static loads) that occur on the surface of the Earth's crust. These circumstances made the triggering of a seismic event of this magnitude at this location possible (Klose 2008, 2012). The data show that the Wenchuan earthquake is not an outlier: from a statistical viewpoint, it falls into the upper range of the family of reverse-fault earthquakes that have been caused by humans worldwide.

  4. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  5. Oklahoma experiences largest earthquake during ongoing regional wastewater injection hazard mitigation efforts

    USGS Publications Warehouse

    Yeck, William; Hayes, Gavin; McNamara, Daniel E.; Rubinstein, Justin L.; Barnhart, William; Earle, Paul; Benz, Harley M.

    2017-01-01

    The 3 September 2016, Mw 5.8 Pawnee earthquake was the largest recorded earthquake in the state of Oklahoma. Seismic and geodetic observations of the Pawnee sequence, including precise hypocenter locations and moment tensor modeling, show that the Pawnee earthquake occurred on a previously unknown left-lateral strike-slip basement fault that intersects the mapped right-lateral Labette fault zone. The Pawnee earthquake is part of an unprecedented increase in the earthquake rate in Oklahoma that is largely considered the result of the deep injection of waste fluids from oil and gas production. If this is, indeed, the case for the M5.8 Pawnee earthquake, then it would be the largest event to have been induced by fluid injection. Since 2015, Oklahoma has undergone wide-scale mitigation efforts primarily aimed at reducing injection volumes. Thus far in 2016, the rate of M3 and greater earthquakes has decreased as compared to 2015, while the cumulative moment—or energy released from earthquakes—has increased. This highlights the difficulty in earthquake hazard mitigation efforts given the poorly understood long-term diffusive effects of wastewater injection and their connection to seismicity.

  6. Coseismic slip of two large Mexican earthquakes from teleseismic body waveforms - Implications for asperity interaction in the Michoacan plate boundary segment

    NASA Astrophysics Data System (ADS)

    Mendoza, Carlos

    1993-05-01

    The distributions and depths of coseismic slip are derived for the October 25, 1981 Playa Azul and September 21, 1985 Zihuatanejo earthquakes in western Mexico by inverting the recorded teleseismic body waves. Rupture during the Playa Azul earthquake appears to have occurred in two separate zones both updip and downdip of the point of initial nucleation, with most of the slip concentrated in a circular region of 15-km radius downdip from the hypocenter. Coseismic slip occurred entirely within the area of reduced slip between the two primary shallow sources of the Michoacan earthquake that occurred on September 19, 1985, almost 4 years later. The slip of the Zihuatanejo earthquake was concentrated in an area adjacent to one of the main sources of the Michoacan earthquake and appears to be the southeastern continuation of rupture along the Cocos-North America plate boundary. The zones of maximum slip for the Playa Azul, Zihuatanejo, and Michoacan earthquakes may be considered asperity regions that control the occurrence of large earthquakes along the Michoacan segment of the plate boundary.

  7. Predictors of psychological resilience amongst medical students following major earthquakes.

    PubMed

    Carter, Frances; Bell, Caroline; Ali, Anthony; McKenzie, Janice; Boden, Joseph M; Wilkinson, Timothy

    2016-05-06

    To identify predictors of self-reported psychological resilience amongst medical students following major earthquakes in Canterbury in 2010 and 2011. Two hundred and fifty-three medical students from the Christchurch campus, University of Otago, were invited to participate in an electronic survey seven months following the most severe earthquake. Students completed the Connor-Davidson Resilience Scale, the Depression, Anxiety and Stress Scale, the Post-traumatic Stress Disorder Checklist, the Work and Adjustment Scale, and the Eysenck Personality Questionnaire. Likert scales and other questions were also used to assess a range of variables including demographic and historical variables (eg, self-rated resilience prior to the earthquakes), plus the impacts of the earthquakes. The response rate was 78%. Univariate analyses identified multiple variables that were significantly associated with higher resilience. Multiple linear regression analyses produced a fitted model that was able to explain 35% of the variance in resilience scores. The best predictors of higher resilience were: retrospectively-rated personality prior to the earthquakes (higher extroversion and lower neuroticism); higher self-rated resilience prior to the earthquakes; not being exposed to the most severe earthquake; and less psychological distress following the earthquakes. Psychological resilience amongst medical students following major earthquakes could be predicted to a moderate extent.

  8. Magnitude scale for the Central American tsunamis

    NASA Astrophysics Data System (ADS)

    Hatori, Tokutaro

    1995-09-01

    Based on the tsunami data in the Central American region, the regional characteristics of tsunami magnitude scales are discussed in relation to earthquake magnitudes during the period from 1900 to 1993. Tsunami magnitudes on the Imamura-Iida scale of the 1985 Mexico and 1992 Nicaragua tsunamis are determined to be m = 2.5, judging from the tsunami height-distance diagram. The magnitude values of the Central American tsunamis are relatively small compared to earthquakes of similar size in other regions. However, there are a few large tsunamis generated by low-frequency earthquakes such as the 1992 Nicaragua earthquake. Inundation heights of these unusual tsunamis are about 10 times higher than those of normal tsunamis for the same earthquake magnitude (Ms = 6.9-7.2). The Central American tsunamis having magnitude m > 1 have been observed by the Japanese tide stations, but the effect of directivity toward Japan is very small compared to that of the South American tsunamis.
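
    On the Imamura-Iida scale, tsunami magnitude is conventionally defined as the base-2 logarithm of the maximum coastal wave (run-up) height in metres, so the m = 2.5 assigned above corresponds to heights of roughly 5-6 m. A minimal sketch of that conversion:

```python
import math

def imamura_iida_magnitude(h_max_m):
    """Imamura-Iida tsunami magnitude: m = log2(Hmax), Hmax in metres."""
    return math.log2(h_max_m)

def h_max_from_magnitude(m):
    """Invert the scale: Hmax = 2**m metres."""
    return 2.0 ** m

print(round(h_max_from_magnitude(2.5), 1))  # m = 2.5 -> about 5.7 m
```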

  9. Fatality rates of the M w ~8.2, 1934, Bihar-Nepal earthquake and comparison with the April 2015 Gorkha earthquake

    NASA Astrophysics Data System (ADS)

    Sapkota, Soma Nath; Bollinger, Laurent; Perrier, Frédéric

    2016-03-01

    Large Himalayan earthquakes expose rapidly growing populations of millions of people to high levels of seismic hazard, in particular in northeast India and Nepal. Calibrating vulnerability models specific to this region of the world is therefore crucial to the development of reliable mitigation measures. Here, we reevaluate the >15,700 casualties (8500 in Nepal and 7200 in India) from the Mw ~8.2, 1934, Bihar-Nepal earthquake and calculate the fatality rates for this earthquake using an estimation of the population derived from two censuses held in 1921 and 1942. Values reach 0.7-1 % in the epicentral region, located in eastern Nepal, and 2-5 % in the urban areas of the Kathmandu valley. Assuming constant vulnerability, we estimate that, had the same earthquake repeated in 2011, there would have been 33,000 fatalities in Nepal and 50,000 in India. The fast-growing population of India must unavoidably lead to higher casualty levels than in Nepal, where population growth is slower. Aside from that probably robust fact, extrapolations have to be taken with great caution. Among other effects, building and life vulnerability could depend on population concentration and the evolution of construction methods. Indeed, fatalities of the April 25, 2015, Mw 7.8 Gorkha earthquake indicated on average a reduction in building vulnerability in urban areas, while rural areas remained highly vulnerable. While effective scaling laws, as a function of the building stock, seem to describe these differences adequately, vulnerability in the case of an Mw >8.2 earthquake remains largely unknown. Further research should be carried out urgently so that better prevention strategies can be implemented and building codes reevaluated, adequately combining detailed ancient and modern data.
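
    The constant-vulnerability extrapolation used above amounts to applying a historical fatality rate to a later population. The sketch below uses hypothetical round numbers, not the paper's census figures, purely to show the arithmetic:

```python
def fatality_rate(deaths, population):
    """Fatality rate as a fraction of the exposed population."""
    return deaths / population

def projected_deaths(rate, population):
    """Constant-vulnerability extrapolation to a later population."""
    return rate * population

# Hypothetical 1934 exposed population of 850,000 with 8500 deaths gives a
# 1 % rate; a population that has since quadrupled yields four times the toll.
rate_1934 = fatality_rate(8500, 850_000)
print(int(projected_deaths(rate_1934, 3_400_000)))  # -> 34000
```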

  10. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, Susan E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: A moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  11. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  12. Modified Mercalli intensities (MMI) for large earthquakes near New Madrid, Missouri, in 1811-1812 and near Charleston, South Carolina, in 1886

    USGS Publications Warehouse

    Bakun, W.H.; Johnston, A.C.; Hopper, M.G.

    2002-01-01

    Large historical earthquakes occurred in the eastern United States on December 16, 1811 near New Madrid, MO, on January 23, 1812 near New Madrid, MO, on February 7, 1812 near New Madrid, MO, and on September 1, 1886 near Charleston, SC. Modified Mercalli Intensity (MMI) assignments for these earthquakes were used by Bakun et al. (submitted) to estimate the location and moment magnitude M of these earthquakes from MMI observations. The MMI assignments used by Bakun et al. (submitted) are listed in this report.

  13. The 2017/09/08 Mw 8.2 Tehuantepec, Mexico Earthquake: A Large but Compact Dip-Slip Faulting Event Severing the Slab

    NASA Astrophysics Data System (ADS)

    Hjorleifsdottir, V.; Iglesias, A.; Suarez, G.; Santoyo, M. A.; Villafuerte, C. D.; Ji, C.; Franco-Sánchez, S. I.; Singh, S. K.; Cruz-Atienza, V. M.; Ando, R.

    2017-12-01

    The Mw 8.2 September 8 earthquake occurred in the middle of the "Tehuantepec Gap", a segment of the Mexican subduction zone that has no historical mentions of a large earthquake. It was, however, not the expected subduction megathrust earthquake, but rather an intraplate, normal faulting event in the subducting oceanic Cocos plate. The earthquake rupture initiated at a depth of 50 km and propagated NW on a near-vertical plane, breaking towards the surface. Most of the slip was concentrated in the distance range 30-100 km from the hypocenter and at depths between 15 and 50 km, with a maximum slip of 15 m. The earthquake seems to have broken the entire lithosphere, estimated to be 35 km thick. The strike of the fault is about 20 degrees oblique to the trench but aligned with the existing fabric of the incoming oceanic plate, suggesting structural control by preexisting intraslab fractures and activation by the extensional stress due to slab bending and slab pull. Aftershocks occurred along the fault plane during the first day after the event, with activation of other parallel structures within the subducting plate, towards the east, as well as in the upper plate, in the following days. Coulomb stress modeling suggests that the stress on the plate interface above the rupture was significantly increased where shallow thrust aftershocks took place, and reduced updip of the earthquake. There are several other examples of large intraslab normal faulting earthquakes, near the downdip edge (1931 Mw 7.8 and 1999 Mw 7.5, Oaxaca) or directly below (1997 Mw 7.1, Michoacan) the coupled plate interface, along the Mexican subduction zone. The possibility of events of similar magnitude to the 2017 earthquake occurring close to the coastline, all along this part of the subduction zone, cannot be ruled out.

  14. Dual Megathrust Slip Behaviors of the 2014 Iquique Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Meng, L.; Huang, H.; Burgmann, R.; Ampuero, J. P.; Strader, A. E.

    2014-12-01

    The transition between seismic rupture and aseismic creep is of central interest to better understand the mechanics of subduction processes. A M 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of Northern Chile. This event was preceded by a 2-week-long foreshock sequence including a M 6.7 earthquake. Repeating earthquakes are found among the foreshock sequence that migrated towards the mainshock area, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations in the recurrence time of repeating earthquakes highlight the diverse seismic and aseismic slip behaviors on different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while other repeaters occurred both before and after the mainshock in the area complementary to the mainshock rupture. The spatial and temporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip in leading up to the mainshock and aftershock activity. Various finite fault models indicate that the coseismic slip generally occurred down-dip from the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show slow initiation with low-amplitude moment rate at low frequency (< 0.1 Hz), while back-projection shows a steady initiation at high frequency (> 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the high-frequency rupture remains within an area of low gravity anomaly, suggesting possible upper-crustal structures that promote high-frequency generation. Back-projection also shows an episode of reverse rupture propagation which suggests a delayed failure of asperities in

  15. What Can Sounds Tell Us About Earthquake Interactions?

    NASA Astrophysics Data System (ADS)

    Aiken, C.; Peng, Z.

    2012-12-01

    It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influences earthquakes can have on each other. Recent studies using auditory display [e.g. Kilb et al., 2012; Peng et al. 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are influential to understanding the underlying physics of earthquakes and other seismic phenomena such as tremors in addition to their source characteristics (e.g. frequency contents, amplitudes). Earthquake interactions can include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e. aftershocks) or even a large earthquake triggering earthquakes or tremors several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremors triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g. piano keys) as well as audifying seismic data using time-compression. Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to the understanding of how earthquakes occur, how earthquakes influence one another as well as tremors, and what the musical properties of these

  16. Earthquake Facts

    MedlinePlus

    ... recordings of large earthquakes, scientists built large spring-pendulum seismometers in an attempt to record the long- ... are moving away from one another. The first “pendulum seismoscope” to measure the shaking of the ground ...

  17. Generalized statistical mechanics approaches to earthquakes and tectonics

    PubMed Central

    Papadakis, Giorgos; Michas, Georgios

    2016-01-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract wide scientific interest and are incorporated in the probabilistic forecasting of seismicity on local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale for the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes. PMID:28119548

  18. Assessment of Ionospheric Anomaly Prior to the Large Earthquake: 2D and 3D Analysis in Space and Time for the 2011 Tohoku Earthquake (Mw9.0)

    NASA Astrophysics Data System (ADS)

    Hattori, Katsumi; Hirooka, Shinji; Han, Peng

    2016-04-01

    Ionospheric anomalies possibly associated with large earthquakes have been reported by many researchers. In this paper, Total Electron Content (TEC) and tomography analyses have been applied to investigate the spatial and temporal distributions of ionospheric electron density prior to the 2011 Off the Pacific Coast of Tohoku earthquake (Mw9.0). Results show significant TEC enhancements and an interesting three-dimensional structure prior to the main shock. As for temporal changes, the TEC value increased remarkably 3-4 days before the earthquake, when the geomagnetic condition was relatively quiet. In addition, the area of abnormal TEC enhancement stalled above Japan during this period. Tomographic results show that the three-dimensional electron density distribution decreases around 250 km altitude above the epicenter (with the peak located just east of the epicenter) and increases over most of the region between 300 and 400 km.

  19. Magnitude and intensity: Measures of earthquake size and severity

    USGS Publications Warehouse

    Spall, Henry

    1982-01-01

    Earthquakes can be measured in terms of either the amount of energy they release (magnitude) or the degree of ground shaking they cause at a particular locality (intensity).  Although magnitude and intensity are basically different measures of an earthquake, they are frequently confused by the public and by news reports of earthquakes.  Part of the confusion probably arises from the general similarity of the scales used to express these quantities.  The various magnitude scales represent logarithmic expressions of the energy released by an earthquake.  Magnitude is calculated from the record made by an earthquake on a calibrated seismograph.  There are no upper or lower limits to magnitude, although no measured earthquakes have exceeded magnitude 8.9.
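
    The "logarithmic expressions of the energy released" can be made concrete with the standard Gutenberg-Richter energy-magnitude relation, log10 E = 1.5 M + 4.8 (E in joules); this is a general textbook relation, not a formula from this record.

```python
# Sketch of the Gutenberg-Richter energy-magnitude relation,
# log10 E = 1.5*M + 4.8 (E in joules), illustrating why magnitude
# scales are logarithmic measures of released energy.

def radiated_energy_joules(magnitude):
    return 10.0 ** (1.5 * magnitude + 4.8)

# Each whole step in magnitude multiplies radiated energy by 10**1.5 ≈ 31.6:
ratio = radiated_energy_joules(7.0) / radiated_energy_joules(6.0)
print(round(ratio, 1))   # 31.6
```
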

  20. Large Scale Metal Additive Techniques Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer deposition. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.

  1. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    USGS Publications Warehouse

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.

  2. High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling of strength on faults

    USGS Publications Warehouse

    Frankel, A.

    1991-01-01

    The high-frequency falloff ω^-γ of earthquake displacement spectra and the b value of aftershock sequences are attributed to the character of spatially varying strength along fault zones. I assume that the high-frequency energy of a main shock is produced by a self-similar distribution of subevents, where the number of subevents with radii greater than R is proportional to R^-D, D being the fractal dimension. In the model, an earthquake is composed of a hierarchical set of smaller earthquakes. The static stress drop is parameterized to be proportional to R^ε, and strength is assumed to be proportional to static stress drop. I find that a distribution of subevents with D = 2 and stress drop independent of seismic moment (ε = 0) produces a main shock with an ω^-2 falloff, if the subevent areas fill the rupture area of the main shock. By equating subevents to "islands" of high stress of a random, self-similar stress field on a fault, I relate D to the scaling of strength on a fault, such that D = 2 - ε. Thus D = 2 corresponds to constant stress drop scaling (ε = 0) and scale-invariant fault strength. A self-similar model of aftershock rupture zones on a fault is used to determine the relationship between the b value, the size distribution of aftershock rupture zones, and the scaling of strength on a fault. -from Author
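
    The scaling law at the heart of this model, N(>R) ∝ R^-D, can be demonstrated numerically. The sketch below (not the paper's code) draws subevent radii from a self-similar law with D = 2 and recovers D from a log-log rank plot by least squares; sample size and seed are arbitrary.

```python
import math
import random

# Hedged sketch: sample subevent radii from N(>R) proportional to R**-D
# with D = 2, then recover D from the log-log rank plot, mirroring the
# self-similar subevent distribution assumed in the abstract.

def sample_radii(n, D, r_min=1.0, seed=42):
    rng = random.Random(seed)
    # Inverse-CDF sampling for P(>R) = (r_min / R)**D; u drawn from (0, 1]
    return [r_min * (1.0 - rng.random()) ** (-1.0 / D) for _ in range(n)]

def estimate_D(radii):
    ranked = sorted(radii, reverse=True)
    xs = [math.log(r) for r in ranked]
    ys = [math.log(rank + 1) for rank in range(len(ranked))]  # N(>R) ≈ rank
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope    # slope of log N vs log R is -D

print(round(estimate_D(sample_radii(20000, D=2.0)), 2))   # should be near 2
```
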

  3. Moment Magnitudes and Local Magnitudes for Small Earthquakes: Implications for Ground-Motion Prediction and b-values

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Hanks, T. C.; Vernon, F.

    2016-12-01

    We illustrate two essential consequences of the systematic difference between moment magnitude and local magnitude for small earthquakes, illuminating the underlying earthquake physics. Moment magnitude, M ∝ 2/3 log M0, is uniformly valid for all earthquake sizes [Hanks and Kanamori, 1979]. However, the relationship between local magnitude ML and moment is itself magnitude dependent. For moderate events, 3 < M < 7, M and ML are coincident; for earthquakes smaller than M3, ML ∝ log M0 [Hanks and Boore, 1984]. This is a consequence of the saturation of the apparent corner frequency fc as it becomes greater than the largest observable frequency, fmax; in this regime, stress drop no longer controls ground motion. This implies that ML and M differ by a factor of 1.5 for these small events. While this idea is not new, its implications are important as more small-magnitude data are incorporated into earthquake hazard research. With a large dataset of M<3 earthquakes recorded on the ANZA network, we demonstrate striking consequences of the difference between M and ML. ML scales as the log of peak ground motion (e.g., PGA or PGV) for these small earthquakes, which yields log PGA ∝ log M0 [Boore, 1986]. We plot nearly 15,000 records of PGA and PGV at close stations, adjusted for site conditions and for geometrical spreading to 10 km. The slope of the log of ground motion is 1.0*ML, or 1.5*M, confirming the relationship, and that fc >> fmax. Just as importantly, if this relation is overlooked, prediction of large-magnitude ground motion from small earthquakes will be misguided. We also consider the effect of this magnitude scale difference on b-value. The oft-cited b-value of 1 should hold for small magnitudes, given M. Use of ML necessitates b=2/3 for the same data set; use of mixed, or unknown, magnitudes complicates the matter further. This is of particular import when estimating the rate of large earthquakes when one has limited data on their recurrence, as is the case for
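
    The Hanks and Kanamori (1979) definition the abstract builds on can be written out directly, M = (2/3) log10(M0) - 10.7 with M0 in dyne-cm; the numbers below are a worked example, not data from the study.

```python
import math

# Sketch of the moment magnitude definition [Hanks and Kanamori, 1979]:
# M = (2/3)*log10(M0) - 10.7, with seismic moment M0 in dyne-cm.

def moment_magnitude(M0_dyne_cm):
    return (2.0 / 3.0) * math.log10(M0_dyne_cm) - 10.7

# A factor-of-10 increase in moment raises M by only 2/3 of a unit;
# the abstract's point is that for small events ML instead tracks
# log10(M0) one-for-one, i.e. 1.5 times faster in log-moment.
print(round(moment_magnitude(1e22), 2))                           # ~3.97
print(round(moment_magnitude(1e23) - moment_magnitude(1e22), 4))  # 0.6667
```
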

  4. Tremor, remote triggering and earthquake cycle

    NASA Astrophysics Data System (ADS)

    Peng, Z.

    2012-12-01

    Deep tectonic tremor and episodic slow-slip events have been observed at major plate-boundary faults around the Pacific Rim. These events have much longer source durations than regular earthquakes, and are generally located near or below the seismogenic zone where regular earthquakes occur. Tremor and slow-slip events appear to be extremely stress sensitive, and can be instantaneously triggered by distant earthquakes and solid-earth tides. However, many important questions remain open. For example, it is still not clear what the necessary conditions for tremor generation are, and how remote triggering could affect the large-earthquake cycle. Here I report a global search for tremor triggered by recent large teleseismic earthquakes. We mainly focus on major subduction zones around the Pacific Rim. These include the southwest and northeast Japan subduction zones, the Hikurangi subduction zone in New Zealand, the Cascadia subduction zone, and the major subduction zones in Central and South America. In addition, we examine major strike-slip faults around the Caribbean plate, the Queen Charlotte fault along the northern Pacific Northwest Coast, and the San Andreas fault system in California. In each place, we first identify triggered tremor as a high-frequency, non-impulsive signal that is in phase with the large-amplitude teleseismic waves. We also calculate the dynamic stress and check the triggering relationship with the Love and Rayleigh waves. Finally, we calculate the triggering potential from the local fault orientation and surface-wave incident angles. Our results suggest that tremor exists at many plate-boundary faults in different tectonic environments, and can be triggered by dynamic stress as low as a few kPa. In addition, we summarize recent observations of slow-slip events and earthquake swarms triggered by large distant earthquakes. Finally, we propose several mechanisms that could explain the apparent clustering of large earthquakes around the world.
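
    The "few kPa" dynamic stress quoted above is usually estimated from seismograms with the back-of-envelope relation sigma ≈ G·v/c (shear modulus times peak particle velocity over phase velocity). The values below are assumed round numbers, not the study's measurements.

```python
# Back-of-envelope sketch of the dynamic stress carried by a passing
# surface wave: sigma ≈ G * v / c, with G the shear modulus, v the peak
# particle velocity and c the phase velocity. The default G and c are
# assumed round crustal values, not parameters from the abstract.

def dynamic_stress_kpa(particle_velocity_m_s,
                       shear_modulus_pa=3.0e10,      # assumed crustal rigidity
                       phase_velocity_m_s=3500.0):   # assumed surface-wave speed
    return shear_modulus_pa * particle_velocity_m_s / phase_velocity_m_s / 1e3

# A teleseismic surface wave with ~0.5 mm/s peak particle velocity:
print(round(dynamic_stress_kpa(5e-4), 2))   # ~4.29 kPa
```
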

  5. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  6. Imaging the distribution of transient viscosity after the 2016 Mw 7.1 Kumamoto earthquake.

    PubMed

    Moore, James D P; Yu, Hang; Tang, Chi-Hsien; Wang, Teng; Barbot, Sylvain; Peng, Dongju; Masuti, Sagar; Dauwels, Justin; Hsu, Ya-Ju; Lambert, Valère; Nanjundiah, Priyamvada; Wei, Shengji; Lindsey, Eric; Feng, Lujia; Shibazaki, Bunichiro

    2017-04-14

    The deformation of mantle and crustal rocks in response to stress plays a crucial role in the distribution of seismic and volcanic hazards, controlling tectonic processes ranging from continental drift to earthquake triggering. However, the spatial variation of these dynamic properties is poorly understood as they are difficult to measure. We exploited the large stress perturbation incurred by the 2016 earthquake sequence in Kumamoto, Japan, to directly image localized and distributed deformation. The earthquakes illuminated distinct regions of low effective viscosity in the lower crust, notably beneath the Mount Aso and Mount Kuju volcanoes, surrounded by larger-scale variations of viscosity across the back-arc. This study demonstrates a new potential for geodesy to directly probe rock rheology in situ across many spatial and temporal scales. Copyright © 2017, American Association for the Advancement of Science.

  7. Some comparisons between mining-induced and laboratory earthquakes

    USGS Publications Warehouse

    McGarr, A.

    1994-01-01

    Although laboratory stick-slip friction experiments have long been regarded as analogs to natural crustal earthquakes, the potential use of laboratory results for understanding the earthquake source mechanism has not been fully exploited because of essential difficulties in relating seismographic data to measurements made in the controlled laboratory environment. Mining-induced earthquakes, however, provide a means of calibrating the seismic data in terms of laboratory results because, in contrast to natural earthquakes, the causative forces as well as the hypocentral conditions are known. A comparison of stick-slip friction events in a large granite sample with mining-induced earthquakes in South Africa and Canada indicates both similarities and differences between the two phenomena. The physics of unstable fault slip appears to be largely the same for both types of events. For example, both laboratory and mining-induced earthquakes have very low seismic efficiencies η = τa/τ̄, where τa is the apparent stress and τ̄ is the average stress acting on the fault plane to cause slip; nearly all of the energy released by faulting is consumed in overcoming friction. In more detail, the mining-induced earthquakes differ from the laboratory events in the behavior of η as a function of seismic moment M0. Whereas for the laboratory events η ≈ 0.06 independent of M0, η depends quite strongly on M0 for each set of induced earthquakes, with 0.06 serving, apparently, as an upper bound. It seems most likely that this observed scaling difference is due to variations in slip distribution over the fault plane. In the laboratory, a stick-slip event entails homogeneous slip over a fault of fixed area. For each set of induced earthquakes, the fault area appears to be approximately fixed but the slip is inhomogeneous, due presumably to barriers (zones of no slip) distributed over the fault plane; at constant τ̄, larger

  8. Modelling the elements of country vulnerability to earthquake disasters.

    PubMed

    Asef, M R

    2008-09-01

    Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake of given characteristics to have different impacts on the affected region. This research focuses on the appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for identification and relative quantification of the severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.

  9. The Technical Efficiency of Earthquake Medical Rapid Response Teams Following Disasters: The Case of the 2010 Yushu Earthquake in China.

    PubMed

    Liu, Xu; Tang, Bihan; Yang, Hongyang; Liu, Yuan; Xue, Chen; Zhang, Lulu

    2015-12-04

    Performance assessments of earthquake medical rapid response teams (EMRRTs), particularly the first responders deployed to the hardest hit areas following major earthquakes, should consider efficient and effective use of resources. This study assesses the daily technical efficiency of EMRRTs in the emergency period immediately following the 2010 Yushu earthquake in China. Data on EMRRTs were obtained from official daily reports of the general headquarters for Yushu earthquake relief, the emergency office of the National Ministry of Health, and the Health Department of Qinghai Province, for a sample of data on 15 EMRRTs over 62 days. Data envelopment analysis was used to examine the technical efficiency in a constant returns to scale model, a variable returns to scale model, and the scale efficiency of EMRRTs. Tobit regression was applied to analyze the effects of corresponding influencing factors. The average technical efficiency scores under constant returns to scale, variable returns to scale, and the scale efficiency scores of the 62 units of analysis were 77.95%, 89.00%, and 87.47%, respectively. The staff-to-bed ratio was significantly related to global technical efficiency. The date of rescue was significantly related to pure technical efficiency. The type of institution to which an EMRRT belonged and the staff-to-bed ratio were significantly related to scale efficiency. This study provides evidence that supports improvements to EMRRT efficiency and serves as a reference for earthquake emergency medical rapid assistance leaders and teams.

  10. Benefits of Earthquake Early Warning to Large Municipalities (Invited)

    NASA Astrophysics Data System (ADS)

    Featherstone, J.

    2013-12-01

    The City of Los Angeles has been involved in testing the Caltech ShakeAlert earthquake early warning (EQEW) system since February 2012. This system accesses a network of seismic monitors installed throughout California. The system analyzes and processes seismic information, and transmits a warning (audible and visual) when an earthquake occurs. In late 2011, the City of Los Angeles Emergency Management Department (EMD) was approached by Caltech regarding EQEW, and immediately recognized the value of the system. Simultaneously, EMD was finalizing a report by a multi-discipline team that visited Japan in December 2011, which spoke to the effectiveness of EQEW during the March 11, 2011 earthquake that struck that country. Information collected by the team confirmed that the EQEW systems proved to be very effective in alerting the population to the impending earthquake. The EQEW in Japan is also tied to mechanical safeguards, such as the stopping of high-speed trains. For a city the size and complexity of Los Angeles, the implementation of a reliable EQEW system will save lives, reduce loss, ensure effective and rapid emergency response, and greatly enhance the ability of the region to recover from a damaging earthquake. The current ShakeAlert system is being tested at several governmental organizations and private businesses in the region. EMD, in cooperation with Caltech, identified several locations internal to the City where the system would have an immediate benefit. These include the staff offices within EMD, the Los Angeles Police Department's Real Time Analysis and Critical Response Division (24 hour crime center), and the Los Angeles Fire Department's Metropolitan Fire Communications (911 Dispatch). All three of these agencies routinely manage the collaboration and coordination of citywide emergency information and response during times of crisis. Having these three key public safety offices connected and included in the
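
    The reason early warning buys usable seconds can be sketched with a toy calculation: damaging S waves travel more slowly than the P waves (and telemetry) used to detect the event near its source. The velocity and latency figures below are assumed round numbers, not ShakeAlert's actual performance characteristics.

```python
# Illustrative sketch: seconds of warning available at a site, assuming
# the alert is issued a fixed latency after origin time and the damaging
# S wave travels at an assumed crustal shear-wave speed of 3.5 km/s.

def warning_time_s(epicentral_km, vs=3.5, detect_latency_s=5.0):
    """Warning time at a site `epicentral_km` from the epicenter."""
    s_arrival = epicentral_km / vs          # S-wave travel time, seconds
    return max(0.0, s_arrival - detect_latency_s)

for d in (20.0, 60.0, 120.0):
    print(f"{d:5.0f} km -> {warning_time_s(d):5.1f} s of warning")
```

    Sites very close to the epicenter fall in the "blind zone" where the S wave outruns the alert, which is why the warning time clamps to zero at short distances.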

  11. The wireless networking system of Earthquake precursor mobile field observation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Teng, Y.; Wang, X.; Fan, X.; Wang, X.

    2012-12-01

    The mobile field observation network must record and transmit large amounts of data reliably and in real time, strengthening physical-signal observations in specific regions and specific periods; it can improve monitoring capacity and anomaly-tracking capability. Because the current earthquake precursor observation measuring points are numerous and widely scattered, the networking technology is based on the McWILL broadband wireless access system. The communication system for earthquake precursor mobile field observation transmits large amounts of data reliably and in real time from the measuring points to the monitoring center, through the connections among the instruments, the broadband wireless access system, and the precursor mobile observation management center system, thereby implementing remote instrument monitoring and data transmission. At present, this network technology has been applied to fluxgate magnetometer array geomagnetic observations at Tianzhu, Xichang, and in Xinjiang; it allows real-time monitoring of the working status of the observational instruments deployed over a large area during the last two to three years of large-scale field operation. It can therefore provide geomagnetic field data for locally refined regions and high-quality observational data for impending-earthquake tracking and forecasting. Although wireless networking technology is well suited to mobile field observation, being simple and flexible to deploy, it also suffers packet loss when transmitting large amounts of observational data, owing to the relatively weak wireless signal and narrow bandwidth. For high-sampling-rate instruments, this project uses data compression, which effectively mitigates packet loss in data transmission; control commands, status data and observational data are transmitted with different priorities and mechanisms, which keep the packet loss rate within

  12. Skin Friction Reduction Through Large-Scale Forcing

    NASA Astrophysics Data System (ADS)

    Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer

    2017-11-01

    Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independent of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling of the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194 monitored by Dr. Douglas Smith.

  13. A non-accelerating foreshock sequence followed by a short period of quiescence for a large inland earthquake

    NASA Astrophysics Data System (ADS)

    Doi, I.; Kawakata, H.

    2012-12-01

    Laboratory experiments [e.g. Scholz, 1968; Lockner et al., 1992] and field observations [e.g. Dodge et al., 1996; Helmstetter and Sornette, 2003; Bouchon et al., 2011] have elucidated part of foreshock behavior and mechanism, but we cannot yet identify foreshocks while they are occurring. Recently, in Japan, a dense seismic network, Hi-net (High Sensitivity Seismograph Network), has provided continuous waveform records of regional seismic events. The data from this network enable us to analyze small foreshocks that occur over long time scales prior to a major event, giving us an opportunity to grasp a more detailed pattern of foreshock generation. Using continuous waveforms recorded at a seismic station located in close proximity to the epicenter of the 2008 Iwate-Miyagi inland earthquake, we conducted a detailed investigation of its foreshocks. In addition to the two officially recognized foreshocks, calculation of cross-correlation coefficients between the continuous waveform record and one of the previously recognized foreshocks revealed 20 additional micro-foreshocks. Our analysis shows that all of these foreshocks occurred within the same general area relative to the main event. Over the two-week period leading up to the Iwate-Miyagi earthquake, such foreshocks occurred only during the last 45 minutes, specifically over a 35 minute period followed by a 10 minute period of quiescence just before the mainshock. We found no evidence of acceleration in this foreshock sequence. Rock fracturing experiments using a constant loading rate or creep tests have consistently shown that the occurrence rate of small fracturing events (acoustic emissions; AEs) increases before the main rupture [Scholz, 1968]. This accelerative pattern of preceding events was recognized in the case of the 1999 Izmit earthquake [Bouchon et al., 2011]. Large earthquakes, however, need not be accompanied by acceleration of foreshocks if a given fault's host rock
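
    The matched-filter step described above can be sketched as a sliding normalized cross-correlation of a template waveform against a continuous record; the record and template below are tiny synthetic examples, not Hi-net data.

```python
import math

# Minimal sketch of template matching: slide a known foreshock waveform
# (template) along a continuous record and flag windows whose normalized
# cross-correlation exceeds a threshold. Synthetic data only.

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def detect(record, template, threshold=0.8):
    n = len(template)
    return [i for i in range(len(record) - n + 1)
            if ncc(record[i:i + n], template) >= threshold]

template = [0.0, 1.0, -1.0, 0.5, 0.0]
record = [0.0] * 10 + [0.0, 2.0, -2.0, 1.0, 0.0] + [0.0] * 10  # scaled copy at i=10
print(detect(record, template))   # [10]
```

    Because the coefficient is amplitude-normalized, the detector finds the scaled copy even though it is twice the template's amplitude, which is how much smaller repeating foreshocks are picked out of continuous noise.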

  14. Earthquake Testing

    NASA Technical Reports Server (NTRS)

    1979-01-01

    During NASA's Apollo program, it was necessary to subject the mammoth Saturn V launch vehicle to extremely forceful vibrations to assure the moonbooster's structural integrity in flight. Marshall Space Flight Center assigned vibration testing to a contractor, the Scientific Services and Systems Group of Wyle Laboratories, Norco, California. Wyle-3S, as the group is known, built a large facility at Huntsville, Alabama, and equipped it with an enormously forceful shock and vibration system to simulate the liftoff stresses the Saturn V would encounter. Saturn V is no longer in service, but Wyle-3S has found spinoff utility for its vibration facility. It is now being used to simulate earthquake effects on various kinds of equipment, principally equipment intended for use in nuclear power generation. Government regulations require that such equipment demonstrate its ability to survive earthquake conditions. In upper left photo, Wyle3S is preparing to conduct an earthquake test on a 25ton diesel generator built by Atlas Polar Company, Ltd., Toronto, Canada, for emergency use in a Canadian nuclear power plant. Being readied for test in the lower left photo is a large circuit breaker to be used by Duke Power Company, Charlotte, North Carolina. Electro-hydraulic and electro-dynamic shakers in and around the pit simulate earthquake forces.

  15. Imaging the Fine-Scale Structure of the San Andreas Fault in the Northern Gabilan Range with Explosion and Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Xin, H.; Thurber, C. H.; Zhang, H.; Wang, F.

    2014-12-01

    A number of geophysical studies have been carried out along the San Andreas Fault (SAF) in the Northern Gabilan Range (NGR) with the purpose of characterizing in detail the fault zone structure. Previous seismic research has revealed the complex structure of the crustal volume in the NGR region in two-dimensions (Thurber et al., 1996, 1997), and there has been some work on the three-dimensional (3D) structure at a coarser scale (Lin and Roecker, 1997). In our study we use earthquake body-wave arrival times and differential times (P and S) and explosion arrival times (only P) to image the 3D P- and S-wave velocity structure of the upper crust along the SAF in the NGR using double-difference (DD) tomography. The earthquake and explosion data types have complementary strengths - the earthquake data have good resolution at depth and resolve both Vp and Vs structure, although only where there are sufficient seismic rays between hypocenter and stations, whereas the explosions contribute very good near-surface resolution but for P waves only. The original dataset analyzed by Thurber et al. (1996, 1997) included data from 77 local earthquakes and 8 explosions. We enlarge the dataset with 114 more earthquakes that occurred in the study area, obtain improved S-wave picks using an automated picker, and include absolute and cross-correlation differential times. The inversion code we use is the algorithm tomoDD (Zhang and Thurber, 2003). We assess how the P and S velocity models and earthquake locations vary as we alter the inversion parameters and the inversion grid. The new inversion results show clearly the fine-scale structure of the SAF at depth in 3D, sharpening the image of the velocity contrast from the southwest side to the northeast side.

  16. Optimizing correlation techniques for improved earthquake location

    USGS Publications Warehouse

    Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.

    2004-01-01

    Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
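
    The "subsample precision" mentioned above is commonly obtained by locating the integer-lag cross-correlation peak and refining it with a parabola fit through the peak and its two neighbors. The sketch below uses synthetic Gaussian pulses, not the Calaveras data, and is a generic illustration rather than the authors' implementation.

```python
import math

# Sketch of subsample relative arrival-time measurement: find the
# integer-lag cross-correlation peak, then refine by parabolic
# interpolation through the peak and its neighbors.

def xcorr(a, b, lag):
    """Unnormalized cross-correlation of b (shifted by `lag`) against a."""
    return sum(a[i] * b[i - lag] for i in range(len(a)) if 0 <= i - lag < len(b))

def subsample_lag(a, b, max_lag):
    cc = {k: xcorr(a, b, k) for k in range(-max_lag, max_lag + 1)}
    k = max(cc, key=cc.get)
    if k in (-max_lag, max_lag):
        return float(k)                      # peak at search edge; no refinement
    y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
    denom = y0 - 2.0 * y1 + y2
    return k + (0.5 * (y0 - y2) / denom if denom else 0.0)

gauss = lambda c: [math.exp(-0.5 * ((i - c) / 1.5) ** 2) for i in range(32)]
a, b = gauss(16.0), gauss(13.6)              # true offset: 2.4 samples
print(round(subsample_lag(a, b, 6), 2))      # close to the true 2.4
```
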

  17. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
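
    The Gutenberg-Richter aftershock probability quoted above comes from the Reasenberg-Jones style clustering model, in which the aftershock rate decays with time (Omori decay) and falls off exponentially with target magnitude. The sketch below is a minimal illustration of that calculation; the generic parameter values (a, b, p, c) are assumptions in the spirit of the generic California values, not the paper's fitted parameters.

```python
import math

def rj_probability(m_main, m_target, t1, t2, a=-1.67, b=0.91, p=1.08, c=0.05):
    """Probability of at least one event with M >= m_target in the window
    [t1, t2] days after a magnitude m_main event, from the rate model
    lambda(t, M) = 10**(a + b*(m_main - M)) * (t + c)**(-p)."""
    rate_factor = 10.0 ** (a + b * (m_main - m_target))
    if abs(p - 1.0) < 1e-9:  # p = 1 needs the logarithmic antiderivative
        integral = math.log((t2 + c) / (t1 + c))
    else:
        integral = ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)
    expected_count = rate_factor * integral
    return 1.0 - math.exp(-expected_count)  # Poisson probability of >= 1 event
```

    With these generic parameters, the three-day probability of an M 7 following an M 4.8 comes out near 0.001, the same order as the 0.0009 Gutenberg-Richter figure quoted in the abstract.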

  18. Social tension as precursor of large damaging earthquake: legend or reality?

    NASA Astrophysics Data System (ADS)

    Molchanov, O.

    2008-11-01

    Using a case study of earthquake (EQ) activity and war conflicts in the Caucasus during the 1975-2002 time interval, and a correlation analysis of the global distribution of damaging EQs and war-related social tension during the 1901-2005 period, we conclude:

    • There is a statistically reliable increase of social tension several years (or several months in the case study) before damaging EQs,
    • There is an evident decrease of social tension several years after damaging EQs, probably due to society consolidation,
    • The preseismic effect is absent for large EQs in unpopulated areas,
    • There is some factual background for the legendary belief in Almighty retribution for abnormal social behavior.

  19. Two critical tests for the Critical Point earthquake

    NASA Astrophysics Data System (ADS)

    Tzanis, A.; Vallianatos, F.

    2003-04-01

    It has been credibly argued that the earthquake generation process is a critical phenomenon culminating with a large event that corresponds to some critical point. In this view, a great earthquake represents the end of a cycle on its associated fault network and the beginning of a new one. The dynamic organization of the fault network evolves as the cycle progresses and a great earthquake becomes more probable, thereby rendering possible the prediction of the cycle's end by monitoring the approach of the fault network toward a critical state. This process may be described by a power-law time-to-failure scaling of the cumulative seismic release rate. Observational evidence has confirmed the power-law scaling in many cases and has empirically determined that the critical exponent in the power law is typically of the order n=0.3. There are also two theoretical predictions for the value of the critical exponent. Ben-Zion and Lyakhovsky (Pure appl. geophys., 159, 2385-2412, 2002) give n=1/3. Rundle et al. (Pure appl. geophys., 157, 2165-2182, 2000) show that the power-law activation associated with a spinodal instability is essentially identical to the power-law acceleration of Benioff strain observed prior to earthquakes; in this case n=0.25. More recently, the Critical Point model has gained support from the development of more dependable models of regional seismicity with realistic fault geometry that show accelerating seismicity before large events. Essentially, these models involve stress transfer to the fault network during the cycle such that the region of accelerating seismicity will scale with the size of the culminating event, as for instance in Bowman and King (Geophys. Res. Let., 38, 4039-4042, 2001). It is thus possible to understand the observed characteristics of distributed accelerating seismicity in terms of a simple process of increasing tectonic stress in a region already subjected to stress inhomogeneities at all scale lengths. Then, the region of
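
    The power-law time-to-failure scaling discussed above is commonly written as a cumulative Benioff strain curve Sigma(t) = A - B*(tf - t)**n with n in the 0.25-0.33 range. The sketch below is illustrative only: the function names are assumptions, and the energy-magnitude relation used is the standard Gutenberg-Richter one, not necessarily the authors' choice.

```python
import numpy as np

def cumulative_benioff_strain(magnitudes):
    """Running sum of sqrt(radiated energy), with energy from the
    Gutenberg-Richter relation log10 E = 1.5*M + 4.8 (E in joules)."""
    energy = 10.0 ** (1.5 * np.asarray(magnitudes, dtype=float) + 4.8)
    return np.cumsum(np.sqrt(energy))

def time_to_failure(t, tf, A, B, n=0.3):
    """Accelerating-release model: Sigma(t) = A - B*(tf - t)**n, where tf is
    the failure (mainshock) time and n is the critical exponent (~0.25-0.33)."""
    return A - B * (tf - t) ** n
```

    Fitting A, B, n, and tf to an observed Benioff strain curve (e.g. by nonlinear least squares) is how the empirical n ~ 0.3 values cited above are obtained.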

  20. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing, all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering and then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  1. Segmentation of the Calaveras-Hayward Fault System Based on 3-D Geometry and Geology at Large-Earthquake Depth

    NASA Astrophysics Data System (ADS)

    Graymer, R. W.; Simpson, R. W.; Jachens, R. C.; Ponce, D. A.; Phelps, G. A.; Watt, J. T.; Wentworth, C. M.

    2007-12-01

    For the purpose of estimating seismic hazard, the Calaveras and Hayward Faults have been considered as separate structures and analyzed and segmented based largely on their surface-trace geometry and the extent of the 1868 Hayward Fault earthquake. Recent relocations of earthquakes and 3-D geologic mapping have shown, however, that at depths associated with large earthquakes (>5 km) the fault geology and geometry are quite different from those at the surface. Using deep fault geometry inferred from these studies, we treat the Hayward and Calaveras Faults as a single system and divide the system into segments that differ from the previously accepted segments as follows: 1. The Hayward Fault connects directly to the central Calaveras Fault at depth, as opposed to the 5 km wide restraining stepover zone of multiple imbricate oblique right-lateral reverse faults at the surface east of Fremont and San Jose (between about 37.25°-37.6°N). 2. The segment boundary between the Hayward, central Calaveras, and northern Calaveras is based on their Y-shaped intersection at depth near 37.40°N, 121.76°W (Cherry Flat Reservoir), about 8 km south of the previously accepted central-northern Calaveras Fault segment boundary. 3. The central Calaveras Fault is divided near 37.14°N, 121.56°W (southern end of Anderson Lake) into two subsegments based on a large discontinuity at depth seen in relocated seismicity. 4. The Hayward Fault is divided near 37.85°N, 122.23°W (Lake Temescal) into two segments based on a large contrast in fault face geology. This segmentation is similar to that based on the extent of the 1868 fault rupture, but is now related to an underlying geologic cause. The direct connection of the Hayward and central Calaveras Faults at depth suggests that earthquakes larger than those previously modeled should be considered (~M6.9 for the southern Hayward, ~M7.2 for the southern Hayward plus northern central Calaveras). A NEHRP study by Witter and others (2003; NEHRP 03

  2. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
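
    The cell-by-cell contingency-table bookkeeping behind an ROC point can be sketched as follows; the function name and the dictionary layout are illustrative assumptions, but the hit-rate and false-alarm-rate definitions are the standard ones used for tornado and earthquake forecast verification.

```python
def forecast_scores(forecast, observed):
    """Contingency-table counts and one ROC point for binary forecasts.
    forecast/observed: parallel sequences of booleans, one per grid cell."""
    a = b = c = d = 0
    for f, o in zip(forecast, observed):
        if f and o:
            a += 1        # hit: forecast yes, event occurred
        elif f and not o:
            b += 1        # false alarm: forecast yes, no event
        elif not f and o:
            c += 1        # miss: forecast no, event occurred
        else:
            d += 1        # correct negative
    hit_rate = a / (a + c) if a + c else 0.0
    false_alarm_rate = b / (b + d) if b + d else 0.0
    return {"hits": a, "false_alarms": b, "misses": c, "correct_negatives": d,
            "hit_rate": hit_rate, "false_alarm_rate": false_alarm_rate}
```

    Sweeping the forecast's decision threshold and plotting (false_alarm_rate, hit_rate) for each threshold traces out the ROC curve used to compare the PI forecast against the RI null hypothesis.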

  3. Dynamic evaluation of seismic hazard and risks based on the Unified Scaling Law for Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.; Nekrasova, A.

    2016-12-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A + B•(6 - M) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L, A characterizes the average annual rate of strong (M = 6) earthquakes, B determines the balance between magnitude ranges, and C estimates the fractal dimension of the seismic locus in projection onto the Earth's surface. The parameters A, B, and C of USLE are used to assess, first, the expected maximum magnitude in a time interval at a seismically prone cell of a uniform grid that covers the region of interest, and then the corresponding expected ground shaking parameters. After rigorous testing against the available seismic evidence from the past (e.g., the historically reported macro-seismic intensity or paleo data), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructures. The hazard maps for a given territory change dramatically when the methodology is applied to a moving time window of a certain size, e.g. about a decade long for an intermediate-term regional assessment or exponentially increasing intervals for daily local strong-aftershock forecasting. This dynamical assessment of seismic hazard and risk is illustrated by applications to the territory of the Greater Caucasus and Crimea and to the two-year series of aftershocks of the 11 October 2008 Kurchaloy, Chechnya earthquake, whose case history appears encouraging for further systematic testing as a potential short-term forecasting tool.
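
    The USLE formula given above evaluates directly, reading "log" as log10. The sketch below is a literal transcription of that formula; the function name and the sample parameter values are illustrative assumptions, not fitted values from the study.

```python
import math

def usle_annual_rate(M, L, A, B, C):
    """Expected annual number of earthquakes of magnitude M within a
    seismically prone area of linear dimension L (km), from the USLE:
    log10 N(M, L) = A + B*(6 - M) + C*log10(L)."""
    return 10.0 ** (A + B * (6.0 - M) + C * math.log10(L))
```

    By construction, at M = 6 only the A and C terms remain, and decreasing M by one unit multiplies the rate by 10**B, which is how B controls the balance between magnitude ranges.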

  4. ShakeMapple: tapping laptop motion sensors to map the felt extents of an earthquake

    NASA Astrophysics Data System (ADS)

    Bossu, Remy; McGilvary, Gary; Kamb, Linus

    2010-05-01

    There is a significant pool of untapped sensor resources available in portable-computer embedded motion sensors. Included primarily to detect sudden strong motion in order to park the disk heads to prevent damage to the disks in the event of a fall or other severe motion, these sensors may also be tapped for other uses. We have developed a system that takes advantage of the Apple Macintosh laptops' embedded Sudden Motion Sensors to record earthquake strong-motion data to rapidly build maps of where and to what extent an earthquake has been felt. After an earthquake, it is vital to understand the damage caused, especially in urban environments, which often suffer the largest amounts of earthquake damage. Gathering as much information as possible about these impacts, to determine which areas are likely to be most affected, can aid in distributing emergency services effectively. The ShakeMapple system operates in the background, continuously saving the most recent data from the motion sensors. After an earthquake has occurred, the ShakeMapple system calculates the peak acceleration within a time window around the expected arrival and sends it to servers at the EMSC. A map plotting the felt responses is then generated and presented on the web. Because large-scale testing of such an application is inherently difficult, we propose to organize a broadly distributed "simulated event" test. The software will be available for download in April, after which we plan to organize a large-scale test by the summer. At a specified time, participating testers will be asked to create their own strong motion to be registered and submitted by the ShakeMapple client. From these responses, a felt map will be produced representing the broadly felt effects of the simulated event.
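
    The "peak acceleration within a time window around the expected arrival" step can be sketched as below. This is an assumed reimplementation for illustration; the real ShakeMapple client's buffering, units, and window choices are not described in the abstract.

```python
import numpy as np

def windowed_peak_acceleration(samples, dt, t_expected, half_window):
    """Peak absolute acceleration in a window centred on the expected
    arrival time t_expected (seconds). samples is a uniformly sampled
    acceleration trace with sample spacing dt (seconds)."""
    i0 = max(0, int(round((t_expected - half_window) / dt)))
    i1 = min(len(samples), int(round((t_expected + half_window) / dt)) + 1)
    if i0 >= i1:
        return 0.0  # window falls entirely outside the recorded buffer
    return float(np.max(np.abs(np.asarray(samples[i0:i1], dtype=float))))
```

    A client would keep a rolling buffer of recent sensor samples and apply this once it learns the event origin time and expected arrival at its location.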

  5. Geometry of a large-scale, low-angle, midcrustal thrust (Woodroffe Thrust, central Australia)

    NASA Astrophysics Data System (ADS)

    Wex, S.; Mancktelow, N. S.; Hawemann, F.; Camacho, A.; Pennacchioni, G.

    2017-11-01

    The Musgrave Block in central Australia exposes numerous large-scale mylonitic shear zones developed during the intracontinental Petermann Orogeny around 560-520 Ma. The most prominent structure is the crustal-scale, over 600 km long, E-W trending Woodroffe Thrust, which is broadly undulate but generally dips shallowly to moderately to the south and shows an approximately top-to-north sense of movement. The estimated metamorphic conditions of mylonitization indicate a regional variation from predominantly midcrustal (circa 520-620°C and 0.8-1.1 GPa) to lower crustal (circa 650°C and 1.0-1.3 GPa) levels in the direction of thrusting, which is also reflected in the distribution of preserved deformation microstructures. This variation in metamorphic conditions is consistent with a south dipping thrust plane but is only small, implying that a ≥60 km long N-S segment of the Woodroffe Thrust was originally shallowly dipping at an average estimated angle of ≤6°. The reconstructed geometry suggests that basement-cored, thick-skinned, midcrustal thrusts can be very shallowly dipping on a scale of many tens of kilometers in the direction of movement. Such a geometry would require the rocks along the thrust to be weak, but field observations (e.g., large volumes of syntectonic pseudotachylyte) argue for a strong behavior, at least transiently. Localization on a low-angle, near-planar structure that crosscuts lithological layers requires a weak precursor, such as a seismic rupture in the middle to lower crust. If this was a single event, the intracontinental earthquake must have been large, with the rupture extending laterally over hundreds of kilometers.

  6. Strike-slip earthquakes can also be detected in the ionosphere

    NASA Astrophysics Data System (ADS)

    Astafyeva, Elvira; Rolland, Lucie M.; Sladen, Anthony

    2014-11-01

    It is generally assumed that co-seismic ionospheric disturbances are generated by large vertical static displacements of the ground during an earthquake. Consequently, it is expected that co-seismic ionospheric disturbances are only observable after earthquakes with a significant dip-slip component. Therefore, earthquakes dominated by strike-slip motion, i.e. with very little vertical co-seismic component, are not expected to generate ionospheric perturbations. In this work, we use total electron content (TEC) measurements from ground-based GNSS receivers to study the ionospheric response to the six largest recent strike-slip earthquakes: the Mw7.8 Kunlun earthquake of 14 November 2001, the Mw8.1 Macquarie earthquake of 23 December 2004, the Sumatra earthquake doublet, Mw8.6 and Mw8.2, of 11 April 2012, the Mw7.7 Balochistan earthquake of 24 September 2013 and the Mw7.7 Scotia Sea earthquake of 17 November 2013. We show that large strike-slip earthquakes generate large ionospheric perturbations of amplitude comparable with those induced by dip-slip earthquakes of equivalent magnitude. We consider that in the absence of significant vertical static co-seismic displacements of the ground, other seismological parameters (primarily the magnitude of co-seismic horizontal displacements, seismic fault dimensions, seismic slip) may contribute to the generation of large-amplitude ionospheric perturbations.

  7. High Attenuation Rate for Shallow, Small Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe

    2017-09-01

    We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the above differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, the apparent attenuation rate for a moderate or large earthquake is essentially the same as that of body waves propagating in a homogeneous medium due to the dominance of surface waves.

  8. Prevalence and psychosocial risk factors of PTSD: 18 months after Kashmir earthquake in Pakistan.

    PubMed

    Naeem, Farooq; Ayub, Muhammad; Masood, Khadija; Gul, Huma; Khalid, Mahwish; Farrukh, Ammara; Shaheen, Aisha; Waheed, Waquas; Chaudhry, Haroon Rasheed

    2011-04-01

    On average, 939 earthquakes with a magnitude between 5 and 8 on the Richter scale occur around the world each year. In earthquakes, developing countries are prone to large-scale destruction because of the poor structural quality of buildings and poor earthquake preparedness. On 8th October 2005, a major earthquake hit the remote and mountainous region of northern Pakistan and Kashmir. We wanted to find out the rate of PTSD in a randomly selected sample of participants living in the earthquake area and the correlates of PTSD. The study was conducted 18 months after the earthquake. We selected a sample of men and women living in houses and tents for interviews. Using well-established instruments for PTSD and general psychiatric morbidity, we gathered information from over 1200 people in face-to-face interviews. We gathered information about trauma exposure and loss as well. 55.2% of women and 33.4% of men suffered from PTSD. Living in a joint family was protective against the symptoms of PTSD. Dose of exposure to trauma was associated with the symptoms of PTSD. Living in a tent was associated with general psychiatric morbidity but not with PTSD. We used questionnaires instead of interviews to detect the symptoms of psychiatric disorders. The symptoms of PTSD are common 18 months after the earthquake and they are specifically associated with the dose of trauma exposure. This may have implications for the rehabilitation of this population.

  9. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed Central

    Kanamori, H

    1996-01-01

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding. PMID:11607657

  10. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed

    Kanamori, H

    1996-04-30

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

  11. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

  12. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite-fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extended > 150 km NW from the hypocenter, longer than slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  13. High-resolution earthquake relocation in the Fort Worth and Permian Basins using regional seismic stations

    NASA Astrophysics Data System (ADS)

    Ogwari, P.; DeShon, H. R.; Hornbach, M.

    2017-12-01

    Post-2008 earthquake rate increases in the Central United States have been associated with large-scale subsurface disposal of waste fluids from oil and gas operations. Various earthquake sequences in the Fort Worth and Permian basins began in the absence of seismic stations at local distances to record and accurately locate hypocenters. Most typically, the initial earthquakes have been located using regional seismic network stations (>100 km epicentral distance) and global 1D velocity models, which usually results in large location uncertainty (especially in depth), fails to resolve magnitude <2.5 events, and does not constrain the geometry of the activated fault(s). Here, we present a method to better resolve earthquake occurrence and location using matched filters and regional relative location when local data become available. We use the local-distance data for high-resolution earthquake location, identifying earthquake templates and accurate source-station raypath velocities for the Pg and Lg phases at regional stations. A matched-filter analysis is then applied to seismograms recorded at US network stations and at adopted TA stations that recorded the earthquakes before and during the local network deployment period. Positive detections are declared based on manual review of the associated P and S arrivals on local stations. We apply hierarchical clustering to distinguish earthquakes that are both spatially clustered and spatially separated. Finally, we conduct relative earthquake and earthquake-cluster location using regional station differential times. Initial analysis applied to the 2008-2009 DFW airport sequence in north Texas results in time-continuous imaging of epicenters extending into 2014. Seventeen earthquakes in the USGS earthquake catalog scattered across a 10 km2 area near DFW airport are relocated onto a single fault using these approaches. These techniques will also be applied toward imaging recent earthquakes in the
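
    The core of a matched-filter scan is a template waveform slid across continuous data, declaring a candidate detection wherever the normalized correlation coefficient exceeds a threshold. The sketch below is an illustrative single-channel version; the function name, threshold value, and brute-force loop (rather than the FFT-based implementations used in practice) are assumptions.

```python
import numpy as np

def matched_filter_detect(continuous, template, threshold=0.7):
    """Slide a waveform template over a continuous trace; return
    (sample_index, correlation) pairs where the normalized (Pearson)
    correlation coefficient reaches the threshold."""
    cont = np.asarray(continuous, dtype=float)
    tmpl = np.asarray(template, dtype=float)
    nt = len(tmpl)
    tz = tmpl - tmpl.mean()
    tnorm = np.sqrt(np.dot(tz, tz))
    detections = []
    for i in range(len(cont) - nt + 1):
        win = cont[i:i + nt]
        wz = win - win.mean()
        wnorm = np.sqrt(np.dot(wz, wz))
        if tnorm == 0 or wnorm == 0:
            continue  # flat segment: correlation undefined
        cc = np.dot(tz, wz) / (tnorm * wnorm)
        if cc >= threshold:
            detections.append((i, float(cc)))
    return detections
```

    In a network setting, per-channel coefficients are stacked across stations before thresholding, which is what suppresses false triggers from noise on any single trace.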

  14. Landslides Triggered by the 2015 Gorkha, Nepal Earthquake

    NASA Astrophysics Data System (ADS)

    Xu, C.

    2018-04-01

    The 25 April 2015 Gorkha Mw 7.8 earthquake in central Nepal caused a large number of casualties and serious property losses, and also induced numerous landslides. Based on visual interpretation of high-resolution optical satellite images pre- and post-earthquake and field reconnaissance, we delineated 47,200 coseismic landslides with a total distribution extent of more than 35,000 km2, occupying a total area of about 110 km2. On the basis of a scaling relationship between landslide area (A) and volume (V), V = 1.3147 × A^1.2085, the total volume of the coseismic landslides is estimated to be about 9.64 × 10^8 m3. Calculation yields that the landslide number density, area density, and volume density are 1.32 km^-2, 0.31%, and 0.027 m, respectively. The spatial distribution of these landslides is consistent with that of the mainshock and aftershocks and the inferred causative fault, indicating the effect of the earthquake energy release on the pattern of coseismic landslides. This study provides a new, more detailed and objective inventory of the landslides triggered by the Gorkha earthquake, which would be significant for further study of the genesis of coseismic landslides, hazard assessment and the long-term impact of slope failure on the geological environment in the earthquake-scarred region.
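
    The area-volume scaling V = 1.3147 × A^1.2085 applies per landslide (not to the aggregate area), with A in m2 and V in m3, and the per-landslide volumes are then summed. A minimal sketch, with function names assumed for illustration:

```python
def landslide_volume(area_m2, alpha=1.3147, gamma=1.2085):
    """Volume (m^3) of a single landslide from its area (m^2),
    using the power-law scaling V = alpha * A**gamma."""
    return alpha * area_m2 ** gamma

def total_landslide_volume(areas_m2):
    """Sum per-landslide volumes over an inventory of landslide areas."""
    return sum(landslide_volume(a) for a in areas_m2)
```

    Because gamma > 1, the scaling is superlinear: one large landslide contributes more volume than several small ones of the same combined area, which is why the relation must be applied to individual polygons in the inventory.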

  15. Twitter earthquake detection: Earthquake monitoring in a social world

    USGS Publications Warehouse

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provide very short first-impression narratives from people who experienced the shaking.
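The short-term-average / long-term-average idea applied to the tweet-frequency time series can be sketched as follows; the window lengths and trigger level here are illustrative assumptions, not the tuned values from the study:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA ratio for a non-negative series such as
    tweets-per-minute counts."""
    x = np.asarray(x, dtype=float)
    sta = np.convolve(x, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(x, np.ones(nlta) / nlta, mode="same")
    lta[lta == 0] = 1e-12           # avoid division by zero
    return sta / lta

def detect(x, nsta=2, nlta=30, trigger=5.0):
    """Indices where the STA/LTA ratio reaches the trigger level."""
    r = sta_lta(x, nsta, nlta)
    return np.where(r >= trigger)[0]
```

A burst of tweet activity raises the short-term mean well above the long-term background, pushing the ratio over the trigger level.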

  16. Tremors behind the power outlet - where earthquakes appear on our monthly bill

    NASA Astrophysics Data System (ADS)

    Baisch, Stefan

    2013-04-01

    The world's appetite for energy has increased significantly over the last decades, not least due to the rapid growth of Asian economies. In parallel, the Fukushima shock raised widespread concerns about nuclear power generation and an increasing desire for clean energy technologies. To reconcile rising demand, limited resources, and a growing green consciousness, both the up-scaling of conventional and the development of renewable energy technologies are required. This is where the phenomenon of man-made earthquakes appears on the radar screen. Several of our energy production technologies have the potential to cause small, moderate, or sometimes even larger-magnitude earthquakes. There is a general awareness that coal mining activities can produce moderate-sized earthquakes. Similarly, long-term production from hydrocarbon reservoirs can lead to subsurface deformations accompanied by even larger-magnitude earthquakes. Even the "renewables" are not necessarily earthquake-free. Several of the largest man-made earthquakes have been caused by water impoundment for hydropower plants. On a much smaller scale, micro-earthquakes can occur in enhanced geothermal systems (EGS). Although still in its infancy, the EGS technology has an enormous potential to supply base-load electricity, and its technical feasibility for large-scale application is currently being investigated in about a dozen pilot projects. The principal concept of heat extraction by circulating water through a subsurface reservoir is fairly simple; the technical implementation of EGS, however, presents several challenges, not all of which have yet been solved. As the hydraulic conductivity at depth is usually extremely low at EGS sites, technical stimulation of hydraulic pathways is required to create an artificial heat exchanger.
By injecting fluid under high pressure into the subsurface, tectonic stress on existing fractures can be released and the associated shearing of the fractures

  17. Revealing the deformational anomalies based on GNSS data in relation to the preparation and stress release of large earthquakes

    NASA Astrophysics Data System (ADS)

    Kaftan, V. I.; Melnikov, A. Yu.

    2018-01-01

    The results of Global Navigation Satellite System (GNSS) observations in the regions of large earthquakes are analyzed. The characteristics of the Earth's surface deformations before, during, and after the earthquakes are considered. The obtained results demonstrate the presence of anomalous deformations close to the epicenters of the events. Statistical estimates of the anomalous strains and their relationship with measurement errors are obtained. Conclusions are drawn about the possible use of local GNSS networks to assess the risk of strong seismic events.
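A common way to quantify such surface deformation from a local GNSS network is to fit a uniform horizontal strain tensor to the station displacements by least squares. This sketch is illustrative, not the authors' method:

```python
import numpy as np

def uniform_strain(positions, displacements):
    """Least-squares fit of a uniform 2-D horizontal deformation to a
    local GNSS network: u_i = t + G @ x_i, where t is a translation and
    G holds the displacement gradients. Returns the strain tensor,
    i.e. the symmetric part of G."""
    n = len(positions)
    A = np.zeros((2 * n, 6))
    b = np.asarray(displacements, dtype=float).ravel()
    for k, (x, y) in enumerate(positions):
        A[2 * k] = [1, 0, x, y, 0, 0]       # east displacement equation
        A[2 * k + 1] = [0, 1, 0, 0, x, y]   # north displacement equation
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    G = np.array([[p[2], p[3]], [p[4], p[5]]])
    return 0.5 * (G + G.T)
```

With at least three non-collinear stations the six parameters (two translations plus four gradients) are determined; anomalous strain then shows up as a change of this tensor between observation epochs.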

  18. Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.

    2014-05-01

    The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analysis, however, showed that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to their basic heuristic limitations, the standard PSHA estimates are unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of the seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, now allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility of efficiently computing synthetic seismograms in complex, laterally heterogeneous, anelastic media. In this way a set of ground motion scenarios can be defined at both national and local scales, the latter accounting for the 2D and 3D heterogeneities of the medium travelled by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale, even on a modern laptop computer. 
At the scenario scale, quick parametric studies can be easily

  19. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake

    NASA Astrophysics Data System (ADS)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

    Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low-frequency earthquakes, slow-slip events, and other modes of fault slip. These phenomena represent modes of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. Whereas the regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria, and it faces considerable earthquake risk. The Sofia seismic zone is located in south-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (of 95 years) for moderate events; moreover, a reduced number of small earthquakes have been registered in the recent past. The Mw5.6 earthquake was widely felt on the territory of Bulgaria and in neighbouring countries. No casualties or severe injuries were reported; mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a

  20. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    NASA Astrophysics Data System (ADS)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

    Rays from the hypocenter around the coseismic region of the Tohoku-oki earthquake take off downward and pass through the Pacific plate. The landward low-V zone with a large anomaly corresponds to the western edge of the coseismic slip zone of the 2011 Tohoku-oki earthquake. The initial break point (hypocenter) is associated with the edge of a slightly low-V and low-Vp/Vs zone corresponding to the boundary of the low- and high-V zones. The trenchward low-V and low-Vp/Vs zone extending southwestward from the hypocenter may indicate the existence of a subducted seamount. The high-V zone and low-Vp/Vs zone might have accumulated the strain and resulted in the huge coseismic slip zone of the 2011 Tohoku earthquake. The low-V and low-Vp/Vs zone is a slight fluctuation within the high-V zone and might have acted as the initial break point of the 2011 Tohoku earthquake. References: Matsubara, M. and K. Obara (2011) The 2011 Off the Pacific Coast of Tohoku earthquake related to a strong velocity gradient with the Pacific plate, Earth Planets Space, 63, 663-667. Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004) Recent progress of seismic observation networks in Japan-Hi-net, F-net, K-NET and KiK-net, Research News Earth Planets Space, 56, xv-xxviii.

  1. Effects of deep basins on structural collapse during large subduction earthquakes

    USGS Publications Warehouse

    Marafi, Nasser A.; Eberhard, Marc O.; Berman, Jeffrey W.; Wirth, Erin A.; Frankel, Arthur

    2017-01-01

    Deep sedimentary basins are known to increase the intensity of ground motions, but this effect is only implicitly considered in the seismic hazard maps used in U.S. building codes. The basin amplification of ground motions from subduction earthquakes is particularly important in the Pacific Northwest, where the hazard at long periods is dominated by such earthquakes. This paper evaluates the effects of basins on spectral accelerations, ground-motion duration, spectral shape, and structural collapse using subduction earthquake recordings from basins in Japan with depths similar to that of the Puget Lowland basin. For three of the Japanese basins and the Puget Lowland basin, the spectral accelerations were amplified by a factor of 2 to 4 for periods above 2.0 s. The long durations of subduction earthquakes and the effects of basins on spectral shape combine to lower the spectral accelerations at collapse for a set of building archetypes relative to other ground motions. For the hypothetical case in which these motions represent the entire hazard, the archetypes would need up to 3.3 times their strength to compensate for these effects.

  2. Source parameters of the 1999 Osa peninsula (Costa Rica) earthquake sequence from spectral ratios analysis

    NASA Astrophysics Data System (ADS)

    Verdecchia, A.; Harrington, R. M.; Kirkpatrick, J. D.

    2017-12-01

    Many observations suggest that duration and size scale in a self-similar way for most earthquakes. Deviations from the expected scaling would suggest that some physical feature on the fault surface influences the speed of rupture differently at different length scales. Determining whether differences in scaling exist between small and large earthquakes is complicated by the fact that duration estimates of small earthquakes are often distorted by travel-path and site effects. However, when carefully estimated, scaling relationships between earthquakes may provide important clues about fault geometry and the spatial scales over which it affects fault rupture speed. The Mw 6.9, 20 August 1999, Quepos earthquake occurred on the plate-boundary thrust fault along the southern Costa Rica margin, where the subducting seafloor is cut by numerous normal faults. The mainshock and aftershock sequence were recorded by land and (partially) by ocean-bottom (OBS) seismic arrays deployed as part of the CRSEIZE experiment. Here we investigate the size-duration scaling of the mainshock and relocated aftershocks on the plate boundary to determine if a change in scaling exists that is consistent with a change in fault surface geometry at a specific length scale. We use waveforms from 5 short-period land stations and 12 broadband OBS stations to estimate corner frequencies (the inverse of duration) and seismic moments for several aftershocks on the plate interface. We first use spectral amplitudes of single events to estimate corner frequencies and seismic moments. We then adopt a spectral ratio method to correct for non-source-related effects and refine the corner frequency estimation. For the spectral ratio approach, we use pairs of earthquakes with similar waveforms (correlation coefficient > 0.7), with waveform similarity implying event co-location. Preliminary results from single spectra show similar corner frequency values among events of 0.5 ≤ M ≤ 3.6, suggesting a decrease in
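The spectral-ratio idea rests on the fact that, for co-located events recorded at the same station, path and site terms cancel in the ratio, leaving only the two source spectra. A sketch using the standard Brune (omega-squared) source model; the grid-search fit and function names are assumptions, not the authors' code:

```python
import numpy as np

def brune_ratio(f, m1, m2, fc1, fc2):
    """Spectral ratio of two co-located Brune omega-squared sources;
    shared path and site terms cancel in the ratio."""
    return (m1 / m2) * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

def fit_corner_frequencies(f, ratio, moment_ratio, fc_grid):
    """Grid-search the corner-frequency pair (fc1, fc2) minimizing the
    log-domain L2 misfit to an observed spectral ratio, with the
    long-period moment ratio held fixed."""
    best, best_err = None, np.inf
    for fc1 in fc_grid:
        for fc2 in fc_grid:
            pred = brune_ratio(f, moment_ratio, 1.0, fc1, fc2)
            err = np.sum((np.log(ratio) - np.log(pred)) ** 2)
            if err < best_err:
                best, best_err = (fc1, fc2), err
    return best
```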

  3. Real-time GPS integration for prototype earthquake early warning and near-field imaging of the earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Hudnut, K. W.; Given, D.; King, N. E.; Lisowski, M.; Langbein, J. O.; Murray-Moraleda, J. R.; Gomberg, J. S.

    2011-12-01

    Over the past several years, USGS has developed the infrastructure for integrating real-time GPS with seismic data in order to improve our ability to respond to earthquakes and volcanic activity. As part of this effort, we have tested real-time GPS processing software components and identified the most robust and scalable options. Simultaneously, additional near-field monitoring stations have been built using a new station design that combines dual-frequency GPS with high-quality strong-motion sensors and dataloggers. Several existing stations have been upgraded in this way, using USGS Multi-Hazards Demonstration Project and American Recovery and Reinvestment Act funds in southern California. In particular, existing seismic stations have been augmented by the addition of GPS and vice versa. The focus of new instrumentation as well as datalogger and telemetry upgrades to date has been along the southern San Andreas fault, in hopes of 1) capturing a large and potentially damaging rupture in progress and augmenting inputs to earthquake early warning systems, and 2) recovering high-quality, on-scale recordings of large dynamic displacement waveforms, static displacements, and immediate and long-term post-seismic transient deformation. Obtaining definitive records of large ground motions close to a large San Andreas or Cascadia rupture (or volcanic activity) would be a fundamentally important contribution to understanding near-source large ground motions and the physics of earthquakes, including the rupture process and the friction associated with crack propagation and healing. Soon, telemetry upgrades will be completed in Cascadia and throughout the Plate Boundary Observatory as well. By collaborating with other groups on open-source automation system development, we will be ready to process the newly available real-time GPS data streams and to fold these data in with existing strong-motion and other seismic data. Data from these same stations will also serve the very

  4. Fractals and Forecasting in Earthquakes and Finance

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with ideas similar to those used for large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that earthquakes and financial crashes can both be described by a common Landau-Ginzburg-type free energy model involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
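The Gutenberg-Richter relation mentioned above, log10 N(>=M) = a - b*M, can be illustrated with cumulative counts and the standard Aki (1965) maximum-likelihood b-value estimate. This sketch is illustrative and not part of the article:

```python
import numpy as np

def gutenberg_richter_counts(mags, m_bins):
    """Cumulative counts N(>=M) underlying the Gutenberg-Richter
    relation log10 N = a - b*M."""
    return np.array([(mags >= m).sum() for m in m_bins])

def b_value_mle(mags, m_min):
    """Aki (1965) maximum-likelihood b-value estimate for a catalog
    complete above magnitude m_min."""
    m = np.asarray(mags)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)
```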

  5. Geological control of earthquake induced landslide in El Salvador

    NASA Astrophysics Data System (ADS)

    Tsige Aga, Meaza

    2010-05-01

    Geological control of earthquake-induced landslides in El Salvador. M. Tsige(1), I. Garcia-Flórez(1), R. Mateos(2). (1) Universidad Complutense de Madrid, Facultad de Geología, Madrid, Spain (meaza@geo.ucm.es); (2) IGME, Mallorca. El Salvador is located in one of the most seismically active areas in Central America and has suffered severe damage and loss of life in historical and recent earthquakes as a consequence of earthquake-induced landslides. The most common landslides were shallow disrupted soil slides on steep slopes, and they were particularly dense in the central part of the country. Most of them are sited in the recent, mechanically weak volcanic pyroclastic deposits known as "Tierra Blanca" and "Tierra Color Café", which are prone to seismic wave amplification and are thought to have contributed to the triggering of some of the hundreds of landslides related to the 2001 (Mw = 7.6 and Mw = 6.7) seismic events. The earthquakes also triggered numerous deep, large-scale landslides responsible for the enormous devastation of villages and towns, and these remain a source of high seismic hazard. Many of these landslides occurred at distances of more than 50 to 100 km from the source, although some occurred in the near field. Until now there has been little effort to explain the causes and concentration of the deep large-scale landslides, especially their distribution, failure mechanism, and the post-rupture behavior of the landslide mass (long run-out). We carried out a field investigation of the landslides and geological materials, together with interpretation of aerial photographs taken before and after the two 2001 (Mw = 7.6 and Mw = 6.7) El Salvador earthquakes. The results show that most of the large-scale landslides occurred as coherent block slides with the sliding surface parallel to pre-existing fractures and fault planes (La Leona, Barriolera, El Desague, Jiboa landslides). Moreover, the pre-existing fractures are weak zones controlling

  6. What Controls Subduction Earthquake Size and Occurrence?

    NASA Astrophysics Data System (ADS)

    Ruff, L. J.

    2008-12-01

    hypotheses except one. This falsification process requires both concentrated multidisciplinary efforts and patience. Large earthquake recurrence intervals in the same subduction zone segment display a significant, and therefore unfortunate, variability. Over the years, many of us have devised simple models to explain this variability. Of course, there are also more complicated explanations with many additional model parameters. While there has been important observational progress as both historical and paleo-seismological studies continue to add more data pairs of fault length and recurrence intervals, there has been a frustrating lack of progress in elimination of candidate models or processes that explain recurrence time variability. Some of the simple models for recurrence times offer a probabilistic or even deterministic prediction of future recurrence times - and have been used for hazards evaluation. It is important to know if these models are correct. Since we do not have the patience to wait for a strict statistical test, we must find other ways to test these ideas. For example, some of the simple deterministic models for along-strike segment interaction make predictions for variation in tectonic stress state that can be tested during the inter-seismic period. We have seen how some observational discoveries in the past decade (e.g., the episodic creep events down-dip of the seismogenic zone) give us additional insight into the physical processes in subduction zones; perhaps multi-disciplinary studies of subduction zones will discover a new way to reliably infer large-scale shear stresses on the plate interface?

  7. A Possible Explanation for the Absence of Large Tsunami Following the Earthquake of March 28, 2005 in the Northern Sumatra: No Major Submarine Landslide

    NASA Astrophysics Data System (ADS)

    Lee, S.-M.

    2005-05-01

    In just over three months, two large earthquakes (magnitudes Mw = 9.0 and 8.7), separated by only a few hundred kilometers in epicentral distance, shook the fore-arc region of northern Sumatra. According to preliminary reports released by USGS (http://neic.usgs.gov), the seismic moment tensor solutions of the two events match quite well, suggesting that the movement of the fault blocks that triggered them was similar. Yet the two earthquakes had drastically different consequences: the December 2004 earthquake triggered a catastrophic tsunami whereas the March 2005 earthquake did not. This difference raises the possibility that the December 2004 tsunami was not actually triggered by the faulting itself but by a submarine landslide. Earthquake-triggered submarine landslides can sometimes be overlooked as the direct cause of major tsunamis because their location often coincides with the fault rupture zones, but they are known to be an important source, especially along active margins with high sedimentation rates. Scientists suspect that a similar event happened on July 17, 1998, when a magnitude 7.0 earthquake on a low-angle thrust fault caused a submarine slump, which in turn generated the tsunami that devastated the coastal region of NW Papua New Guinea, claiming more than 2000 lives. If this was the case in Sumatra, it explains why a major tsunami did not occur following the March 2005 earthquake. A large amount of the sediment deposited along the continental margin by the erosion of the high mountain ranges of Sumatra had already slid down the continental slope during the earthquake on December 26, 2004, and therefore little sediment was left to slide down and generate another major tsunami. The submarine topography may also have been a factor, as the area around the epicenter of the March 2005 earthquake has a longer extent of steep down-slope section compared to that of December 2004. 
In addition, the region around the December 2004 earthquake has

  8. Large-Scale Outflows in Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble the radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  9. Dynamic triggering potential of large earthquakes recorded by the EarthScope U.S. Transportable Array using a frequency domain detection method

    NASA Astrophysics Data System (ADS)

    Linville, L. M.; Pankow, K. L.; Kilb, D. L.; Velasco, A. A.; Hayward, C.

    2013-12-01

    Because of the abundance of data from the EarthScope U.S. Transportable Array (TA), data paucity and station sampling bias in the US are no longer significant obstacles to understanding some of the physical parameters driving dynamic triggering. Initial efforts to determine locations of dynamic triggering in the US following large earthquakes (M ≥ 8.0) during the TA deployment relied on a time-domain detection algorithm that used an optimized short-term-average to long-term-average (STA/LTA) filter and resulted in an unmanageably large number of false positive detections. Specific site sensitivities and characteristic noise, when coupled with changes in detection rates, often resulted in misleading output. To navigate this problem, we develop a frequency-domain detection algorithm that first pre-whitens each seismogram and then computes a broadband frequency stack of the data using a three-hour time window beginning at the origin time of the mainshock. This method is successful because of the broadband nature of earthquake signals compared with the more band-limited high-frequency picks that clutter results from time-domain picking algorithms. Preferential band filtering of the frequency stack for individual events can further increase the accuracy and drive the detection threshold to below magnitude one, but at a general cost to detection levels across large-scale data sets. Of the 15 mainshocks studied, 12 show evidence of discrete spatial clusters of local earthquake activity occurring within the array during the mainshock coda. Most of this activity is in the western US, with notable sequences in northwest Wyoming, western Texas, southern New Mexico, and western Montana. Repeat stations (associated with 2 or more mainshocks) are generally rare, but when they occur, they do so exclusively in California and Nevada. 
Notably, two of the most prolific regions of seismicity following a single mainshock occurred after the 2009 magnitude 8.1 Samoa (Sep 29, 2009, 17:48:10) event, in areas with few
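The pre-whitening and broadband frequency stack described above can be sketched as: compute a spectrogram, normalize each frequency bin by its own median over time (the whitening step), and average across frequency so broadband transients stand out against band-limited noise. The window parameters here are illustrative assumptions:

```python
import numpy as np

def broadband_stack(x, nwin=256, hop=64):
    """Pre-whiten a spectrogram bin-by-bin (divide each frequency bin
    by its median over time) and stack across frequency. Broadband
    transients dominate the stack, while stationary band-limited
    noise whitens to values near one."""
    frames = []
    for i in range(0, len(x) - nwin + 1, hop):
        seg = x[i:i + nwin] * np.hanning(nwin)
        frames.append(np.abs(np.fft.rfft(seg)))
    S = np.array(frames).T                       # (frequency, time)
    med = np.median(S, axis=1, keepdims=True)
    med[med == 0] = 1e-12                        # guard empty bins
    return (S / med).mean(axis=0)                # stacked trace vs. time
```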

  10. Rapid estimation of the economic consequences of global earthquakes

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    to reduce this time gap to more rapidly and effectively mobilize response. We present here a procedure to rapidly and approximately ascertain the economic impact immediately following a large earthquake anywhere in the world. In principle, the approach presented is similar to the empirical fatality estimation methodology proposed and implemented by Jaiswal and others (2009). In order to estimate economic losses, we need an assessment of the economic exposure at various levels of shaking intensity. The economic value of all the physical assets exposed at different locations in a given area is generally not known and is extremely difficult to compile at a global scale. In the absence of such a dataset, we first estimate the total Gross Domestic Product (GDP) exposed at each shaking intensity by multiplying the per-capita GDP of the country by the total population exposed at that shaking intensity level. We then scale the total GDP estimated at each intensity by an exposure correction factor, a multiplying factor that accounts for the disparity between total wealth and/or economic assets and annual GDP. The economic exposure obtained using this procedure is thus a proxy estimate for the economic value of the actual inventory that is exposed to the earthquake. The economic loss ratio, defined in terms of a country-specific lognormal cumulative distribution function of shaking intensity, is derived and calibrated against the losses from past earthquakes. This report describes the development of a country- or region-specific economic loss ratio model using economic loss data available for global earthquakes from 1980 to 2007. The proposed model is a potential candidate for directly estimating economic losses within the currently operating PAGER system. 
PAGER's other loss models use indirect methods that require substantially more data (such as building/asset inventories, vulnerabilities, and the asset values exposed at the time of the earthquake) to implement on a global basis.
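The exposure-times-loss-ratio calculation described above can be sketched as follows; all parameter names and values (the wealth correction factor alpha and the lognormal parameters theta and beta) are illustrative assumptions, not PAGER's calibrated coefficients:

```python
import math

def loss_ratio(intensity, theta, beta):
    """Economic loss ratio modeled, as in the abstract, as a lognormal
    CDF of shaking intensity (theta: median intensity, beta: log-space
    standard deviation)."""
    return 0.5 * (1 + math.erf(math.log(intensity / theta) /
                               (beta * math.sqrt(2))))

def economic_loss(pop_by_intensity, gdp_per_capita, alpha, theta, beta):
    """Sum over intensity bins: exposed GDP (population times per-capita
    GDP, scaled by the wealth correction factor alpha) times the
    country-specific loss ratio at that intensity."""
    return sum(alpha * gdp_per_capita * pop * loss_ratio(i, theta, beta)
               for i, pop in pop_by_intensity.items())
```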

  11. Earthquake damage to transportation systems

    USGS Publications Warehouse

    McCullough, Heather

    1994-01-01

    Earthquakes represent one of the most destructive natural hazards known to man. A large-magnitude earthquake near a populated area can affect residents over thousands of square kilometers and cause billions of dollars in property damage. Such an event can kill or injure thousands of residents and disrupt the socioeconomic environment for months, sometimes years. A serious result of a large-magnitude earthquake is the disruption of transportation systems, which limits post-disaster emergency response. Movement of emergency vehicles, such as police cars, fire trucks and ambulances, is often severely restricted. Damage to transportation systems is categorized below by cause: ground failure, faulting, vibration damage, and tsunamis.

  12. Dual megathrust slip behaviors of the 2014 Iquique earthquake sequence

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen; Huang, Hui; Bürgmann, Roland; Ampuero, Jean Paul; Strader, Anne

    2015-02-01

    The transition between seismic rupture and aseismic creep is of central interest to better understand the mechanics of subduction processes. A Mw 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of northern Chile. This event was preceded by a long foreshock sequence including a 2-week-long migration of seismicity initiated by a Mw 6.7 earthquake. Repeating earthquakes were found among the foreshock sequence that migrated towards the mainshock hypocenter, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations of the recurrence times of the repeating earthquakes highlight the diverse seismic and aseismic slip behaviors on different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while repeaters that occurred both before and after the mainshock were in the area complementary to the mainshock rupture. The spatiotemporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip leading up to the mainshock and illuminates the distribution of postseismic afterslip. Various finite fault models indicate that the largest coseismic slip generally occurred down-dip from the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show an emergent onset of moment rate at low frequency (< 0.1 Hz), while back-projection shows a steady increase of high frequency power (> 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the rupture expands in rich bursts along the rim of a semi-elliptical region with episodes of re-ruptures, suggesting delayed failure of asperities. The high-frequency rupture remains within an

  13. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information in recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the Earth's interior. The synthetic data are simulated by solving the wave equation for the entire globe using a spectral-element method. In order to ensure inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and a very large volume of data must be both read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on intelligent ML models such as neural networks, support vector machines or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
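
    One way to picture the window-classification step is a simple rule-based screen over per-window quality metrics. The metric names and thresholds below are illustrative assumptions, not the project's actual criteria; an ML classifier would learn such boundaries from labeled windows instead of hard-coding them.

```python
# Illustrative quality metrics per misfit window (names hypothetical):
#   cc   - normalized cross-correlation between observed and synthetic
#   dt   - time shift (s) maximizing the cross-correlation
#   dlnA - amplitude ratio ln(obs/syn)

def classify_window(cc, dt, dlnA,
                    cc_min=0.70, dt_max=5.0, dlnA_max=1.0):
    """Return True if the window is usable for a misfit measurement."""
    return cc >= cc_min and abs(dt) <= dt_max and abs(dlnA) <= dlnA_max

windows = [
    {"cc": 0.92, "dt": 1.3, "dlnA": 0.2},   # good fit -> usable
    {"cc": 0.55, "dt": 0.8, "dlnA": 0.1},   # poor correlation -> rejected
    {"cc": 0.85, "dt": 7.9, "dlnA": 0.3},   # suspected cycle skip -> rejected
]
usable = [w for w in windows if classify_window(**w)]
```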

  14. Synchronization of coupled large-scale Boolean networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fangfei, E-mail: li-fangfei@163.com

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, an aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
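
    A toy illustration of complete synchronization in coupled Boolean networks, on a far smaller scale than the paper's aggregation approach: a 3-node response network driven by one node of an identical drive network. The update rules and coupling are invented for illustration.

```python
# Minimal sketch of drive-response synchronization of Boolean networks.

def step(state, rule):
    """One synchronous update of a Boolean network."""
    return tuple(f(state) for f in rule)

# Drive network: x1' = x2 AND x3, x2' = x1, x3' = NOT x1 (illustrative).
drive_rule = (
    lambda s: s[1] and s[2],
    lambda s: s[0],
    lambda s: not s[0],
)

def run(x0, y0, steps=10):
    """Response uses the same rule but copies the drive's first node."""
    x, y = x0, y0
    for _ in range(steps):
        x = step(x, drive_rule)
        # coupling: overwrite the response's node 1 with the drive's node 1
        y = (x[0],) + step(y, drive_rule)[1:]
    return x, y

x, y = run((True, False, True), (False, False, False))
synchronized = (x == y)   # complete synchronization after a short transient
```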

  15. Depression and anxiety among elderly earthquake survivors in China.

    PubMed

    Liang, Ying

    2017-12-01

    This study investigated depression and anxiety among Chinese elderly earthquake survivors, addressing relevant correlations. We sampled one earthquake-prone city, utilising the Geriatric Depression Scale and the Beck Anxiety Inventory. In addition, exploratory factor analysis and structural equation modelling were used. Results indicated that elderly earthquake survivors exhibited symptoms of moderate depression and anxiety, and that depression and anxiety were highly positively correlated. The overlap between these two psychological problems may be due to the subjective fear and motoric dimensions: the subjective fear and motoric dimensions of the Beck Anxiety Inventory are more strongly related to the Geriatric Depression Scale domains. The two scales exhibit high reliability and validity.

  16. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow one to discriminate among different physical mechanisms responsible for earthquake triggering. 
In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different
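
    One of the stable phenomenological laws referred to above is the Gutenberg-Richter frequency-magnitude law, log10 N(≥M) = a - bM. A standard way to estimate its b-value is Aki's (1965) maximum-likelihood formula, sketched here on a synthetic catalog; the catalog itself is simulated, not real data.

```python
import math
import random

def b_value(mags, m_c):
    """Aki's maximum-likelihood estimator: b = log10(e) / (mean(M) - Mc)."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic catalog drawn from Gutenberg-Richter with b = 1.0 above Mc = 2.0:
# magnitudes above Mc are exponentially distributed with rate b*ln(10).
random.seed(0)
catalog = [2.0 + random.expovariate(math.log(10) * 1.0) for _ in range(50000)]

b = b_value(catalog, 2.0)   # should recover b close to 1.0
```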

  17. Earthquake Occurrence in Bangladesh and Surrounding Region

    NASA Astrophysics Data System (ADS)

    Al-Hussaini, T. M.; Al-Noman, M.

    2011-12-01

    The collision of the northward-moving Indian plate with the Eurasian plate is the cause of frequent earthquakes in the region comprising Bangladesh and neighbouring India, Nepal and Myanmar. Historical records indicate that Bangladesh was affected by five major earthquakes of magnitude greater than 7.0 (Richter scale) between 1869 and 1930. This paper presents some statistical observations of earthquake occurrence in fulfilment of basic groundwork for seismic hazard assessment of this region. An up-to-date catalogue covering earthquake information in the region bounded by 17°-30°N and 84°-97°E, from the historical period to 2010, is derived from various reputed international sources, including ISC, IRIS, Indian sources and available publications. Careful scrutiny is applied to remove duplicate or uncertain earthquake events. Earthquake magnitudes in the range of 1.8 to 8.1 have been obtained and relationships between different magnitude scales have been studied. Aftershocks are removed from the catalogue using magnitude-dependent space and time windows. The main shock data are then analyzed to obtain the completeness period for different magnitudes, evaluating their temporal homogeneity. Spatial and temporal distributions of earthquakes, magnitude-depth histograms and other statistical analyses are performed to understand the distribution of seismic activity in this region.
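
    The magnitude-dependent space and time windows used above for aftershock removal can be sketched as follows. The window formulas are of the commonly cited Gardner-Knopoff (1974) form (here using the M < 6.5 time window throughout), and the three-event catalog is invented for illustration.

```python
import math

def space_window_km(m):
    return 10 ** (0.1238 * m + 0.983)

def time_window_days(m):
    return 10 ** (0.5409 * m - 0.547)

def decluster(catalog):
    """catalog: time-sorted list of (t_days, x_km, y_km, mag).
    Keep an event unless it falls inside the space-time window of an
    earlier, larger event already kept as a main shock."""
    mains = []
    for t, x, y, m in catalog:
        dependent = any(
            mm >= m
            and t - tm <= time_window_days(mm)
            and math.hypot(x - xm, y - ym) <= space_window_km(mm)
            for tm, xm, ym, mm in mains
        )
        if not dependent:
            mains.append((t, x, y, m))
    return mains

catalog = [(0.0, 0.0, 0.0, 6.0),    # main shock
           (2.0, 5.0, 3.0, 4.5),    # inside its window -> aftershock
           (600.0, 0.0, 0.0, 5.0)]  # long after -> independent
mains = decluster(catalog)
```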

  18. Catastrophic valley fills record large Himalayan earthquakes, Pokhara, Nepal

    NASA Astrophysics Data System (ADS)

    Stolle, Amelie; Bernhardt, Anne; Schwanghart, Wolfgang; Hoelzmann, Philipp; Adhikari, Basanta R.; Fort, Monique; Korup, Oliver

    2017-12-01

    Uncertain timing and magnitudes of past mega-earthquakes continue to confound seismic risk appraisals in the Himalayas. Telltale traces of surface ruptures are rare, while fault trenches document several events at best, so that additional proxies of strong ground motion are needed to complement the paleoseismological record. We study Nepal's Pokhara basin, which has the largest and most extensively dated archive of earthquake-triggered valley fills in the Himalayas. These sediments form a 148-km2 fan that issues from the steep Seti Khola gorge in the Annapurna Massif, invading and plugging 15 tributary valleys with tens of meters of debris, and impounding several lakes. Nearly a dozen new radiocarbon ages corroborate at least three episodes of catastrophic sedimentation on the fan between ∼700 and ∼1700 AD, coinciding with great earthquakes in ∼1100, 1255, and 1344 AD, and emplacing roughly >5 km3 of debris that forms the Pokhara Formation. We offer a first systematic sedimentological study of this formation, revealing four lithofacies characterized by thick sequences of mid-fan fluvial conglomerates, debris-flow beds, and fan-marginal slackwater deposits. New geochemical provenance analyses reveal that these upstream dipping deposits of Higher Himalayan origin contain lenses of locally derived river clasts that mark time gaps between at least three major sediment pulses that buried different parts of the fan. The spatial pattern of 14C dates across the fan and the provenance data are key to distinguishing these individual sediment pulses, as these are not evident from their sedimentology alone. Our study demonstrates how geomorphic and sedimentary evidence of catastrophic valley infill can help to independently verify and augment paleoseismological fault-trench records of great Himalayan earthquakes, while offering unparalleled insights into their long-term geomorphic impacts on major drainage basins.

  19. The 2011 M = 9.0 Tohoku oki earthquake more than doubled the probability of large shocks beneath Tokyo

    USGS Publications Warehouse

    Toda, Shinji; Stein, Ross S.

    2013-01-01

    The Kanto seismic corridor surrounding Tokyo has hosted four to five M ≥ 7 earthquakes in the past 400 years. Immediately after the Tohoku earthquake, the seismicity rate in the corridor jumped 10-fold, while the rate of normal focal mechanisms dropped by half. The seismicity rate decayed for 6-12 months, after which it steadied at three times the pre-Tohoku rate. The seismicity rate jump and decay to a new rate, as well as the focal mechanism change, can be explained by the static stress imparted by the Tohoku rupture and postseismic creep to Kanto faults. We therefore fit the seismicity observations to a rate/state Coulomb model, which we use to forecast the time-dependent probability of large earthquakes in the Kanto seismic corridor. We estimate a 17% probability of a M ≥ 7.0 shock over the 5-year prospective period 11 March 2013 to 10 March 2018, two-and-a-half times the probability had the Tohoku earthquake not struck.
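
    As a back-of-the-envelope companion to the 17%/5-year figure, a time-independent Poisson model converts between occurrence rates and probabilities via P = 1 - exp(-rT). The rate/state model in the paper is time-dependent; this is only the stationary limit, shown to make the numbers concrete.

```python
import math

def prob_at_least_one(rate_per_yr, years):
    """Poisson probability of at least one event in the given window."""
    return 1.0 - math.exp(-rate_per_yr * years)

def rate_from_prob(p, years):
    """Equivalent Poisson rate implied by probability p over the window."""
    return -math.log(1.0 - p) / years

r = rate_from_prob(0.17, 5.0)           # ~0.037 events/yr for 17% in 5 yr
p_one_year = prob_at_least_one(r, 1.0)  # ~3.7% in any single year
```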

  20. Solar wind ion density variations that preceded the M6+ earthquakes occurring on a global scale between 3 and 15 September 2013

    NASA Astrophysics Data System (ADS)

    Cataldi, Gabriele; Cataldi, Daniele; Straser, Valentino

    2015-04-01

    Between 3 and 15 September 2013, nine M6+ earthquakes were recorded on Earth: a M6.1 earthquake in Canada on 3 September at 20:19 UTC; a M6.5 earthquake in Japan on 4 September at 00:18 UTC; a M6.0 earthquake in Canada on 4 September at 00:23 UTC; a M6.5 earthquake in Alaska on 4 September at 02:32 UTC; a M6.0 earthquake in Alaska on 4 September at 06:27 UTC; a M6.0 earthquake on the Northern Mid-Atlantic Ridge on 5 September at 04:01 UTC; a M6.4 earthquake in Guatemala on 7 September at 00:13 UTC; a M6.1 earthquake on the Central East Pacific Rise on 11 September at 12:44 UTC; and a M6.1 earthquake in Alaska on 15 September at 16:21 UTC. The authors analyzed the modulation of solar wind ion density during the period from 1 to 18 September 2013 to determine whether the nine earthquakes were preceded by variations of the solar wind ion density, and to test a method that might in the future also be applied to tsunami prediction. The ion density data used for the correlation study are the solar wind ion density variations detected by the ACE (Advanced Composition Explorer) satellite, in orbit near the L1 Lagrange point, 1.5 million km from Earth in the direction of the Sun. The instrument used to measure the solar wind ion density is the Electron, Proton, and Alpha Monitor (EPAM) on board the ACE satellite. To conduct the study, the authors considered the variations of the solar wind proton density with the following characteristics: differential proton flux 1060-1900 keV (p/cm^2-sec-ster-MeV); differential proton flux 761-1220 keV (p/cm^2-sec-ster-MeV); differential proton flux 310-580 keV (p/cm^2-sec-ster-MeV); and differential proton flux 115-195 keV (p/cm^2-sec-ster-MeV). 
This data set was marked with the times (time markers) of the M6+ earthquakes that occurred on a global scale (data on M6+ seismic activity are provided in real time by USGS, INGV and CSEM) between

  1. Earthquakes in Virginia and vicinity 1774 - 2004

    USGS Publications Warehouse

    Tarr, Arthur C.; Wheeler, Russell L.

    2006-01-01

    This map summarizes two and a third centuries of earthquake activity. The seismic history consists of letters, journals, diaries, and newspaper and scholarly articles that supplement seismograph recordings (seismograms) dating from the early twentieth century to the present. All of the pre-instrumental (historical) earthquakes were large enough to be felt by people or to cause shaking damage to buildings and their contents. Later, widespread use of seismographs meant that tremors too small or distant to be felt could be detected and accurately located. Earthquakes are a legitimate concern in Virginia and parts of adjacent States. Moderate earthquakes cause slight local damage somewhere in the map area about twice a decade on the average. Additionally, many buildings in the map area were constructed before earthquake protection was added to local building codes. The large map shows all historical and instrumentally located earthquakes from 1774 through 2004.

  2. Revisiting the 1872 Owens Valley, California, Earthquake

    USGS Publications Warehouse

    Hough, S.E.; Hutton, K.

    2008-01-01

    The 26 March 1872 Owens Valley earthquake is among the largest historical earthquakes in California. The felt area and maximum fault displacements have long been regarded as comparable to, if not greater than, those of the great San Andreas fault earthquakes of 1857 and 1906, but mapped surface ruptures of the latter two events were 2-3 times longer than that inferred for the 1872 rupture. The preferred magnitude estimate of the Owens Valley earthquake has thus been 7.4, based largely on the geological evidence. Reinterpreting macroseismic accounts of the Owens Valley earthquake, we infer generally lower intensity values than those estimated in earlier studies. Nonetheless, as recognized in the early twentieth century, the effects of this earthquake were still generally more dramatic at regional distances than the macroseismic effects from the 1906 earthquake, with light damage to masonry buildings at (nearest-fault) distances as large as 400 km. Macroseismic observations thus suggest a magnitude greater than that of the 1906 San Francisco earthquake, which appears to be at odds with geological observations. However, while the mapped rupture length of the Owens Valley earthquake is relatively low, the average slip was high. The surface rupture was also complex and extended over multiple fault segments. It was first mapped in detail over a century after the earthquake occurred, and recent evidence suggests it might have been longer than earlier studies indicated. Our preferred magnitude estimate is Mw 7.8-7.9, values that we show are consistent with the geological observations. The results of our study suggest that either the Owens Valley earthquake was larger than the 1906 San Francisco earthquake or that, by virtue of source properties and/or propagation effects, it produced systematically higher ground motions at regional distances. 
The latter possibility implies that some large earthquakes in California will generate significantly larger ground motions than San

  3. Evidence for and implications of self-healing pulses of slip in earthquake rupture

    USGS Publications Warehouse

    Heaton, T.H.

    1990-01-01

    Dislocation time histories of models derived from waveforms of seven earthquakes are discussed. In each model, dislocation rise times (the duration of slip for a given point on the fault) are found to be short compared to the overall duration of the earthquake (??? 10%). However, in many crack-like numerical models of dynamic rupture, the slip duration at a given point is comparable to the overall duration of the rupture; i.e. slip at a given point continues until information is received that the rupture has stopped propagating. Alternative explanations for the discrepancy between the short slip durations used to model waveforms and the long slip durations inferred from dynamic crack models are: (1) the dislocation models are unable to resolve the relatively slow parts of earthquake slip and have seriously underestimated the dislocations for these earthquakes; (2) earthquakes are composed of a sequence of small-dimension (short duration) events that are separated by locked regions (barriers); (3) rupture occurs in a narrow self-healing pulse of slip that travels along the fault surface. Evidence is discussed that suggests that slip durations are indeed short and that the self-healing slip-pulse model is the most appropriate explanation. A qualitative model is presented that produces self-healing slip pulses. The key feature of the model is the assumption that friction on the fault surface is inversely related to the local slip velocity. The model has the following features: high static strength of materials (kilobar range), low static stress drops (in the range of tens of bars), and relatively low frictional stress during slip (less than several hundreds of bars). It is suggested that the reason that the average dislocation scales with fault length is because large-amplitude slip pulses are difficult to stop and hence tend to propagate large distances. 
This model may explain why seismicity and ambient stress are low along fault segments that have experienced large
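
    The key assumption in the record above, that friction on the fault surface is inversely related to the local slip velocity, can be sketched with a simple velocity-weakening law. The functional form and constants below are illustrative choices, not Heaton's model: strength near the static value when nearly locked, collapsing at coseismic slip rates so that slip can localize into a narrow pulse that re-locks (heals) once the velocity drops again.

```python
def friction_mpa(v, tau_static=100.0, v_c=0.01):
    """Frictional resistance (MPa) that weakens as slip velocity v (m/s)
    grows: tau = tau_static / (1 + v / v_c). Constants are illustrative."""
    return tau_static / (1.0 + v / v_c)

# Nearly locked fault (v ~ 1e-6 m/s): resistance is near static strength.
locked = friction_mpa(1e-6)

# Coseismic slip (v ~ 1 m/s): resistance collapses, consistent with low
# frictional stress during slip behind the pulse front.
sliding = friction_mpa(1.0)
```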

  4. Dissecting the large-scale galactic conformity

    NASA Astrophysics Data System (ADS)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon in which galaxies located in the same region have similar properties, such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos ("one-halo conformity"). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or "large-scale conformity"). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  5. Holocene slip rates along the San Andreas Fault System in the San Gorgonio Pass and implications for large earthquakes in southern California

    NASA Astrophysics Data System (ADS)

    Heermance, Richard V.; Yule, Doug

    2017-06-01

    The San Gorgonio Pass (SGP) in southern California contains a 40 km long region of structural complexity where the San Andreas Fault (SAF) bifurcates into a series of oblique-slip faults with unknown slip history. We combine new 10Be exposure ages (Qt4: 8600 (+2100, -2200) and Qt3: 5700 (+1400, -1900) years B.P.) and a radiocarbon age (1260 ± 60 years B.P.) from late Holocene terraces with scarp displacement of these surfaces to document a Holocene slip rate of 5.7 (+2.7, -1.5) mm/yr combined across two faults. Our preferred slip rate is 37-49% of the average slip rates along the SAF outside the SGP (i.e., Coachella Valley and San Bernardino sections) and implies that strain is transferred off the SAF in this area. Earthquakes here most likely occur in very large, throughgoing SAF events at a lower recurrence than elsewhere on the SAF, so that only approximately one third of SAF ruptures penetrate or originate in the pass.Plain Language SummaryHow <span class="hlt">large</span> are <span class="hlt">earthquakes</span> on the southern San Andreas Fault? The answer to this question depends on whether or not the <span class="hlt">earthquake</span> is contained only along individual fault sections, such as the Coachella Valley section north of Palm Springs, or the rupture crosses multiple sections including the area through the San Gorgonio Pass. We have determined the age and offset of faulted stream deposits within the San Gorgonio Pass to document slip rates of these faults over the last 10,000 years. Our results indicate a long-term slip rate of 6 mm/yr, which is almost 1/2 of the rates east and west of this area. These new rates, combined with faulted geomorphic surfaces, imply that <span class="hlt">large</span> magnitude <span class="hlt">earthquakes</span> must occasionally rupture a 300 km length of the San Andreas Fault from the Salton Sea to the Mojave Desert. 
Although many ( 65%) <span class="hlt">earthquakes</span> along the southern San Andreas Fault likely do not rupture through the pass, our new results suggest that <span class="hlt">large</span> >Mw 7.5 <span class="hlt">earthquakes</span> are possible</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18195642','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18195642"><span>Psychosocial determinants of relocation in survivors of the 1999 <span class="hlt">earthquake</span> in Turkey.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Salcoğlu, Ebru; Başoğlu, Metin; Livanou, Maria</p> <p>2008-01-01</p> <p><span class="hlt">Large-scale</span> <span class="hlt">earthquakes</span> in urban areas displace many people from their homes. This study examined the role of conditioned fears in determining survivors' tendency to live in shelters after the 1999 <span class="hlt">earthquake</span> in Turkey. A total of 1655 survivors living in prefabricated housing compounds or residential units in the epicenter zone were screened using a reliable and valid instrument. Among participants whose houses were rendered uninhabitable during the <span class="hlt">earthquake</span> 87.7% relocated to shelters, whereas others remained in the community by moving to a new house. In contrast, 38.7% of the participants whose houses were still inhabitable after the <span class="hlt">earthquake</span> lived in the shelters. Relocation was predicted by behavioral avoidance, material losses, and loss of relatives. 
These findings suggested that a multitude of factors played a role in survivors' displacement from their houses and the elevated rates of mental health problems could constitute a cause rather than an effect of relocation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70022764','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70022764"><span><span class="hlt">Earthquake</span> stress drop and laboratory-inferred interseismic strength recovery</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Beeler, N.M.; Hickman, S.H.; Wong, T.-F.</p> <p>2001-01-01</p> <p>We determine the <span class="hlt">scaling</span> relationships between <span class="hlt">earthquake</span> stress drop and recurrence interval tr that are implied by laboratory-measured fault strength. We assume that repeating <span class="hlt">earthquakes</span> can be simulated by stick-slip sliding using a spring and slider block model. Simulations with static/kinetic strength, time-dependent strength, and rate- and state-variable-dependent strength indicate that the relationship between loading velocity and recurrence interval can be adequately described by the power law VL ??? trn, where n=-1. Deviations from n=-1 arise from second order effects on strength, with n>-1 corresponding to apparent time-dependent strengthening and n<-1 corresponding to weakening. Simulations with rate and state-variable equations show that dynamic shear stress drop ????d <span class="hlt">scales</span> with recurrence as d????d/dlntr ??? ??e(b-a), where ??e is the effective normal stress, ??=??/??e, and (a-b)=d??ss/dlnV is the steady-state slip rate dependence of strength. In addition, accounting for seismic energy radiation, we suggest that the static shear stress drop ????s <span class="hlt">scales</span> as d????s/dlntr ??? ??e(1+??)(b-a), where ?? 
is the fractional overshoot. The variation of ????s with lntr for <span class="hlt">earthquake</span> stress drops is somewhat larger than implied by room temperature laboratory values of ?? and b-a. However, the uncertainty associated with the seismic data is <span class="hlt">large</span> and the discrepancy between the seismic observations and the rate of strengthening predicted by room temperature experiments is less than an order of magnitude. Copyright 2001 by the American Geophysical Union.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.S21A4408W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.S21A4408W"><span>Characterizing Mega-<span class="hlt">Earthquake</span> Related Tsunami on Subduction Zones without <span class="hlt">Large</span> Historical Events</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Williams, C. R.; Lee, R.; Astill, S.; Farahani, R.; Wilson, P. S.; Mohammed, F.</p> <p>2014-12-01</p> <p>Due to recent <span class="hlt">large</span> tsunami events (e.g., Chile 2010 and Japan 2011), the insurance industry is very aware of the importance of managing its exposure to tsunami risk. There are currently few tools available to help establish policies for managing and pricing tsunami risk globally. As a starting point and to help address this issue, Risk Management Solutions Inc. (RMS) is developing a global suite of tsunami inundation footprints. This dataset will include both representations of historical events as well as a series of M9 scenarios on subductions zones that have not historical generated mega <span class="hlt">earthquakes</span>. The latter set is included to address concerns about the completeness of the historical record for mega <span class="hlt">earthquakes</span>. 
This concern stems from the fact that the Tohoku Japan <span class="hlt">earthquake</span> was considerably larger than had been observed in the historical record. Characterizing the source and rupture pattern for the subduction zones without historical events is a poorly constrained process. In many case, the subduction zones can be segmented based on changes in the characteristics of the subducting slab or major ridge systems. For this project, the unit sources from the NOAA propagation database are utilized to leverage the basin wide modeling included in this dataset. The length of the rupture is characterized based on subduction zone segmentation and the slip per unit source can be determined based on the event magnitude (i.e., M9) and moment balancing. As these events have not occurred historically, there is little to constrain the slip distribution. Sensitivity tests on the potential rupture pattern have been undertaken comparing uniform slip to higher shallow slip and tapered slip models. Subduction zones examined include the Makran Trench, the Lesser Antilles and the Hikurangi Trench. The ultimate goal is to create a series of tsunami footprints to help insurers understand their exposures at risk to tsunami inundation around the world.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.T31A2832S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.T31A2832S"><span>The Viscoelastic Effect of Triggered <span class="hlt">Earthquakes</span> in Various Tectonic Regions On a Global <span class="hlt">Scale</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sunbul, F.</p> <p>2015-12-01</p> <p>The relation between static stress changes and <span class="hlt">earthquake</span> triggering has important implications for seismic hazard analysis. 
Considering long time difference between triggered events, viscoelastic stress transfer plays an important role in stress accumulation along the faults. Developing a better understanding of triggering effects may contribute to improvement of quantification of seismic hazard in tectonically active regions. Parsons (2002) computed the difference between the rate of <span class="hlt">earthquakes</span> occurring in regions where shear stress increased and those regions where the shear stress decreased on a global <span class="hlt">scale</span>. He found that 61% of the <span class="hlt">earthquakes</span> occurred in regions with a shear stress increase, while 39% of events occurred in areas of shear stress decrease. Here, we test whether the inclusion of viscoelastic stress transfer affects the results obtained by Parsons (2002) for static stress transfer. Doing such a systematic analysis, we use Global Centroid Moment Tensor (CMT) catalog selecting 289 Ms>7 main shocks with their ~40.500 aftershocks located in ±2° circles for 5 years periods. For the viscoelastic post seismic calculations, we adapt 12 different published rheological models for 5 different tectonic regions. In order to minimise the uncertainties in this CMT catalog, we use the Frohlich and Davis (1999) statistical approach simultaneously. Our results shows that the 5590 aftershocks are triggered by the 289 Ms>7 <span class="hlt">earthquakes</span>. 3419 of them are associated with calculated shear stress increase, while 2171 are associated with shear stress decrease. The summation of viscoelastic stress shows that, of the 5840 events, 3530 are associated with shear stress increases, and 2312 with shear stress decrease. This result shows an average 4.5% increase in total, the rate of increase in positive and negative areas are 3.2% and 6.5%, respectively. 
Therefore, over long time periods viscoelastic relaxation represents a considerable contribution to the total stress on</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28929481','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28929481"><span>Lessons learned from the total evacuation of a hospital after the 2016 Kumamoto <span class="hlt">Earthquake</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yanagawa, Youichi; Kondo, Hisayoshi; Okawa, Takashi; Ochi, Fumio</p> <p></p> <p>The 2016 Kumamoto <span class="hlt">Earthquakes</span> were a series of <span class="hlt">earthquakes</span> that included a foreshock <span class="hlt">earthquake</span> (magnitude 6.2) on April 14 and a main shock (magnitude 7.0) on April 16, 2016. A number of hospitals in Kumamoto were severely damaged by the two major <span class="hlt">earthquakes</span> and required total evacuation. The authors retrospectively analyzed the activity data of the Disaster Medical Assistance Teams using the Emergency Medical Information System records to investigate the cases in which the total evacuation of a hospital was attempted following the 2016 Kumamoto <span class="hlt">Earthquake</span>. Total evacuation was attempted at 17 hospitals. The evacuation of one of these hospitals was canceled. Most of the hospital buildings were more than 20 years old. The danger of collapse was the most frequent reason for evacuation. Various transportation methods were employed, some of which involved the Japan Ground Self-Defense Force; no preventable deaths occurred during transportation. The hospitals must now be renovated to improve their <span class="hlt">earthquake</span> resistance. 
The coordinated and combined use of military and civilian resources is beneficial and can significantly reduce human suffering in <span class="hlt">large-scale</span> disasters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S21B0707P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S21B0707P"><span>Systematic detection and classification of <span class="hlt">earthquake</span> clusters in Italy</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Poli, P.; Ben-Zion, Y.; Zaliapin, I. V.</p> <p>2017-12-01</p> <p>We perform a systematic analysis of spatio-temporal clustering of 2007-2017 <span class="hlt">earthquakes</span> in Italy with magnitudes m>3. The study employs the nearest-neighbor approach of Zaliapin and Ben-Zion [2013a, 2013b] with basic data-driven parameters. The results indicate that seismicity in Italy (an extensional tectonic regime) is dominated by clustered events, with a smaller proportion of background events than in California. Evaluation of internal cluster properties allows separation of swarm-like from burst-like seismicity. This classification highlights a strong geographical coherence of cluster properties. Swarm-like seismicity is dominant in regions characterized by relatively slow deformation with possible elevated temperature and/or fluids (e.g. Alto Tiberina, Pollino), while burst-like seismicity is observed in crystalline tectonic regions (Alps and Calabrian Arc) and in Central Italy where moderate to <span class="hlt">large</span> <span class="hlt">earthquakes</span> are frequent (e.g. L'Aquila, Amatrice). To better assess the variation of seismicity style across Italy, we also perform a clustering analysis with region-specific parameters. 
This analysis highlights clear spatial changes of the threshold separating background and clustered seismicity, and permits better resolution of different clusters in specific geological regions. For example, a <span class="hlt">large</span> proportion of repeaters is found in the Etna region as expected for volcanic-induced seismicity. A similar behavior is observed in the northern Apennines with high pore pressure associated with mantle degassing. The observed variations of <span class="hlt">earthquake</span> properties highlight shortcomings of practices using <span class="hlt">large-scale</span> average seismic properties, and point to connections between seismicity and local properties of the lithosphere. The observations help to improve the understanding of the physics governing the occurrence of <span class="hlt">earthquakes</span> in different regions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018IJMPB..3250080L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018IJMPB..3250080L"><span>The mechanism of <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lu, Kunquan; Cao, Zexian; Hou, Meiying; Jiang, Zehui; Shen, Rong; Wang, Qiang; Sun, Gang; Liu, Jixing</p> <p>2018-03-01</p> <p> strength of crust rocks: The gravitational pressure can initiate the elasticity-plasticity transition in crust rocks. By calculating the depth dependence of the elasticity-plasticity transition and analyzing the actual conditions, the behaviors of crust rocks can be categorized in three typical zones: elastic, partially plastic and fully plastic. As the plastic portion reaches about 10% in the partially plastic zone, plastic interconnection may occur and the variation of shear strength in rocks is mainly characterized by plastic behavior. 
The equivalent coefficient of friction for plastic slip is at least an order of magnitude smaller than that for brittle fracture; thus the shear strength of rocks in plastic sliding is much less than that in brittle breaking. Moreover, with increasing depth a number of other factors can further reduce the shear yield strength of rocks. On the other hand, since an <span class="hlt">earthquake</span> is a <span class="hlt">large-scale</span> damage process, rock breaking must occur along the weakest path. Therefore, the actual fracture strength of rocks in a shallow <span class="hlt">earthquake</span> is assuredly lower than the average shear strength of rocks as generally observed. The typical distributions of the average strength and actual fracture strength in crustal rocks varying with depth are schematically illustrated. (3) The conditions for <span class="hlt">earthquake</span> occurrence and mechanisms of <span class="hlt">earthquake</span>: An <span class="hlt">earthquake</span> will lead to volume expansion, and volume expansion must break through the obstacle. The condition for an <span class="hlt">earthquake</span> to occur is as follows: the tectonic force exceeds the sum of the fracture strength of rock, the friction force of the fault boundary and the resistance from obstacles. Therefore, shallow <span class="hlt">earthquakes</span> are characterized by plastic sliding of rocks that break through the obstacles. Accordingly, four possible patterns for shallow <span class="hlt">earthquakes</span> are put forward. Deep-focus <span class="hlt">earthquakes</span> are believed to result from a wide-range rock flow that breaks the jam. 
Both shallow</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFMNH11A1104S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFMNH11A1104S"><span>Understanding <span class="hlt">Earthquake</span> Hazard & Disaster in Himalaya - A Perspective on <span class="hlt">Earthquake</span> Forecast in Himalayan Region of South Central Tibet</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shanker, D.; Paudyal; Singh, H.</p> <p>2010-12-01</p> <p>Not only the basic understanding of the <span class="hlt">earthquake</span> phenomenon and the resistance offered by designed structures, but also the understanding of socio-economic factors, the engineering properties of indigenous materials, local skills and technology-transfer models are of vital importance. It is important that the engineering aspects of mitigation be made a part of public policy documents. <span class="hlt">Earthquakes</span>, therefore, have long been regarded as one of the worst enemies of mankind. Due to the very nature of the energy release, damage is inevitable; however, it will not culminate in a disaster unless the event strikes a populated area. The word mitigation may be defined as the reduction in severity of something. <span class="hlt">Earthquake</span> disaster mitigation, therefore, implies taking measures that help reduce the severity of damage caused by an <span class="hlt">earthquake</span> to life, property and environment. “<span class="hlt">Earthquake</span> disaster mitigation” usually refers primarily to interventions to strengthen the built environment, while “<span class="hlt">earthquake</span> protection” is now considered to include human, social and administrative aspects of reducing <span class="hlt">earthquake</span> effects. 
It should, however, be noted that reduction of <span class="hlt">earthquake</span> hazards through prediction is considered to be one of the effective measures, and much effort is spent on prediction strategies. <span class="hlt">Earthquake</span> prediction, however, does not guarantee safety; even if an <span class="hlt">earthquake</span> is predicted correctly, damage to life and property on such a <span class="hlt">large</span> <span class="hlt">scale</span> warrants the use of other aspects of mitigation. While <span class="hlt">earthquake</span> prediction may be of some help, mitigation remains the main focus of attention of civil society. The present study suggests that anomalous seismic activity/<span class="hlt">earthquake</span> swarms existed prior to the medium-size <span class="hlt">earthquakes</span> in the Nepal Himalaya. The mainshocks were preceded by a quiescence period, which is an indication of future seismic activity. In all the cases, the identified episodes of anomalous seismic activity were</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S41B0746F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S41B0746F"><span>Long-period ground motions at near-regional distances caused by the PL wave from inland <span class="hlt">earthquakes</span>: Observation and numerical simulation of the 2004 Mid-Niigata, Japan, Mw6.6 <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Furumura, T.; Kennett, B. L. 
N.</p> <p>2017-12-01</p> <p>We examine the development of <span class="hlt">large</span>, long-period ground motions at near-regional distances (D=50-200 km) generated by the PL wave from <span class="hlt">large</span>, shallow inland <span class="hlt">earthquakes</span>, based on the analysis of strong motion records and finite-difference method (FDM) simulations of seismic wave propagation. The PL wave can be represented as leaking modes of the crustal waveguide and is commonly observed at regional distances between 300 and 1000 km as a dispersed, long-period signal with a dominant period of about 20 s. However, observations of recent <span class="hlt">earthquakes</span> at the dense K-NET and KiK-net strong motion networks in Japan demonstrate the dominance of the PL wave at near-regional (D=50-200 km) distances as, e.g., for the 2004 Mid Niigata, Japan, <span class="hlt">earthquake</span> (Mw6.6; h=13 km). The observed PL wave signal between the P and S waves shows a <span class="hlt">large</span>, dispersed wave packet with a dominant period of about T=4-10 s and an amplitude almost comparable to or larger than the later arrivals of the S and surface waves. Thus, the early arrivals of the long-period PL wave immediately after the P wave can enhance resonance with <span class="hlt">large-scale</span> constructions such as high-rise buildings and <span class="hlt">large</span> oil-storage tanks, with potential for disaster. Such strong effects often occurred during the 2004 Mid Niigata <span class="hlt">earthquakes</span> and other <span class="hlt">large</span> <span class="hlt">earthquakes</span> which occurred near the Kanto (Tokyo) basin. FDM simulation of seismic wave propagation employing realistic 3-D sedimentary structure models demonstrates the process by which the PL wave develops at near-regional distances from shallow, crustal <span class="hlt">earthquakes</span> by constructive interference of the P wave in the long-period band. 
The amplitude of the PL wave is very sensitive to low-velocity structure in the near-surface. Lowered velocities help to develop <span class="hlt">large</span> SV-to-P conversion and weaken the P-to-SV conversion at the free surface. Both effects enhance the multiple P reflections in the crustal waveguide and prevent the leakage of seismic energy into the mantle. However, a very</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S51D..06I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S51D..06I"><span>Tidal controls on <span class="hlt">earthquake</span> size-frequency statistics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ide, S.; Yabe, S.; Tanaka, Y.</p> <p>2016-12-01</p> <p>The possibility that tidal stresses can trigger <span class="hlt">earthquakes</span> is a long-standing issue in seismology. Except in some special cases, a causal relationship between seismicity and the phase of tidal stress has been rejected on the basis of studies using many small events. However, recently discovered deep tectonic tremors are highly sensitive to tidal stress levels, with the relationship being governed by a nonlinear law according to which the tremor rate increases exponentially with increasing stress; thus, slow deformation (and the probability of <span class="hlt">earthquakes</span>) may be enhanced during periods of <span class="hlt">large</span> tidal stress. Here, we show the influence of tidal stress on seismicity by calculating histories of tidal shear stress during the 2-week period before <span class="hlt">earthquakes</span>. Very <span class="hlt">large</span> <span class="hlt">earthquakes</span> tend to occur near the time of maximum tidal stress, but this tendency is not obvious for small <span class="hlt">earthquakes</span>. 
Rather, we found that tidal stress controls the <span class="hlt">earthquake</span> size-frequency statistics; i.e., the fraction of <span class="hlt">large</span> events increases (i.e. the b-value of the Gutenberg-Richter relation decreases) as the tidal shear stress increases. This correlation is apparent in data from the global catalog and in relatively homogeneous regional catalogues of <span class="hlt">earthquakes</span> in Japan. The relationship is also reasonable, considering the well-known relationship between stress and the b-value. Our findings indicate that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. This finding has clear implications for probabilistic <span class="hlt">earthquake</span> forecasting.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70169139','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70169139"><span>The 7.2 magnitude <span class="hlt">earthquake</span>, November 1975, Island of Hawaii</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p></p> <p>1976-01-01</p> <p>It was centered about 5 km beneath the Kalapana area on the southeastern coast of Hawaii, the largest island of the Hawaiian chain (Fig. 1) and was preceded by numerous foreshocks. The event was accompanied, or followed shortly, by a tsunami, <span class="hlt">large-scale</span> ground movements, hundreds of aftershocks, and an eruption in the summit caldera of Kilauea Volcano. The <span class="hlt">earthquake</span> and the tsunami it generated produced about 4.1 million dollars in property damage, and the tsunami caused two deaths. Although we have some preliminary findings about the cause and effects of the <span class="hlt">earthquake</span>, detailed scientific investigations will take many more months to complete. 
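The Gutenberg-Richter b-value invoked in the tidal-stress abstract above is commonly estimated with Aki's (1965) maximum-likelihood formula, b = log10(e) / (mean(M) - Mc). A minimal sketch on a synthetic catalogue (the catalogue and all parameter values are illustrative, not data from the study):

```python
import math
import random

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c.
    For catalogues binned at width dm, Utsu's correction replaces
    m_c with m_c - dm/2."""
    selected = [m for m in mags if m >= m_c]
    mean_mag = sum(selected) / len(selected)
    return math.log10(math.e) / (mean_mag - m_c)

# Synthetic Gutenberg-Richter catalogue: magnitudes above Mc = 3.0 are
# exponentially distributed with rate b*ln(10); the true b here is 1.0.
random.seed(0)
beta = 1.0 * math.log(10.0)
catalogue = [3.0 + random.expovariate(beta) for _ in range(20000)]

b = b_value_mle(catalogue, m_c=3.0)  # close to the true value of 1.0
```

A lower b recovered from the high-tidal-stress subset of a catalogue would correspond to the larger fraction of big events reported above.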
This article is condensed from a recent preliminary report (Tilling and others, 1976).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1998myco.conf..234F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1998myco.conf..234F"><span>The <span class="hlt">Large</span>-<span class="hlt">scale</span> Distribution of Galaxies</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Flin, Piotr</p> <p></p> <p>A review of the <span class="hlt">large-scale</span> structure of the Universe is given. A connection is made with the titanic work by Johannes Kepler in many areas of astronomy and cosmology. Special attention is given to the spatial distribution of galaxies, voids and walls (the cellular structure of the Universe). Finally, the author concludes that the <span class="hlt">large</span>-<span class="hlt">scale</span> structure of the Universe can be observed on a much greater <span class="hlt">scale</span> than was thought twenty years ago.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/of/2008/1221/','USGSPUBS'); return false;" href="https://pubs.usgs.gov/of/2008/1221/"><span><span class="hlt">Earthquakes</span> in Ohio and Vicinity 1776-2007</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Dart, Richard L.; Hansen, Michael C.</p> <p>2008-01-01</p> <p>This map summarizes two and a third centuries of <span class="hlt">earthquake</span> activity. The seismic history consists of letters, journals, diaries, and newspaper and scholarly articles that supplement seismograph recordings (seismograms) dating from the early twentieth century to the present. 
All of the pre-instrumental (historical) <span class="hlt">earthquakes</span> were <span class="hlt">large</span> enough to be felt by people or to cause shaking damage to buildings and their contents. Later, widespread use of seismographs meant that tremors too small or distant to be felt could be detected and accurately located. <span class="hlt">Earthquakes</span> are a legitimate concern in Ohio and parts of adjacent States. Ohio has experienced more than 160 felt <span class="hlt">earthquakes</span> since 1776. Most of these events caused no damage or injuries. However, 15 Ohio <span class="hlt">earthquakes</span> resulted in property damage and some minor injuries. The largest historic <span class="hlt">earthquake</span> in the state occurred in 1937. This event had an estimated magnitude of 5.4 and caused considerable damage in the town of Anna and in several other western Ohio communities. The <span class="hlt">large</span> map shows all historical and instrumentally located <span class="hlt">earthquakes</span> from 1776 through 2007.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70118575','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70118575"><span>The 1909 Taipei <span class="hlt">earthquake</span>: implication for seismic hazard in Taipei</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kanamori, Hiroo; Lee, William H.K.; Ma, Kuo-Fong</p> <p>2012-01-01</p> <p>The 1909 April 14 Taiwan <span class="hlt">earthquake</span> caused significant damage in Taipei. Most of the information on this <span class="hlt">earthquake</span> available until now is from the written reports on its macro-seismic effects and from seismic station bulletins. 
In view of the importance of this event for assessing the shaking hazard in the present-day Taipei, we collected historical seismograms and station bulletins of this event and investigated them in conjunction with other seismological data. We compared the observed seismograms with those from recent <span class="hlt">earthquakes</span> in similar tectonic environments to characterize the 1909 <span class="hlt">earthquake</span>. Despite the inevitably <span class="hlt">large</span> uncertainties associated with old data, we conclude that the 1909 Taipei <span class="hlt">earthquake</span> is a relatively deep (50–100 km) intraplate <span class="hlt">earthquake</span> that occurred within the subducting Philippine Sea Plate beneath Taipei with an estimated M_W of 7 ± 0.3. Some intraplate events elsewhere in the world are enriched in high-frequency energy and the resulting ground motions can be very strong. Thus, despite its relatively <span class="hlt">large</span> depth and a moderately <span class="hlt">large</span> magnitude, it would be prudent to review the safety of the existing structures in Taipei against <span class="hlt">large</span> intraplate <span class="hlt">earthquakes</span> like the 1909 Taipei <span class="hlt">earthquake</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70015987','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70015987"><span>Acceleration spectra for subduction zone <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Boatwright, J.; Choy, G.L.</p> <p>1989-01-01</p> <p>We estimate the source spectra of shallow <span class="hlt">earthquakes</span> from digital recordings of teleseismic P wave groups, that is, P+pP+sP, by making frequency dependent corrections for the attenuation and for the interference of the free surface. 
The correction for the interference of the free surface assumes that the <span class="hlt">earthquake</span> radiates energy from a range of depths. We apply this spectral analysis to a set of 12 subduction zone <span class="hlt">earthquakes</span> which range in size from Ms = 6.2 to 8.1, obtaining corrected P wave acceleration spectra on the frequency band from 0.01 to 2.0 Hz. Seismic moment estimates from surface waves and normal modes are used to extend these P wave spectra to the frequency band from 0.001 to 0.01 Hz. The acceleration spectra of <span class="hlt">large</span> subduction zone <span class="hlt">earthquakes</span>, that is, <span class="hlt">earthquakes</span> whose seismic moments are greater than 10^27 dyn cm, exhibit intermediate slopes where ü(ω) ∝ ω^(5/4) for frequencies from 0.005 to 0.05 Hz. For these <span class="hlt">earthquakes</span>, spectral shape appears to be a discontinuous function of seismic moment. Using reasonable assumptions for the phase characteristics, we transform the spectral shape observed for <span class="hlt">large</span> <span class="hlt">earthquakes</span> into the time domain to fit Ekström's (1987) moment rate functions for the Ms=8.1 Michoacan <span class="hlt">earthquake</span> of September 19, 1985, and the Ms=7.6 Michoacan aftershock of September 21, 1985. 
-from Authors</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70164465','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70164465"><span>Measuring the size of an <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Spence, W.</p> <p>1977-01-01</p> <p><span class="hlt">Earthquakes</span> occur in a broad range of sizes. A rock burst in an Idaho silver mine may involve the fracture of 1 meter of rock; the 1965 Rat Islands <span class="hlt">earthquake</span> in the Aleutian arc involved a 650-kilometer length of Earth's crust. 
<span class="hlt">Earthquakes</span> can be even smaller and even larger. If an <span class="hlt">earthquake</span> is felt or causes perceptible surface damage, then its intensity of shaking can be subjectively estimated. But many <span class="hlt">large</span> <span class="hlt">earthquakes</span> occur in oceanic areas or at great focal depths. These are either simply not felt or their felt pattern does not really indicate their true size. </p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PApGe.174.3751S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PApGe.174.3751S"><span>Long-Delayed Aftershocks in New Zealand and the 2016 M7.8 Kaikoura <span class="hlt">Earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shebalin, P.; Baranov, S.</p> <p>2017-10-01</p> <p>We study aftershock sequences of six major <span class="hlt">earthquakes</span> in New Zealand, including the 2016 M7.8 Kaikoura and 2016 M7.1 North Island <span class="hlt">earthquakes</span>. For the Kaikoura <span class="hlt">earthquake</span>, we assess the expected number of long-delayed <span class="hlt">large</span> aftershocks of M5+ and M5.5+ in two periods, 0.5 and 3 years after the main shocks, using 75 days of available data. We compare the results with those obtained for other sequences using the same 75-day period. We estimate the errors by considering a set of magnitude thresholds and corresponding periods of data completeness and consistency. To avoid overestimation of the expected rates of <span class="hlt">large</span> aftershocks, we presume a break of slope of the magnitude-frequency relation in the aftershock sequences, and compare two models, with and without the break of slope. 
Comparing estimations to the actual number of long-delayed <span class="hlt">large</span> aftershocks, we observe, in general, a significant underestimation of their expected number. We can suppose that the long-delayed aftershocks may reflect larger-<span class="hlt">scale</span> processes, including interaction of faults, that complement an isolated relaxation process. In the spirit of this hypothesis, we search for symptoms of the capacity of the aftershock zone to generate <span class="hlt">large</span> events months after the major <span class="hlt">earthquake</span>. We adapt an algorithm EAST, studying statistics of early aftershocks, to the case of secondary aftershocks within aftershock sequences of major <span class="hlt">earthquakes</span>. In retrospective application to the considered cases, the algorithm demonstrates an ability to detect in advance long-delayed aftershocks both in time and space domains. Application of the EAST algorithm to the 2016 M7.8 Kaikoura <span class="hlt">earthquake</span> zone indicates that the most likely area for a delayed aftershock of M5.5+ or M6+ is at the northern end of the zone in Cook Strait.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70023477','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70023477"><span>Triggered <span class="hlt">earthquakes</span> and the 1811-1812 New Madrid, central United States, <span class="hlt">earthquake</span> sequence</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Hough, S.E.</p> <p>2001-01-01</p> <p>The 1811-1812 New Madrid, central United States, <span class="hlt">earthquake</span> sequence included at least three events with magnitudes estimated at well above M 7.0. 
I discuss evidence that the sequence also produced at least three substantial triggered events well outside the New Madrid Seismic Zone, most likely in the vicinity of Cincinnati, Ohio. The largest of these events is estimated to have a magnitude in the low to mid M 5 range. Events of this size are <span class="hlt">large</span> enough to cause damage, especially in regions with low levels of preparedness. Remotely triggered <span class="hlt">earthquakes</span> have been observed in tectonically active regions in recent years, but not previously in stable continental regions. The results of this study suggest, however, that potentially damaging triggered <span class="hlt">earthquakes</span> may be common following <span class="hlt">large</span> mainshocks in stable continental regions. Thus, in areas of low seismic activity such as central/ eastern North America, the hazard associated with localized source zones might be more far reaching than previously recognized. The results also provide additional evidence that intraplate crust is critically stressed, such that small stress changes are especially effective at triggering <span class="hlt">earthquakes</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/fs/2016/3019/fs20163019.pdf','USGSPUBS'); return false;" href="https://pubs.usgs.gov/fs/2016/3019/fs20163019.pdf"><span><span class="hlt">Earthquake</span> forecast for the Wasatch Front region of the Intermountain West</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>DuRoss, Christopher B.</p> <p>2016-04-18</p> <p>The Working Group on Utah <span class="hlt">Earthquake</span> Probabilities has assessed the probability of <span class="hlt">large</span> <span class="hlt">earthquakes</span> in the Wasatch Front region. 
There is a 43 percent probability of one or more magnitude 6.75 or greater <span class="hlt">earthquakes</span> and a 57 percent probability of one or more magnitude 6.0 or greater <span class="hlt">earthquakes</span> in the region in the next 50 years. These results highlight the threat of <span class="hlt">large</span> <span class="hlt">earthquakes</span> in the region.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhDT........64T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhDT........64T"><span>Geodetic Imaging of the <span class="hlt">Earthquake</span> Cycle</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tong, Xiaopeng</p> <p></p> <p>In this dissertation I used Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) to recover crustal deformation caused by <span class="hlt">earthquake</span> cycle processes. The studied areas span three different types of tectonic boundaries: a continental thrust <span class="hlt">earthquake</span> (M7.9 Wenchuan, China) at the eastern margin of the Tibet plateau, a mega-thrust <span class="hlt">earthquake</span> (M8.8 Maule, Chile) at the Chile subduction zone, and the interseismic deformation of the San Andreas Fault System (SAFS). A new L-band radar onboard a Japanese satellite ALOS allows us to image high-resolution surface deformation in vegetated areas, which is not possible with older C-band radar systems. In particular, both the Wenchuan and Maule InSAR analyses involved L-band ScanSAR interferometry which had not been attempted before. I integrated a <span class="hlt">large</span> InSAR dataset with dense GPS networks over the entire SAFS. The integration approach features combining the long-wavelength deformation from GPS with the short-wavelength deformation from InSAR through a physical model. 
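Under the Poisson assumption commonly used for such forecasts, the 50-year probabilities quoted in the Wasatch Front abstract above imply mean recurrence intervals; a minimal sketch (the Poisson conversion is our illustration, not necessarily the Working Group's method):

```python
import math

def annual_rate(prob_at_least_one, window_years):
    """Annual event rate implied by P(at least one event in the window)
    under a homogeneous Poisson model: P = 1 - exp(-rate * window)."""
    return -math.log(1.0 - prob_at_least_one) / window_years

# Probabilities quoted above for the next 50 years in the Wasatch Front region
rate_m675 = annual_rate(0.43, 50.0)  # one or more M >= 6.75 events
rate_m60 = annual_rate(0.57, 50.0)   # one or more M >= 6.0 events

mean_repeat_m675 = 1.0 / rate_m675   # roughly 89 years between M >= 6.75 events
mean_repeat_m60 = 1.0 / rate_m60     # roughly 59 years between M >= 6.0 events
```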
The recovered fine-<span class="hlt">scale</span> surface deformation leads us to better understand the underlying <span class="hlt">earthquake</span> cycle processes. The geodetic slip inversion reveals that the fault slip of the Wenchuan <span class="hlt">earthquake</span> is maximum near the surface and decreases with depth. The coseismic slip model of the Maule <span class="hlt">earthquake</span> constrains the down-dip extent of the fault slip to be at 45 km depth, similar to the Moho depth. I inverted for the slip rate on 51 major faults of the SAFS using Green's functions for a 3-dimensional <span class="hlt">earthquake</span> cycle model that includes kinematically prescribed slip events for the past <span class="hlt">earthquakes</span> since the year 1000. A 60 km thick plate model with effective viscosity of 10^19 Pa·s is preferred based on the geodetic and geological observations. The slip rates recovered from the plate models are compared to the half-space model. The InSAR observation reveals that the creeping section of the SAFS is partially locked. This high</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014cosp...40E1375K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014cosp...40E1375K"><span><span class="hlt">Earthquake</span>-Ionosphere Coupling Processes</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kamogawa, Masashi</p> <p></p> <p>After a giant <span class="hlt">earthquake</span> (EQ), acoustic and gravity waves are excited by the displacement of the land and sea surface, propagate through the atmosphere, and then reach the thermosphere, which causes ionospheric disturbances. This phenomenon was detected first by ionosonde and by HF Doppler sounder in the 1964 M9.2 Great Alaskan EQ. 
With the development of the Global Positioning System (GPS), seismogenic ionospheric disturbances detected by total electron content (TEC) measurements have been reported. A value of TEC is estimated from the phase difference between two carrier frequencies propagating through the dispersive ionospheric plasma. The variation of TEC largely reflects that of the F-region plasma. Acoustic-gravity waves triggered by an <span class="hlt">earthquake</span> [Heki and Ping, EPSL, 2005; Liu et al., JGR, 2010] and a tsunami [Artru et al., GJI, 2005; Liu et al., JGR, 2006; Rolland, GRL, 2010] disturb the ionosphere and travel within it. Besides these traveling ionospheric disturbances, ionospheric disturbances excited by Rayleigh waves [Ducic et al., GRL, 2003; Liu et al., GRL, 2006] as well as post-seismic 4-minute monoperiodic atmospheric resonances [Choosakul et al., JGR, 2009] have been observed after <span class="hlt">large</span> <span class="hlt">earthquakes</span>. Because the GPS Earth Observation Network System (GEONET), with more than 1200 GPS receiving points in Japan, is a dense GPS network, seismogenic ionospheric disturbances can be observed spatially. In particular, the seismogenic ionospheric disturbance caused by the M9.0 off the Pacific coast of Tohoku EQ (henceforth the Tohoku EQ) on 11 March 2011 was clearly observed. Approximately 9 minutes after the mainshock, acoustic waves emitted from the tsunami source area and propagating radially were observed through the TEC measurement (e.g., Liu et al. [JGR, 2011]). Moreover, there was a depression of TEC lasting for several tens of minutes after the huge <span class="hlt">earthquake</span>, a <span class="hlt">large-scale</span> phenomenon extending to a radius of a few hundred kilometers.
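The dual-frequency TEC estimate described above rests on the dispersive first-order ionospheric delay. The following is a back-of-the-envelope sketch of the standard geometry-free combination, assuming the GPS L1/L2 frequencies and the textbook 40.3 m³/s² ionospheric constant; it is not code from the cited studies.

```python
# Geometry-free combination: the differential ionospheric delay between the
# two GPS carriers is proportional to the slant total electron content (TEC).
F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz
K = 40.3        # first-order ionospheric refraction constant, m^3/s^2

def slant_tec_tecu(p2_minus_p1_m):
    """Slant TEC in TEC units (1 TECU = 1e16 el/m^2) from the
    differential code delay P2 - P1 in metres."""
    stec = (F1**2 * F2**2) / (K * (F1**2 - F2**2)) * p2_minus_p1_m
    return stec / 1e16

# roughly 9.5 TECU of slant TEC per metre of differential delay
tec_per_metre = slant_tec_tecu(1.0)
```

In practice the (noisier) code combination is levelled with the ambiguous but precise carrier-phase combination; the constant of proportionality is the same.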
This TEC depression may be</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1985JGR....90.3589N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1985JGR....90.3589N"><span>Seismic potential for <span class="hlt">large</span> and great interplate <span class="hlt">earthquakes</span> along the Chilean and Southern Peruvian Margins of South America: A quantitative reappraisal</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nishenko, Stuart P.</p> <p>1985-04-01</p> <p>The seismic potential of the Chilean and southern Peruvian margins of South America is reevaluated to delineate those areas or segments of the margin that may be expected to experience <span class="hlt">large</span> or great interplate <span class="hlt">earthquakes</span> within the next 20 years (1984-2004). Long-term estimates of seismic potential (or the conditional probability of recurrence within a specified period of time) are based on (1) statistical analysis of historic repeat time data using Weibull distributions and (2) deterministic estimates of recurrence times based on the time-predictable model of <span class="hlt">earthquake</span> recurrence. Both methods emphasize the periodic nature of <span class="hlt">large</span> and great <span class="hlt">earthquake</span> recurrence, and are compared with estimates of probability based on the assumption of Poisson-type behavior. The estimates of seismic potential presented in this study are long-term forecasts only, as the temporal resolution (or standard deviation) of both methods is taken to range from ±15% to ±25% of the average or estimated repeat time. At present, the Valparaiso region of central Chile (32°-35°S) has a high potential or probability of recurrence in the next 20 years. 
Coseismic uplift data associated with previous shocks in 1822 and 1906 suggest that this area may have already started to rerupture in 1971-1973. Average repeat times also suggest this area is due for a great shock within the next 20 years. Flanking segments of the Chilean margin, Coquimbo-Illapel (30°-32°S) and Talca-Concepcion (35°-38°S), presently have poorly constrained but possibly quite high potentials for a series of <span class="hlt">large</span> or great shocks within the next 20 years. In contrast, the rupture zone of the great 1960 <span class="hlt">earthquake</span> (37°-46°S) has the lowest potential along the margin and is not expected to rerupture in a great <span class="hlt">earthquake</span> within the next 100 years. In the north, the seismic potentials of the Mollendo-Arica (17°-18°S) and Arica-Antofagasta (18°-24°S) segments (which last ruptured during great <span class="hlt">earthquakes</span> in 1868 and 1877</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19750021772','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19750021772"><span><span class="hlt">Large</span> <span class="hlt">scale</span> dynamic systems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Doolin, B. F.</p> <p>1975-01-01</p> <p>Classes of <span class="hlt">large</span> <span class="hlt">scale</span> dynamic systems were discussed in the context of modern control theory. 
Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JAESc..62..134R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JAESc..62..134R"><span>Historical <span class="hlt">earthquakes</span> studies in Eastern Siberia: State-of-the-art and plans for future</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Radziminovich, Ya. B.; Shchetnikov, A. A.</p> <p>2013-01-01</p> <p> detailed consideration of each event and distinct logic schemes for data interpretation. Thus, we can make the conclusion regarding the necessity of a <span class="hlt">large-scale</span> revision in historical <span class="hlt">earthquakes</span> catalogues for the area of study.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70161897','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70161897"><span>Post-<span class="hlt">earthquake</span> building safety inspection: Lessons from the Canterbury, New Zealand, <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Marshall, J.; Jaiswal, Kishor; Gould, N.; Turner, F.; Lizundia, B.; Barnes, J.</p> <p>2013-01-01</p> <p>The authors discuss some of the unique aspects and lessons of the New Zealand post-<span class="hlt">earthquake</span> building safety inspection program that was implemented following the Canterbury <span class="hlt">earthquake</span> sequence of 2010–2011. The post-event safety assessment program was one of the largest and longest programs undertaken in recent times anywhere in the world. 
The effort engaged hundreds of engineering professionals throughout the country, and also sought expertise from outside, to perform post-<span class="hlt">earthquake</span> structural safety inspections of more than 100,000 buildings in the city of Christchurch and the surrounding suburbs. While the building safety inspection procedure implemented was analogous to the ATC 20 program in the United States, many modifications were proposed and implemented in order to assess the <span class="hlt">large</span> number of buildings that were subjected to strong and variable shaking during a period of two years. This note discusses some of the key aspects of the post-<span class="hlt">earthquake</span> building safety inspection program and summarizes important lessons that can improve future <span class="hlt">earthquake</span> response.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19820038823&hterms=neither+deep+shallow&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dneither%2Bdeep%2Bshallow','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19820038823&hterms=neither+deep+shallow&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dneither%2Bdeep%2Bshallow"><span>Shallow moonquakes - How they compare with <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nakamura, Y.</p> <p>1980-01-01</p> <p>Of three types of moonquakes strong enough to be detectable at <span class="hlt">large</span> distances - deep moonquakes, meteoroid impacts and shallow moonquakes - only shallow moonquakes are similar in nature to <span class="hlt">earthquakes</span>. 
A comparison of various characteristics of moonquakes with those of <span class="hlt">earthquakes</span> indeed shows a remarkable similarity between shallow moonquakes and intraplate <span class="hlt">earthquakes</span>: (1) their occurrences are not controlled by tides; (2) they appear to occur in locations where there is evidence of structural weaknesses; (3) the relative abundances of small and <span class="hlt">large</span> quakes (b-values) are similar, suggesting similar mechanisms; and (4) even the levels of activity may be close. The shallow moonquakes may be quite comparable in nature to intraplate <span class="hlt">earthquakes</span>, and they may be of similar origin.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFMNH23B..01A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFMNH23B..01A"><span>Making the Handoff from <span class="hlt">Earthquake</span> Hazard Assessments to Effective Mitigation Measures (Invited)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Applegate, D.</p> <p>2010-12-01</p> <p> of <span class="hlt">earthquake</span> scientists and engineers. In addition to the national maps, the USGS produces more detailed urban seismic hazard maps that communities have used to prioritize retrofits and design critical infrastructure that can withstand <span class="hlt">large</span> <span class="hlt">earthquakes</span>. At a regional <span class="hlt">scale</span>, the USGS and its partners in California have developed a time-dependent <span class="hlt">earthquake</span> rupture forecast that is being used by the insurance sector, which can serve to distribute risk and foster mitigation if the right incentives are in place. 
What the USGS and partners are doing at the urban, regional, and national <span class="hlt">scales</span>, the Global <span class="hlt">Earthquake</span> Model project is seeking to do for the world. A significant challenge for engaging the public to prepare for <span class="hlt">earthquakes</span> is making low-probability, high-consequence events real enough to merit personal action. Scenarios help by starting with the hazard posed by a specific <span class="hlt">earthquake</span> and then exploring the fragility of the built environment, cascading failures, and the real-life consequences for the public. To generate such a complete picture takes multiple disciplines working together. <span class="hlt">Earthquake</span> scenarios are being used both for emergency management exercises and much broader public preparedness efforts like the Great California ShakeOut, which engaged nearly 7 million people.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.T43B0684X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.T43B0684X"><span>Coulomb Failure Stress Accumulation in Nepal After the 2015 Mw 7.8 Gorkha <span class="hlt">Earthquake</span>: Testing <span class="hlt">Earthquake</span> Triggering Hypothesis and Evaluating Seismic Hazards</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xiong, N.; Niu, F.</p> <p>2017-12-01</p> <p>A Mw 7.8 <span class="hlt">earthquake</span> struck Gorkha, Nepal, on April 25, 2015, resulting in more than 8000 deaths and leaving 3.5 million homeless. The <span class="hlt">earthquake</span> initiated 70 km west of Kathmandu and propagated eastward, rupturing an area of approximately 150 km by 60 km.
However, the <span class="hlt">earthquake</span> failed to fully rupture the locked fault beneath the Himalaya, suggesting that the regions south of Kathmandu and west of the current rupture are still locked and a much more powerful <span class="hlt">earthquake</span> might occur in the future. Therefore, the seismic hazard of the unruptured region is of great concern. In this study, we investigated the Coulomb failure stress (CFS) accumulation on the unruptured fault transferred by the Gorkha <span class="hlt">earthquake</span> and some nearby historical great <span class="hlt">earthquakes</span>. First, we calculated the co-seismic CFS changes of the Gorkha <span class="hlt">earthquake</span> on the nodal planes of 16 <span class="hlt">large</span> aftershocks to quantitatively examine whether they were brought closer to failure by the mainshock. It is shown that at least 12 of the 16 aftershocks were encouraged by an increase of CFS of 0.1-3 MPa. The correspondence between the distribution of off-fault aftershocks and the increased CFS pattern also validates the applicability of the <span class="hlt">earthquake</span> triggering hypothesis in the thrust regime of Nepal. With this validation as support, we calculated the co-seismic CFS change on the locked region imparted by the Gorkha <span class="hlt">earthquake</span> and historical great <span class="hlt">earthquakes</span>. A newly proposed ramp-flat-ramp-flat fault geometry model was employed, and the source parameters of historical <span class="hlt">earthquakes</span> were computed with the empirical <span class="hlt">scaling</span> relationship. A broad region south of Kathmandu and west of the current rupture was shown to be positively stressed, with CFS changes roughly ranging between 0.01 and 0.5 MPa. The maximum CFS increase (>1 MPa) was found in the updip segment south of the current rupture, implying a high seismic hazard.
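Coulomb failure stress studies of this kind resolve a stress perturbation onto a receiver fault; slip is promoted when the change is positive. Below is a minimal sketch of the failure criterion only: the actual study requires elastic dislocation modeling of the mainshock to obtain the stress changes, and `mu_eff` = 0.4 is a commonly assumed effective friction coefficient, not a value taken from this abstract.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault (MPa).

    d_shear  : shear stress change resolved in the slip direction
               (MPa, positive when it promotes slip)
    d_normal : normal stress change (MPa, positive = unclamping)
    mu_eff   : effective friction coefficient (dimensionless)
    """
    return d_shear + mu_eff * d_normal

# a receiver fault loaded in shear and unclamped is brought closer to failure
d_cfs = coulomb_stress_change(0.3, 0.5)
```

With these illustrative numbers the fault sees a positive CFS change (0.3 + 0.4 × 0.5 = 0.5 MPa), i.e. it is encouraged toward failure, which is the sense of the 0.1-3 MPa increases quoted for the triggered aftershocks.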
Since the locked region may be additionally stressed by the post-seismic relaxation of the lower</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008PApGe.165..777A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008PApGe.165..777A"><span><span class="hlt">Earthquakes</span>: Recurrence and Interoccurrence Times</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.</p> <p>2008-04-01</p> <p>The purpose of this paper is to discuss the statistical distributions of recurrence times of <span class="hlt">earthquakes</span>. Recurrence times are the time intervals between successive <span class="hlt">earthquakes</span> at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a <span class="hlt">scale</span>-invariant hazard function. We consider three sets of characteristic <span class="hlt">earthquakes</span> on the San Andreas fault: (1) The Parkfield <span class="hlt">earthquakes</span>, (2) the sequence of <span class="hlt">earthquakes</span> identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating <span class="hlt">earthquakes</span> at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of <span class="hlt">earthquakes</span> in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of <span class="hlt">earthquakes</span> obtained from a one million year “Virtual California” simulation of San Andreas <span class="hlt">earthquakes</span>. 
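A Weibull analysis of recurrence intervals of the kind discussed here can be sketched with SciPy. The "catalogue" below is synthetic and purely illustrative (shape 2.0 and scale 150 yr are made-up parameters, not values from the paper); the same fit applied to the conditional probability of an event in a coming window is the basis of time-dependent forecasts.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

# hypothetical recurrence intervals (years) standing in for a real catalogue
intervals = weibull_min.rvs(c=2.0, scale=150.0, size=500, random_state=rng)

# maximum-likelihood fit of shape (c) and scale, with location fixed at zero
c_hat, loc, scale_hat = weibull_min.fit(intervals, floc=0)

# conditional probability of an event in the next 20 years,
# given that 100 years have already elapsed since the last one
t, dt = 100.0, 20.0
p_cond = 1.0 - (weibull_min.sf(t + dt, c_hat, scale=scale_hat)
                / weibull_min.sf(t, c_hat, scale=scale_hat))
```

A shape parameter c > 1 makes the hazard increase with elapsed time, capturing the quasi-periodic behaviour of characteristic earthquakes; c = 1 recovers the memoryless Poisson case.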
Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreement with Weibull distributions is obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future <span class="hlt">earthquakes</span> on the San Andreas fault and elsewhere.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1812871B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1812871B"><span>Contribution of the infrasound technology to characterize <span class="hlt">large</span> <span class="hlt">scale</span> atmospheric disturbances and impact on infrasound monitoring</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Blanc, Elisabeth; Le Pichon, Alexis; Ceranna, Lars; Pilger, Christoph; Charlton Perez, Andrew; Smets, Pieter</p> <p>2016-04-01</p> <p>The International Monitoring System (IMS) developed for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) provides a unique global description of atmospheric disturbances generating infrasound, such as extreme events (e.g. meteors, volcanoes, <span class="hlt">earthquakes</span>, and severe weather) or human activity (e.g. explosions and supersonic airplanes). The analysis of the detected signals, recorded at global <span class="hlt">scales</span> and over nearly 15 years at some stations, demonstrates that <span class="hlt">large-scale</span> atmospheric disturbances strongly affect infrasound propagation. Their time <span class="hlt">scales</span> vary from several tens of minutes to hours and days.
Their effects are on average well resolved by current model predictions; however, accurate spatial and temporal description is lacking in both weather and climate models. This study reviews recent results using the infrasound technology to characterize these <span class="hlt">large</span> <span class="hlt">scale</span> disturbances, including (i) wind fluctuations induced by gravity waves, generating infrasound partial reflections and modifications of the infrasound waveguide, (ii) convection from thunderstorms and mountain waves generating gravity waves, (iii) stratospheric warming events, which yield wind inversions in the stratosphere, and (iv) planetary waves, which control the global atmospheric circulation. Improved knowledge of these disturbances and their assimilation in future models is an important objective of the ARISE (Atmospheric dynamics Research InfraStructure in Europe) project. This is essential in the context of the future verification of the CTBT, as enhanced atmospheric models are necessary to assess the IMS network performance at higher resolution, reduce source location errors, and improve characterization methods.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1911530H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1911530H"><span>High resolution measurement of <span class="hlt">earthquake</span> impacts on rock slope stability and damage using pre- and post-<span class="hlt">earthquake</span> terrestrial laser scans</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hutchinson, Lauren; Stead, Doug; Rosser, Nick</p> <p>2017-04-01</p> <p>Understanding the behaviour of rock slopes in response to <span class="hlt">earthquake</span> shaking is instrumental in response and relief efforts following <span class="hlt">large</span> <span class="hlt">earthquakes</span> as well as in ongoing
risk management in <span class="hlt">earthquake</span>-affected areas. Assessment of the effects of seismic shaking on rock slope kinematics requires detailed surveys of the pre- and post-<span class="hlt">earthquake</span> condition of the slope; however, at present there is a lack of high resolution monitoring data from before and after <span class="hlt">earthquakes</span> to facilitate characterization of seismically induced slope damage and to validate models used to back-analyze rock slope behaviour during and following <span class="hlt">earthquake</span> shaking. Therefore, there is a need for additional research where pre- and post-<span class="hlt">earthquake</span> monitoring data are available. This paper presents the results of a direct comparison between terrestrial laser scans (TLS) collected in 2014, the year prior to the 2015 <span class="hlt">earthquake</span> sequence, and scans collected 18 months after the <span class="hlt">earthquakes</span> and two monsoon cycles. The two datasets were collected using Riegl VZ-1000 and VZ-4000 full waveform laser scanners at high resolution (c. 0.1 m point spacing as a minimum). The scans cover the full landslide-affected slope from the toe to the crest. The slope is located in Sindhupalchok District, Central Nepal, which experienced some of the highest co-seismic and post-seismic landslide intensities across Nepal due to its proximity to the epicenters (<20 km) of both main aftershocks, on April 26, 2015 (M6.7) and May 12, 2015 (M7.3). During the 2015 <span class="hlt">earthquakes</span> and the subsequent 2015 and 2016 monsoons, the slope experienced rockfall and debris flows which are evident in satellite imagery and field photographs. Fracturing of the rock mass associated with the seismic shaking is also evident at <span class="hlt">scales</span> not accessible through satellite and field observations.
The results of change detection between the TLS datasets, with an emphasis on quantification of seismically induced slope damage, are presented. Patterns in the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21568564','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21568564"><span>Transition from <span class="hlt">large-scale</span> to small-<span class="hlt">scale</span> dynamo.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ponty, Y; Plunian, F</p> <p>2011-04-15</p> <p>The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time <span class="hlt">scales</span>, plus turbulent fluctuations at short time <span class="hlt">scales</span>. The dynamo onset is controlled by the long time <span class="hlt">scales</span> of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect <span class="hlt">scales</span> as O(Rm<sup>-1</sup>), suggesting the takeover of small-<span class="hlt">scale</span> dynamo action.
This is confirmed by simulations in which dynamo occurs even if the <span class="hlt">large-scale</span> field is artificially suppressed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70158981','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70158981"><span>Passive seismic monitoring of natural and induced <span class="hlt">earthquakes</span>: case studies, future directions and socio-economic relevance</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Bohnhoff, Marco; Dresen, Georg; Ellsworth, William L.; Ito, Hisao; Cloetingh, Sierd; Negendank, Jörg</p> <p>2010-01-01</p> <p>An important discovery in crustal mechanics has been that the Earth’s crust is commonly stressed close to failure, even in tectonically quiet areas. As a result, small natural or man-made perturbations to the local stress field may trigger <span class="hlt">earthquakes</span>. To understand these processes, Passive Seismic Monitoring (PSM) with seismometer arrays is a widely used technique that has been successfully applied to study seismicity at different magnitude levels ranging from acoustic emissions generated in the laboratory under controlled conditions, to seismicity induced by hydraulic stimulations in geological reservoirs, and up to great <span class="hlt">earthquakes</span> occurring along plate boundaries. In all these environments the appropriate deployment of seismic sensors, i.e., directly on the rock sample, at the earth’s surface or in boreholes close to the seismic sources allows for the detection and location of brittle failure processes at sufficiently low magnitude-detection threshold and with adequate spatial resolution for further analysis. One principal aim is to develop an improved understanding of the physical processes occurring at the seismic source and their relationship to the host geologic environment. 
In this paper we review selected case studies and future directions of PSM efforts across a wide range of <span class="hlt">scales</span> and environments. These include induced failure within small rock samples, hydrocarbon reservoirs, and natural seismicity at convergent and transform plate boundaries. Each example represents a milestone with regard to bridging the gap between laboratory-<span class="hlt">scale</span> experiments under controlled boundary conditions and <span class="hlt">large-scale</span> field studies. The common motivation for all studies is to refine the understanding of how <span class="hlt">earthquakes</span> nucleate, how they proceed and how they interact in space and time. This is of special relevance at the larger end of the magnitude <span class="hlt">scale</span>, i.e., for <span class="hlt">large</span> devastating <span class="hlt">earthquakes</span>, due to their severe socio-economic impact.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AGUFMPA24A..03B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AGUFMPA24A..03B"><span>Urban <span class="hlt">Earthquakes</span> - Reducing Building Collapse Through Education</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bilham, R.</p> <p>2004-12-01</p> <p>Fatalities from <span class="hlt">earthquakes</span> rose from 6000 to 9000 per year in the past decade, yet the ratio of <span class="hlt">earthquake</span> fatalities to instantaneous population continues to fall. Since 1950 the ratio has declined worldwide by a factor of three, but in some countries it has changed little. E.g., in Iran, 1 in 3000 people can expect to die in an <span class="hlt">earthquake</span>, a rate that has not changed significantly since 1890.
Fatalities from <span class="hlt">earthquakes</span> remain high in those countries that have traditionally suffered frequent <span class="hlt">large</span> <span class="hlt">earthquakes</span> (Turkey, Iran, Japan, and China), suggesting that the exposure time of recently increased urban populations in other countries may be too short to have interacted with <span class="hlt">earthquakes</span> with long recurrence intervals. This, in turn, suggests that disasters of unprecedented size (more than 1 million fatalities) will occur when future <span class="hlt">large</span> <span class="hlt">earthquakes</span> strike close to megacities. However, population growth is most rapid in cities of less than 1 million people in the developing nations, where the financial ability to implement <span class="hlt">earthquake</span>-resistant construction methods is limited. Given that structural collapse can often be traced to ignorance about the forces at work in an <span class="hlt">earthquake</span>, the future collapse of buildings presently under construction could be much reduced were contractors, builders and occupants educated in the principles of <span class="hlt">earthquake</span>-resistant assembly. Education of builders who are tempted to cut assembly costs is likely to be more cost-effective than material aid.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EaSci..26..301E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EaSci..26..301E"><span>Continuous-cyclic variations in the b-value of the <span class="hlt">earthquake</span> frequency-magnitude distribution</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>El-Isa, Z. H.</p> <p>2013-10-01</p> <p>Seismicity of the Earth (M ≥ 4.5) was compiled from the NEIC, IRIS and ISC catalogues and used to compute b-values based on various time windows.
It is found that continuous cyclic b-variations occur on both long and short time <span class="hlt">scales</span>, the latter being of much higher amplitude, sometimes in excess of 0.7 of the absolute b-value. These variations occur not only yearly or monthly, but also daily. Before the occurrence of <span class="hlt">large</span> <span class="hlt">earthquakes</span>, b-values start increasing with variable gradients that are affected by foreshocks. In some cases, the gradient is reduced to zero or to a negative value a few days before the <span class="hlt">earthquake</span> occurrence. In general, calculated b-values attain maxima 1 day before <span class="hlt">large</span> <span class="hlt">earthquakes</span> and minima soon after their occurrence. Both linear regression and maximum likelihood methods give correlatable, but variable, results. It is found that an expanding time window from a fixed starting point is more effective in the study of b-variations. The calculated b-variations for the whole Earth, its hemispheres, its quadrants and the epicentral regions of some <span class="hlt">large</span> <span class="hlt">earthquakes</span> are of both local and regional character, which may indicate that in such cases the geodynamic processes acting within a certain region have a broadly regional effect within the Earth. b-values have long been known to vary with a number of local and regional factors, including tectonic stresses. The results reported here indicate that geotectonic stress remains the most significant factor controlling b-variations.
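The maximum-likelihood b-value estimate referred to above is commonly computed with Aki's (1965) formula. A minimal sketch on a synthetic Gutenberg-Richter catalogue follows; the completeness magnitude 4.5 matches the catalogue threshold quoted in the abstract, but the catalogue itself and the true b-value of 1.0 are illustrative assumptions.

```python
import numpy as np

def b_value_mle(mags, m_min, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_min.

    dm is the catalogue's magnitude binning width; dm/2 is the Utsu
    correction for binned magnitudes (0 for continuous magnitudes).
    """
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))

# synthetic Gutenberg-Richter catalogue: magnitudes above M4.5 are
# exponentially distributed with rate b / log10(e)
rng = np.random.default_rng(42)
true_b = 1.0
mags = 4.5 + rng.exponential(scale=np.log10(np.e) / true_b, size=20000)
b_hat = b_value_mle(mags, m_min=4.5)
```

Sliding or expanding the magnitude window over time, as the abstract describes, amounts to re-evaluating this estimator on successive subsets of the catalogue.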
It is found that for <span class="hlt">earthquakes</span> with M<sub>w</sub> ≥ 7, an increase of about 0.20 in the b-value implies a stress increase that will result in an <span class="hlt">earthquake</span> with a magnitude one unit higher.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26555497','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26555497"><span>Psychometric properties of the Haitian Creole version of the Resilience <span class="hlt">Scale</span> with a sample of adult survivors of the 2010 <span class="hlt">earthquake</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cénat, Jude Mary; Derivois, Daniel; Hébert,
Martine; Eid, Patricia; Mouchenik, Yoram</p> <p>2015-11-01</p> <p>Resilience is defined as the ability of people to cope with disasters and significant life adversities. The present paper aims to investigate the underlying structure of the Creole version of the Resilience <span class="hlt">Scale</span> and its psychometric properties using a sample of adult survivors of the 2010 <span class="hlt">earthquake</span>. A parallel analysis was conducted to determine the number of factors to extract, and confirmatory factor analysis was performed, using a sample of 1355 adult survivors of the 2010 <span class="hlt">earthquake</span> recruited in areas where the <span class="hlt">earthquake</span> struck, with an average age of 31.57 (SD=14.42). All participants completed the Creole version of the Resilience <span class="hlt">Scale</span> (RS), the Impact of Event <span class="hlt">Scale</span> Revised (IES-R), the Beck Depression Inventory (BDI) and the Social Support Questionnaire (SQQ-6). To facilitate exploratory (EFA) and confirmatory factor analysis (CFA), the sample was divided into two subsamples (subsample 1 for EFA and subsample 2 for CFA). Parallel analysis and confirmatory factor analysis results showed a well-fitting 3-factor structure. The Cronbach α coefficients were .79, .74 and .72 for factors 1, 2 and 3, respectively, and the three factors were correlated with each other. Construct validity of the Resilience <span class="hlt">scale</span> was supported by significant correlations with measures of depression and social support satisfaction, but no correlation was found with the posttraumatic stress disorder measure, except for factor 2. The results reveal a different factorial structure comprising 25 items of the RS. Nevertheless, the Haitian Creole version of the RS is a valid and reliable measure for assessing resilience in adults in Haiti. Copyright © 2015 Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010EGUGA..12.2250L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010EGUGA..12.2250L"><span>Gravity drives Great <span class="hlt">Earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lister, Gordon; Forster, Marnie</p> <p>2010-05-01</p> <p> of the over-riding crust and mantle. This is possible because the crust and mantle above major subduction zones are mechanically weakened by the flux of heat and water associated with subduction zone processes. In consequence, the lithosphere of the over-riding orogens can act more like a fluid than a rigid plate. Such fluid-like behaviour has been noted for the Himalaya and for the crust of the uplifted adjacent Tibetan Plateau, which appear to be collapsing. Similar conclusions as to the fluid-like behaviour of an orogen can also be reached for the crust and mantle of Myanmar and Indonesia, since here again, there is evidence for arc-normal motion adjacent to rolling-back subduction zones. Prior to the Great Sumatran <span class="hlt">Earthquake</span> of 2004 we had postulated such movements on geological time-<span class="hlt">scales</span>, describing them as 'surges' driven by the gravitational potential energy of the adjacent orogen. But we considered time-<span class="hlt">scales</span> that were very different from those that apply in the lead-up to, during and subsequent to a catastrophic seismic event. The Great Sumatran <span class="hlt">Earthquake</span> taught us quite differently. Data from satellites support the hypothesis that extension took place in a discrete increment, which we interpret to be the result of a gravitationally driven surge of the Indonesian crust westward over the weakened rupture during and after the <span class="hlt">earthquake</span>. 
Mode II megathrusts are tsunamigenic for one very simple reason: the crust has been attenuated as the result of ongoing extension, so they can be overlain by <span class="hlt">large</span> tracts of water, and they have a long rupture run time, allowing a succession of stress accumulations to be harvested. The after-slip beneath the Andaman Sea was also significant (in terms of moment) although non-seismogenic in its character. Operation of a Mode II megathrust prior to catastrophic failure may involve relatively quiescent motion with a mixture of normal faults and reverse faults, much like south of Java today. Ductile yield may produce steadily</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24191019','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24191019"><span>Gas injection may have triggered <span class="hlt">earthquakes</span> in the Cogdell oil field, Texas.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gan, Wei; Frohlich, Cliff</p> <p>2013-11-19</p> <p>Between 1957 and 1982, water flooding was conducted to improve petroleum production in the Cogdell oil field north of Snyder, TX, and a contemporary analysis concluded this induced <span class="hlt">earthquakes</span> that occurred between 1975 and 1982. The National <span class="hlt">Earthquake</span> Information Center detected no further activity between 1983 and 2005, but between 2006 and 2011 reported 18 <span class="hlt">earthquakes</span> having magnitudes 3 and greater. To investigate these <span class="hlt">earthquakes</span>, we analyzed data recorded by six temporary seismograph stations deployed by the USArray program, and identified 93 well-recorded <span class="hlt">earthquakes</span> occurring between March 2009 and December 2010. 
Relocation with a double-difference method shows that most <span class="hlt">earthquakes</span> occurred within several northeast-southwest-trending linear clusters, with trends corresponding to nodal planes of regional focal mechanisms, possibly indicating the presence of previously unidentified faults. We have evaluated data concerning injection and extraction of oil, water, and gas in the Cogdell field. Water injection cannot explain the 2006-2011 <span class="hlt">earthquakes</span>, especially as net volumes (injection minus extraction) are significantly less than in the 1957-1982 period. However, since 2004 significant volumes of gases including supercritical CO2 have been injected into the Cogdell field. The timing of gas injection suggests it may have contributed to triggering the recent seismic activity. If so, this represents an instance where gas injection has triggered <span class="hlt">earthquakes</span> having magnitudes 3 and larger. Further modeling studies may help evaluate recent assertions suggesting significant risks accompany <span class="hlt">large-scale</span> carbon capture and storage as a strategy for managing climate change.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1816873S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1816873S"><span>Rheological behavior of the crust and mantle in subduction zones in the time-<span class="hlt">scale</span> range from <span class="hlt">earthquake</span> (minute) to mln years inferred from thermomechanical model and geodetic observations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sobolev, Stephan; Muldashev, Iskander</p> <p>2016-04-01</p> <p>The key achievement of the geodynamic modelling community, greatly advanced by the work of Evgenii Burov and his students, is the application of "realistic" mineral-physics based 
non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-<span class="hlt">scale</span> phenomenon, with time-<span class="hlt">scales</span> spanning from the geological to the <span class="hlt">earthquake</span> <span class="hlt">scale</span>, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) with a single cross-<span class="hlt">scale</span> thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. First we generate a thermomechanical model of a subduction zone at the geological time-<span class="hlt">scale</span>, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce into the same model a classic rate-and-state friction law in the subduction channel, leading to stick-slip instability. This model generates a spontaneous <span class="hlt">earthquake</span> sequence. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the step from 40 s during the <span class="hlt">earthquake</span> to between a minute and 5 years during the postseismic and interseismic periods. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge from as early as hours to a day after great (M>9) <span class="hlt">earthquakes</span>. 
We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku <span class="hlt">Earthquake</span> for time ranges from one day to four years after the event.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/pp/1204a/report.pdf','USGSPUBS'); return false;" href="https://pubs.usgs.gov/pp/1204a/report.pdf"><span>Landslides from the February 4, 1976, Guatemala <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Harp, Edwin L.; Wilson, Raymond C.; Wieczorek, Gerald F.</p> <p>1981-01-01</p> <p>The M (Richter magnitude) = 7.5 Guatemala <span class="hlt">earthquake</span> of February 4, 1976, generated more than 10,000 landslides throughout an area of approximately 16,000 km2. These landslides caused hundreds of fatalities as well as extensive property damage. Landslides disrupted both highways and the railroad system and thus severely hindered early rescue efforts. In Guatemala City, extensive property damage and loss of life were due to ground failure beneath dwellings built too close to the edges of steeply incised canyons. We have recorded the distribution of landslides from this <span class="hlt">earthquake</span> by mapping individual slides at a <span class="hlt">scale</span> of 1:50,000 for most of the landslide-affected area, using high-altitude aerial photography. The highest density of landslides was in the highlands west of Guatemala City. The predominant types of <span class="hlt">earthquake</span>-triggered landslides were rock falls and debris slides of less than 15,000 m3 volume; in addition to these smaller landslides, 11 <span class="hlt">large</span> landslides had volumes of more than 100,000 m3. 
Several of these <span class="hlt">large</span> landslides posed special hazards to people and property from lakes impounded by the landslide debris and from the ensuing floods that occurred upon breaching and rapid erosion of the debris. The regional landslide distribution was observed to depend on five major factors: (1) seismic intensity; (2) lithology: 90 percent of all landslides were within Pleistocene pumice deposits; (3) slope steepness; (4) topographic amplification of seismic ground motion; and (5) regional fractures. The presence of preearthquake landslides had no apparent effect on the landslide distribution, and landslide concentration in the Guatemala City area does not correlate with local seismic-intensity data. The landslide concentration, examined at this <span class="hlt">scale</span>, appears to be governed mainly by lithologic differences within the pumice deposits, preexisting fractures, and amplification of ground motion by topography: all factors related to site conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70119141','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70119141"><span>Bayesian historical <span class="hlt">earthquake</span> relocation: an example from the 1909 Taipei <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Minson, Sarah E.; Lee, William H.K.</p> <p>2014-01-01</p> <p>Locating <span class="hlt">earthquakes</span> from the beginning of the modern instrumental period is complicated by the fact that there are few good-quality seismograms and what traveltimes do exist may be corrupted by both <span class="hlt">large</span> phase-pick errors and clock errors. 
Here, we outline a Bayesian approach to simultaneous inference of not only the hypocentre location but also the clock errors at each station and the origin time of the <span class="hlt">earthquake</span>. This methodology improves the solution for the source location and also provides an uncertainty analysis on all of the parameters included in the inversion. As an example, we applied this Bayesian approach to the well-studied 1909 Mw 7 Taipei <span class="hlt">earthquake</span>. While our epicentre location and origin time for the 1909 Taipei <span class="hlt">earthquake</span> are consistent with earlier studies, our focal depth is significantly shallower suggesting a higher seismic hazard to the populous Taipei metropolitan area than previously supposed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27885027','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27885027"><span>Mega-<span class="hlt">earthquakes</span> rupture flat megathrusts.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bletery, Quentin; Thomas, Amanda M; Rempel, Alan W; Karlstrom, Leif; Sladen, Anthony; De Barros, Louis</p> <p>2016-11-25</p> <p>The 2004 Sumatra-Andaman and 2011 Tohoku-Oki <span class="hlt">earthquakes</span> highlighted gaps in our understanding of mega-<span class="hlt">earthquake</span> rupture processes and the factors controlling their global distribution: A fast convergence rate and young buoyant lithosphere are not required to produce mega-<span class="hlt">earthquakes</span>. We calculated the curvature along the major subduction zones of the world, showing that mega-<span class="hlt">earthquakes</span> preferentially rupture flat (low-curvature) interfaces. A simplified analytic model demonstrates that heterogeneity in shear strength increases with curvature. 
Shear strength on flat megathrusts is more homogeneous, and hence more likely to be exceeded simultaneously over <span class="hlt">large</span> areas, than on highly curved faults. Copyright © 2016, American Association for the Advancement of Science.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20060047689','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20060047689"><span><span class="hlt">Large-Scale</span> Hybrid Motor Testing. Chapter 10</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Story, George</p> <p>2006-01-01</p> <p>Hybrid rocket motors can be successfully demonstrated at a small <span class="hlt">scale</span> virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to audiences. These small show motors and small laboratory-<span class="hlt">scale</span> motors can give comparative burn rate data for development of different fuel/oxidizer combinations. However, the questions that are always asked when hybrids are mentioned for <span class="hlt">large</span> <span class="hlt">scale</span> applications are: how do they <span class="hlt">scale</span>, and has this been shown in a <span class="hlt">large</span> motor? To answer those questions, <span class="hlt">large</span> <span class="hlt">scale</span> motor testing is required to verify the hybrid motor at its true size. The necessity to conduct <span class="hlt">large-scale</span> hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-<span class="hlt">scale</span> hybrid data to that of larger-<span class="hlt">scale</span> data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. 
This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the <span class="hlt">large</span>, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and <span class="hlt">scaling</span> concepts that went into the development of those <span class="hlt">large</span> motors.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFMNH51C..01M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFMNH51C..01M"><span>Facts about the Eastern Japan Great <span class="hlt">Earthquake</span> of March 2011</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Moriyama, T.</p> <p>2011-12-01</p> <p>The 2011 great <span class="hlt">earthquake</span> was a magnitude 9.0 Mw undersea megathrust <span class="hlt">earthquake</span> off the coast of Japan that occurred early morning UTC on Friday, 11 March 2011, with the epicenter approximately 70 kilometres east of the Oshika Peninsula of Tohoku and the hypocenter at an underwater depth of approximately 32 km. It was the most powerful known <span class="hlt">earthquake</span> to have hit Japan, and one of the five most powerful <span class="hlt">earthquakes</span> in the world overall since modern record keeping began in 1900. 
The <span class="hlt">earthquake</span> triggered extremely destructive tsunami waves of up to 38.9 metres that struck Tohoku, Japan, in some cases traveling up to 10 km inland. In addition to loss of life and destruction of infrastructure, the tsunami caused a number of nuclear accidents, primarily the ongoing level 7 meltdowns at three reactors in the Fukushima I Nuclear Power Plant complex, and the associated evacuation zones affecting hundreds of thousands of residents. The Japanese National Police Agency has confirmed 15,457 deaths, 5,389 injured, and 7,676 people missing across eighteen prefectures, as well as over 125,000 buildings damaged or destroyed. JAXA carried out ALOS emergency observation just after the <span class="hlt">earthquake</span> occurred, and acquired more than 400 scenes over the disaster area. The coseismic interferogram from InSAR analysis clearly shows the epicenter of the <span class="hlt">earthquake</span> and land surface deformation over the Tohoku area. By comparing satellite images acquired before and after the event, the areas of <span class="hlt">large</span> <span class="hlt">scale</span> tsunami damage were extracted. These images and data can be accessed via the JAXA website and the GEO Tohoku-oki event supersite website.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1910607S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1910607S"><span>Fully probabilistic <span class="hlt">earthquake</span> source inversion on teleseismic <span class="hlt">scales</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Stähler, Simon; Sigloch, Karin</p> <p>2017-04-01</p> <p>Seismic source inversion is a non-linear problem in seismology where not just the <span class="hlt">earthquake</span> parameters but also estimates of their uncertainties are of great practical importance. 
We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a <span class="hlt">large</span> number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of <span class="hlt">earthquake</span> mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. 
From a set of over 900 user-supervised, deterministic <span class="hlt">earthquake</span> source</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22184228','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22184228"><span>Global risk of big <span class="hlt">earthquakes</span> has not recently increased.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shearer, Peter M; Stark, Philip B</p> <p>2012-01-17</p> <p>The recent elevated rate of <span class="hlt">large</span> <span class="hlt">earthquakes</span> has fueled concern that the underlying global rate of <span class="hlt">earthquake</span> activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of <span class="hlt">large</span> (magnitude M≥7) <span class="hlt">earthquakes</span> from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 <span class="hlt">earthquakes</span> has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller <span class="hlt">earthquakes</span> is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences--if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of <span class="hlt">large</span> events. 
Together these facts suggest that the global risk of <span class="hlt">large</span> <span class="hlt">earthquakes</span> is no higher today than it has been in the past.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23036648','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23036648"><span>Why small-<span class="hlt">scale</span> cannabis growers stay small: five mechanisms that prevent small-<span class="hlt">scale</span> growers from going <span class="hlt">large</span> <span class="hlt">scale</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy</p> <p>2012-11-01</p> <p>Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-<span class="hlt">scale</span>. In this study, we explore the factors that prevent small-<span class="hlt">scale</span> growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a <span class="hlt">large-scale</span> and 35 on a small-<span class="hlt">scale</span>. The study identifies five mechanisms that prevent small-<span class="hlt">scale</span> indoor growers from going <span class="hlt">large-scale</span>. First, <span class="hlt">large-scale</span> operations involve a number of people, <span class="hlt">large</span> sums of money, a high work-load and a high risk of detection, and thus demand a higher level of organizational skills than for small growing operations. Second, financial assets are needed to start a <span class="hlt">large</span> 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. 
Third, to be able to sell <span class="hlt">large</span> quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, <span class="hlt">large-scale</span> operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-<span class="hlt">scale</span> cultivation. Fifth, small-<span class="hlt">scale</span> growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up <span class="hlt">large-scale</span> production will imply having to renegotiate or abandon these values. Going from small- to <span class="hlt">large-scale</span> cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-<span class="hlt">scale</span> growers face and the lack of interest and motivation for going <span class="hlt">large-scale</span> suggest that the risk of a 'slippery slope' from small-<span class="hlt">scale</span> to <span class="hlt">large-scale</span> growing is limited. Possible political implications of the findings are discussed. 
Copyright</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFMPA42B..06A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFMPA42B..06A"><span><span class="hlt">Large</span> Historical Tsunamigenic <span class="hlt">Earthquakes</span> in Italy: The Neglected Tsunami Research Point of View</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Armigliato, A.; Tinti, S.; Pagnoni, G.; Zaniboni, F.</p> <p>2015-12-01</p> <p>It is known that tsunamis are rather rare events, especially when compared to <span class="hlt">earthquakes</span>, and the Italian coasts are no exception. Nonetheless, it is striking that 6 of the 10 <span class="hlt">earthquakes</span> that occurred in Italy in the last thousand years with an equivalent moment magnitude equal to or larger than 7 were accompanied by destructive or heavily damaging tsunamis. If we extend the lower limit of the equivalent moment magnitude down to 6.5, the percentage decreases (to around 40%), but it is still significant. Famous events like those that occurred on 30 July 1627 in Gargano, on 11 January 1693 in eastern Sicily, and on 28 December 1908 in the Messina Straits are part of this list: they were all characterized by maximum run-ups of several meters (13 m for the 1908 tsunami), significant maximum inundation distances, and <span class="hlt">large</span> (although not precisely quantifiable) numbers of victims. Further evidence provided in the last decade by paleo-tsunami deposit analyses helps to better characterize the tsunami impact and confirms that none of the cited events can be reduced to local or secondary effects. 
Proper analysis and simulation of available tsunami data would then appear as an obvious part of the correct definition of the sources responsible for the largest Italian tsunamigenic <span class="hlt">earthquakes</span>, in a process in which different datasets analyzed by different disciplines must be reconciled rather than put into contrast with each other. Unfortunately, macroseismic, seismic and geological/geomorphological observations and data typically are assigned much heavier weights, and inland faults are often given more credit than offshore ones, even when tsunami simulations provide evidence that they are not capable of justifying the observed tsunami effects. Tsunami generation is then imputed a priori to merely supposed, and sometimes even non-existent, submarine landslides. We try to summarize the tsunami research point of view on the largest Italian historical tsunamigenic</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoRL..45.1387S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoRL..45.1387S"><span>Fault Zone Permeability Decrease Following <span class="hlt">Large</span> <span class="hlt">Earthquakes</span> in a Hydrothermal System</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shi, Zheming; Zhang, Shouchuan; Yan, Rui; Wang, Guangcai</p> <p>2018-02-01</p> <p>Seismic wave shaking-induced permeability enhancement in the shallow crust has been widely observed. Permeability decrease, however, is seldom reported. In this study, we document coseismic discharge and temperature decrease in a hot spring following the 1996 Lijiang Mw 7.0 and the 2004 Mw 9.0 <span class="hlt">earthquakes</span> in the Balazhang geothermal field. 
We use three different models to constrain the permeability change and the mechanism of coseismic discharge decrease, and we use an end-member mixing model for the coseismic temperature change. Our results show that the <span class="hlt">earthquake</span>-induced permeability decrease in the fault zone reduced the recharge from deep hot water, which may be the mechanism that explains the coseismic discharge and temperature responses. The changes in the hot spring response reflect the dynamic changes in the hydrothermal system; in the future, the <span class="hlt">earthquake</span>-induced permeability decrease should be considered when discussing controls on permeability.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26551120','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26551120"><span>Generation of <span class="hlt">Large-Scale</span> Magnetic Fields by Small-<span class="hlt">Scale</span> Dynamo in Shear Flows.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Squire, J; Bhattacharjee, A</p> <p>2015-10-23</p> <p>We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-<span class="hlt">scale</span> dynamo drive the generation of <span class="hlt">large-scale</span> magnetic fields. This is in stark contrast to the common idea that small-<span class="hlt">scale</span> magnetic fields should be harmful to <span class="hlt">large-scale</span> dynamo action. These dynamos occur in the presence of a <span class="hlt">large-scale</span> velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. 
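As orientation for the mean-field dynamo entries in this list, the framework they refer to can be written in standard textbook form; the equations below are generic notation, not reproduced from the paper:

```latex
\begin{aligned}
\frac{\partial \overline{B}}{\partial t}
  &= \nabla \times \left( \overline{U} \times \overline{B} \right)
   + \nabla \times \mathcal{E}
   + \eta \, \nabla^{2} \overline{B}, \\
\mathcal{E}_{i}
  &= \alpha_{ij}\, \overline{B}_{j}
   - \eta_{ij} \left( \nabla \times \overline{B} \right)_{j} .
\end{aligned}
```

With nonhelical turbulence (\(\alpha_{ij} \approx 0\)) and a large-scale shear flow, an off-diagonal turbulent-resistivity component \(\eta_{yx}\) of suitable sign couples the cross-shear field \(\overline{B}_{x}\) to growth of \(\overline{B}_{y}\); that coupling is the "shear-current" effect the abstract invokes.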
Given the inevitable existence of nonhelical small-<span class="hlt">scale</span> magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of <span class="hlt">large-scale</span> magnetic fields across a wide range of astrophysical objects.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70156107','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70156107"><span>Is there a basis for preferring characteristic <span class="hlt">earthquakes</span> over a Gutenberg–Richter distribution in probabilistic <span class="hlt">earthquake</span> forecasting?</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Parsons, Thomas E.; Geist, Eric L.</p> <p>2009-01-01</p> <p>The idea that faults rupture in repeated, characteristic <span class="hlt">earthquakes</span> is central to most probabilistic <span class="hlt">earthquake</span> forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice, however, assembling a fault-segmented characteristic <span class="hlt">earthquake</span> rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic <span class="hlt">earthquakes</span> against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for <span class="hlt">earthquake</span> forecasting as well as a characteristic model does.
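The Gutenberg–Richter extrapolation weighed in this abstract is easy to make concrete. The sketch below is illustrative only: the a- and b-values are hypothetical, not fitted to any fault zone discussed in the study.

```python
import numpy as np

def gr_cumulative_rate(m, a, b):
    # Gutenberg-Richter law: log10 N(>= m) = a - b*m, with N in events/year
    return 10.0 ** (a - b * m)

# Hypothetical fault-zone parameters (b ~ 1 is typical; a sets activity level)
a, b = 3.5, 1.0
rate_m7 = gr_cumulative_rate(7.0, a, b)     # annual rate of M >= 7 events
recurrence_yr = 1.0 / rate_m7               # mean recurrence interval
p_50yr = 1.0 - np.exp(-rate_m7 * 50.0)      # Poisson probability of >= 1 event in 50 yr
print(rate_m7, recurrence_yr, p_50yr)
```

The same machinery, with catalog-derived a- and b-values and their uncertainties, yields the "large-earthquake-rate calculations with quantifiable uncertainty" the authors advocate.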
Therefore, judicious use of instrumental and historical <span class="hlt">earthquake</span> catalogs enables <span class="hlt">large-earthquake</span>-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.G12A..02B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.G12A..02B"><span>Real-Time <span class="hlt">Earthquake</span> Analysis for Disaster Mitigation (READI) Network</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bock, Y.</p> <p>2014-12-01</p> <p>Real-time GNSS networks are making a significant impact on our ability to forecast, assess, and mitigate the effects of geological hazards. I describe the activities of the Real-time <span class="hlt">Earthquake</span> Analysis for Disaster Mitigation (READI) working group. The group leverages 600+ real-time GPS stations in western North America operated by UNAVCO (PBO network), Central Washington University (PANGA), US Geological Survey & Scripps Institution of Oceanography (SCIGN project), UC Berkeley & US Geological Survey (BARD network), and the Pacific Geosciences Centre (WCDA project). Our goal is to demonstrate an <span class="hlt">earthquake</span> and tsunami early warning system for western North America. Rapid response is particularly important for those coastal communities that are in the near-source region of <span class="hlt">large</span> <span class="hlt">earthquakes</span> and may have only minutes of warning time, and who today are not adequately covered by existing seismic and basin-wide ocean-buoy monitoring systems. 
The READI working group is performing comparisons of independent real time analyses of 1 Hz GPS data for station displacements and is participating in government-sponsored <span class="hlt">earthquake</span> and tsunami exercises in the Western U.S. I describe a prototype seismogeodetic system using a cluster of southern California stations that includes GNSS tracking and collocation with MEMS accelerometers for real-time estimation of seismic velocity and displacement waveforms, which has advantages for improved <span class="hlt">earthquake</span> early warning and tsunami forecasts compared to seismic-only or GPS-only methods. The READI working group's ultimate goal is to participate in an Indo-Pacific Tsunami early warning system that utilizes GNSS real-time displacements and ionospheric measurements along with seismic, near-shore buoys and ocean-bottom pressure sensors, where available, to rapidly estimate magnitude and finite fault slip models for <span class="hlt">large</span> <span class="hlt">earthquakes</span>, and then forecast tsunami source, energy <span class="hlt">scale</span>, geographic extent, inundation and runup. This will require</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70022975','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70022975"><span>Central US <span class="hlt">earthquake</span> catalog for hazard maps of Memphis, Tennessee</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Wheeler, R.L.; Mueller, C.S.</p> <p>2001-01-01</p> <p>An updated version of the catalog that was used for the current national probabilistic seismic-hazard maps would suffice for production of <span class="hlt">large-scale</span> hazard maps of the Memphis urban area. Deaggregation maps provide guidance as to the area that a catalog for calculating Memphis hazard should cover. 
For the future, the Nuttli and local network catalogs could be examined for <span class="hlt">earthquakes</span> not presently included in the catalog. Additional work on aftershock removal might reduce hazard uncertainty. Graphs of decadal and annual <span class="hlt">earthquake</span> rates suggest completeness at and above magnitude 3 for the last three or four decades. Any additional work on completeness should consider the effects of rapid, local population changes during the Nation's westward expansion. © 2001 Elsevier Science B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S43F..07R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S43F..07R"><span>Analysis of P and Pdiff Coda Arrivals for Water Reverberations to Evaluate Shallow Slip Extent in <span class="hlt">Large</span> Megathrust <span class="hlt">Earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rhode, A.; Lay, T.</p> <p>2017-12-01</p> <p>Determining the up-dip rupture extent of <span class="hlt">large</span> megathrust ruptures is important for understanding their tsunami excitation, frictional properties of the shallow megathrust, and potential for separate tsunami <span class="hlt">earthquake</span> occurrence. On-land geodetic data have almost no resolution of the up-dip extent of faulting, and teleseismic observations have limited resolution that is strongly influenced by typically poorly known shallow seismic velocity structure near the toe of the accretionary prism. The increase in ocean depth as slip on the megathrust approaches the trench has significant influence on the strength and azimuthal distribution of water reverberations in the far-field P wave coda.
For broadband P waves from <span class="hlt">large</span> <span class="hlt">earthquakes</span> with dominant signal periods of about 10 s, water reverberations generated by shallow fault slip under deep water may persist for over a minute after the direct P phases have passed, giving a clear signal of slip near the trench. As the coda waves can be quickly evaluated following the P signal, recognition of slip extending to the trench and associated enhanced tsunamigenic potential could be achieved within a few minutes after the P arrival, potentially contributing to rapid tsunami hazard assessment. We examine the broadband P wave coda at distances from 80 to 120° for a <span class="hlt">large</span> number of recent major and great <span class="hlt">earthquakes</span> with independently determined slip distributions and known tsunami excitation to evaluate the prospect for rapidly constraining up-dip rupture extent of <span class="hlt">large</span> megathrust <span class="hlt">earthquakes</span>. Events known to have significant shallow slip, at least locally extending to the trench (e.g., 2015 Illapel, Chile; 2010 Maule, 2010 Mentawai) do have relatively enhanced coda levels at all azimuths, whereas events that do not rupture the shallow megathrust (e.g., 2007 Sumatra, 2014 Iquique, 2003 Hokkaido) do not.
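The water-column multiples exploited here have a simple characteristic time: successive surface-seafloor P reverberations are spaced by the two-way vertical travel time. A back-of-envelope sketch (the depth and water velocity are assumed round numbers, not values from the study):

```python
def reverberation_period(depth_m, vp_water=1500.0):
    """Two-way vertical P travel time (s) in the water column; successive
    surface/seafloor multiples in the P coda arrive at roughly this spacing."""
    return 2.0 * depth_m / vp_water

# Under ~6 km of water near a trench the multiples are spaced ~8 s,
# comparable to the ~10 s dominant periods quoted in the abstract.
print(reverberation_period(6000.0))  # 8.0
```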
Some events with slip models lacking shallow slip show strong coda generation, raising questions about the up-dip resolution of</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70037209','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70037209"><span>On near-source <span class="hlt">earthquake</span> triggering</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Parsons, T.; Velasco, A.A.</p> <p>2009-01-01</p> <p>When one <span class="hlt">earthquake</span> triggers others nearby, what connects them?
Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≤ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise <span class="hlt">earthquake</span> rates, it is difficult to isolate them. We thus compare explosions with <span class="hlt">earthquakes</span> because only <span class="hlt">earthquakes</span> cause significant static stress changes. We find that <span class="hlt">large</span> explosions at the Nevada Test Site do not trigger <span class="hlt">earthquakes</span> at rates comparable to similar magnitude <span class="hlt">earthquakes</span>. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of presurface wave arrivals shows that M ≥ 5 explosions and <span class="hlt">earthquakes</span> deliver the same peak dynamic stresses into the near-source crust.
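For scale, the peak dynamic stress carried by a passing shear wave is often approximated with the generic seismological rule of thumb sigma ≈ G·PGV/Vs; this is a standard order-of-magnitude estimate, not a formula taken from this paper, and the modulus and wave speed below are assumed crustal values:

```python
def dynamic_stress_pa(pgv_m_per_s, shear_modulus_pa=3.0e10, vs_m_per_s=3500.0):
    # Plane shear wave: peak dynamic stress ~ shear modulus * PGV / wave speed
    return shear_modulus_pa * pgv_m_per_s / vs_m_per_s

# e.g. a PGV of 1 cm/s corresponds to a transient stress of roughly 0.09 MPa
print(dynamic_stress_pa(0.01) / 1e6)
```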
We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near <span class="hlt">large</span> explosions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70112253','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70112253"><span>Photointerpretation of Alaskan post-<span class="hlt">earthquake</span> photography</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Hackman, R.J.</p> <p>1965-01-01</p> <p>Aerial photographs taken after the March 27, 1964, Good Friday, Alaskan <span class="hlt">earthquake</span> were examined stereoscopically to determine effects of the <span class="hlt">earthquake</span> in areas remote from the towns, highways, and the railroad. The two thousand black and white photographs used in this study were taken in April, after the <span class="hlt">earthquake</span>, by the U. S. Coast & Geodetic Survey and were generously supplied to the U. S. Geological Survey. Part of the photographs, at a <span class="hlt">scale</span> of 1:24,000, provide blanket coverage of approximately 2,000 square miles of land area north and west of Prince William Sound, including parts of the mainland and some of the adjacent islands. The epicenter of the <span class="hlt">earthquake</span>, near the head of Unakwik Inlet, is located in this area. The rest of the photographs, at <span class="hlt">scales</span> ranging from 1:17,000 to 1:40,000, cover isolated strips of the coastline of the mainland and nearby islands in the general area of Prince William Sound. Figure 1 shows the area of new photo coverage used in this study.
The objective of the study was to determine quickly whether geological features resulting from the <span class="hlt">earthquake</span>, such as faults, changes in shoreline, cracks in surficial material, pressure ridges in lake ice, fractures in glaciers and lake ice, and rock slides and avalanches, might be identifiable by photointerpretation. The study was made without benefit of comparisons with older, or pre-<span class="hlt">earthquake</span> photography, which was not readily available for immediate use.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMIN32A..06B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMIN32A..06B"><span>Seismogeodesy for rapid <span class="hlt">earthquake</span> and tsunami characterization</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bock, Y.</p> <p>2016-12-01</p> <p>Rapid estimation of <span class="hlt">earthquake</span> magnitude and fault mechanism is critical for <span class="hlt">earthquake</span> and tsunami warning systems. Traditionally, the monitoring of <span class="hlt">earthquakes</span> and tsunamis has been based on seismic networks for estimating <span class="hlt">earthquake</span> magnitude and slip, and tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of <span class="hlt">earthquake</span> onset time. Direct measurements of displacements by GPS networks at subduction zones allow for rapid magnitude and slip estimation in the near-source region, that are not affected by instrumental limitations and magnitude saturation experienced by local seismic networks. 
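Rapid GPS-based magnitude estimation of the kind described in this entry commonly uses a peak-ground-displacement (PGD) scaling law of the form log10 PGD = A + B·Mw + C·Mw·log10 R. The coefficients below are placeholders of plausible size (calibrated regressions exist in the literature, e.g. by Crowell and colleagues), and the inversion is a sketch, not this project's implementation:

```python
import math

# Placeholder coefficients for log10(PGD_cm) = A + B*Mw + C*Mw*log10(R_km);
# substitute a published, calibrated regression for real use.
A, B, C = -4.434, 1.047, -0.138

def predicted_pgd_cm(mw, r_km):
    # Forward model: expected peak ground displacement at hypocentral distance R
    return 10.0 ** (A + B * mw + C * mw * math.log10(r_km))

def invert_mw(pgd_cm, r_km):
    # Solve the scaling law for Mw given an observed PGD and distance
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

# Round trip: a magnitude pushed through the forward model inverts exactly
mw_est = invert_mw(predicted_pgd_cm(8.0, 100.0), 100.0)
```

Because PGD is a static-offset quantity it does not saturate with magnitude the way band-limited seismic amplitudes do, which is the advantage the abstract emphasizes.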
However, GPS displacements by themselves are too noisy for strict <span class="hlt">earthquake</span> early warning (P-wave detection). Optimally combining high-rate GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude <span class="hlt">scaling</span> relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), finite-source CMT solutions and fault slip models as input for tsunami warnings and models. For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/13843','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/13843"><span><span class="hlt">Large</span> <span class="hlt">Scale</span> Traffic Simulations</span></a></p> <p><a target="_blank" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>1997-01-01</p> <p><span class="hlt">Large</span> <span class="hlt">scale</span> microscopic (i.e.
vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70030400','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70030400"><span><span class="hlt">Large</span> rock avalanches triggered by the M 7.9 Denali Fault, Alaska, <span class="hlt">earthquake</span> of 3 November 2002</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Jibson, R.W.; Harp, E.L.; Schulz, W.; Keefer, D.K.</p> <p>2006-01-01</p> <p>The moment magnitude (M) 7.9 Denali Fault, Alaska, <span class="hlt">earthquake</span> of 3 November 2002 triggered thousands of landslides, primarily rock falls and rock slides, that ranged in volume from rock falls of a few cubic meters to rock avalanches having volumes as great as 20 × 10⁶ m³. The pattern of landsliding was unusual: the number and concentration of triggered slides was much less than expected for an <span class="hlt">earthquake</span> of this magnitude, and the landslides were concentrated in a narrow zone about 30-km wide that straddled the fault-rupture zone over its entire 300-km length. Despite the overall sparse landslide concentration, the <span class="hlt">earthquake</span> triggered several <span class="hlt">large</span> rock avalanches that clustered along the western third of the rupture zone where acceleration levels and ground-shaking frequencies are thought to have been the highest.
Inferences about near-field strong-shaking characteristics drawn from interpretation of the landslide distribution are strikingly consistent with results of recent inversion modeling that indicate that high-frequency energy generation was greatest in the western part of the fault-rupture zone and decreased markedly to the east. © 2005 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNH21C0177H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNH21C0177H"><span>Discussion of New Approaches to Medium-Short-Term <span class="hlt">Earthquake</span> Forecast in Practice of The <span class="hlt">Earthquake</span> Prediction in Yunnan</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hong, F.</p> <p>2017-12-01</p> <p>Looking back on years of <span class="hlt">earthquake</span> prediction practice in the Yunnan area, it is widely considered that the fixed-point <span class="hlt">earthquake</span> precursory anomalies mainly reflect the field information. The increase of amplitude and number of precursory anomalies could help to determine the origin time of <span class="hlt">earthquakes</span>; however, it is difficult to obtain the spatial relevance between <span class="hlt">earthquakes</span> and precursory anomalies, thus we can hardly predict the spatial locations of <span class="hlt">earthquakes</span> using precursory anomalies. Past practice has shown that seismic activity is superior to precursory anomalies in predicting <span class="hlt">earthquake</span> locations, since increased seismicity was observed before 80% of the M ≥ 6.0 <span class="hlt">earthquakes</span> in the Yunnan area.
Mobile geomagnetic anomalies, however, have turned out to be helpful in predicting <span class="hlt">earthquake</span> locations in recent years: for instance, the occurrence time and area forecast from the 1-year-<span class="hlt">scale</span> geomagnetic anomalies before the M6.5 Ludian <span class="hlt">earthquake</span> in 2014 were shorter and smaller than those derived from the seismicity enhancement region. Based on past work, the author believes that the medium-short-term <span class="hlt">earthquake</span> forecast level, as well as objective understanding of the seismogenic mechanisms, could be substantially improved by densely deploying observation arrays and capturing the dynamic process of physical property changes in the enhancement region of medium to small <span class="hlt">earthquakes</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1259593-generation-large-scale-magnetic-fields-small-scale-dynamo-shear-flows','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1259593-generation-large-scale-magnetic-fields-small-scale-dynamo-shear-flows"><span>Generation of <span class="hlt">large-scale</span> magnetic fields by small-<span class="hlt">scale</span> dynamo in shear flows</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Squire, J.; Bhattacharjee, A.</p> <p>2015-10-20</p> <p>We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-<span class="hlt">scale</span> dynamo drive the generation of <span class="hlt">large-scale</span> magnetic fields. This is in stark contrast to the common idea that small-<span class="hlt">scale</span> magnetic fields should be harmful to <span class="hlt">large-scale</span> dynamo action.
These dynamos occur in the presence of a <span class="hlt">large-scale</span> velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-<span class="hlt">scale</span> magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of <span class="hlt">large-scale</span> magnetic fields across a wide range of astrophysical objects.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.T41B2887A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.T41B2887A"><span>The Northern Rupture of the 1762 Arakan Megathrust <span class="hlt">Earthquake</span> and other Potential <span class="hlt">Earthquake</span> Sources in Bangladesh.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Akhter, S. H.; Seeber, L.; Steckler, M. S.</p> <p>2015-12-01</p> <p>Bangladesh is one of the most densely populated countries in the world. It occupies a major part of the Bengal Basin, which contains the Ganges-Brahmaputra Delta (GBD), the largest and one of the most active of world deltas, and is located along the Alpine-Himalayan seismic belt. As such it is vulnerable to many natural hazards, especially <span class="hlt">earthquakes</span>. The country sits at the junction of three tectonic plates - Indian, Eurasian, and the Burma 'sliver' of the Sunda plate. These form two boundaries where plates converge: the India-Eurasia plate boundary to the north forming the Himalaya Arc and the India-Burma plate boundary to the east forming the Indo-Burma Arc.
The India-Burma plate boundary is exceptionally wide because collision with the GBD feeds an exceptional amount of sediment into the subduction zone. Thus the Himalayan continent collision orogeny along with its syntaxes to the N and NE of Bangladesh and the Burma Arc subduction boundary surround Bangladesh on two sides with active faults of regional <span class="hlt">scale</span>, raising the potential for high-magnitude <span class="hlt">earthquakes</span>. In recent years Bangladesh has experienced minor to moderate <span class="hlt">earthquakes</span>. Historical records show that major and great <span class="hlt">earthquakes</span> have ravaged the country and the neighboring region several times over the last 450 years. Field observations of Tertiary structures along the Chittagong-Teknaf coast reveal that the rupture of the 1762 Arakan megathrust <span class="hlt">earthquake</span> extended as far north as the Sitakund anticline to the north of the city of Chittagong. This <span class="hlt">earthquake</span> brought changes to the landscape, uplifting the Teknaf peninsula and St. Martin's Island by about 2-2.5 m, and activated two mud volcanoes along the axis of the Sitakund anticline, where <span class="hlt">large</span> tabular blocks of exotic crystalline limestone were tectonically transported from a deep-seated formation along with the eruptive mud. Vast areas of the coast, including inland areas east of the lower Meghna River, were inundated. More than 500 people died near</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S43A0828B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S43A0828B"><span>Automatic <span class="hlt">Earthquake</span> Detection by Active Learning</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bergen, K.; Beroza, G.
C.</p> <p>2017-12-01</p> <p>In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of <span class="hlt">large</span>, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either <span class="hlt">earthquakes</span> or noise. <span class="hlt">Earthquake</span> seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak <span class="hlt">earthquake</span> signals in <span class="hlt">large</span> seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of <span class="hlt">earthquake</span> waveforms, but the number of known events, taken from <span class="hlt">earthquake</span> catalogs, may be insufficient to build an accurate detector. Furthermore, <span class="hlt">earthquake</span> catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, <span class="hlt">earthquakes</span>, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic <span class="hlt">earthquake</span> detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. 
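A minimal sketch of the pool-based uncertainty-sampling loop this abstract describes, using synthetic two-class "features" and a plain logistic regression standing in for a real waveform classifier; everything here (data, model, query budget) is illustrative, not the authors' system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for waveform features: "noise" (0) vs "earthquake" (1)
n = 400
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(1.0, 1.0, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)

def fit_logreg(X, y, lr=0.1, steps=500):
    # Plain logistic regression by gradient descent (bias folded into weights)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Tiny labeled seed set; everything else is the unlabeled pool
labeled = list(rng.choice(n, size=4, replace=False))
pool = [i for i in range(n) if i not in labeled]

for _ in range(10):                                   # 10 expert queries
    w = fit_logreg(X[labeled], y[labeled])
    p = predict_proba(X[pool], w)
    query = pool[int(np.argmin(np.abs(p - 0.5)))]     # most uncertain sample
    labeled.append(query)                             # oracle gives true label
    pool.remove(query)

w = fit_logreg(X[labeled], y[labeled])
acc = np.mean((predict_proba(X, w) > 0.5) == y)
```

The "human-in-the-loop" step is the oracle lookup: in practice an analyst, not `y`, would supply the label for each queried waveform.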
We demonstrate the potential of active machine learning to improve <span class="hlt">earthquake</span> detection performance with limited available training data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.T21D2851S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.T21D2851S"><span>Energy-to-Moment ratios for Deep <span class="hlt">Earthquakes</span>: No evidence for scofflaws</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Saloor, N.; Okal, E. A.</p> <p>2015-12-01</p> <p>Energy-to-moment ratios can provide information on the distribution of seismic source spectrum between high and low frequencies, and thus identify anomalous events (either "slow" or "snappy") whose source violates seismic <span class="hlt">scaling</span> laws, the former characteristic of the so-called tsunami <span class="hlt">earthquakes</span> (e.g., Mentawai, 2010), the latter featuring enhanced acceleration and destruction (e.g., Christchurch, 2011). We extend to deep <span class="hlt">earthquakes</span> the concept of the slowness parameter, Θ = log10(EE/M0), introduced by Newman and Okal [1998], where the estimated energy EE is computed for an average focal mechanism and depth (in the range 300-690 km). We find that only minor modifications of the algorithm are necessary to adapt it to deep <span class="hlt">earthquakes</span>. The analysis of a dataset of 160 deep <span class="hlt">earthquakes</span> from the past 30 years shows that these events <span class="hlt">scale</span> with an average Θ=-4.34±0.31, corresponding to slightly greater strain release than for their shallow counterparts. However, the most important result to date is that we have not found any "outliers", i.e., violating this trend by one or more logarithmic units, as was the case for the slow events at shallow depths.
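The slowness parameter is directly computable. In the sketch below, the mean value of −4.34 and the one-logarithmic-unit outlier criterion follow the abstract, while the example energies and moments are made up:

```python
import math

def theta(energy_j, moment_nm):
    # Slowness parameter of Newman and Okal [1998]: Theta = log10(E_E / M_0)
    return math.log10(energy_j / moment_nm)

MEAN_DEEP = -4.34   # average for the deep events reported in the abstract

def is_outlier(energy_j, moment_nm, mean=MEAN_DEEP, tol=1.0):
    # "Slow" or "snappy" events deviate by one or more logarithmic units
    return abs(theta(energy_j, moment_nm) - mean) >= tol

# Hypothetical event: E_E = 4.6e13 J, M0 = 1e18 N m  ->  Theta ~ -4.34
print(theta(4.6e13, 1e18))
```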
This indicates that the processes responsible for such variations in energy distribution in the source spectrum of shallow <span class="hlt">earthquakes</span> are absent from their deep counterparts, suggesting, perhaps not unexpectedly, that the deep seismogenic zones feature more homogeneous properties than shallow ones. This includes the <span class="hlt">large</span> event of 30 May 2015 below the Bonin Islands (Θ=-4.13), which took place both deeper than, and oceanwards of, the otherwise documented Wadati-Benioff Zone.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFMNH23A1551S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFMNH23A1551S"><span>An investigation on seismo-ionospheric precursors in various <span class="hlt">earthquake</span> zones</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Su, Y.; Liu, J. G.; Chen, M.</p> <p>2011-12-01</p> <p>This paper examines the relationships between the ionosphere and <span class="hlt">earthquakes</span> occurring in different <span class="hlt">earthquake</span> zones (e.g., the Malaysia area, the Tibet plateau, mid-ocean ridges, and the Andes) to reveal the possible seismo-ionospheric precursors for these areas. Because the lithology, the focal mechanisms of <span class="hlt">earthquakes</span>, and the electrodynamics of the ionosphere differ among these areas, the ionospheric responses preceding <span class="hlt">large</span> <span class="hlt">earthquakes</span> in these areas are likely to differ as well.
In addition to statistical analyses of enhancement and depletion anomalies in the ionospheric electron density during the few days before <span class="hlt">large</span> <span class="hlt">earthquakes</span>, we focus on the seismo-ionospheric precursors for oceanic and land <span class="hlt">earthquakes</span> as well as for <span class="hlt">earthquakes</span> with different focal mechanisms.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1875c0011G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1875c0011G"><span>Assessment of precast beam-column using capacity demand response spectrum subject to design basis <span class="hlt">earthquake</span> and maximum considered <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghani, Kay Dora Abd.; Tukiar, Mohd Azuan; Hamid, Nor Hayati Abdul</p> <p>2017-08-01</p> <p>Malaysia is surrounded by the tectonic features of the Sumatera area, which consists of two seismically active inter-plate boundaries: the Indo-Australian and Eurasian Plates on the west and the Philippine Plate on the east. Hence, Malaysia experiences tremors from distant <span class="hlt">earthquakes</span> occurring in Banda Aceh, Nias Island, Padang, and other parts of Sumatera, Indonesia. To assess the safety of precast buildings in Malaysia under near-field ground motion, response spectrum analysis can be used to deal with future <span class="hlt">earthquakes</span> whose specific nature is unknown. This paper aims to develop capacity-demand response spectra for the Design Basis <span class="hlt">Earthquake</span> (DBE) and the Maximum Considered <span class="hlt">Earthquake</span> (MCE) in order to assess the performance of precast beam-column joints.
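The Type 1 spectra referenced in this study can be sketched with the Eurocode 8 elastic response spectrum shape; this is a hedged illustration in which the corner periods and soil factor are placeholder ground-type values, not the study's site parameters:

```python
def ec8_elastic_spectrum(T, ag, S=1.2, TB=0.15, TC=0.5, TD=2.0, eta=1.0):
    """Elastic acceleration response spectrum Se(T) with the Eurocode 8
    Type 1 shape. TB/TC/TD and soil factor S here are illustrative
    placeholder values, not the parameters used in the study."""
    if T < 0:
        raise ValueError("period must be non-negative")
    if T <= TB:                      # rising branch
        return ag * S * (1.0 + (T / TB) * (eta * 2.5 - 1.0))
    if T <= TC:                      # constant-acceleration plateau
        return ag * S * eta * 2.5
    if T <= TD:                      # constant-velocity branch
        return ag * S * eta * 2.5 * (TC / T)
    return ag * S * eta * 2.5 * (TC * TD / T ** 2)  # constant displacement

ag = 0.22  # MCE-level PGA quoted in the abstract, in g
print(round(ec8_elastic_spectrum(0.0, ag), 3))  # equals ag * S at T = 0
print(round(ec8_elastic_spectrum(0.3, ag), 3))  # plateau value, 2.5 * ag * S
```

Capacity-demand assessment then compares such a demand spectrum against the capacity curve of the joint; a joint whose capacity falls below the demand curve at its natural period is predicted to be damaged.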
From the capacity-demand response spectrum analysis, it can be concluded that the precast beam-column joints would not survive <span class="hlt">earthquake</span> excitation with a surface-wave magnitude of more than 5.5 on the Richter <span class="hlt">scale</span> (Type 1 spectra). This means that a beam-column joint designed to the current code of practice (BS 8110) would be severely damaged by high <span class="hlt">earthquake</span> excitation. The capacity-demand response spectrum analysis also shows that the precast beam-column joints in the prototype studied would be severely damaged by a Maximum Considered <span class="hlt">Earthquake</span> (MCE) with PGA = 0.22g and a surface-wave magnitude of more than 5.5 on the Richter <span class="hlt">scale</span> (Type 1 spectra).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFMNH43D..01K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFMNH43D..01K"><span>Geological, Geophysical, and Stochastic Factors in Nepal's Gorkha <span class="hlt">Earthquake</span>-Triggered Landslide Distribution</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kargel, J. S.; Shugar, D. H.; Haritashya, U. K.; Leonard, G. J.; Fielding, E. J.; Hudnut, K. W.; Jibson, R.; Collins, B. D.</p> <p>2015-12-01</p> <p>On 25 April 2015, a magnitude 7.8 <span class="hlt">earthquake</span> struck Nepal. Subsequently, many <span class="hlt">large</span> aftershocks shook the region, including one of magnitude 7.3. The event caused extensive damage and triggered over 4,300 landslides. The landslides were mapped by a volunteer group that self-organized to undertake an emergency response to the <span class="hlt">earthquake</span> disaster. The number of landslides is smaller than expected from the total released seismic energy.
This may be because of the lack of a surface rupture, and possibly also because of high surface-wave attenuation due to rugged surface topography or to the geological and geophysical characteristics of the upper crust. The observed landslides were primarily in the southern half of the Himalaya, in areas where the steepest slopes occur and where peak ground accelerations were relatively high. The landslides are also concentrated on the tectonically downdropped block. However, the distribution is complex and varies dramatically from valley to valley. Furthermore, different types of landslides are concentrated in different geologic materials, which suggests that local factors control the valley-<span class="hlt">scale</span> attenuation or amplification of seismic waves, or the way wave disturbances couple to the local geologic materials. Across the <span class="hlt">earthquake</span>-affected zone at the regional <span class="hlt">scale</span>, wave attenuation, together with net downdrop and uplift, may explain as much about the distribution of landslides as slope steepness and distance from <span class="hlt">large</span> slips on the fault.
We will offer the regional distribution results and some specific case studies to illustrate a set of possible controlling factors.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1919283M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1919283M"><span>Structural controls on the <span class="hlt">large</span> landslides triggered by the 14 November 2016, MW 7.8 <span class="hlt">Earthquake</span>, Kaikoura, New Zealand</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Massey, Chris</p> <p>2017-04-01</p> <p>The Kaikoura <span class="hlt">earthquake</span> generated tens of thousands of landslides over a total area of about 10,000 km2, with the majority concentrated in a smaller area of about 3,500 km2. A noteworthy aspect of this event is the <span class="hlt">large</span> number of landslides that occurred on the steep coastal cliffs south of Ward and extending to Oaro, north of Christchurch, which led to the closure of state highway routes. Another noteworthy feature of this <span class="hlt">earthquake</span> is the <span class="hlt">large</span> number (more than 190) of valley-blocking landslides it generated. This was partly due to the presence of steep and confined slopes in areas of strong ground shaking. The largest valley-blocking landslide has an approximate volume of 12(±2) M m3, and the debris travelled about 2.7 km down slope, forming a dam on the Hapuku River. Given the sparse population in the vicinity of the landslides, only a few homes were impacted and there were no recorded deaths due to landslides. However, the long-term stability of cracked slopes and landslide "dams" under future strong <span class="hlt">earthquakes</span> and significant rain events is an ongoing concern to the central and local government agencies responsible for rebuilding homes and infrastructure.
A particular concern is the potential for debris floods to affect downstream residences and infrastructure should some of the landslide dams breach catastrophically. The mapped landslide distribution reflects the complexity of the <span class="hlt">earthquake</span> rupture—at least 13 faults ruptured to the ground surface or sea floor. The majority of landslides occurred in two geological and geotechnically distinct materials: Neogene sedimentary rocks (sandstones, limestones and siltstones) where first-time and reactivated rock-slides were the dominant landslide type, and Torlesse "basement" rocks (greywacke sandstones and argillite) where first-time rock and debris avalanches dominated. The largest landslides triggered by the <span class="hlt">earthquake</span> are located either on or adjacent to faults that ruptured to the ground surface and so they</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008PhyEd..43..136M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008PhyEd..43..136M"><span>The physics of an <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McCloskey, John</p> <p>2008-03-01</p> <p>The Sumatra-Andaman <span class="hlt">earthquake</span> of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the <span class="hlt">scale</span> of the devastation and the likelihood of another equally destructive <span class="hlt">earthquake</span> set out a series of challenges of how we might use science not only to understand the <span class="hlt">earthquake</span> and its aftermath but also to help in planning for future <span class="hlt">earthquakes</span> in the region. In this article a brief account of these efforts is presented. 
<span class="hlt">Earthquake</span> prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future <span class="hlt">earthquake</span> are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another <span class="hlt">earthquake</span> occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016GeoRL..4312036D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016GeoRL..4312036D"><span>Statistical tests of simple <span class="hlt">earthquake</span> cycle models</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>DeVries, Phoebe M. R.; Evans, Eileen L.</p> <p>2016-12-01</p> <p>A central goal of observing and modeling the <span class="hlt">earthquake</span> cycle is to forecast when a particular fault may generate an <span class="hlt">earthquake</span>: a fault late in its <span class="hlt">earthquake</span> cycle may be more likely to generate an <span class="hlt">earthquake</span> than a fault early in its <span class="hlt">earthquake</span> cycle. Models that can explain geodetic observations throughout the entire <span class="hlt">earthquake</span> cycle may be required to gain a more complete understanding of relevant physics and phenomenology. 
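The Monte Carlo approach described in the tsunami article above amounts to sampling many hypothetical future events and tabulating their consequences; a toy sketch, with an illustrative truncated Gutenberg-Richter source and a placeholder wave-height proxy in place of a real tsunami propagation model:

```python
import math
import random

def sample_magnitude(rng, m_min=7.0, m_max=9.5, b=1.0):
    """Inverse-transform draw from a truncated Gutenberg-Richter
    magnitude distribution (b-value and bounds are illustrative)."""
    u = rng.random()
    c = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return m_min - (1.0 / b) * math.log10(1.0 - u * c)

def toy_wave_height(m):
    """Placeholder wave-height proxy growing exponentially with
    magnitude; a real study would run a tsunami propagation model."""
    return 0.5 * 10.0 ** (0.5 * (m - 7.0))

rng = random.Random(42)
n = 50_000
heights = [toy_wave_height(sample_magnitude(rng)) for _ in range(n)]

# Empirical probability that a randomly drawn future event exceeds a
# 2 m coastal wave height under this toy model.
p_exceed = sum(h > 2.0 for h in heights) / n
print(round(p_exceed, 3))
```

Each simulated event is individually unpredictable, but the ensemble gives a stable estimate of the most likely consequences, which is exactly the role the simulations play in the scenario planning described above.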
Previous efforts to develop unified <span class="hlt">earthquake</span> models for strike-slip faults have <span class="hlt">largely</span> focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic <span class="hlt">earthquake</span> cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a <span class="hlt">large</span> subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10^19 Pa s and ηM > 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted <span class="hlt">earthquake</span> cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=scale&pg=3&id=EJ1025453','ERIC'); return false;" href="https://eric.ed.gov/?q=scale&pg=3&id=EJ1025453"><span>Toward Increasing Fairness in Score <span class="hlt">Scale</span> Calibrations Employed in International <span class="hlt">Large-Scale</span> Assessments</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Oliveri, Maria Elena; von Davier, Matthias</p> <p>2014-01-01</p> <p>In this article, we investigate the creation of comparable score <span class="hlt">scales</span> across countries in international assessments.
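The two-sample Kolmogorov-Smirnov test used in the earthquake-cycle study above reduces to the largest vertical gap between two empirical CDFs; a minimal sketch with toy data, not the study's 15 observation sets:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    gap between the two empirical CDFs (O(n^2) loop, for clarity)."""
    na, nb = len(a), len(b)
    d = 0.0
    for x in sorted(a + b):
        fa = sum(v <= x for v in a) / na
        fb = sum(v <= x for v in b) / nb
        d = max(d, abs(fa - fb))
    return d

# Toy "observed" vs "model-predicted" samples; a large statistic
# supports rejecting the model at the chosen significance level.
print(ks_statistic([1, 2, 3, 4], [3, 4, 5, 6]))  # 0.5
```

Comparing the statistic against the critical value for the chosen α (here 0.05) is what allows models to be rejected, or not, in a distribution-free way.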
We examine potential improvements to current score <span class="hlt">scale</span> calibration procedures used in international <span class="hlt">large-scale</span> assessments. Our approach seeks to improve fairness in scoring international <span class="hlt">large-scale</span> assessments, which often…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=electric+AND+machines&pg=6&id=ED265841','ERIC'); return false;" href="https://eric.ed.gov/?q=electric+AND+machines&pg=6&id=ED265841"><span>Very <span class="hlt">Large</span> <span class="hlt">Scale</span> Integration (VLSI).</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Yeaman, Andrew R. J.</p> <p></p> <p>Very <span class="hlt">Large</span> <span class="hlt">Scale</span> Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. 
However, before full-<span class="hlt">scale</span> VLSI implementation can occur, certain salient factors must be…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28153409','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28153409"><span><span class="hlt">Large</span> <span class="hlt">scale</span> centrifuge test of a geomembrane-lined landfill subject to waste settlement and seismic loading.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kavazanjian, Edward; Gutierrez, Angel</p> <p>2017-10-01</p> <p>A <span class="hlt">large</span> <span class="hlt">scale</span> centrifuge test of a geomembrane-lined landfill subject to waste settlement and seismic loading was conducted to help validate a numerical model for performance-based design of geomembrane liner systems. The test was conducted using the 240g-ton centrifuge at the University of California at Davis under the U.S. National Science Foundation Network for <span class="hlt">Earthquake</span> Engineering Simulation Research (NEESR) program. A 0.05 mm thin-film membrane was used to model the liner. The waste was modeled using a peat-sand mixture. The side slope membrane was underlain by lubricated low density polyethylene to maximize the difference between the interface shear strength on the top and bottom of the geomembrane and the induced tension in it. Instrumentation included thin-film strain gages to monitor geomembrane strains and accelerometers to monitor seismic excitation. The model was subjected to an input design motion intended to simulate strong ground motion from the 1995 Hyogo-ken Nanbu <span class="hlt">earthquake</span>. Results indicate that downdrag waste settlement and seismic loading together, and possibly each phenomenon individually, can induce potentially damaging tensile strains in geomembrane liners.
The data collected from this test are publicly available and can be used to validate numerical models for the performance of geomembrane liner systems. Published by Elsevier Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JGRB..121.3586B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JGRB..121.3586B"><span>Bayesian probabilities for Mw 9.0+ <span class="hlt">earthquakes</span> in the Aleutian Islands from a regionally <span class="hlt">scaled</span> global rate</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Butler, Rhett; Frazer, L. Neil; Templeton, William J.</p> <p>2016-05-01</p> <p>We use the global rate of Mw ≥ 9.0 <span class="hlt">earthquakes</span>, and standard Bayesian procedures, to estimate the probability of such mega events in the Aleutian Islands, where they pose a significant risk to Hawaii. We find that the probability of such an <span class="hlt">earthquake</span> along the Aleutian island arc is 6.5% to 12% over the next 50 years (50% credibility interval) and that the annualized risk to Hawai'i is about $30 M. Our method (the regionally <span class="hlt">scaled</span> global rate method, or RSGR) is to <span class="hlt">scale</span> the global rate of Mw 9.0+ events in proportion to the fraction of global subduction (units of area per year) that takes place in the Aleutians. The RSGR method assumes that Mw 9.0+ events are a Poisson process with a rate that is both globally and regionally stationary on the time <span class="hlt">scale</span> of centuries, and it follows the principle of Burbidge et al. (2008), who used the product of fault length and convergence rate, i.e., the area being subducted per annum, to <span class="hlt">scale</span> the Poisson rate for the GSS to sections of the Indonesian subduction zone.
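The RSGR calculation above comes down to scaling a Poisson rate by a regional area fraction and converting it to a time-window probability; a sketch with placeholder numbers, not the paper's subduction-area fractions or posterior rates:

```python
import math

def regional_rate(global_rate_per_yr, area_fraction):
    """RSGR: scale the global Mw 9.0+ Poisson rate by the region's
    share of global subduction area."""
    return global_rate_per_yr * area_fraction

def window_probability(rate_per_yr, years):
    """Poisson probability of at least one event in a time window."""
    return 1.0 - math.exp(-rate_per_yr * years)

# Placeholder inputs: ~5 Mw 9.0+ events per century globally and a 4%
# regional share of global subduction area (not the paper's values).
lam = regional_rate(0.05, 0.04)
p50 = window_probability(lam, 50.0)
print(round(p50, 3))  # probability of at least one event in 50 years
```

The Bayesian step in the paper replaces the point rate with a posterior distribution over rates, which is what produces the 50% credibility interval quoted above.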
Before applying RSGR to the Aleutians, we first apply it to five other regions of the global subduction system where its rate predictions can be compared with those from paleotsunami, paleoseismic, and geoarcheology data. To obtain regional rates from paleodata, we give a closed-form solution for the probability density function of the Poisson rate when event count and observation time are both uncertain.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>