Sample records for earthquake probabilities WGCEP

  1. Future WGCEP Models and the Need for Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2008-12-01

The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).

  2. Conditional, Time-Dependent Probabilities for Segmented Type-A Faults in the WGCEP UCERF 2

    USGS Publications Warehouse

    Field, Edward H.; Gupta, Vipin

    2008-01-01

This appendix presents elastic-rebound-theory (ERT) motivated time-dependent probabilities, conditioned on the date of last earthquake, for the segmented type-A fault models of the 2007 Working Group on California Earthquake Probabilities (WGCEP). These probabilities are included as one option in the WGCEP's Uniform California Earthquake Rupture Forecast 2 (UCERF 2), with the other options being time-independent Poisson probabilities and an 'Empirical' model based on observed seismicity rate changes. A more general discussion of the pros and cons of all methods for computing time-dependent probabilities, as well as the justification of those chosen for UCERF 2, is given in the main body of this report (and the 'Empirical' model is also discussed in Appendix M). What this appendix addresses is the computation of conditional, time-dependent probabilities when both single- and multi-segment ruptures are included in the model. Computing conditional probabilities is relatively straightforward when a fault is assumed to obey strict segmentation in the sense that no multi-segment ruptures occur (e.g., WGCEP (1988, 1990) or see Field (2007) for a review of all previous WGCEPs; from here we assume basic familiarity with conditional probability calculations). However, and as we'll see below, the calculation is not straightforward when multi-segment ruptures are included, in essence because we are attempting to apply a point-process model to a non-point process. The next section gives a review and evaluation of the single- and multi-segment rupture probability-calculation methods used in the most recent statewide forecast for California (WGCEP UCERF 1; Petersen et al., 2007). We then present results for the methodology adopted here for UCERF 2. We finish with a discussion of issues and possible alternative approaches that could be explored and perhaps applied in the future. A fault-by-fault comparison of UCERF 2 probabilities with those of previous studies is given in the
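
    The core calculation the appendix describes can be illustrated compactly. Below is a minimal Python sketch (not the WGCEP's code) of a conditional, time-dependent rupture probability from a Brownian Passage Time renewal model, conditioned on the time since the last earthquake; the mean recurrence, aperiodicity, elapsed time, and forecast duration are illustrative placeholders, not UCERF 2 inputs.

    ```python
    # Minimal sketch of a conditional, time-dependent rupture probability from
    # a Brownian Passage Time (inverse-Gaussian) renewal model, conditioned on
    # the time elapsed since the last earthquake. Parameter values are
    # illustrative placeholders only.
    import math

    def normal_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bpt_cdf(t, mu, alpha):
        """CDF of the BPT distribution with mean mu and aperiodicity alpha."""
        if t <= 0.0:
            return 0.0
        lam = mu / alpha ** 2  # equivalent inverse-Gaussian shape parameter
        a = math.sqrt(lam / t)
        return (normal_cdf(a * (t / mu - 1.0))
                + math.exp(2.0 * lam / mu) * normal_cdf(-a * (t / mu + 1.0)))

    def conditional_prob(elapsed, duration, mu, alpha):
        """P(rupture in [elapsed, elapsed + duration] | quiet through elapsed)."""
        f0 = bpt_cdf(elapsed, mu, alpha)
        return (bpt_cdf(elapsed + duration, mu, alpha) - f0) / (1.0 - f0)

    # Example: 150-yr mean recurrence, aperiodicity 0.5, 100 yr since the last
    # event, 30-yr forecast window.
    print(conditional_prob(100.0, 30.0, mu=150.0, alpha=0.5))
    ```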

  3. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 earthquakes in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
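
    For readers comparing the 30-yr numbers quoted above, the Poisson bookkeeping involved is simple. The sketch below (an illustration, not the paper's empirical model) converts between an annual rate and the probability of one or more events in a window, and back-calculates the rate implied by the 0.42 figure.

    ```python
    # Illustrative Poisson bookkeeping for "one or more events in 30 yr"
    # figures; the 0.42 value is from the abstract, the implied rate derived.
    import math

    def prob_one_or_more(rate_per_yr, years):
        """Poisson probability of at least one event in the window."""
        return 1.0 - math.exp(-rate_per_yr * years)

    def rate_from_prob(prob, years):
        """Equivalent annual rate implied by a stated window probability."""
        return -math.log(1.0 - prob) / years

    rate = rate_from_prob(0.42, 30.0)       # ~0.018 events/yr
    print(rate, 1.0 / rate)                 # i.e., one event per ~55 yr on average
    print(prob_one_or_more(rate, 30.0))     # recovers 0.42
    ```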

  4. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an un-segmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  5. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.

  6. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

2007 Working Group on California Earthquake Probabilities

    2008-01-01

California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).

  7. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.

    2008-01-01

This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards), are part of the USGS Quaternary Fault and Fold Database (http://earthquake.usgs.gov/regional/qfaults/). CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.

  8. Assessing Lay Understanding of Common Presentations of Earthquake Hazard Information

    NASA Astrophysics Data System (ADS)

    Thompson, K. J.; Krantz, D. H.

    2010-12-01

The Working Group on California Earthquake Probabilities (WGCEP) includes, in its introduction to earthquake rupture forecast maps, the assertion that "In daily living, people are used to making decisions based on probabilities -- from the flip of a coin (50% probability of heads) to weather forecasts (such as a 30% chance of rain) to the annual chance of being killed by lightning (about 0.0003%)." [3] However, psychology research identifies a large gap between lay and expert perception of risk for various hazards [2], and cognitive psychologists have shown in numerous studies [1,4-6] that people neglect, distort, misjudge, or misuse probabilities, even when given strong guidelines about the meaning of numerical or verbally stated probabilities [7]. The gap between lay and expert use of probability needs to be recognized more clearly by scientific organizations such as WGCEP. This study undertakes to determine how the lay public interprets earthquake hazard information, as presented in graphical map form by the Uniform California Earthquake Rupture Forecast (UCERF), compiled by the WGCEP and other bodies including the USGS and CGS. It also explores alternate ways of presenting hazard data, to determine which presentation format most effectively translates information from scientists to the public. Participants both from California and from elsewhere in the United States are included, to determine whether familiarity -- either with the experience of an earthquake, or with the geography of the forecast area -- affects people's ability to interpret an earthquake hazards map. We hope that the comparisons between the interpretations by scientific experts and by different groups of laypeople will both enhance theoretical understanding of factors that affect information transmission and assist bodies such as the WGCEP in their laudable attempts to help people prepare themselves and their communities for possible natural hazards. [1] Kahneman, D & Tversky, A (1979). Prospect

  9. WGCEP Historical California Earthquake Catalog

    USGS Publications Warehouse

    Felzer, Karen R.; Cao, Tianqing

    2008-01-01

This appendix provides an earthquake catalog for California and the surrounding area. Our goal is to provide a listing for all known M ≥ 5.5 earthquakes that occurred from 1850-1932 and all known M ≥ 4.0 earthquakes that occurred from 1932-2006 within the region of 31.0 to 43.0 degrees North and -126.0 to -114.0 degrees West. Some pre-1932 earthquakes of M 4-5.5 are also listed, although the catalog is incomplete at these magnitudes, which predate the Northern California network being online. Some earthquakes from 1900-1932, and particularly from 1910-1932, are also based on instrumental readings, but the quality of the instrumental record and the resulting analysis are much less precise than for later listings. A partial exception is for some of the largest earthquakes, such as the San Francisco earthquake of April 18, 1906, for which global teleseismic records (Wald et al., 1993) and geodetic measurements (Thatcher et al., 1997) have been used to help determine magnitudes.

  10. Evidence for surface rupture in 1868 on the Hayward Fault in North Oakland and major rupturing in prehistoric earthquakes

    NASA Astrophysics Data System (ADS)

    Lienkaemper, James J.; Williams, Patrick L.

    1999-07-01

    WGCEP90 estimated the Hayward fault to have a high probability (0.45 in 30 yr) of producing a future M7 Bay Area earthquake. This was based on a generic recurrence time and an unverified segmentation model, because there were few direct observations for the southern fault and none for the northern Hayward fault. To better constrain recurrence and segmentation of the northern Hayward fault, we trenched in north Oakland. Unexpectedly, we observed evidence of surface rupture probably from the M7 1868 earthquake. This extends the limit of that surface rupture 13 km north of the segmentation boundary used in the WGCEP90 model and forces serious re-evaluation of the current two-segment paradigm. Although we found that major prehistoric ruptures have occurred here, we could not radiocarbon date them. However, the last major prehistoric event appears correlative with a recently recognized event 13 km to the north dated AD 1640-1776.

  11. Evidence for surface rupture in 1868 on the Hayward fault in north Oakland and major rupturing in prehistoric earthquakes

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.

    1999-01-01

    WGCEP90 estimated the Hayward fault to have a high probability (0.45 in 30 yr) of producing a future M7 Bay Area earthquake. This was based on a generic recurrence time and an unverified segmentation model, because there were few direct observations for the southern fault and none for the northern Hayward fault. To better constrain recurrence and segmentation of the northern Hayward fault, we trenched in north Oakland. Unexpectedly, we observed evidence of surface rupture probably from the M7 1868 earthquake. This extends the limit of that surface rupture 13 km north of the segmentation boundary used in the WGCEP90 model and forces serious re-evaluation of the current two-segment paradigm. Although we found that major prehistoric ruptures have occurred here, we could not radiocarbon date them. However, the last major prehistoric event appears correlative with a recently recognized event 13 km to the north dated AD 1640-1776. Copyright 1999 by the American Geophysical Union.

  12. Long‐term time‐dependent probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua

    2015-01-01

    The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the

  13. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake

    USGS Publications Warehouse

    Jones, Lucile M.

    1994-01-01

The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because nonforeshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.

  14. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
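
    A minimal sketch of the counting idea this abstract describes, under assumed placeholder parameters (the scale and shape of the Weibull law below are not the study's fitted values): the number of small events since the last large one plays the role of the "time" variable.

    ```python
    # Hedged sketch of the counting method: convert the count of small events
    # since the last large event into a conditional probability via a Weibull
    # law. Scale and shape values are placeholders, not the study's fits.
    import math

    def weibull_cdf(n, scale, shape):
        return 1.0 - math.exp(-((n / scale) ** shape))

    def conditional_large_event_prob(count, delta_n, scale, shape):
        """P(large event within the next delta_n small events | count so far)."""
        f0 = weibull_cdf(count, scale, shape)
        f1 = weibull_cdf(count + delta_n, scale, shape)
        return (f1 - f0) / (1.0 - f0)

    # Example: ~500 small events between large ones on average (scale), mild
    # quasi-periodicity (shape > 1), 400 small events observed so far.
    print(conditional_large_event_prob(400, 100, scale=500.0, shape=1.5))
    ```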

  15. A post-Tohoku earthquake review of earthquake probabilities in the Southern Kanto District, Japan

    NASA Astrophysics Data System (ADS)

    Somerville, Paul G.

    2014-12-01

The 2011 Mw 9.0 Tohoku earthquake generated an aftershock sequence that affected a large part of northern Honshu, and has given rise to widely divergent forecasts of changes in earthquake occurrence probabilities in northern Honshu. The objective of this review is to assess these forecasts as they relate to potential changes in the occurrence probabilities of damaging earthquakes in the Kanto Region. It is generally agreed that the 2011 Mw 9.0 Tohoku earthquake increased the stress on faults in the southern Kanto district. Toda and Stein (Geophys Res Lett 686, 40: doi:10.1002, 2013) further conclude that the probability of earthquakes in the Kanto Corridor has increased by a factor of 2.5 for the time period 11 March 2013 to 10 March 2018. Estimates of earthquake probabilities in a wider region of the Southern Kanto District by Nanjo et al. (Geophys J Int, doi:10.1093, 2013) indicate that any increase in the probability of earthquakes is insignificant in this larger region. Uchida et al. (Earth Planet Sci Lett 374: 81-91, 2013) conclude that the Philippine Sea plate extends well north of the northern margin of Tokyo Bay, inconsistent with the Kanto Fragment hypothesis of Toda et al. (Nat Geosci, 1:1-6, 2008), which attributes deep earthquakes in this region, which they term the Kanto Corridor, to a broken fragment of the Pacific plate. The results of Uchida and Matsuzawa (J Geophys Res 115:B07309, 2013) support the conclusion that fault creep in southern Kanto may be slowly relaxing the stress increase caused by the Tohoku earthquake without causing more large earthquakes. Stress transfer calculations indicate a large stress transfer to the Off Boso Segment as a result of the 2011 Tohoku earthquake. However, Ozawa et al. (J Geophys Res 117:B07404, 2012) used onshore GPS measurements to infer large post-Tohoku creep on the plate interface in the Off-Boso region, and Uchida and Matsuzawa (ibid.) measured similar large creep off the Boso

  16. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    NASA Astrophysics Data System (ADS)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a 'really big one' will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago, than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write and display spreadsheets, like those shown below. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record, or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of
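
    The authors' spreadsheets are not reproduced in this record; the short Python stand-in below mirrors the exercise, with illustrative recurrence values (not the study's paleoseismic estimates) for the within-cluster and full-record assumptions.

    ```python
    # Stand-in for the classroom spreadsheets: how a 50-yr Cascadia probability
    # moves with the assumed mean recurrence. Both recurrence values are
    # illustrative placeholders.
    import math

    def poisson_prob(mean_recurrence_yr, window_yr):
        return 1.0 - math.exp(-window_yr / mean_recurrence_yr)

    for label, mean_rec in [("within-cluster average", 330.0),
                            ("full-record average", 550.0)]:
        print(f"{label}: {mean_rec:.0f} yr recurrence -> "
              f"50-yr probability {poisson_prob(mean_rec, 50.0):.0%}")
    ```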

  17. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better

  18. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined by the WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  19. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
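
    A hedged sketch of the style of experiment the abstract describes: treat the static stress step as a clock advance (stress change divided by stressing rate) in a renewal model, and compare forecasts across random draws of the recurrence parameters. The lognormal renewal model and every numeric value below are illustrative assumptions, not the paper's inputs.

    ```python
    # Stress step applied as a renewal-model clock advance, compared across
    # random recurrence-parameter draws. All values are illustrative.
    import math
    import random

    def lognormal_cdf(t, median, sigma):
        if t <= 0.0:
            return 0.0
        return 0.5 * (1.0 + math.erf(math.log(t / median) / (sigma * math.sqrt(2.0))))

    def cond_prob(elapsed, window, median, sigma):
        f0 = lognormal_cdf(elapsed, median, sigma)
        return (lognormal_cdf(elapsed + window, median, sigma) - f0) / (1.0 - f0)

    random.seed(0)
    stress_step, stressing_rate = 1.0, 0.05      # bar, bar/yr -> 20-yr advance
    advance = stress_step / stressing_rate       # a 20:1 ratio, cf. the abstract
    base, pert = [], []
    for _ in range(2000):                        # recurrence-variability draws
        median = max(50.0, random.gauss(200.0, 30.0))   # median recurrence, yr
        sigma = random.uniform(0.3, 0.7)                # variability proxy
        base.append(cond_prob(120.0, 30.0, median, sigma))
        pert.append(cond_prob(120.0 + advance, 30.0, median, sigma))

    # Mean 30-yr probability before and after the stress step.
    print(sum(base) / len(base), sum(pert) / len(pert))
    ```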

  20. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
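
    The Gutenberg-Richter figure quoted above can be reproduced with the standard Reasenberg-Jones aftershock-rate formulation. The sketch below uses the commonly cited generic California parameters (a = -1.67, b = 0.91, p = 1.08, c = 0.05 day), which should be treated as assumptions of this illustration rather than the paper's exact inputs.

    ```python
    # Sketch reproducing the ~0.0009 Gutenberg-Richter figure via the
    # Reasenberg-Jones aftershock rate with generic California parameters
    # (treated here as assumptions).
    import math

    def expected_aftershocks(mainshock_m, target_m, days,
                             a=-1.67, b=0.91, p=1.08, c=0.05):
        """Expected number of M >= target_m events within `days` of the mainshock."""
        productivity = 10.0 ** (a + b * (mainshock_m - target_m))
        time_integral = ((days + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)
        return productivity * time_integral

    n = expected_aftershocks(4.8, 7.0, 3.0)
    print(1.0 - math.exp(-n))   # ~0.0009 for an M 7 within three days
    ```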

  1. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have developed a framework based on a simple, generalized rate change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, where the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (probability density function, or PDF) that describes the population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.

  2. Probabilities of Earthquake Occurrences along the Sumatra-Andaman Subduction Zone

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi

    2017-03-01

Earthquake activities along the Sumatra-Andaman Subduction Zone (SASZ) were clarified using the derived frequency-magnitude distribution in terms of the (i) most probable maximum magnitudes, (ii) return periods and (iii) probabilities of earthquake occurrences. The northern segment of the SASZ, along the western coast of Myanmar to southern Nicobar, was found to be capable of generating an earthquake of magnitude 6.1-6.4 Mw in the next 30-50 years, whilst the southern segment, offshore of the northwestern and western parts of Sumatra (defined as a high hazard region), had a short recurrence interval of 6-12 and 10-30 years for a 6.0 and 7.0 Mw magnitude earthquake, respectively, compared to the other regions. Throughout the area along the SASZ, there is a 70% to almost 100% probability that an earthquake of Mw up to 6.0 might be generated in the next 50 years, whilst the northern segment has less than a 50% chance of a 7.0 Mw earthquake occurring in the next 50 years. Although Rangoon was defined as having the lowest hazard among the major cities in the vicinity of the SASZ, there is a 90% chance of a 6.0 Mw earthquake there in the next 50 years. Therefore, an effective seismic hazard mitigation plan should be developed.

  3. Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.

    2013-05-01

Seismicity in southern Kanto activated with the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at present or in the future? To answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probability calculations, often used for aftershock hazard assessment, which rest on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ≈ 1. Then, there is good agreement with the OU law with p ≈ 0.5, which indicates that the slow decay was notably significant. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
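
    A hedged sketch of the recipe described above: GR scaling (b ≈ 1) combined with OU decay (p ≈ 0.5) gives an expected count of M ≥ m events in a future window, which converts to a probability. The productivity constant k_ref, the c-value, and the window below are placeholders, not the study's fitted values.

    ```python
    # GR + OU forecast sketch; k_ref, c, and the window are placeholders.
    import math

    def omori_integral(t1, t2, c, p):
        """Integral of (t + c)**-p from t1 to t2 (valid for p != 1)."""
        return ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)

    def prob_event(m_target, m_ref, k_ref, b, c, p, t1, t2):
        """P(at least one M >= m_target event in [t1, t2] days after the shock)."""
        k = k_ref * 10.0 ** (-b * (m_target - m_ref))   # GR scaling of the rate
        n = k * omori_integral(t1, t2, c, p)            # expected count, OU decay
        return 1.0 - math.exp(-n)

    # Example: productivity calibrated at M >= 4 (placeholder k_ref), forecast
    # of an M >= 6.5 event over a one-year window starting ~440 days after the
    # mainshock.
    print(prob_event(6.5, 4.0, k_ref=10.0, b=1.0, c=0.5, p=0.5,
                     t1=440.0, t2=805.0))
    ```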

  4. Operational Earthquake Forecasting and Decision-Making in a Low-Probability Environment

    NASA Astrophysics Data System (ADS)

Jordan, T. H.; the International Commission on Earthquake Forecasting for Civil Protection

    2011-12-01

    Operational earthquake forecasting (OEF) is the dissemination of authoritative information about the time dependence of seismic hazards to help communities prepare for potentially destructive earthquakes. Most previous work on the public utility of OEF has anticipated that forecasts would deliver high probabilities of large earthquakes; i.e., deterministic predictions with low error rates (false alarms and failures-to-predict) would be possible. This expectation has not been realized. An alternative to deterministic prediction is probabilistic forecasting based on empirical statistical models of aftershock triggering and seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains in excess of 100 relative to long-term forecasts. The utility of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing OEF in this sort of "low-probability environment." The need to move more quickly has been underscored by recent seismic crises, such as the 2009 L'Aquila earthquake sequence, in which an anxious public was confused by informal and inaccurate earthquake predictions. After the L'Aquila earthquake, the Italian Department of Civil Protection appointed an International Commission on Earthquake Forecasting (ICEF), which I chaired, to recommend guidelines for OEF utilization. Our report (Ann. Geophys., 54, 4, 2011; doi: 10.4401/ag-5350) concludes: (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and need to convey epistemic uncertainties. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. (c) All operational models should be evaluated

  5. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
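
    A numerical sketch of the unknown-date case, under the assumption (consistent with a stationary renewal process) that the elapsed time since the last event is distributed in proportion to the survivor function; the BPT renewal model and parameter values are illustrative. For a 50-yr forecast with a 150-yr mean recurrence (duration greater than ~20% of the mean), the renewal result exceeds the Poisson value, as the abstract indicates.

    ```python
    # Marginalize the conditional BPT probability over the stationary
    # elapsed-time distribution, whose density is proportional to 1 - F(tau).
    # Parameters are illustrative.
    import math

    def normal_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bpt_cdf(t, mu, alpha):
        if t <= 0.0:
            return 0.0
        lam = mu / alpha ** 2
        a = math.sqrt(lam / t)
        return (normal_cdf(a * (t / mu - 1.0))
                + math.exp(2.0 * lam / mu) * normal_cdf(-a * (t / mu + 1.0)))

    def prob_unknown_date(duration, mu, alpha, dt=0.5, t_max_factor=20.0):
        """Forecast probability when the date of the last event is unknown."""
        num = den = 0.0
        tau = 0.0
        while tau < t_max_factor * mu:
            f_tau = bpt_cdf(tau, mu, alpha)
            num += (bpt_cdf(tau + duration, mu, alpha) - f_tau) * dt
            den += (1.0 - f_tau) * dt     # integrates to the mean recurrence
            tau += dt
        return num / den

    mu, alpha, duration = 150.0, 0.5, 50.0
    print(prob_unknown_date(duration, mu, alpha))   # renewal, date unknown
    print(1.0 - math.exp(-duration / mu))           # Poisson approximation
    ```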

  6. Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey

    USGS Publications Warehouse

    Parsons, T.

    2004-01-01

New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M ≥ 7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30 year probability of an M ≥ 7 earthquake affecting Istanbul. The 30 year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.

  7. Simple Physical Model for the Probability of a Subduction- Zone Earthquake Following Slow Slip Events and Earthquakes: Application to the Hikurangi Megathrust, New Zealand

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.

    2018-05-01

    Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce any transient stress perturbations. This method has been applied to the locked Hikurangi megathrust (New Zealand) surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate the annual probability of a M≥7.8 earthquake over 1 year after the Kaikoura earthquake increases by 1.3-18 times relative to the pre-Kaikoura probability, and the absolute probability is in the range of 0.6-7%. We find that probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.

  8. Predicted Surface Displacements for Scenario Earthquakes in the San Francisco Bay Region

    USGS Publications Warehouse

    Murray-Moraleda, Jessica R.

    2008-01-01

    In the immediate aftermath of a major earthquake, the U.S. Geological Survey (USGS) will be called upon to provide information on the characteristics of the event to emergency responders and the media. One such piece of information is the expected surface displacement due to the earthquake. In conducting probabilistic hazard analyses for the San Francisco Bay Region, the Working Group on California Earthquake Probabilities (WGCEP) identified a series of scenario earthquakes involving the major faults of the region, and these were used in their 2003 report (hereafter referred to as WG03) and the recently released 2008 Uniform California Earthquake Rupture Forecast (UCERF). Here I present a collection of maps depicting the expected surface displacement resulting from those scenario earthquakes. The USGS has conducted frequent Global Positioning System (GPS) surveys throughout northern California for nearly two decades, generating a solid baseline of interseismic measurements. Following an earthquake, temporary GPS deployments at these sites will be important to augment the spatial coverage provided by continuous GPS sites for recording postseismic deformation, as will the acquisition of Interferometric Synthetic Aperture Radar (InSAR) scenes. The information provided in this report allows one to anticipate, for a given event, where the largest displacements are likely to occur. This information is valuable both for assessing the need for further spatial densification of GPS coverage before an event and prioritizing sites to resurvey and InSAR data to acquire in the immediate aftermath of the earthquake. In addition, these maps are envisioned to be a resource for scientists in communicating with emergency responders and members of the press, particularly during the time immediately after a major earthquake before displacements recorded by continuous GPS stations are available.

  9. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes the use of best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault from literature sources. Our study aims to apply this model to the Taipei metropolitan area with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years at 13.54% and within 300 years at 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probability of Taiwan residents experiencing heart disease or malignant neoplasm is 11.5% and 29%, respectively. The inference of this study is that the risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as that of suffering from a heart attack or other health ailments.

  10. A-Priori Rupture Models for Northern California Type-A Faults

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Field, Edward H.

    2008-01-01

This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least-squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003; referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the 1996 NSHMP and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon, and Concord-Green Valley) be modeled as Type B faults, to be consistent with similarly poorly-known faults statewide.

  11. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
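
    The quoted hazard-function behavior is easy to verify numerically. The sketch below (an illustration, not the authors' code) evaluates the BPT hazard rate for α = 0.5 and μ = 1 and shows it staying near 2/μ for times greater than μ; the asymptotic failure rate 1/(2α²μ) is a standard property of the distribution.

    ```python
    # Numerical check of the BPT hazard function for aperiodicity 0.5; the
    # asymptotic failure rate is 1/(2*alpha**2*mu) = 2/mu for alpha = 0.5.
    import math

    def normal_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bpt_cdf(t, mu, alpha):
        lam = mu / alpha ** 2
        a = math.sqrt(lam / t)
        return (normal_cdf(a * (t / mu - 1.0))
                + math.exp(2.0 * lam / mu) * normal_cdf(-a * (t / mu + 1.0)))

    def bpt_pdf(t, mu, alpha):
        lam = mu / alpha ** 2
        return (math.sqrt(lam / (2.0 * math.pi * t ** 3))
                * math.exp(-lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t)))

    mu, alpha = 1.0, 0.5
    for t in (1.0, 2.0, 5.0):   # times in units of the mean recurrence mu
        hazard = bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))
        print(t, hazard)        # stays near ~2/mu for t > mu, per the abstract
    ```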

  12. Long-range dependence in earthquake-moment release and implications for earthquake occurrence probability.

    PubMed

    Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco

    2018-03-28

    Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.
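
    A compact sketch of the rescaled-range (R/S) procedure named above, run on synthetic white noise purely to show the mechanics; the window sizes and series are illustrative, not the study's moment-release data. White noise should return H near 0.5 (small samples bias it slightly higher), whereas a long-memory series would give H > 0.5, cf. the H ≈ 0.87 reported.

    ```python
    # Hurst rescaled-range analysis on synthetic data (illustrative).
    import math
    import random

    def rescaled_range(block):
        mean = sum(block) / len(block)
        cum, z = 0.0, []
        for x in block:
            cum += x - mean
            z.append(cum)                 # cumulative mean-adjusted deviations
        spread = max(z) - min(z)
        stdev = math.sqrt(sum((x - mean) ** 2 for x in block) / len(block))
        return spread / stdev

    def hurst_exponent(series, sizes=(8, 16, 32, 64, 128)):
        xs, ys = [], []
        for n in sizes:
            rs = [rescaled_range(series[i:i + n])
                  for i in range(0, len(series) - n + 1, n)]
            xs.append(math.log(n))
            ys.append(math.log(sum(rs) / len(rs)))
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        # Least-squares slope of log(R/S) against log(n) estimates H.
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    random.seed(1)
    print(hurst_exponent([random.gauss(0.0, 1.0) for _ in range(4096)]))
    ```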

  13. UCERF3: A new earthquake forecast for California's complex fault system

    USGS Publications Warehouse

    Field, Edward H.; ,

    2015-01-01

With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF3" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  14. Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities

    USGS Publications Warehouse

    Duross, Christopher; Olig, Susan; Schwartz, David

    2015-01-01

    Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (M0), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.

  15. Timing of paleoearthquakes on the northern Hayward Fault: preliminary evidence in El Cerrito, California

    USGS Publications Warehouse

    Lienkaemper, J.J.; Schwartz, D.P.; Kelson, K.I.; Lettis, W.R.; Simpson, Gary D.; Southon, J.R.; Wanket, J.A.; Williams, P.L.

    1999-01-01

    The Working Group on California Earthquake Probabilities estimated that the northern Hayward fault had the highest probability (0.28) of producing a M7 Bay Area earthquake in 30 years (WGCEP, 1990). This probability was based, in part, on the assumption that the last large earthquake occurred on this segment in 1836. However, a recent study of historical documents concludes that the 1836 earthquake did not occur on the northern Hayward fault, thereby extending the elapsed time to at least 220 yr ago, the beginning of the written record. The average recurrence interval for a M7 on the northern Hayward is unknown. WGCEP (1990) assumed an interval of 167 years. The 1996 Working Group on Northern California Earthquake Potential estimated ~210 yr, based on extrapolations from southern Hayward paleoseismological studies and a revised estimate of 1868 slip on the southern Hayward fault. To help constrain the timing of paleoearthquakes on the northern Hayward fault for the 1999 Bay Area probability update, we excavated two trenches that cross the fault and a sag pond on the Mira Vista golf course. As the site is on the second fairway, we were limited to less than ten days to document these trenches. Analysis was aided by rapid C-14 dating of more than 90 samples which gave near real-time results with the trenches still open. A combination of upward fault terminations, disrupted strata, and discordant angular relations indicates at least four, and possibly seven or more, surface-faulting earthquakes occurred during a 1630-2130 yr interval. Hence, average recurrence time could be <270 yr, but is no more than 710 yr. The most recent earthquake (MRE) occurred after AD 1640. Preliminary analysis of calibrated dates supports the assumption that no large historical (post-1776) earthquakes have ruptured the surface here, but the youngest dates need more corroboration. Analyses of pollen for the presence of non-native species help to constrain the time of the MRE. The earthquake

  16. Earthquake probabilities in the San Francisco Bay Region: 2000 to 2030 - a summary of findings

    USGS Publications Warehouse

    ,

    1999-01-01

    The San Francisco Bay region sits astride a dangerous “earthquake machine,” the tectonic boundary between the Pacific and North American Plates. The region has experienced major and destructive earthquakes in 1838, 1868, 1906, and 1989, and future large earthquakes are a certainty. The ability to prepare for large earthquakes is critical to saving lives and reducing damage to property and infrastructure. An increased understanding of the timing, size, location, and effects of these likely earthquakes is a necessary component in any effective program of preparedness. This study reports on the probabilities of occurrence of major earthquakes in the San Francisco Bay region (SFBR) for the three decades 2000 to 2030. The SFBR extends from Healdsburg on the northwest to Salinas on the southeast and encloses the entire metropolitan area, including its most rapidly expanding urban and suburban areas. In this study a “major” earthquake is defined as one with M≥6.7 (where M is moment magnitude). As experience from the Northridge, California (M6.7, 1994) and Kobe, Japan (M6.9, 1995) earthquakes has shown us, earthquakes of this size can have a disastrous impact on the social and economic fabric of densely urbanized areas. To reevaluate the probability of large earthquakes striking the SFBR, the U.S. Geological Survey solicited data, interpretations, and analyses from dozens of scientists representing a wide cross section of the Earth-science community (Appendix A). The primary approach of this new Working Group (WG99) was to develop a comprehensive, regional model for the long-term occurrence of earthquakes, founded on geologic and geophysical observations and constrained by plate tectonics. The model considers a broad range of observations and their possible interpretations. Using this model, we estimate the rates of occurrence of earthquakes and 30-year earthquake probabilities. Our study considers a range of magnitudes for earthquakes on the major faults in the

  17. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A = 537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power-law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
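
    The two scaling relations are compact enough to write down. The sketch below encodes them as I recall them from the cited papers (Ellsworth B: M = 4.2 + log10 A; Hanks and Bakun: M = 3.98 + log10 A up to A = 537 km², then M = 3.07 + (4/3) log10 A), with the equal weighting described above; the coefficients should be checked against the originals before any real use, and the example fault dimensions are hypothetical.

        import numpy as np

        def ellsworth_b(area_km2):
            # Ellsworth-B (WGCEP, 2003): M = 4.2 + log10(A), A in km^2.
            return 4.2 + np.log10(area_km2)

        def hanks_bakun(area_km2):
            # Bilinear relation with a slope change at A = 537 km^2.
            a = np.asarray(area_km2, dtype=float)
            return np.where(a <= 537.0,
                            3.98 + np.log10(a),
                            3.07 + (4.0 / 3.0) * np.log10(a))

        def weighted_magnitude(area_km2):
            # Equal 0.5/0.5 weighting, as adopted by the Working Group.
            return 0.5 * ellsworth_b(area_km2) + 0.5 * hanks_bakun(area_km2)

        # Hypothetical fault: L = 100 km, W = 12 km, so A = 1200 km^2.
        area = 100.0 * 12.0
        print(ellsworth_b(area), hanks_bakun(area), weighted_magnitude(area))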

  18. Probability of one or more M ≥7 earthquakes in southern California in 30 years

    USGS Publications Warehouse

    Savage, J.C.

    1994-01-01

    Eight earthquakes of magnitude greater than or equal to seven have occurred in southern California in the past 200 years. If one assumes that such events are the product of a Poisson process, the probability of one or more earthquakes of magnitude seven or larger in southern California within any 30 year interval is 67% ± 23% (95% confidence interval). Because five of the eight M ≥ 7 earthquakes in southern California in the last 200 years occurred away from the San Andreas fault system, the probability of one or more M ≥ 7 earthquakes in southern California but not on the San Andreas fault system occurring within 30 years is 52% ± 27% (95% confidence interval).
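
    The Poisson arithmetic is simple enough to reproduce. The sketch below gives the point estimates (≈70% and ≈53%), close to the paper's central values of 67% and 52%; the quoted ±23% and ±27% intervals come from propagating the uncertainty in the rate itself, which this sketch omits.

        import numpy as np

        rate = 8.0 / 200.0     # 8 M >= 7 events in 200 years
        window = 30.0          # years

        # P(at least one event) = 1 - exp(-rate * window)
        print(1.0 - np.exp(-rate * window))          # ~0.70 (paper: 67%)

        rate_off = 5.0 / 200.0  # 5 of the 8 events off the San Andreas system
        print(1.0 - np.exp(-rate_off * window))      # ~0.53 (paper: 52%)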

  19. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
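
    A minimal sketch of the fault-tree arithmetic described here: AND gates model redundant components (all must fail), OR gates model single points of failure (any one suffices). The component failure probabilities below are hypothetical, purely to show the mechanics.

        def and_gate(probs):
            # Redundant components: the subsystem fails only if all fail.
            out = 1.0
            for p in probs:
                out *= p
            return out

        def or_gate(probs):
            # Single points of failure: any component failure fails the system.
            out = 1.0
            for p in probs:
                out *= (1.0 - p)
            return 1.0 - out

        # Hypothetical data centre: power fails only if both the utility feed
        # and the backup generator fail; the facility fails if power, cooling,
        # or the server racks fail (independence assumed throughout).
        p_power = and_gate([0.10, 0.05])
        p_facility = or_gate([p_power, 0.02, 0.01])
        print(f"P(facility inoperative | shaking) = {p_facility:.3f}")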

  20. Permanently enhanced dynamic triggering probabilities as evidenced by two M ≥ 7.5 earthquakes

    USGS Publications Warehouse

    Gomberg, Joan S.

    2013-01-01

    The 2012 M7.7 Haida Gwaii earthquake radiated waves that likely dynamically triggered the 2013 M7.5 Craig earthquake, setting two precedents. First, the triggered earthquake is the largest dynamically triggered shear failure event documented to date. Second, the events highlight a connection between geologic structure, sedimentary troughs that act as waveguides, and triggering probability. The Haida Gwaii earthquake excited extraordinarily large waves within and beyond the Queen Charlotte Trough, which propagated well into mainland Alaska, likely triggering the Craig earthquake along the way. Previously, focusing and associated dynamic triggering have been attributed to unpredictable source effects. This case suggests that elevated dynamic triggering probabilities may exist along the many structures where sedimentary troughs overlie major faults, such as subduction zones’ accretionary prisms and transform faults’ axial valleys. Although data are sparse, I find no evidence of accelerating seismic activity in the vicinity of the Craig rupture between it and the Haida Gwaii earthquake.

  1. Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation

    USGS Publications Warehouse

    Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.

    2000-01-01

    We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.

  2. Stress transferred by the 1995 Mw = 6.9 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities

    USGS Publications Warehouse

    Toda, S.; Stein, R.S.; Reasenberg, P.A.; Dieterich, J.H.; Yoshida, A.

    1998-01-01

    The Kobe earthquake struck at the edge of the densely populated Osaka-Kyoto corridor in southwest Japan. We investigate how the earthquake transferred stress to nearby faults, altering their proximity to failure and thus changing earthquake probabilities. We find that relative to the pre-Kobe seismicity, Kobe aftershocks were concentrated in regions of calculated Coulomb stress increase and less common in regions of stress decrease. We quantify this relationship by forming the spatial correlation between the seismicity rate change and the Coulomb stress change. The correlation is significant for stress changes greater than 0.2-1.0 bars (0.02-0.1 MPa), and the nonlinear dependence of seismicity rate change on stress change is compatible with a state- and rate-dependent formulation for earthquake occurrence. We extend this analysis to future mainshocks by resolving the stress changes on major faults within 100 km of Kobe and calculating the change in probability caused by these stress changes. Transient effects of the stress changes are incorporated by the state-dependent constitutive relation, which amplifies the permanent stress changes during the aftershock period. Earthquake probability framed in this manner is highly time-dependent, much more so than is assumed in current practice. Because the probabilities depend on several poorly known parameters of the major faults, we estimate uncertainties of the probabilities by Monte Carlo simulation. This enables us to include uncertainties on the elapsed time since the last earthquake, the repeat time and its variability, and the period of aftershock decay. We estimate that a calculated 3-bar (0.3-MPa) stress increase on the eastern section of the Arima-Takatsuki Tectonic Line (ATTL) near Kyoto causes a fivefold increase in the 30-year probability of a subsequent large earthquake near Kyoto; a 2-bar (0.2-MPa) stress decrease on the western section of the ATTL results in a reduction in probability by a factor of 140 to
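
    The transient amplification described here follows the rate- and state-dependent formulation of Dieterich (1994), in which a Coulomb stress step Δτ multiplies the background seismicity rate by a factor that decays over the aftershock duration ta. A minimal sketch, with illustrative parameter values rather than the paper's:

        import numpy as np

        def rate_state_rate(t, delta_tau, a_sigma, t_a, r=1.0):
            # Dieterich (1994): seismicity rate after a stress step delta_tau,
            # R(t) = r / (1 + (exp(-delta_tau / A*sigma) - 1) * exp(-t / t_a)).
            gamma = (np.exp(-delta_tau / a_sigma) - 1.0) * np.exp(-t / t_a)
            return r / (1.0 + gamma)

        # Illustrative values: a 0.3 MPa (3 bar) stress increase,
        # A*sigma = 0.04 MPa, aftershock duration t_a = 10 yr.
        for t in (0.1, 1.0, 10.0, 30.0):
            print(t, rate_state_rate(t, delta_tau=0.3, a_sigma=0.04, t_a=10.0))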

  3. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks.

    PubMed

    Zhuang, Jiancang; Ogata, Yosihiko

    2006-04-01

    The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability of an earthquake being a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases when the magnitude of the background event increases. These results, obtained from a complicated clustering model, where the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.
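
    The branching logic can be illustrated with a toy Monte Carlo version of such a model (a plain Galton-Watson cascade with Gutenberg-Richter magnitudes, not the full space-time ETAS model of the paper): generate each background event's aftershock cascade and count how often some descendant exceeds the initial magnitude. The productivity constants below are hypothetical and chosen to keep the process subcritical.

        import numpy as np

        rng = np.random.default_rng(42)
        B, M0 = 1.0, 2.0       # Gutenberg-Richter b-value and magnitude cutoff
        K, ALPHA = 0.05, 0.9   # hypothetical productivity (subcritical: ALPHA < B)

        def gr_magnitude():
            # Gutenberg-Richter magnitudes are exponential above the cutoff.
            return M0 + rng.exponential(1.0 / (B * np.log(10.0)))

        def has_larger_descendant(m_parent, m_ref, depth=0):
            # Draw a Poisson number of aftershocks with magnitude-dependent
            # productivity; recurse and report any descendant above m_ref.
            if depth > 100:
                return False
            for _ in range(rng.poisson(K * 10.0 ** (ALPHA * (m_parent - M0)))):
                m = gr_magnitude()
                if m > m_ref or has_larger_descendant(m, m_ref, depth + 1):
                    return True
            return False

        # Fraction of background events that turn out to be foreshocks.
        n = 10000
        print(sum(has_larger_descendant(m, m)
                  for m in (gr_magnitude() for _ in range(n))) / n)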

  4. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    Summary: To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M=6.65 (A = 537 km²).

  5. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    USGS Publications Warehouse

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two-week-long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory-observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre-existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  6. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Among the many kinds of natural and man-made disasters, earthquakes dominate with regard to their social and economic impact on the urban environment. Global seismic risk is increasing steadily as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. Additionally, the uncontrolled growth of megacities in highly seismic areas around the world is often associated with the construction of seismically unsafe buildings and infrastructures, and undertaken with an insufficient knowledge of the regional seismicity peculiarities and seismic hazard. The assessment of seismic hazard and generation of earthquake scenarios is the first link in the prevention chain and the first step in the evaluation of the seismic risk. The earthquake scenarios are intended as a basic input for developing detailed earthquake damage scenarios for the cities and can be used in earthquake-safe town and infrastructure planning. The city of Sofia is the capital of Bulgaria. It is situated in the centre of the Sofia area, the most populated (more than 1.2 million inhabitants), industrial and cultural region of Bulgaria, which faces considerable earthquake risk. The available historical documents prove the occurrence of destructive earthquakes during the 15th-18th centuries in the Sofia zone. In the 19th century the city of Sofia experienced two strong earthquakes: the 1818 earthquake with epicentral intensity I0=8-9 MSK and the 1858 earthquake with I0=9-10 MSK. During the 20th century the strongest event to occur in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK). Almost a century later (95 years), on May 22nd, 2012, an earthquake of moment magnitude 5.6 (I0=7-8 MSK) hit the city of Sofia. In the present study, the deterministic scenario event considered is a damaging earthquake with a higher probability of occurrence that could affect the city with intensity less than or equal to VIII

  7. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 for magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was the most suitable of the three distributions in this region.
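
    With scipy, the fit-and-test workflow the authors describe (using Easyfit/Matlab) looks roughly as follows. The interevent times are synthetic stand-ins, and the Frechet case (available as scipy.stats.invweibull) is omitted for brevity; note also that because the parameters are estimated from the same data being tested, the K-S p-values are only approximate.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        interevent = rng.weibull(1.5, size=60) * 12.0   # synthetic times (yr)

        # Two-parameter Weibull (location fixed at 0) versus the
        # three-parameter Weibull with a free location parameter.
        fit2 = stats.weibull_min.fit(interevent, floc=0.0)
        fit3 = stats.weibull_min.fit(interevent)

        for label, params in (("2-par Weibull", fit2), ("3-par Weibull", fit3)):
            d, p = stats.kstest(interevent, "weibull_min", args=params)
            print(f"{label}: K-S D = {d:.3f}, p = {p:.3f}")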

  8. Earthquake recurrence models and occurrence probabilities of strong earthquakes in the North Aegean Trough (Greece)

    NASA Astrophysics Data System (ADS)

    Christos, Kourouklas; Eleftheria, Papadimitriou; George, Tsaklidis; Vassilios, Karakostas

    2018-06-01

    The determination of strong earthquakes' recurrence time above a predefined magnitude, associated with specific fault segments, is an important component of seismic hazard assessment. The occurrence of these earthquakes is neither periodic nor completely random but often clustered in time. This fact, together with their limited number due to the short span of available catalogs, inhibits a deterministic approach to recurrence time calculation, and for this reason application of stochastic processes is required. In this study, recurrence time determination in the area of the North Aegean Trough (NAT) is developed by the application of time-dependent stochastic models, introducing an elastic rebound motivated concept for individual fault segments located in the study area. For this purpose, all the available information on strong earthquakes (historical and instrumental) with Mw ≥ 6.5 is compiled and examined for magnitude completeness. Two possible starting dates of the catalog are assumed with the same magnitude threshold, Mw ≥ 6.5, and divided into five data sets according to a new segmentation model for the study area. Three Brownian Passage Time (BPT) models with different levels of aperiodicity are applied and evaluated with the Anderson-Darling test for each segment in both catalog versions where possible. The preferred models are then used to estimate the occurrence probabilities of Mw ≥ 6.5 shocks on each segment of NAT for the next 10, 20, and 30 years since 01/01/2016. Uncertainties in the probability calculations are also estimated using a Monte Carlo procedure. It must be mentioned that the provided results should be treated carefully because of their dependence on the initial assumptions, which exhibit large variability; alternative choices may return different final results.

  9. Development of damage probability matrices based on Greek earthquake damage data

    NASA Astrophysics Data System (ADS)

    Eleftheriadou, Anastasia K.; Karabinis, Athanasios I.

    2011-03-01

    A comprehensive study is presented for empirical seismic vulnerability assessment of typical structural types, representative of the building stock of Southern Europe, based on a large set of damage statistics. The observational database was obtained from post-earthquake surveys carried out in the area struck by the September 7, 1999 Athens earthquake. After analysis of the collected observational data, a unified damage database was created which comprises 180,945 damaged buildings from the near-field area of the earthquake. The damaged buildings are classified into specific structural types, according to the materials, seismic codes and construction techniques in Southern Europe. The seismic demand is described in terms of both the regional macroseismic intensity and the ratio αg/ao, where αg is the maximum peak ground acceleration (PGA) of the earthquake event and ao is the PGA value that characterizes each municipality on the Greek hazard map. The relative and cumulative frequencies of the different damage states for each structural type and each intensity level are computed in terms of damage ratio. Damage probability matrices (DPMs) and vulnerability curves are obtained for specific structural types. A comparative analysis is carried out between the produced and the existing vulnerability models.
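
    The DPM construction itself is a small amount of bookkeeping: tabulate damage-state counts per intensity level and normalize each row. A sketch with hypothetical counts (the study's database is far larger and stratified by structural type):

        import numpy as np

        # Hypothetical damage-state counts (columns D0-D3) for one structural
        # type, tabulated by macroseismic intensity (rows VII, VIII, IX).
        counts = np.array([[800.0, 150.0, 40.0, 10.0],
                           [500.0, 300.0, 150.0, 50.0],
                           [200.0, 300.0, 300.0, 200.0]])

        # DPM: per-intensity relative frequencies of each damage state.
        dpm = counts / counts.sum(axis=1, keepdims=True)

        # Cumulative form P(damage >= state), the basis of vulnerability curves.
        exceedance = np.cumsum(dpm[:, ::-1], axis=1)[:, ::-1]
        for label, row in zip(("VII", "VIII", "IX"), exceedance):
            print(label, np.round(row, 3))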

  10. Media exposure related to the 2008 Sichuan Earthquake predicted probable PTSD among Chinese adolescents in Kunming, China: A longitudinal study.

    PubMed

    Yeung, Nelson C Y; Lau, Joseph T F; Yu, Nancy Xiaonan; Zhang, Jianping; Xu, Zhening; Choi, Kai Chow; Zhang, Qi; Mak, Winnie W S; Lui, Wacy W S

    2018-03-01

    This study examined the prevalence and the psychosocial predictors of probable PTSD among Chinese adolescents in Kunming (approximately 444 miles from the epicenter), China, who were indirectly exposed to the Sichuan Earthquake in 2008. Using a longitudinal study design, primary and secondary school students (N = 3577) in Kunming completed questionnaires at baseline (June 2008) and 6 months afterward (December 2008) in classroom settings. Participants' exposure to earthquake-related imagery and content, perceptions and emotional reactions related to the earthquake, and posttraumatic stress symptoms were measured. Univariate and forward stepwise multivariable logistic regression models were fit to identify significant predictors of probable PTSD at the 6-month follow-up. Prevalences of probable PTSD (with a Children's Revised Impact of Event Scale score ≥30) among the participants at baseline and 6-month follow-up were 16.9% and 11.1%, respectively. In the multivariable analysis, those who were frequently exposed to distressful imagery, had experienced at least two types of negative life events, perceived that teachers were distressed due to the earthquake, believed that the earthquake resulted from damage to the ecosystem, and felt apprehensive and emotionally disturbed due to the earthquake reported a higher risk of probable PTSD at 6-month follow-up (all ps < .05). Exposure to distressful media images, emotional responses, and disaster-related perceptions at baseline were found to be predictive of probable PTSD several months after indirect exposure to the event. Parents, teachers, and the mass media should be aware of the negative impacts of disaster-related media exposure on adolescents' psychological health.

  11. The 2011 M = 9.0 Tohoku oki earthquake more than doubled the probability of large shocks beneath Tokyo

    USGS Publications Warehouse

    Toda, Shinji; Stein, Ross S.

    2013-01-01

    The Kanto seismic corridor surrounding Tokyo has hosted four to five M ≥ 7 earthquakes in the past 400 years. Immediately after the Tohoku earthquake, the seismicity rate in the corridor jumped 10-fold, while the rate of normal focal mechanisms dropped by half. The seismicity rate decayed for 6–12 months, after which it steadied at three times the pre-Tohoku rate. The seismicity rate jump and decay to a new rate, as well as the focal mechanism change, can be explained by the static stress imparted by the Tohoku rupture and postseismic creep to Kanto faults. We therefore fit the seismicity observations to a rate/state Coulomb model, which we use to forecast the time-dependent probability of large earthquakes in the Kanto seismic corridor. We estimate a 17% probability of a M ≥ 7.0 shock over the 5-year prospective period 11 March 2013 to 10 March 2018, two-and-a-half times the probability had the Tohoku earthquake not struck.

  12. Holocene paleoseismicity, temporal clustering, and probabilities of future large (M > 7) earthquakes on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    McCalpin, J.P.; Nishenko, S.P.

    1996-01-01

    The chronology of M>7 paleoearthquakes on the central five segments of the Wasatch fault zone (WFZ) is one of the best dated in the world and contains 16 earthquakes in the past 5600 years with an average repeat time of 350 years. Repeat times for individual segments vary by a factor of 2, and range from about 1200 to 2600 years. Four of the central five segments ruptured between ∼620 ± 30 and 1230 ± 60 calendar years B.P. The remaining segment (Brigham City segment) has not ruptured in the past 2120 ± 100 years. Comparison of the WFZ space-time diagram of paleoearthquakes with synthetic paleoseismic histories indicates that the observed temporal clusters and gaps have about an equal probability (depending on model assumptions) of reflecting random coincidence as opposed to intersegment contagion. Regional seismicity suggests that for exposure times of 50 and 100 years, the probability for an earthquake of M>7 anywhere within the Wasatch Front region, based on a Poisson model, is 0.16 and 0.30, respectively. A fault-specific WFZ model predicts 50 and 100 year probabilities for a M>7 earthquake on the WFZ itself, based on a Poisson model, as 0.13 and 0.25, respectively. In contrast, segment-specific earthquake probabilities that assume quasi-periodic recurrence behavior on the Weber, Provo, and Nephi segments are less (0.01-0.07 in 100 years) than the regional or fault-specific estimates (0.25-0.30 in 100 years), due to the short elapsed times compared to average recurrence intervals on those segments. The Brigham City and Salt Lake City segments, however, have time-dependent probabilities that approach or exceed the regional and fault specific probabilities. For the Salt Lake City segment, these elevated probabilities are due to the elapsed time being approximately equal to the average late Holocene recurrence time. For the Brigham City segment, the elapsed time is significantly longer than the segment-specific late Holocene recurrence time.

  13. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  14. Paleoseismic event dating and the conditional probability of large earthquakes on the southern San Andreas fault, California

    USGS Publications Warehouse

    Biasi, G.P.; Weldon, R.J.; Fumal, T.E.; Seitz, G.G.

    2002-01-01

    We introduce a quantitative approach to paleoearthquake dating and apply it to paleoseismic data from the Wrightwood and Pallett Creek sites on the southern San Andreas fault. We illustrate how stratigraphic ordering, sedimentological, and historical data can be used quantitatively in the process of estimating earthquake ages. Calibrated radiocarbon age distributions are used directly from layer dating through recurrence intervals and recurrence probability estimation. The method does not eliminate subjective judgements in event dating, but it does provide a means of systematically and objectively approaching the dating process. Date distributions for the most recent 14 events at Wrightwood are based on sample and contextual evidence in Fumal et al. (2002) and site context and slip history in Weldon et al. (2002). Pallett Creek event and dating descriptions are from published sources. For the five most recent events at Wrightwood, our results are consistent with previously published estimates, with generally comparable or narrower uncertainties. For Pallett Creek, our earthquake date estimates generally overlap with previous results but typically have broader uncertainties. Some event date estimates are very sensitive to details of data interpretation. The historical earthquake in 1857 ruptured the ground at both sites but is not constrained by radiocarbon data. Radiocarbon ages, peat accumulation rates, and historical constraints at Pallett Creek for event X yield a date estimate in the earliest 1800s and preclude a date in the late 1600s. This event is almost certainly the historical 1812 earthquake, as previously concluded by Sieh et al. (1989). This earthquake also produced ground deformation at Wrightwood. All events at Pallett Creek, except for event T, about A.D. 1360, and possibly event I, about A.D. 960, have corresponding events at Wrightwood with some overlap in age ranges. Event T falls during a period of low sedimentation at Wrightwood when conditions

  15. Maximum Magnitude and Probabilities of Induced Earthquakes in California Geothermal Fields: Applications for a Science-Based Decision Framework

    NASA Astrophysics Data System (ADS)

    Weiser, Deborah Anne

    Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess if there are induced earthquakes in California geothermal fields; there are three sites with clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and also to address uncertainties through simulations. I test if an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during pumping time is consistent with the past earthquake record, or if injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.

  16. Response of the San Andreas fault to the 1983 Coalinga-Nuñez earthquakes: an application of interaction-based probabilities for Parkfield

    USGS Publications Warehouse

    Toda, Shinji; Stein, Ross S.

    2002-01-01

    The Parkfield-Cholame section of the San Andreas fault, site of an unfulfilled earthquake forecast in 1985, is the best monitored section of the world's most closely watched fault. In 1983, the M = 6.5 Coalinga and M = 6.0 Nuñez events struck 25 km northeast of Parkfield. Seismicity rates climbed for 18 months along the creeping section of the San Andreas north of Parkfield and dropped for 6 years along the locked section to the south. Right-lateral creep also slowed or reversed from Parkfield south. Here we calculate that the Coalinga sequence increased the shear and Coulomb stress on the creeping section, causing the rate of small shocks to rise until the added stress was shed by additional slip. However, the 1983 events decreased the shear and Coulomb stress on the Parkfield segment, causing surface creep and seismicity rates to drop. We use these observations to cast the likelihood of a Parkfield earthquake into an interaction-based probability, which includes both the renewal of stress following the 1966 Parkfield earthquake and the stress transfer from the 1983 Coalinga events. We calculate that the 1983 shocks dropped the 10-year probability of a M ∼ 6 Parkfield earthquake by 22% (from 54 ± 22% to 42 ± 23%) and that the probability did not recover until about 1991, when seismicity and creep resumed. Our analysis may thus explain why the Parkfield earthquake did not strike in the 1980s, but not why it was absent in the 1990s. We calculate a 58 ± 17% probability of a M ∼ 6 Parkfield earthquake during 2001–2011.

  17. M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey

    USGS Publications Warehouse

    Murru, Maura; Akinci, Aybige; Falcone, Giuseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.

    2016-01-01

    We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M > 7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (the ratio of time-dependent to time-independent probability) on the southern strands of the North Anatolian Fault Zone.

  18. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes.

  19. Trimming the UCERF2 hazard logic tree

    USGS Publications Warehouse

    Porter, Keith A.; Field, Edward H.; Milner, Kevin

    2012-01-01

    The Uniform California Earthquake Rupture Forecast 2 (UCERF2) is a fully time‐dependent earthquake rupture forecast developed with sponsorship of the California Earthquake Authority (Working Group on California Earthquake Probabilities [WGCEP], 2007; Field et al., 2009). UCERF2 contains 480 logic‐tree branches reflecting choices among nine modeling uncertainties in the earthquake rate model shown in Figure 1. For seismic hazard analysis, it is also necessary to choose a ground‐motion‐prediction equation (GMPE) and set its parameters. Choosing among four next‐generation attenuation (NGA) relationships results in a total of 1920 hazard calculations per site. The present work is motivated by a desire to reduce the computational effort involved in a hazard analysis without understating uncertainty. We set out to assess which branching points of the UCERF2 logic tree contribute most to overall uncertainty, and which might be safely ignored (set to only one branch) without significantly biasing results or affecting some useful measure of uncertainty. The trimmed logic tree will have all of the original choices from the branching points that contribute significantly to uncertainty, but only one arbitrarily selected choice from the branching points that do not.
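
    The trimming question can be posed concretely on a toy tree: enumerate all weighted branch combinations, compute the weighted variance of the hazard estimate, and ask how much of that variance survives when a branching point is collapsed to its highest-weight option. The sketch below uses three hypothetical branching points with multiplicative hazard factors, not UCERF2's actual branches.

        import itertools
        import numpy as np

        # Hypothetical tree: each branching point lists (weight, hazard factor).
        tree = {
            "mag_area":   [(0.5, 1.00), (0.5, 1.10)],
            "slip_model": [(0.7, 0.95), (0.3, 1.20)],
            "gmpe":       [(0.2, 0.90), (0.3, 1.00), (0.3, 1.10), (0.2, 1.30)],
        }

        def tree_stats(points):
            # Weighted mean and variance of hazard over all branch combinations.
            vals, wts = [], []
            for combo in itertools.product(*points.values()):
                wts.append(np.prod([w for w, _ in combo]))
                vals.append(np.prod([v for _, v in combo]))
            vals, wts = np.array(vals), np.array(wts)
            mean = np.sum(wts * vals)
            return mean, np.sum(wts * (vals - mean) ** 2)

        _, full_var = tree_stats(tree)
        for name, options in tree.items():
            trimmed = dict(tree)
            trimmed[name] = [(1.0, max(options)[1])]   # keep top-weight branch
            _, var = tree_stats(trimmed)
            print(f"collapse {name}: {var / full_var:.2f} of full variance")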

  20. A testable model of earthquake probability based on changes in mean event size

    NASA Astrophysics Data System (ADS)

    Imoto, Masajiro

    2003-02-01

    We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing these changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The background distribution is symmetric, with its center corresponding to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends to decrease the b value. The difference in the distributions between the two groups was significant and provided a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study in the range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.

  1. Prevalence and Predictors of Somatic Symptoms among Child and Adolescents with Probable Posttraumatic Stress Disorder: A Cross-Sectional Study Conducted in 21 Primary and Secondary Schools after an Earthquake.

    PubMed

    Zhang, Ye; Zhang, Jun; Zhu, Shenyue; Du, Changhui; Zhang, Wei

    2015-01-01

    To explore the prevalence rates and predictors of somatic symptoms among child and adolescent survivors with probable posttraumatic stress disorder (PTSD) after an earthquake. A total of 3053 students from 21 primary and secondary schools in Baoxing County were administered the Patient Health Questionnaire-13 (PHQ-13), a short version of PHQ-15 without the two items about sexuality and menstruation, the Children's Revised Impact of Event Scale (CRIES), and the self-made Earthquake-Related Experience Questionnaire 3 months after the Lushan earthquake. Among child and adolescent survivors, the prevalence rates of all somatic symptoms were higher in the probable PTSD group compared with the controls. The most frequent somatic symptoms were trouble sleeping (83.2%), feeling tired or having low energy (74.4%), stomach pain (63.2%), dizziness (58.1%), and headache (57.7%) in the probable PTSD group. Older age, having lost family members, having witnessed someone get seriously injured, and having witnessed someone get buried were predictors for somatic symptoms among child and adolescent survivors with probable PTSD. Somatic symptoms among child and adolescent earthquake survivors with probable PTSD in schools were common, and predictors of these somatic symptoms were identified. These findings may help those providing psychological health programs to find the child and adolescent students with probable PTSD who are at high risk of somatic symptoms in schools after an earthquake in China.

  2. Scenario earthquake hazards for the Long Valley Caldera-Mono Lake area, east-central California (ver. 2.0, January 2018)

    USGS Publications Warehouse

    Chen, Rui; Branum, David M.; Wills, Chris J.; Hill, David P.

    2014-06-30

    As part of the U.S. Geological Survey’s (USGS) multi-hazards project in the Long Valley Caldera-Mono Lake area, the California Geological Survey (CGS) developed several earthquake scenarios and evaluated potential seismic hazards, including ground shaking, surface fault rupture, liquefaction, and landslide hazards associated with these earthquake scenarios. The results of these analyses can be useful in estimating the extent of potential damage and economic losses from potential earthquakes and for preparing emergency response plans. The Long Valley Caldera-Mono Lake area has numerous active faults. Five of these faults or fault zones are considered capable of producing magnitude ≥6.7 earthquakes according to the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2) developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP) and the USGS National Seismic Hazard Mapping Program. These five faults are the Fish Slough, Hartley Springs, Hilton Creek, Mono Lake, and Round Valley Faults. CGS developed earthquake scenarios for these five faults in the study area and for the White Mountains Fault Zone to the east of the study area. In this report, an earthquake scenario is intended to depict the potential consequences of significant earthquakes. A scenario earthquake is not necessarily the largest or most damaging earthquake possible on a recognized fault. Rather it is both large enough and likely enough that emergency planners should consider it in regional emergency response plans. In particular, the ground motion predicted for a given scenario earthquake does not represent a full probabilistic hazard assessment, and thus it does not provide the basis for hazard zoning and earthquake-resistant building design. Earthquake scenarios presented here are based on fault geometry and activity data developed by the WGCEP, and are consistent with the 2008 Update of the United States National Seismic Hazard Maps (NSHM). Alternatives

  3. “All Models Are Wrong, but Some Are Useful”

    USGS Publications Warehouse

    Field, Edward H.

    2015-01-01

    Building a new model, especially one used for policy purposes, takes considerable time, effort, and resources. In justifying such expenditures, one inevitably spends a lot of time denigrating previous models. For example, in pitching the third Uniform California Earthquake Rupture Forecast (UCERF3) (http://www.WGCEP.org/UCERF3), criticisms of the previous model included fault‐segmentation assumptions and the lack of multifault ruptures. In the context of including spatiotemporal clustering for operational earthquake forecasting (e.g., Jordan et al., 2011), another criticism has been that previous candidate models not only ignore elastic rebound but also produce results that are antithetical to that theory. For instance, the short‐term earthquake probabilities model (Gerstenberger et al., 2005), which provided California aftershock hazard maps at the U.S. Geological Survey web site between 2005 and 2010, implies that the time of highest likelihood for any rupture will be the moment after it occurs, even for a big one on the San Andreas fault. Furthermore, Monte Carlo simulations imply that excluding elastic rebound in such models also produces unrealistic triggering statistics (Field, 2012).

  4. Earthquake forecast for the Wasatch Front region of the Intermountain West

    USGS Publications Warehouse

    DuRoss, Christopher B.

    2016-04-18

    The Working Group on Utah Earthquake Probabilities has assessed the probability of large earthquakes in the Wasatch Front region. There is a 43 percent probability of one or more magnitude 6.75 or greater earthquakes and a 57 percent probability of one or more magnitude 6.0 or greater earthquakes in the region in the next 50 years. These results highlight the threat of large earthquakes in the region.

  5. Introducing ShakeMap to potential users in Puerto Rico using scenarios of damaging historical and probable earthquakes

    NASA Astrophysics Data System (ADS)

    Huerfano, V. A.; Cua, G.; von Hillebrandt, C.; Saffar, A.

    2007-12-01

    The island of Puerto Rico has a long history of damaging earthquakes. Major earthquakes from off-shore sources have affected Puerto Rico in 1520, 1615, 1670, 1751, 1787, 1867, and 1918 (Mueller et al., 2003; PRSN Catalogue). Recent trenching has also yielded evidence of possible M7.0 events inland (Prentice, 2000). The high seismic hazard, large population, high tsunami potential and relatively poor construction practice can result in a potentially devastating combination. Efficient emergency response in the event of a large earthquake will be crucial to minimizing the loss of life and disruption of lifeline systems in Puerto Rico. The ShakeMap system (Wald et al., 2004) developed by the USGS to rapidly display and disseminate information about the geographical distribution of ground shaking (and hence potential damage) following a large earthquake has proven to be a vital tool for post-earthquake emergency response efforts, and is being adopted/emulated in various seismically active regions worldwide. Implementing a robust ShakeMap system is among the top priorities of the Puerto Rico Seismic Network. However, the ultimate effectiveness of ShakeMap in post-earthquake response depends not only on its rapid availability, but also on the effective use of the information it provides. We developed ShakeMap scenarios of a suite of damaging historical and probable earthquakes that severely impact San Juan, Ponce, and Mayagüez, the three largest cities in Puerto Rico. Earthquake source parameters were obtained from McCann and Mercado (1998) and Huérfano (2004). For historical earthquakes that generated tsunamis, tsunami inundation maps were generated using the TIME method (Shuto, 1991). The ShakeMap ground shaking maps were presented to local and regional governmental and emergency response agencies at the 2007 Annual conference of the Puerto Rico Emergency Management and Disaster Administration in San Juan, PR, and at numerous other emergency management talks and training

  6. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
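
    The core of such likelihood testing is scoring each forecast's expected cell counts against the observed counts under a Poisson assumption; the RELM/CSEP evaluation built several tests (L-, N-, and R-tests) on this quantity. A minimal sketch with a hypothetical five-cell grid:

        import numpy as np
        from scipy.stats import poisson

        def log_likelihood(expected, observed):
            # Joint Poisson log-likelihood of the observed cell counts.
            lam = np.maximum(np.asarray(expected, dtype=float), 1e-12)
            return poisson.logpmf(np.asarray(observed), lam).sum()

        observed   = np.array([0, 2, 0, 1, 0])           # hypothetical counts
        forecast_a = np.array([0.1, 1.5, 0.2, 0.8, 0.1])
        forecast_b = np.array([0.5, 0.5, 0.5, 0.5, 0.5])
        print(log_likelihood(forecast_a, observed),
              log_likelihood(forecast_b, observed))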

  7. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355

  8. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing at an exponential rate with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is also reasonable, given the well-known relationship between stress and the b-value. This suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
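
    The b-value estimate underlying this argument is conventionally obtained with the maximum-likelihood estimator of Aki (1965). A minimal sketch on a synthetic Gutenberg-Richter catalog (not the study's data):

        import numpy as np

        def b_value(magnitudes, m_c):
            # Aki (1965) MLE: b = log10(e) / (mean(M) - Mc). Catalogs binned
            # at width dm usually use Mc - dm/2 instead (Utsu's correction).
            m = np.asarray(magnitudes, dtype=float)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - m_c)

        rng = np.random.default_rng(0)
        # Synthetic catalog, true b = 1.0 above completeness Mc = 2.0.
        mags = 2.0 + rng.exponential(1.0 / np.log(10.0), size=5000)
        print(b_value(mags, m_c=2.0))   # close to 1.0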

  9. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are initially inverted for using the least squares method without a positivity constraint, then damped to a physically reasonable range. This first-step MAP inversion brings the solution close to the 'true' one quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake, and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
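
    The nested structure of the first step, a global search over the nonlinear geometry parameters with a constrained linear solve for slip at each candidate geometry, can be sketched compactly. The toy below is an illustration under stated assumptions, not the authors' implementation: it substitutes SciPy's dual_annealing for ASA, uses a made-up one-parameter "Green's function", and enforces slip positivity with non-negative least squares.

```python
import numpy as np
from scipy.optimize import dual_annealing, nnls

rng = np.random.default_rng(1)
x = np.linspace(-10, 10, 200)              # hypothetical surface observation points

def greens(depth):
    """Toy Green's functions: two basis responses that depend on fault depth."""
    return np.column_stack([depth / (x**2 + depth**2),
                            x / (x**2 + depth**2)**1.5])

true_depth, true_slip = 3.0, np.array([2.0, 0.5])
data = greens(true_depth) @ true_slip + 0.01 * rng.standard_normal(x.size)

def misfit(params):
    # For each candidate geometry, slip is solved by positivity-constrained
    # least squares; the residual norm drives the global optimizer.
    slip, rnorm = nnls(greens(params[0]), data)
    return rnorm

result = dual_annealing(misfit, bounds=[(0.5, 10.0)], seed=2)
best_depth = result.x[0]
best_slip, _ = nnls(greens(best_depth), data)
print(f"recovered depth = {best_depth:.2f} (true 3.0), slip = {best_slip.round(2)}")
```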

  10. The Virtual Quake Earthquake Simulator: Earthquake Probability Statistics for the El Mayor-Cucapah Region and Evidence of Predictability in Simulated Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.

    2015-12-01

    We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large M > 7 earthquakes. To quantify this, we develop an alert-based forecasting metric similar to those presented in Keilis-Borok (2002) and Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation- versus quiescence-type earthquake triggering. We show that VQ exhibits both behaviors separately for independent fault sections; some fault sections exhibit activation-type triggering, while others are better characterized by quiescence-type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.
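
    The following sketch (synthetic event times and an invented window and threshold; not the VQ code) shows one way an alert-based metric in the spirit of Molchan (1997) can be scored: declare an alarm whenever the trailing count of small events exceeds a threshold, then compare the fraction of large events falling inside alarms with the fraction of time spent in alarm.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10000.0
small_times = np.sort(rng.uniform(0, T, 5000))   # hypothetical small-event times
large_times = np.sort(rng.uniform(0, T, 20))     # hypothetical large-event times

def alarm_skill(window, threshold, grid_dt=1.0):
    t_grid = np.arange(0.0, T, grid_dt)
    # trailing count of small events in (t - window, t]
    counts = (np.searchsorted(small_times, t_grid)
              - np.searchsorted(small_times, t_grid - window))
    alarm = counts >= threshold
    idx = np.minimum((large_times / grid_dt).astype(int), t_grid.size - 1)
    return alarm.mean(), alarm[idx].mean()       # alarm fraction, hit rate

fa, hr = alarm_skill(window=50.0, threshold=30)
print(f"alarm fraction = {fa:.2f}, hit rate = {hr:.2f}")
# A skill-free alarm has hit rate ~ alarm fraction; information gain shows up
# as the hit rate exceeding the alarm fraction (not expected for this random data).
```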

  11. Probability of a great earthquake to recur in the Tokai district, Japan: reevaluation based on newly-developed paleoseismology, plate tectonics, tsunami study, micro-seismicity and geodetic measurements

    NASA Astrophysics Data System (ADS)

    Rikitake, T.

    1999-03-01

    In light of newly acquired geophysical information about earthquake generation in the Tokai area, Central Japan, where the occurrence of a great earthquake of magnitude 8 or so has recently been feared, probabilities of earthquake occurrence in the near future are reevaluated. Much of the data used for the evaluation here relies on recently developed paleoseismology, tsunami studies, and GPS geodesy. The new Weibull distribution analysis of the recurrence tendency of great earthquakes in the Tokai-Nankai zone indicates that the mean return period of great earthquakes there is 109 yr with a standard deviation of 33 yr. These values do not differ much from those of previous studies (Rikitake, 1976, 1986; Utsu, 1984). Taking the newly determined velocities of the motion of the Philippine Sea plate at various portions of the Tokai-Nankai zone into account, the ultimate displacements to rupture at the plate boundary are obtained. A Weibull distribution analysis results in a mean ultimate displacement of 4.70 m with a standard deviation of 0.86 m. A return period of 117 yr is obtained at the Suruga Bay portion by dividing the mean ultimate displacement by the relative plate velocity. With the aid of the fault models determined from the tsunami studies, the increases in the cumulative seismic slips associated with the great earthquakes are examined at various portions of the zone. It appears that a slip-predictable model can be applied to the occurrence mode of great earthquakes in the zone better than a time-predictable model. The crustal strain accumulating over the Tokai area as estimated from the newly developed geodetic work, including the GPS observations, is compared to the ultimate strain presumed by the above two models. The probabilities for a great earthquake to recur in the Tokai district are then estimated with the aid of the Weibull analysis parameters obtained for the four cases discussed above. All the probabilities
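
    A Weibull hazard calculation of the kind described can be reproduced from the two reported moments alone. In the sketch below, the shape and scale are recovered numerically from the quoted mean (109 yr) and standard deviation (33 yr); the elapsed time since the last great Tokai event is an assumption for illustration.

```python
import numpy as np
from math import gamma
from scipy.optimize import brentq

mean_T, sd_T = 109.0, 33.0                 # values reported in the abstract

def cv_of_shape(k):
    g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
    return np.sqrt(g2 - g1**2) / g1

# Solve for the Weibull shape whose coefficient of variation matches sd/mean,
# then fix the scale from the mean.
k = brentq(lambda kk: cv_of_shape(kk) - sd_T / mean_T, 1.1, 20.0)
lam = mean_T / gamma(1 + 1 / k)

def survival(t):
    return np.exp(-((t / lam) ** k))

elapsed = 145.0   # assumed: years since the 1854 Ansei-Tokai event, for illustration
dt = 30.0
p_cond = 1.0 - survival(elapsed + dt) / survival(elapsed)
print(f"shape k = {k:.2f}, scale = {lam:.1f} yr")
print(f"P(rupture within {dt:.0f} yr | quiet for {elapsed:.0f} yr) = {p_cond:.2f}")
```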

  12. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which the probability of an event is small shortly after the previous one and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. The first is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." In such a process a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallett Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function: the inter-event times have a uniform distribution when the memorylessness property holds. The second approach is a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e., it depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so
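
    The inverse-CDF transformation mentioned above is easy to demonstrate. This sketch (synthetic inter-event times, not the Parkfield or Pallett Creek records) maps the intervals through the fitted exponential CDF and applies a Kolmogorov-Smirnov test for the uniformity that the memorylessness hypothesis predicts.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(4)
intervals = rng.exponential(25.0, size=12)       # hypothetical inter-event times (yr)

# Probability integral transform with the fitted exponential CDF: if the
# process is memoryless, u should be uniform on [0, 1].
u = 1.0 - np.exp(-intervals / intervals.mean())
stat, p = kstest(u, "uniform")
print(f"KS statistic = {stat:.2f}, p = {p:.2f}")
# A small p-value would argue against a time-independent ("no memory") process;
# for this synthetic exponential sample, uniformity should not be rejected.
```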

  13. Bayesian probabilities for Mw 9.0+ earthquakes in the Aleutian Islands from a regionally scaled global rate

    NASA Astrophysics Data System (ADS)

    Butler, Rhett; Frazer, L. Neil; Templeton, William J.

    2016-05-01

    We use the global rate of Mw ≥ 9.0 earthquakes, and standard Bayesian procedures, to estimate the probability of such mega events in the Aleutian Islands, where they pose a significant risk to Hawaii. We find that the probability of such an earthquake along the Aleutian island arc is 6.5% to 12% over the next 50 years (50% credibility interval) and that the annualized risk to Hawaii is about $30M. Our method (the regionally scaled global rate method, or RSGR) is to scale the global rate of Mw 9.0+ events in proportion to the fraction of global subduction (units of area per year) that takes place in the Aleutians. The RSGR method assumes that Mw 9.0+ events are a Poisson process with a rate that is both globally and regionally stationary on the time scale of centuries, and it follows the principle of Burbidge et al. (2008), who used the product of fault length and convergence rate, i.e., the area being subducted per annum, to scale the Poisson rate of the global subduction system to sections of the Indonesian subduction zone. Before applying RSGR to the Aleutians, we first apply it to five other regions of the global subduction system where its rate predictions can be compared with those from paleotsunami, paleoseismic, and geoarcheology data. To obtain regional rates from paleodata, we give a closed-form solution for the probability density function of the Poisson rate when event count and observation time are both uncertain.
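
    The Bayesian machinery has a convenient closed form when the event count and observation time are known exactly: with a Jeffreys prior, the posterior for the Poisson rate is a Gamma distribution. The sketch below uses assumed inputs (5 global Mw ≥ 9.0 events in roughly 111 yr of catalog and a 4% Aleutian share of global subduction area) purely to illustrate the computation; the paper's own inputs may differ.

```python
import numpy as np
from scipy.stats import gamma

k, tau = 5, 111.0       # assumed: global Mw >= 9.0 count and catalog length (yr)
f_aleut = 0.04          # assumed Aleutian fraction of global subducted area

# Jeffreys prior p(lam) ~ lam**(-1/2)  =>  posterior is Gamma(k + 1/2, rate=tau)
post = gamma(a=k + 0.5, scale=1.0 / tau)

lo, hi = post.ppf([0.25, 0.75])          # 50% credibility interval on the global rate
t = 50.0
p_lo = 1.0 - np.exp(-lo * f_aleut * t)   # regional scaling, then Poisson probability
p_hi = 1.0 - np.exp(-hi * f_aleut * t)
print(f"P(>=1 Mw 9+ Aleutian event in 50 yr): {100*p_lo:.1f}% to {100*p_hi:.1f}%")
```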

  14. Earthquake outlook for the San Francisco Bay region 2014–2043

    USGS Publications Warehouse

    Aagaard, Brad T.; Blair, James Luke; Boatwright, John; Garcia, Susan H.; Harris, Ruth A.; Michael, Andrew J.; Schwartz, David P.; DiLeo, Jeanne S.; Jacques, Kate; Donlin, Carolyn

    2016-06-13

    Using information from recent earthquakes, improved mapping of active faults, and a new model for estimating earthquake probabilities, the 2014 Working Group on California Earthquake Probabilities updated the 30-year earthquake forecast for California. They concluded that there is a 72 percent probability (or likelihood) of at least one earthquake of magnitude 6.7 or greater striking somewhere in the San Francisco Bay region before 2043. Earthquakes this large are capable of causing widespread damage; therefore, communities in the region should take simple steps to help reduce injuries, damage, and disruption, as well as accelerate recovery from these earthquakes.

  15. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M = 7+) earthquake, which has an annual probability of occurrence of about 2%. This paper examines the expected building losses in terms of probable maximum and average annualized losses, and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure, and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the building losses incurred in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers, and development of a claim-processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground-motion levels and damage. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  16. Earthquake induced liquefaction hazard, probability and risk assessment in the city of Kolkata, India: its historical perspective and deterministic scenario

    NASA Astrophysics Data System (ADS)

    Nath, Sankar Kumar; Srivastava, Nishtha; Ghatak, Chitralekha; Adhikari, Manik Das; Ghosh, Ambarish; Sinha Ray, S. P.

    2018-01-01

    Liquefaction-induced ground failure is one of the leading causes of infrastructure damage due to the impact of large earthquakes in unconsolidated, non-cohesive, water-saturated alluvial terrains. The city of Kolkata is located on the potentially liquefiable alluvial fan deposits of the Ganga-Bramhaputra-Meghna Delta system, with a subsurface litho-stratigraphic sequence comprising varying percentages of clay, cohesionless silt, sand, and gravel interbedded with decomposed wood and peat. Additionally, the region has a moderately shallow groundwater condition, especially in the post-monsoon seasons. In view of the burgeoning population, there has been unplanned expansion of settlements under hazardous geological, geomorphological, and hydrological conditions, exposing the city to severe liquefaction hazard. The 1897 Shillong and 1934 Bihar-Nepal earthquakes, both of Mw 8.1, reportedly induced Modified Mercalli Intensities of IV-V and VI-VII, respectively, in the city, triggering widespread to sporadic liquefaction with surface manifestations of sand boils, lateral spreading, ground subsidence, etc., thus posing a strong case for liquefaction potential analysis in the terrain. With the motivation of assessing the seismic hazard, vulnerability, and risk of the city of Kolkata through consorted federal funding stipulated for all the metros and upstart urban centers in India located in BIS seismic zones III, IV, and V with populations of more than one million, an attempt has been made here to understand the liquefaction susceptibility of Kolkata under earthquake loading, employing modern multivariate techniques, and to predict a deterministic liquefaction scenario for the city in the event of a probabilistic seismic hazard condition with 10% probability of exceedance in 50 years and a return period of 475 years. We conducted in-depth geophysical and geotechnical investigations in the city encompassing a 435 km² area. The stochastically
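
    The hazard level quoted at the end of the abstract follows the standard design convention, and the equivalence of the two numbers is a one-line Poisson calculation (generic hazard arithmetic, not code from the paper):

```python
import math

p, t = 0.10, 50.0                          # 10% probability of exceedance in 50 yr
return_period = -t / math.log(1.0 - p)     # Poisson assumption
print(f"return period = {return_period:.0f} yr")   # ~475 yr
```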

  17. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.

  18. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663
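
    The three test types can be made concrete in a few lines. The sketch below (a synthetic gridded forecast, not an actual submitted model) implements the count comparison (i) as a Poisson N-test and the likelihood score (ii) as a joint Poisson log-likelihood; the ratio test (iii) would compare two such scores computed on identical bins.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)
forecast = rng.gamma(0.5, 0.02, size=1000)   # hypothetical expected counts per bin
observed = rng.poisson(forecast)             # hypothetical observed catalog

# (i) N-test: is the observed total consistent with the predicted total?
n_obs, n_pred = observed.sum(), forecast.sum()
print(f"N-test: predicted {n_pred:.1f}, observed {n_obs}, "
      f"P(N <= obs) = {poisson.cdf(n_obs, n_pred):.2f}, "
      f"P(N >= obs) = {poisson.sf(n_obs - 1, n_pred):.2f}")

# (ii) Likelihood score: joint log-likelihood of the bin counts under the forecast
logL = poisson.logpmf(observed, forecast).sum()
print(f"log-likelihood = {logL:.1f}")
```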

  19. Associating an ionospheric parameter with major earthquake occurrence throughout the world

    NASA Astrophysics Data System (ADS)

    Ghosh, D.; Midya, S. K.

    2014-02-01

    With time, ionospheric variation analysis is gaining ground over lithospheric monitoring as a source of precursors for earthquake forecasting. The current paper highlights the association of major (Ms ≥ 6.0) and medium (4.0 ≤ Ms < 6.0) earthquake occurrences throughout the world with different ranges of the Ionospheric Earthquake Parameter (IEP), where 'Ms' is the earthquake magnitude on the Richter scale. From statistical and graphical analyses, it is concluded that the probability of earthquake occurrence is at a maximum when the defined parameter lies within the range 0-75 (the lower range). In the higher ranges, the probability of earthquake occurrence gradually decreases. A probable explanation is also suggested.

  20. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and the spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example with which to investigate these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and the spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and of the viscosity of the middle-lower crust and upper mantle on the model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially within a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M ≥ 7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.

  1. Earthquakes and Schools

    ERIC Educational Resources Information Center

    National Clearinghouse for Educational Facilities, 2008

    2008-01-01

    Earthquakes are low-probability, high-consequence events. Though they may occur only once in the life of a school, they can have devastating, irreversible consequences. Moderate earthquakes can cause serious damage to building contents and non-structural building systems, serious injury to students and staff, and disruption of building operations.…

  2. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  3. Subduction zone earthquake probably triggered submarine hydrocarbon seepage offshore Pakistan

    NASA Astrophysics Data System (ADS)

    Fischer, David; Mogollón, José M.; Strasser, Michael; Pape, Thomas; Bohrmann, Gerhard; Fekete, Noemi; Spiess, Volkhard; Kasten, Sabine

    2014-05-01

    Seepage of methane-dominated hydrocarbons is heterogeneous in space and time, and the trigger mechanisms of episodic seep events are not well constrained. It is generally found that free hydrocarbon gas entering the local gas hydrate stability field in marine sediments is sequestered in gas hydrates. In this manner, gas hydrates can act as a buffer for carbon transport from the sediment into the ocean. However, the efficiency of gas hydrate-bearing sediments in retaining hydrocarbons may be compromised: hypothesized mechanisms include critical gas/fluid pressures beneath gas hydrate-bearing sediments, implying that these are susceptible to mechanical failure and subsequent gas release. Although gas hydrates often occur in seismically active regions, e.g., subduction zones, the role of earthquakes as potential triggers of hydrocarbon transport through gas hydrate-bearing sediments has hardly been explored. Based on a recent publication (Fischer et al., 2013), we present geochemical and transport/reaction-modelling data suggesting a substantial increase in upward gas flux and hydrocarbon emission into the water column following a major earthquake that occurred near the study sites in 1945. Calculating the formation time of authigenic barite enrichments identified in two sediment cores obtained from an anticlinal structure called "Nascent Ridge", we find they formed 38-91 years before sampling, which corresponds well to the time elapsed since the earthquake (62 years). Furthermore, applying a numerical model, we show that the local sulfate/methane transition zone shifted upward by several meters due to the increased methane flux, and simulated sulfate profiles very closely match measured ones over a comparable time frame of 50-70 years. We thus propose a causal relation between the earthquake and the amplified gas flux, and present reflection seismic data supporting our hypothesis that co-seismic ground shaking induced mechanical fracturing of gas hydrate-bearing sediments

  4. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  5. Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference

    USGS Publications Warehouse

    Wesson, R.L.; Bakun, W.H.; Perkins, D.M.

    2003-01-01

    Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
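
    The Bayes' theorem step reduces, in its simplest form, to combining a location likelihood with prior fault weights. The sketch below uses entirely invented distances, uncertainties, and priors to show the bookkeeping, including a catch-all "background" class for earthquakes associated with no considered fault.

```python
import numpy as np

# Hypothetical inputs: perpendicular distances (km) from an earthquake's
# location estimate to three candidate faults, the location standard error,
# prior fault weights (e.g., from slip rates), and a flat background class.
dist = np.array([3.0, 8.0, 15.0])        # km to faults A, B, C
sigma = 5.0                              # km, location uncertainty
prior = np.array([0.4, 0.3, 0.1])        # fault priors; remainder is background
p_bg, like_bg = 0.2, 0.02                # background prior and flat likelihood

like = np.exp(-0.5 * (dist / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
joint = np.append(prior * like, p_bg * like_bg)
posterior = joint / joint.sum()          # Bayes' theorem
for name, p in zip(["fault A", "fault B", "fault C", "background"], posterior):
    print(f"P({name} | location) = {p:.2f}")
```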

  6. Nowcasting Earthquakes: A Comparison of Induced Earthquakes in Oklahoma and at the Geysers, California

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.

    2018-01-01

    Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say Mλ ≥ 4, and one small, say Mσ ≥ 2. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes. The cumulative probability distribution of these values is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake; the point where this number falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results are applicable even if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling. The increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of Mσ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of Mλ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injection. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
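
    The EPS bookkeeping can be expressed in a few lines. The sketch below builds a synthetic catalog (the magnitudes are placeholders; only the thresholds Mσ ≥ 2.75 and Mλ ≥ 4.0 are taken from the abstract) and reads the EPS off the empirical distribution of interevent small-event counts.

```python
import numpy as np

rng = np.random.default_rng(6)
mags = rng.uniform(2.0, 5.5, 20000)      # hypothetical magnitudes in natural-time order
m_small, m_large = 2.75, 4.0             # thresholds as in the Oklahoma example

large_idx = np.flatnonzero(mags >= m_large)
small = mags >= m_small
# number of small events between each pair of successive large events
counts = np.sort([small[a + 1:b].sum()
                  for a, b in zip(large_idx[:-1], large_idx[1:])])

n_since_last = small[large_idx[-1] + 1:].sum()   # small events since last large one
eps = np.searchsorted(counts, n_since_last, side="right") / counts.size
print(f"current count = {n_since_last}, EPS = {eps:.2f}")
```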

  7. The relationship between earthquake exposure and posttraumatic stress disorder in 2013 Lushan earthquake

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Lu, Yi

    2018-01-01

    The objective of this study is to explore the relationship between earthquake exposure and the incidence of PTSD. A stratified random sample survey was conducted to collect data along the Longmenshan thrust fault three years after the Lushan earthquake. We used the Children's Revised Impact of Event Scale (CRIES-13) and the Earthquake Experience Scale. Subjects in this study included 3,944 student survivors in eleven local schools. The prevalence of probable PTSD was relatively higher among those who were trapped in the earthquake, were injured in the earthquake, or had relatives who died in the earthquake. We conclude that researchers need to pay more attention to children and adolescents. The government should pay more attention to these people and provide more economic support.

  8. Crustal earthquake triggering by pre-historic great earthquakes on subduction zone thrusts

    USGS Publications Warehouse

    Sherrod, Brian; Gomberg, Joan

    2014-01-01

    Triggering of earthquakes on upper-plate faults during and shortly after recent great (M > 8.0) subduction thrust earthquakes raises concerns about earthquake triggering following Cascadia subduction zone earthquakes. Of particular regard to Cascadia was the previously noted, but only qualitatively identified, clustering of M > ~6.5 crustal earthquakes in the Puget Sound region between about 1200-900 cal yr B.P., and the possibility that this was triggered by a great Cascadia subduction thrust earthquake and therefore portends future such clusters. We confirm quantitatively the extraordinary nature of the Puget Sound region crustal earthquake clustering between 1200-900 cal yr B.P., at least over the last 16,000 years. We conclude that this cluster was not triggered by the penultimate, and possibly full-margin, great Cascadia subduction thrust earthquake. However, we also show that the paleoseismic record for Cascadia is consistent with the conclusions of our companion study of the global modern record outside Cascadia: that M > 8.6 subduction thrust events have a high probability of triggering at least one or more M > ~6.5 crustal earthquakes.

  9. Credible occurrence probabilities for extreme geophysical events: earthquakes, volcanic eruptions, magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.

    2012-01-01

    Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events k with sizes that exceed chosen thresholds during specific durations of time τ. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, k/τ, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: (1/τ)[(√k − z/2)², (√k + z/2)²], where z is a parameter that specifies the width, z = 1 (z = 2) corresponding to 1σ, 68.3% (2σ, 95.4%). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
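
    The quoted interval is simple enough to implement directly. In the sketch below, the event count and observation window are placeholders rather than values from the paper; the 10-yr probability uses the most likely Poisson rate k/τ.

```python
import math

def rate_interval(k, tau, z=1.0):
    """Interval (1/tau)[(sqrt(k) - z/2)^2, (sqrt(k) + z/2)^2] from the abstract."""
    lo = max(math.sqrt(k) - z / 2.0, 0.0) ** 2 / tau
    hi = (math.sqrt(k) + z / 2.0) ** 2 / tau
    return lo, hi

k, tau = 4, 150.0                      # assumed: 4 extreme events in 150 yr
rate = k / tau                         # most likely Poisson occurrence rate
lo, hi = rate_interval(k, tau, z=2.0)  # ~95.4% (2-sigma) interval
p10 = 1.0 - math.exp(-rate * 10.0)     # 10-yr occurrence probability
print(f"rate = {rate:.3f}/yr, 2-sigma interval = [{lo:.3f}, {hi:.3f}]/yr")
print(f"10-yr occurrence probability = {p10:.2f}")
```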

  10. Critical behavior in earthquake energy dissipation

    NASA Astrophysics Data System (ADS)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes bounded by latitudes 29° S and 35.5° S and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, the time development of earthquake radiated-energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for the probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place in the largest bursts of activity, such as major earthquakes, as opposed to smaller activations, which contribute less significantly though they have greater relative occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.

  11. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  12. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that a potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  13. Role of stress triggering in earthquake migration on the North Anatolian fault

    USGS Publications Warehouse

    Stein, R.S.; Dieterich, J.H.; Barka, A.A.

    1996-01-01

    Ten M ≥ 6.7 earthquakes ruptured 1,000 km of the North Anatolian fault (Turkey) during 1939-92, providing an unsurpassed opportunity to study how one large shock sets up the next. Calculations of the change in Coulomb failure stress reveal that 9 out of 10 ruptures were brought closer to failure by the preceding shocks, typically by 5 bars, equivalent to 20 years of secular stressing. We translate the calculated stress changes into earthquake probabilities using an earthquake-nucleation constitutive relation, which includes both permanent and transient stress effects. For the typical 10-year period between triggering and subsequent rupturing shocks in the Anatolia sequence, the stress changes yield an average three-fold gain in the ensuing earthquake probability. Stress is now calculated to be high at several isolated sites along the fault. During the next 30 years, we estimate a 15% probability of a M ≥ 6.7 earthquake east of the major eastern center of Erzincan, and a 12% probability for a large event south of the major western port city of Izmit. Such stress-based probability calculations may thus be useful to assess and update earthquake hazards elsewhere. © 1997 Elsevier Science Ltd.

  14. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2010-12-01

    The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and

  15. Earthquake and tsunami forecasts: Relation of slow slip events to subsequent earthquake rupture

    PubMed Central

    Dixon, Timothy H.; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-01-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr–Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327

  16. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-02

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.

  17. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000-year gaps. If we are still in the cluster that began 1,700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporates clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
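
    A minimal simulation conveys how partial strain release produces multi-cycle memory. The sketch below is schematic and every parameter value is invented; it is not the authors' LTFM code, only an illustration of the mechanism described above (event probability growing with stored strain, and only a fraction of that strain released per event).

```python
import numpy as np

rng = np.random.default_rng(7)
steps, load = 20000, 1.0        # years simulated, loading per year (arbitrary units)
release_frac = 0.7              # fraction of stored strain released per event
gain = 2.0e-5                   # maps stored strain to annual event probability

strain, events = 0.0, []
for t in range(steps):
    strain += load
    if rng.random() < min(gain * strain, 1.0):
        events.append(t)
        strain *= 1.0 - release_frac   # partial release: probability does not reset to zero

ie = np.diff(events)
print(f"{len(events)} events, mean interval = {ie.mean():.0f} yr, "
      f"CoV = {ie.std() / ie.mean():.2f}")
```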

  18. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of

  19. Prospective testing of Coulomb short-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily, and allows daily updates of the models. However, a lot can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be produced by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of

  20. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  1. Larger earthquakes recur more periodically: New insights in the megathrust earthquake cycle from lacustrine turbidite records in south-central Chile

    NASA Astrophysics Data System (ADS)

    Moernaut, J.; Van Daele, M.; Fontijn, K.; Heirman, K.; Kempf, P.; Pino, M.; Valdebenito, G.; Urrutia, R.; Strasser, M.; De Batist, M.

    2018-01-01

    Historical and paleoseismic records in south-central Chile indicate that giant earthquakes on the subduction megathrust - such as the AD 1960 (Mw 9.5) event - recur on average every ∼300 yr. Based on geodetic calculations of the interseismic moment accumulation since AD 1960, it was postulated that the area already has the potential for a Mw 8 earthquake. However, to estimate the probability of such a great earthquake taking place in the short term, one needs to frame this hypothesis within the long-term recurrence pattern of megathrust earthquakes in south-central Chile. Here we present two long lacustrine records, comprising up to 35 earthquake-triggered turbidites over the last 4,800 yr. Calibration of turbidite extent with historical earthquake intensity reveals a different macroseismic intensity threshold (≥ VII½ vs. ≥ VI½) for the generation of turbidites at the coring sites. The strongest earthquakes (≥ VII½) have longer recurrence intervals (292 ± 93 yr) than earthquakes with intensity ≥ VI½ (139 ± 69 yr). Moreover, distribution fitting and the coefficient of variation (CoV) of inter-event times indicate that the stronger earthquakes recur in a more periodic way (CoV: 0.32 vs. 0.5). Regional correlation of our multi-threshold shaking records with coastal paleoseismic data of complementary nature (tsunami, coseismic subsidence) suggests that the intensity ≥ VII½ events repeatedly ruptured the same part of the megathrust over a distance of at least ∼300 km and can be assigned to Mw ≥ 8.6. We hypothesize that a zone of high plate locking - identified by geodetic studies and large slip in AD 1960 - acts as a dominant regional asperity, on which elastic strain builds up over several centuries and mostly gets released in quasi-periodic great and giant earthquakes. Our paleo-records indicate that Poissonian recurrence models are inadequate to describe large megathrust earthquake recurrence in south-central Chile. Moreover, they show an enhanced
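
    The dispersion statistic used above is easy to reproduce. This sketch (synthetic event times built from the quoted 292 ± 93 yr recurrence, plus a memoryless comparison set) computes the coefficient of variation of inter-event times; values near 1 indicate Poisson-like behavior and values well below 1 indicate quasi-periodic recurrence.

```python
import numpy as np

def cov(times):
    ie = np.diff(np.sort(times))
    return ie.std(ddof=1) / ie.mean()

rng = np.random.default_rng(8)
quasi_periodic = np.cumsum(rng.normal(292.0, 93.0, 16))  # mimics the 292 +/- 93 yr events
poisson_like = rng.uniform(0.0, 4800.0, 34)              # memoryless comparison catalog
print(f"CoV quasi-periodic ~ {cov(quasi_periodic):.2f}, "
      f"CoV Poisson-like ~ {cov(poisson_like):.2f}")
```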

  2. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    NASA Astrophysics Data System (ADS)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated catalog of earthquakes of magnitude Mw ≥ 6.0 that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its parameters. The first is the probability that the seismic energy (e × 10²⁰ ergs) expected to be released in the future earthquake exceeds a given level of seismic energy (E × 10²⁰ ergs); the second is the probability that the seismic energy expected to be released per year (a × 10²⁰ ergs/year) exceeds a given level of seismic energy per year (A × 10²⁰ ergs/year). The log-likelihood (ln L) was also estimated for each of the four probability distribution models; a higher ln L indicates a better-fitting model, a lower ln L a worse one. The time of the future earthquake is forecast by dividing the total seismic energy expected to be released in the future earthquake by the seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), 13 April 2016 Myanmar earthquake (M 6.9) and 24 August 2016 Myanmar earthquake (M 6.8) lie in zones Z.12, Z.16 and Z.15, respectively, all of which are identified seismic source zones in the study area, indicating that the proposed techniques and models yield good forecasting accuracy.
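
    The model-comparison step described above can be sketched with scipy.stats: fit each candidate distribution to a set of inter-event times by maximum likelihood and rank the fits by ln L. The inter-event times below are invented placeholders, not the 1737-2015 catalog; note that SciPy implements the log-logistic distribution under the name `fisk`.

      import numpy as np
      from scipy import stats

      intervals = np.array([4.1, 7.3, 2.8, 11.6, 5.9, 9.2, 3.4, 6.7, 8.8, 5.1])  # years (hypothetical)

      candidates = {
          "Gamma": stats.gamma,
          "Lognormal": stats.lognorm,
          "Weibull": stats.weibull_min,
          "Log-logistic": stats.fisk,  # SciPy's name for the log-logistic
      }

      for name, dist in candidates.items():
          params = dist.fit(intervals, floc=0)           # fix the location at zero
          lnL = np.sum(dist.logpdf(intervals, *params))  # higher ln L = better fit
          print(f"{name:>12s}: ln L = {lnL:7.2f}")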

  3. Probabilistic Tsunami Hazard Assessment along Nankai Trough (1) An assessment based on the information of the forthcoming earthquake that the Earthquake Research Committee (2013) evaluated

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.

    2015-12-01

    The Earthquake Research Committee (ERC)/HERP, Government of Japan (2013) revised their long-term evaluation of the forthcoming large earthquake along the Nankai Trough: the next earthquake is estimated at M8 to 9 class, and the probability (P30) that it will occur within the next 30 years (from Jan. 1, 2013) is 60% to 70%. In this study, we assess tsunami hazards (maximum coastal tsunami heights) in the near future from the next earthquake along the Nankai Trough with a probabilistic approach, on the basis of the ERC (2013) report. The probabilistic tsunami hazard assessment we applied is as follows. (1) Characterized earthquake fault models (CEFMs) are constructed on each of the 15 hypothetical source areas (HSA) that ERC (2013) defined; the characterization rule follows Toyama et al. (2015, JpGU), yielding a total of 1441 CEFMs. (2) We calculate tsunamis due to the CEFMs by solving the nonlinear, finite-amplitude, long-wave equations, with advection and bottom-friction terms, by a finite-difference method; run-up computation on land is included. (3) A time-predictable model gives the recurrence interval of the present seismic cycle as T = 88.2 years (ERC, 2013); we fix P30 = 67% by applying a renewal process based on the BPT distribution with mean T and aperiodicity alpha = 0.24. (4) We divide the probability P30 into P30(i) for the i-th subgroup, consisting of the earthquakes occurring in each of the 15 HSA, following a probability re-distribution concept (ERC, 2014); each earthquake (CEFM) in the i-th subgroup is then assigned a probability P30(i)/N, where N is the number of CEFMs in the subgroup. Note that this re-distribution of probability is only tentative, because present seismology cannot constrain it sufficiently; an epistemic logic-tree approach may be required in the future. (5) We synthesize tsunami hazard curves at every evaluation point on the coasts by integrating the information about 30-year occurrence …
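
    Step (3) can be reproduced in outline: a BPT distribution with mean T and aperiodicity alpha is an inverse-Gaussian distribution, which in SciPy's parameterization is invgauss(mu=alpha**2, scale=T/alpha**2). The sketch below assumes an elapsed time of 67 years since the last event (1946 to Jan. 1, 2013); that elapsed time is an illustrative assumption, not a value quoted in the abstract.

      from scipy import stats

      T, alpha = 88.2, 0.24  # recurrence interval (yr) and aperiodicity from the abstract
      bpt = stats.invgauss(mu=alpha**2, scale=T / alpha**2)  # mean = T, CoV = alpha

      def conditional_p(elapsed, window):
          """P(event in [elapsed, elapsed + window] | no event up to `elapsed`)."""
          return (bpt.cdf(elapsed + window) - bpt.cdf(elapsed)) / bpt.sf(elapsed)

      print(f"P30 = {conditional_p(67.0, 30.0):.2f}")  # on the order of the quoted 60-70%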

  4. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    PubMed

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models.

  6. Quantifying Earthquake Collapse Risk of Tall Steel Braced Frame Buildings Using Rupture-to-Rafters Simulations

    NASA Astrophysics Data System (ADS)

    Mourhatch, Ramses

    This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an artificial correction-free approach to broadband ground motion simulation. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis. As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes, illustrating the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California. Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from the kinematic source models using the spectral element method. Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions, are conducted. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with …
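
    The band-splitting idea behind such hybrid broadband synthesis can be sketched as follows: low-pass the long-period synthetic and high-pass the short-period synthetic about the common 2 s (0.5 Hz) crossover, then sum. The traces, filter order and sampling below are placeholders, not the thesis pipeline.

      import numpy as np
      from scipy import signal

      dt = 0.01  # sample interval (s), assumed
      fc = 0.5   # crossover frequency (Hz) = 1 / (2 s)
      t = np.arange(0.0, 60.0, dt)

      lowband = np.sin(2 * np.pi * 0.1 * t)  # stand-in for a spectral element synthetic
      highband = 0.3 * np.random.default_rng(0).standard_normal(t.size)  # stand-in EGF synthetic

      b_lo, a_lo = signal.butter(4, fc, btype="low", fs=1.0 / dt)
      b_hi, a_hi = signal.butter(4, fc, btype="high", fs=1.0 / dt)

      # Zero-phase filtering keeps the two bands aligned before summation.
      broadband = (signal.filtfilt(b_lo, a_lo, lowband)
                   + signal.filtfilt(b_hi, a_hi, highband))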

  7. A Poisson method application to the assessment of the earthquake hazard in the North Anatolian Fault Zone, Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Türker, Tuğba, E-mail: tturker@ktu.edu.tr; Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr

    The North Anatolian Fault (NAF) is one of the most important strike-slip fault zones in the world and lies in a region of very high seismic activity; very large earthquakes have been observed on the NAFZ from the past to the present. The aims of this study are to estimate the parameters of the Gutenberg-Richter relationship (a and b values) and, taking these parameters into account, to examine earthquakes between 1900 and 2015 in 10 different seismic source regions of the NAFZ; occurrence probabilities and return periods of earthquakes in the fault zone are then estimated for the coming years, and the earthquake hazard of the NAFZ is assessed with a Poisson method. Region 2 (the Marmara Region), where a large earthquake is expected in the coming years, experienced its largest earthquakes in the historical period only; no large earthquake has been observed there in the instrumental period. Two historical earthquakes (1766, MS = 7.3 and 1897, MS = 7.0) are included for Region 2. For the 10 seismic source regions, the cumulative number-magnitude relationships are determined and the a and b parameters estimated from the Gutenberg-Richter equation log N = a - bM. A homogeneous earthquake catalog for MS magnitudes equal to or larger than 4.0 is used for the period between 1900 and 2015; it was compiled from the International Seismological Centre (ISC) and the Boğaziçi University Kandilli Observatory and Earthquake Research Institute (KOERI), with data from 1900 to 1974 taken from KOERI and ISC, and from 1974 to 2015 from KOERI. The probabilities of earthquake occurrence are estimated for the next 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 years in the 10 seismic source regions. The highest occurrence probabilities in the coming years are estimated for the region Tokat-Erzincan (Region 9 …
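
    A minimal sketch of the two computational steps named above, using invented counts rather than the ISC/KOERI catalog: fit log N = a - bM to the cumulative annual rates by least squares, then convert the implied rate of M >= 7.0 events into Poisson occurrence probabilities for the next 10 to 100 years.

      import numpy as np

      mags = np.arange(4.0, 7.5, 0.5)
      counts = np.array([800, 420, 210, 95, 40, 18, 7])  # cumulative counts (hypothetical)
      years_of_catalog = 115.0                           # 1900-2015

      slope, intercept = np.polyfit(mags, np.log10(counts / years_of_catalog), 1)
      a, b = intercept, -slope                           # Gutenberg-Richter convention
      print(f"a = {a:.2f}, b = {b:.2f}")

      M0 = 7.0
      annual_rate = 10 ** (a - b * M0)
      for t in range(10, 101, 10):
          print(f"P(M >= {M0}, next {t:3d} yr) = {1 - np.exp(-annual_rate * t):.2f}")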

  8. Research on response spectrum of dam based on scenario earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoliang; Zhang, Yushan

    2017-10-01

    Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. First, the potential seismic source with the greatest contribution to the site hazard is identified on the basis of the results of probabilistic seismic hazard analysis (PSHA). Second, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and the historical earthquakes of that potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum based on the scenario earthquake is lower than the probability-consistent response spectrum obtained by the PSHA method. The empirical analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods; it is easy for practitioners to accept and provides a basis for the seismic design of hydraulic engineering projects.

  9. Tidal controls on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, S.; Yabe, S.; Tanaka, Y.

    2016-12-01

    The possibility that tidal stresses can trigger earthquakes is a long-standing issue in seismology. Except in some special cases, a causal relationship between seismicity and the phase of tidal stress has been rejected on the basis of studies using many small events. However, recently discovered deep tectonic tremors are highly sensitive to tidal stress levels, with the relationship being governed by a nonlinear law according to which the tremor rate increases exponentially with increasing stress; thus, slow deformation (and the probability of earthquakes) may be enhanced during periods of large tidal stress. Here, we show the influence of tidal stress on seismicity by calculating histories of tidal shear stress during the 2-week period before earthquakes. Very large earthquakes tend to occur near the time of maximum tidal stress, but this tendency is not obvious for small earthquakes. Rather, we found that tidal stress controls the earthquake size-frequency statistics: the fraction of large events increases (that is, the b-value of the Gutenberg-Richter relation decreases) as the tidal shear stress increases. This correlation is apparent in data from the global catalog and in relatively homogeneous regional catalogues of earthquakes in Japan. The dependence is also physically reasonable, considering the well-known relationship between stress and the b-value. Our findings indicate that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. This finding has clear implications for probabilistic earthquake forecasting.
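
    The b-value analysis described above is conventionally done with Aki's maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc), applied separately to events grouped by the tidal stress at their origin times. The sketch below uses synthetic catalogs with assumed b-values of 1.0 and 0.85 for the low- and high-stress groups; the actual analysis of course works on the global and Japanese catalogs.

      import numpy as np

      def b_value(mags, mc):
          """Aki (1965) MLE; for magnitudes binned at dm, replace mc by mc - dm/2."""
          m = np.asarray(mags)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - mc)

      rng = np.random.default_rng(1)
      # Gutenberg-Richter magnitudes above Mc = 2.0 are exponential with rate b*ln(10).
      low_stress = 2.0 + rng.exponential(1 / (1.00 * np.log(10)), 5000)
      high_stress = 2.0 + rng.exponential(1 / (0.85 * np.log(10)), 5000)

      print(f"b (low tidal stress)  = {b_value(low_stress, 2.0):.2f}")
      print(f"b (high tidal stress) = {b_value(high_stress, 2.0):.2f}")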

  10. Earthquake precursors: activation or quiescence?

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Holliday, James R.; Yoder, Mark; Sachs, Michael K.; Donnellan, Andrea; Turcotte, Donald L.; Tiampo, Kristy F.; Klein, William; Kellogg, Louise H.

    2011-10-01

    We discuss the long-standing question of whether the probability for large earthquake occurrence (magnitudes m > 6.0) is highest during time periods of smaller event activation, or highest during time periods of smaller event quiescence. The physics of the activation model is based on an idea from the theory of nucleation: that a small magnitude earthquake has a finite probability of growing into a large earthquake. The physics of the quiescence model is based on the idea that the occurrence of smaller earthquakes (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast models illustrating, respectively, activation and quiescence. The activation model assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no preference between the two mechanisms. Both models are in effect means of filtering the seismicity time series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of earthquakes, we show that of the two models, activation and quiescence, the latter appears to be the better model, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic earthquake model for California seismicity, Virtual California. This model includes not only earthquakes produced from increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast models to the simulated data, we come to the opposite conclusion: here, the activation forecast model is preferred to the quiescence model, presumably because the BASS component of the model is essentially a model for activated seismicity. These …

  11. Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence

    USGS Publications Warehouse

    Beeler, N.M.; Lockner, D.A.

    2003-01-01

    We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ∼1 year.
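
    The ">13,000 earthquakes" detectability estimate refers to tests of phase correlation; a standard choice (named here as an illustration, not taken from the paper) is the Schuster test, in which each event is assigned the tidal phase at its origin time and the chance probability of the observed phase clustering under a uniform null is p = exp(-R^2/N).

      import numpy as np

      def schuster_p(phases_rad):
          """Schuster test: small p indicates non-uniform (tidally correlated) phases."""
          n = len(phases_rad)
          r2 = np.sum(np.cos(phases_rad)) ** 2 + np.sum(np.sin(phases_rad)) ** 2
          return np.exp(-r2 / n)

      rng = np.random.default_rng(2)
      phases = rng.uniform(0.0, 2 * np.pi, 13000)  # simulated, uncorrelated phases
      print(f"p = {schuster_p(phases):.2f}")       # large p: no detectable correlation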

  12. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault, conducted by Bay Area County Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos, designed for school classrooms, promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion …

  13. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in a format suitable for testing in community-based earthquake predictability experiments: the Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real-time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation builds on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, used more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.
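
    For orientation, the temporal conditional intensity underlying ETES/ETAS-type clustering models has the form lambda(t) = mu + sum_i K * 10**(alpha*(m_i - m0)) / (t - t_i + c)**p. The sketch below evaluates this rate for a toy catalog; all parameter values are illustrative assumptions, not the calibrated California values.

      import numpy as np

      def etas_rate(t, event_times, event_mags, mu=0.2, K=0.02,
                    alpha=1.0, c=0.01, p=1.1, m0=2.0):
          """Temporal ETAS conditional intensity (events per day)."""
          past = event_times < t
          dt = t - event_times[past]
          return mu + np.sum(K * 10 ** (alpha * (event_mags[past] - m0)) / (dt + c) ** p)

      times = np.array([0.0, 0.5, 2.0])  # days (toy catalog)
      mags = np.array([4.5, 3.1, 5.2])
      print(f"rate 0.1 day after the M5.2: {etas_rate(2.1, times, mags):.1f} events/day")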

  14. Width of the Surface Rupture Zone for Thrust Earthquakes and Implications for Earthquake Fault Zoning: Chi-Chi 1999 and Wenchuan 2008 Earthquakes

    NASA Astrophysics Data System (ADS)

    Boncio, P.; Caldarella, M.

    2016-12-01

    We analyze the zones of coseismic surface faulting along thrust faults, with the aim of defining the most appropriate criteria for zoning the Surface Fault Rupture Hazard (SFRH) along thrust faults. Normal and strike-slip faults have been studied in detail in the past, while thrust faults have not been studied with comparable attention. We analyze the 1999 Chi-Chi, Taiwan (Mw 7.6) and 2008 Wenchuan, China (Mw 7.9) earthquakes. Several different types of coseismic fault scarps characterize the two earthquakes, depending on the topography, fault geometry and near-surface materials. For both earthquakes, we collected from the literature, or measured in GIS-georeferenced published maps, data on the width of the coseismic rupture zone (WRZ). The frequency distribution of WRZ relative to the trace of the main fault shows that the surface ruptures occur mainly on and near the main fault; ruptures located away from the main fault occur mainly in the hanging wall. Where structural complexities are present (e.g., sharp bends, step-overs), WRZ is wider than for simple fault traces. We also fitted the WRZ dataset with probability density functions in order to define a criterion for removing outliers (e.g., by selecting the 90% or 95% probability level) and for defining the zone where the probability of SFRH is highest. This may help in sizing the zones of SFRH during seismic microzonation (SM) mapping. In order to shape zones of SFRH, a very detailed earthquake-geologic study of the fault is necessary. In the absence of such a detailed study, during basic (first-level) SM mapping, a width of 350-400 m seems to be recommended (95% probability). If the fault is carefully mapped (higher-level SM), one must consider that the highest SFRH is concentrated in a narrow, 50-m-wide zone that should be treated as a "fault-avoidance (or setback) zone". These fault zones should be asymmetric: the footwall-to-hanging-wall (FW:HW) ratio calculated here ranges from 1:5 to 1:3.
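
    The outlier-removal step described above, selecting the width that contains 90% or 95% of a fitted density, can be sketched as follows. The widths and the choice of a lognormal density are assumptions for illustration, not the paper's dataset or its preferred fit.

      import numpy as np
      from scipy import stats

      wrz = np.array([12, 35, 18, 60, 150, 25, 80, 40, 220, 15,
                      55, 95, 30, 330, 48, 70, 20, 110, 38, 65])  # metres (hypothetical)

      shape, loc, scale = stats.lognorm.fit(wrz, floc=0)
      for q in (0.90, 0.95):
          w = stats.lognorm.ppf(q, shape, loc, scale)
          print(f"{q:.0%} of ruptures fall within {w:.0f} m of the main fault trace")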

  15. Probability Assessment of Mega-thrust Earthquakes in Global Subduction Zones -from the View of Slip Deficit-

    NASA Astrophysics Data System (ADS)

    Ikuta, R.; Mitsui, Y.; Ando, M.

    2014-12-01

    We studied inter-plate slip history for about 100 years using earthquake catalogs. On the assumption that each earthquake has a stick-slip patch centered on its centroid, we regard the cumulative seismic slip around the centroid as representing the inter-plate dislocation. We evaluated the slips on the stick-slip patches of over-M5-class earthquakes prior to three recent mega-thrust earthquakes - the 2004 Sumatra (Mw 9.2), the 2010 Chile (Mw 8.8), and the 2011 Tohoku (Mw 9.0) events - in the regions around them. Compared with the plate convergence, the cumulative seismic slips before the mega-thrust events fall significantly short over large areas corresponding to the size of the mega-thrust events. We also examined cumulative seismic slips after the three other mega-thrust earthquakes that occurred in the past 100 years: the 1952 Kamchatka (Mw 9.0), the 1960 Chile (Mw 9.5), and the 1964 Alaska (Mw 9.2) events. The cumulative slips have remained significantly short in and around the focal areas after their occurrence. These results should reflect the persistence of strongly and/or extensively coupled inter-plate areas capable of mega-thrust earthquakes. We applied the same procedure to global subduction zones and found that 21 regions, including the focal areas of the above mega-thrust earthquakes, show slip deficit over areas large enough to correspond to M9-class earthquakes. Considering that at least six M9-class earthquakes occurred in the past 100 years and that each recurrence interval should be 500-1000 years, it would not be surprising if five to ten times the already known regions (30 to 60 regions) were capable of M9-class earthquakes. The 21 regions identified as possible M9-class focal areas in our study number fewer than 5 to 10 times the known six; moreover, some of these regions may be divided into a few M9-class focal areas, because they extend over a much larger area than a typical M9-class focal area.

  16. Tsunami hazard assessments with consideration of uncertain earthquake characteristics

    NASA Astrophysics Data System (ADS)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, an uncertainty propagation method must be adopted that determines tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology that improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, it generates consistent earthquake slip samples by means of a Karhunen-Loeve (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016); we have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions, and, unlike that reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: tsunamis generated at the site of the 2014 Chilean earthquake, with earthquake samples of expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations. We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for …
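
    A compact sketch of K-L sampling on a 1-D fault profile follows; the exponential covariance, correlation length, truncation order, and lognormal translation are illustrative assumptions (the study itself treats 2-D, non-rectangular rupture areas and preserves the target marginal without post-sampling scaling).

      import numpy as np

      n, L, corr_len = 100, 200.0, 40.0  # subfaults, fault length (km), correlation length (km)
      x = np.linspace(0.0, L, n)
      C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance of the Gaussian field

      eigval, eigvec = np.linalg.eigh(C)  # eigenvalues in ascending order
      k = 10                              # keep the 10 leading K-L modes
      lam, phi = eigval[-k:], eigvec[:, -k:]

      rng = np.random.default_rng(3)
      gauss = phi @ (np.sqrt(lam) * rng.standard_normal(k))  # zero-mean Gaussian sample

      # Translation process: map the Gaussian sample to a lognormal marginal so
      # that slip stays positive (mean slip of 5 m is arbitrary here).
      slip = 5.0 * np.exp(0.5 * gauss - 0.5**2 / 2)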

  17. Geological and historical evidence of irregular recurrent earthquakes in Japan.

    PubMed

    Satake, Kenji

    2015-10-28

    Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres, releasing strains accumulated over decades to centuries of plate motion. Assuming a simple 'characteristic earthquake' model in which similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies of past earthquakes, including geological traces of giant (M∼9) earthquakes, indicate a variety of sizes and recurrence intervals of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that the average recurrence interval of great earthquakes is approximately 100 years, but tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake, with an interval of approximately 600 years, is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes, with recurrence intervals of a few hundred years and a few thousand years, had been recognized, but studies show that the three most recent Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model to the long-term forecast, and several approaches, such as the use of geological data for the evaluation of future earthquake probabilities or the estimation of the maximum earthquake size in each subduction zone, are being pursued by government committees. © 2015 The Author(s).

  18. The Value, Protocols, and Scientific Ethics of Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Jordan, Thomas H.

    2013-04-01

    Earthquakes are different from other common natural hazards because precursory signals diagnostic of the magnitude, location, and time of impending seismic events have not yet been found. Consequently, the short-term, localized prediction of large earthquakes at high probabilities with low error rates (false alarms and failures-to-predict) is not yet feasible. An alternative is short-term probabilistic forecasting based on empirical statistical models of seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains up to 1000 relative to long-term forecasts. The value of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing operational forecasting protocols in this sort of "low-probability environment." This paper will explore the complex interrelations among the valuation of low-probability earthquake forecasting, which must account for social intangibles; the protocols of operational forecasting, which must factor in large uncertainties; and the ethics that guide scientists as participants in the forecasting process, who must honor scientific principles without doing harm. Earthquake forecasts possess no intrinsic societal value; rather, they acquire value through their ability to influence decisions made by users seeking to mitigate seismic risk and improve community resilience to earthquake disasters. According to the recommendations of the International Commission on Earthquake Forecasting (www.annalsofgeophysics.eu/index.php/annals/article/view/5350), operational forecasting systems should appropriately separate the hazard-estimation role of scientists from the decision-making role of civil protection authorities and individuals. They should …

  19. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  2. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities, which has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and …

  3. Analysis of pre-earthquake ionospheric anomalies before the global M = 7.0+ earthquakes in 2010

    NASA Astrophysics Data System (ADS)

    Yao, Y. B.; Chen, P.; Zhang, S.; Chen, J. J.; Yan, F.; Peng, W. F.

    2012-03-01

    The pre-earthquake ionospheric anomalies that occurred before the global M = 7.0+ earthquakes in 2010 are investigated using the total electron content (TEC) from the global ionosphere map (GIM). We analyze the possible causes of the ionospheric anomalies based on the space environment and magnetic field status. Results show that some anomalies are related to the earthquakes. By analyzing the time of occurrence, duration, and spatial distribution of these ionospheric anomalies, a number of new conclusions are drawn: earthquake-related ionospheric anomalies are not bound to appear; both positive and negative anomalies are likely to occur; and the earthquake-related ionospheric anomalies discussed in the current study occurred 0-2 days before the associated earthquakes and in the afternoon to sunset (i.e. between 12:00 and 20:00 local time). Pre-earthquake ionospheric anomalies occur mainly in areas near the epicenter. However, the maximum affected area in the ionosphere does not coincide with the vertical projection of the epicenter of the subsequent earthquake, and the directions of the deviations from the epicenters do not follow a fixed rule. Corresponding ionospheric effects can also be observed in the magnetically conjugate region, although the probability of anomalies appearing there, and their extent, are smaller than for the anomalies near the epicenter. Deep-focus earthquakes may also exhibit very significant pre-earthquake ionospheric anomalies.

  4. Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering

    USGS Publications Warehouse

    Stein, R.S.; Barka, A.A.; Dieterich, J.H.

    1997-01-01

    Ten M ≥ 6.7 earthquakes ruptured 1000 km of the North Anatolian fault (Turkey) during 1939-1992, providing an unsurpassed opportunity to study how one large shock sets up the next. We use the mapped surface slip and fault geometry to infer the transfer of stress throughout the sequence. Calculations of the change in Coulomb failure stress reveal that nine out of 10 ruptures were brought closer to failure by the preceding shocks, typically by 1-10 bar, equivalent to 3-30 years of secular stressing. We translate the calculated stress changes into earthquake probability gains using an earthquake-nucleation constitutive relation, which includes both permanent and transient effects of the sudden stress changes. The transient effects of the stress changes dominate during the mean 10 yr period between triggering and subsequent rupturing shocks in the Anatolia sequence. The stress changes result in an average three-fold gain in the net earthquake probability during the decade after each event. Stress is calculated to be high today at several isolated sites along the fault. During the next 30 years, we estimate a 15 per cent probability of a M ≥ 6.7 earthquake east of the major eastern centre of Erzincan, and a 12 per cent probability for a large event south of the major western port city of Izmit. Such stress-based probability calculations may thus be useful to assess and update earthquake hazards elsewhere.
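
    The quantity transferred along the fault in this analysis is the change in Coulomb failure stress on a receiver fault, dCFS = d(tau) + mu' * d(sigma_n), with shear stress positive in the slip direction, normal stress positive for unclamping, and mu' an effective friction coefficient (the value of 0.4 below is an assumed, commonly used choice, not the paper's). A two-line sketch with illustrative values in the 1-10 bar range quoted above:

      def coulomb_stress_change(d_shear_bar, d_normal_bar, mu_eff=0.4):
          """Coulomb failure stress change (bar); mu_eff is an assumed effective friction."""
          return d_shear_bar + mu_eff * d_normal_bar

      # Example: 0.8 bar of shear loading plus 0.5 bar of unclamping.
      print(f"dCFS = {coulomb_stress_change(0.8, 0.5):.2f} bar")  # -> 1.00 bar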

  5. Earthquakes on Your Dinner Table

    NASA Astrophysics Data System (ADS)

    Alexeev, N. A.; Tape, C.; Alexeev, V. A.

    2016-12-01

    Earthquakes have interesting physics applicable to other phenomena, like the propagation of waves; they also affect human lives. This study focused on three questions: how depth, distance from the epicenter, and ground hardness affect earthquake strength. The experimental setup consisted of a gelatin slab simulating the crust. The slab was hit with a weight and the earthquake amplitude was measured. It was found that earthquake amplitude was larger when the epicenter was deeper, which contradicts observations and probably was an artifact of the design. Earthquake strength was inversely proportional to the distance from the epicenter, which generally follows reality. Soft and medium jello were implanted into hard jello, and earthquakes were found to be stronger in softer jello, a result of resonant amplification in soft ground. Similar results are found in Minto Flats, where earthquakes are stronger and last longer than in the nearby hills. Earthquake waveforms from Minto Flats showed that the oscillations there have longer periods compared to the nearby hills with harder soil. Two gelatin pieces with identical shapes and different hardness were vibrated on a platform at varying frequencies to demonstrate that their resonant frequencies are statistically different. This phenomenon also occurs in Yukon Flats.

  6. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at evaluating quantitatively earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models are used: the Epidemic-Type Aftershock Sequence (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).

  7. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
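
    The tapered power-law size distribution mentioned above can be written, in seismic moment M, with survivor function G(M) = (Mt/M)**beta * exp((Mt - M)/Mc), where Mt is the threshold moment, Mc the corner moment, and beta the power-law slope. The parameter values in the sketch are generic illustrative choices, not those estimated for Atlantic sources.

      import numpy as np

      def tapered_gr_sf(M, Mt=1e17, Mc=1e21, beta=0.66):
          """Survivor function of the tapered Gutenberg-Richter law (moments in N*m)."""
          return (Mt / M) ** beta * np.exp((Mt - M) / Mc)

      for mw in (7.0, 8.0, 9.0):
          M = 10 ** (1.5 * mw + 9.05)  # Hanks-Kanamori moment from Mw
          print(f"Mw {mw}: P(size exceeded | event above threshold) = {tapered_gr_sf(M):.2e}")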

  8. The Psychology of Hazard Risk Perception

    NASA Astrophysics Data System (ADS)

    Thompson, K. F.

    2012-12-01

    A critical step in preparing for natural hazards is understanding the risk: what is the hazard, its likelihood and range of impacts, and what are the vulnerabilities of the community? Any hazard forecast naturally includes a degree of uncertainty, and often these uncertainties are expressed in terms of probabilities. There is often a strong understanding of probability among the physical scientists and emergency managers who create hazard forecasts and issue watches, warnings, and evacuation orders, and often such experts expect similar levels of risk fluency among the general public—indeed, the Working Group on California Earthquake Probabilities (WGCEP) states in the introduction to its earthquake rupture forecast maps that "In daily living, people are used to making decisions based on probabilities—from the flip of a coin (50% probability of heads) to weather forecasts (such as a 30% chance of rain) to the annual chance of being killed by lightning (about 0.0003%)." [1] However, cognitive psychologists have shown in numerous studies [see, e.g., 2-5] that the WGCEP's expectation of probability literacy is inaccurate. People neglect, distort, misjudge, or misuse probability information, even when given strong guidelines about the meaning of numerical or verbally stated probabilities [6]. Even the most ubiquitous of probabilistic information—weather forecasts—are systematically misinterpreted [7]. So while disaster risk analysis and assessment is undoubtedly a critical step in public preparedness and hazard mitigation plans, it is equally important that scientists and practitioners understand the common psychological barriers to accurate probability perception before they attempt to communicate hazard risks to the public. This paper discusses several common, systematic distortions in probability perception and use, including: the influence of personal experience on use of statistical information; temporal discounting and construal level theory; the effect …

  9. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability may therefore be the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and stimulating perspectives, there remain problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict whether and how the induced stress perturbations modify the ratio of small to large magnitude earthquakes: we cannot distinguish between a change in this ratio in favor of small events or of large-magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to …

  10. FORESHOCKS AND TIME-DEPENDENT EARTHQUAKE HAZARD ASSESSMENT IN SOUTHERN CALIFORNIA.

    USGS Publications Warehouse

    Jones, Lucile M.

    1985-01-01

    The probability that an earthquake in southern California (M ≥ 3.0) will be followed by an earthquake of larger magnitude within 5 days and 10 km (i.e., will be a foreshock) is 6 ± 0.5 per cent (1 S.D.), and is not significantly dependent on the magnitude of the possible foreshock between M = 3 and M = 5. The probability that an earthquake will be followed by an M ≥ 5.0 main shock, however, increases with the magnitude of the foreshock, from less than 1 per cent at M ≥ 3 to 6.5 ± 2.5 per cent (1 S.D.) at M ≥ 5. The main shock will most likely occur in the first hour after the foreshock, and the probability that a main shock will occur in the first hour decreases with elapsed time from the occurrence of the possible foreshock by approximately the inverse of time. Thus, the occurrence of an earthquake of M ≥ 3.0 in southern California increases the earthquake hazard within a small space-time window several orders of magnitude above the normal background level.

  11. Hotspots, Lifelines, and the SAFRR Haywired Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Ratliff, J. L.; Porter, K.

    2014-12-01

    Though California has experienced many large earthquakes (San Francisco, 1906; Loma Prieta, 1989; Northridge, 1994), the San Francisco Bay Area has not had a damaging earthquake for 25 years. Earthquake risk and surging reliance on smartphones and the Internet to handle everyday tasks raise the question: is an increasingly technology-reliant Bay Area prepared for potential infrastructure impacts caused by a major earthquake? How will a major earthquake on the Hayward Fault affect lifelines (roads, power, water, communication, etc.)? The U.S. Geological Survey Science Application for Risk Reduction (SAFRR) program's Haywired disaster scenario, a hypothetical two-year earthquake sequence triggered by a M7.05 mainshock on the Hayward Fault, addresses these and other questions. We explore four geographic aspects of lifeline damage from earthquakes: (1) geographic lifeline concentrations, (2) areas where lifelines pass through high shaking or potential ground-failure zones, (3) areas with diminished lifeline service demand due to severe building damage, and (4) areas with increased lifeline service demand due to displaced residents and businesses. Potential mainshock lifeline vulnerability and spatial demand changes will be discerned by overlaying earthquake shaking, liquefaction probability, and landslide probability damage thresholds on lifeline concentrations and on large-capacity shelters. Intersections of high hazard levels and lifeline clusters represent potential lifeline susceptibility hotspots. We will also analyze possible temporal vulnerability and demand changes using an aftershock shaking threshold. The results of this analysis will inform regional lifeline resilience initiatives and response and recovery planning, as well as reveal potential redundancies and weaknesses for Bay Area lifelines. Identified spatial and temporal hotspots can provide stakeholders with a reference for possible systemic vulnerability resulting from an earthquake sequence.

  12. A 30-year history of earthquake crisis communication in California and lessons for the future

    NASA Astrophysics Data System (ADS)

    Jones, L.

    2015-12-01

    The first statement from the US Geological Survey to the California Office of Emergency Services quantifying the probability of a possible future earthquake was made in October 1985, about the probability (approximately 5%) that a M4.7 earthquake located directly beneath the Coronado Bay Bridge in San Diego would be a foreshock to a larger earthquake. In the next 30 years, publication of aftershock advisories has become routine, and formal statements about the probability of a larger event have been developed in collaboration with the California Earthquake Prediction Evaluation Council (CEPEC) and sent to CalOES more than a dozen times. Most of these were subsequently released to the public. These communications have spanned a variety of approaches, with and without quantification of the probabilities, and using different ways to express the spatial extent and the magnitude distribution of possible future events. The USGS is re-examining its approach to aftershock probability statements and to operational earthquake forecasting with the goal of creating pre-vetted automated statements that can be released quickly after significant earthquakes. All of the previous formal advisories were written during the earthquake crisis. The time to create and release a statement became shorter with experience after the first public advisory (for the 1988 Lake Elsman earthquake), which was released 18 hours after the triggering event, but the process was never completed in less than 2 hours. As was done for the Parkfield experiment, the process will be reviewed by CEPEC and NEPEC (National Earthquake Prediction Evaluation Council) so that statements can be sent to the public automatically. This talk will review the advisories, the variations in wording, and the public response, and compare these with social science research about successful crisis communication, to create recommendations for future advisories.

  13. The Ust'-Kamchatsk "Tsunami Earthquake" of 13 April 1923: A Slow Event and a Probable Landslide

    NASA Astrophysics Data System (ADS)

    Salaree, A.; Okal, E.

    2016-12-01

    Among the "tsunami earthquakes" having generated a larger tsunami than expected from their seismic magnitudes, the large aftershock of the great Kamchatka earthquake of 1923 remains an intriguing puzzle, since waves reaching 11 m were reported by Troshin & Diagilev (1926) in the vicinity of the mouth of the Kamchatka River near the coastal settlement of Ust'-Kamchatsk. Our relocation attempts based on ISS-listed travel times would put the earthquake epicenter in Ozernoye Bay, north of the Kamchatka Peninsula, suggesting that it was triggered by stress transfer beyond the plate junction at the Kamchatka corner. Mantle magnitudes obtained from Golitsyn records at De Bilt suggest a long-period moment of 2-3 × 10^27 dyn·cm, with a strong increase of moment with period, suggestive of a slow source. However, tsunami simulations based on resulting models of the earthquake source, both north and south of the Kamchatka Peninsula, fail to account for the reported run-up values. On the other hand, the model of an underwater landslide, which would have been triggered by the earthquake, can explain the general amplitude and distribution of reported run-up. This model is supported by the presence of steep bathymetry offshore of Ust'-Kamchatsk, near the area of discharge of the Kamchatka River, and the abundance of subaerial landslides along the nearby coasts of the Kamchatka Peninsula. While the scarcity of scientific data for this ancient earthquake, and of historical reports in a sparsely populated area, keep this interpretation tentative, this study contributes to improving our knowledge of the challenging family of "tsunami earthquakes".

  14. Magnitude and intensity: Measures of earthquake size and severity

    USGS Publications Warehouse

    Spall, Henry

    1982-01-01

    Earthquakes can be measured in terms of either the amount of energy they release (magnitude) or the degree of ground shaking they cause at a particular locality (intensity). Although magnitude and intensity are basically different measures of an earthquake, they are frequently confused by the public and by news reports of earthquakes. Part of the confusion probably arises from the general similarity of the scales used to express these quantities. The various magnitude scales represent logarithmic expressions of the energy released by an earthquake. Magnitude is calculated from the record made by an earthquake on a calibrated seismograph. There are no upper or lower limits to magnitude, although no measured earthquakes have exceeded magnitude 8.9.
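
    The logarithmic character of magnitude scales can be illustrated with the classical Gutenberg-Richter energy relation, log10 E(J) = 4.8 + 1.5 M, an empirical rule for radiated seismic energy. The short sketch below uses it only as an illustration of the scale, not as part of the article above.

    ```python
    # Sketch of the logarithmic scale, using the classical Gutenberg-Richter
    # radiated-energy relation log10 E(J) = 4.8 + 1.5*M (illustrative use).

    def radiated_energy_joules(m):
        return 10 ** (4.8 + 1.5 * m)

    for m in (5.0, 6.0, 7.0, 8.0):
        print(f"M{m:.0f}: {radiated_energy_joules(m):.2e} J")
    # Each whole magnitude step is a factor of 10**1.5 (about 31.6x) in energy.
    ```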

  15. The California Earthquake Advisory Plan: A history

    USGS Publications Warehouse

    Roeloffs, Evelyn A.; Goltz, James D.

    2017-01-01

    Since 1985, the California Office of Emergency Services (Cal OES) has issued advisory statements to local jurisdictions and the public following seismic activity that scientists on the California Earthquake Prediction Evaluation Council view as indicating elevated probability of a larger earthquake in the same area during the next several days. These advisory statements are motivated by statistical studies showing that about 5% of moderate earthquakes in California are followed by larger events within a 10-km, five-day space-time window (Jones, 1985; Agnew and Jones, 1991; Reasenberg and Jones, 1994). Cal OES issued four earthquake advisories from 1985 to 1989. In October, 1990, the California Earthquake Advisory Plan formalized this practice, and six Cal OES Advisories have been issued since then. This article describes that protocol’s scientific basis and evolution.

  16. The physics of an earthquake

    NASA Astrophysics Data System (ADS)

    McCloskey, John

    2008-03-01

    The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.

  17. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
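
    The binned-rate format lends itself to a compact likelihood computation. A minimal sketch, assuming independent Poisson counts in each bin (the standard RELM assumption) with made-up rates and counts:

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Minimal sketch of the likelihood scoring idea: a forecast assigns an
    # expected rate to each space-magnitude bin, and the joint log-likelihood
    # of the observed bin counts under independent Poisson distributions
    # scores the model. The rates and counts below are made up.

    forecast_rates = np.array([0.5, 0.1, 0.02, 1.3])  # expected events/bin
    observed_counts = np.array([1, 0, 0, 2])          # events that occurred

    log_like_a = poisson.logpmf(observed_counts, forecast_rates).sum()
    print(f"model A log-likelihood = {log_like_a:.3f}")

    # Pairwise comparison (the idea behind the RELM R-test) reduces to the
    # difference of log-likelihood scores over the same bins.
    rates_b = np.array([0.8, 0.05, 0.01, 1.0])
    log_like_b = poisson.logpmf(observed_counts, rates_b).sum()
    print(f"A - B = {log_like_a - log_like_b:.3f}")
    ```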

  18. Predecessors of the giant 1960 Chile earthquake

    USGS Publications Warehouse

    Cisternas, M.; Atwater, B.F.; Torrejon, F.; Sawai, Y.; Machuca, G.; Lagos, M.; Eipert, A.; Youlton, C.; Salgado, I.; Kamataki, T.; Shishikura, M.; Rajendran, C.P.; Malik, J.K.; Rizal, Y.; Husni, M.

    2005-01-01

    It is commonly thought that the longer the time since the last earthquake, the larger the next earthquake's slip will be. But this logical predictor of earthquake size, unsuccessful for large earthquakes on a strike-slip fault, fails also with the giant 1960 Chile earthquake of magnitude 9.5 (ref. 3). Although the time since the preceding earthquake spanned 123 years (refs 4, 5), the estimated slip in 1960, which occurred on a fault between the Nazca and South American tectonic plates, equalled 250-350 years' worth of the plate motion. Thus the average interval between such giant earthquakes on this fault should span several centuries. Here we present evidence that such long intervals were indeed typical of the last two millennia. We use buried soils and sand layers as records of tectonic subsidence and tsunami inundation at an estuary midway along the 1960 rupture. In these records, the 1960 earthquake ended a recurrence interval that had begun almost four centuries before, with an earthquake documented by Spanish conquistadors in 1575. Two later earthquakes, in 1737 and 1837, produced little if any subsidence or tsunami at the estuary and they therefore probably left the fault partly loaded with accumulated plate motion that the 1960 earthquake then expended. © 2005 Nature Publishing Group.

  19. Predecessors of the giant 1960 Chile earthquake.

    PubMed

    Cisternas, Marco; Atwater, Brian F; Torrejón, Fernando; Sawai, Yuki; Machuca, Gonzalo; Lagos, Marcelo; Eipert, Annaliese; Youlton, Cristián; Salgado, Ignacio; Kamataki, Takanobu; Shishikura, Masanobu; Rajendran, C P; Malik, Javed K; Rizal, Yan; Husni, Muhammad

    2005-09-15

    It is commonly thought that the longer the time since the last earthquake, the larger the next earthquake's slip will be. But this logical predictor of earthquake size, unsuccessful for large earthquakes on a strike-slip fault, fails also with the giant 1960 Chile earthquake of magnitude 9.5 (ref. 3). Although the time since the preceding earthquake spanned 123 years (refs 4, 5), the estimated slip in 1960, which occurred on a fault between the Nazca and South American tectonic plates, equalled 250-350 years' worth of the plate motion. Thus the average interval between such giant earthquakes on this fault should span several centuries. Here we present evidence that such long intervals were indeed typical of the last two millennia. We use buried soils and sand layers as records of tectonic subsidence and tsunami inundation at an estuary midway along the 1960 rupture. In these records, the 1960 earthquake ended a recurrence interval that had begun almost four centuries before, with an earthquake documented by Spanish conquistadors in 1575. Two later earthquakes, in 1737 and 1837, produced little if any subsidence or tsunami at the estuary and they therefore probably left the fault partly loaded with accumulated plate motion that the 1960 earthquake then expended.

  20. Issues on the Japanese Earthquake Hazard Evaluation

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Fukushima, Y.; Sagiya, T.

    2013-12-01

    The 2011 Great East Japan Earthquake forced Japan to change its policy on countermeasures against earthquake disasters, including earthquake hazard evaluation. Before March 11, Japanese earthquake hazard evaluation was based on the history of earthquakes that repeatedly occur and on the characteristic earthquake model. The source region of an earthquake was identified and its occurrence history revealed; the conditional probability was then estimated using a renewal model. After the 2011 megathrust earthquake, however, the Japanese authorities changed the policy so that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. According to this policy, three important reports were issued during the following two years. First, the Central Disaster Management Council issued a new estimate of damage from a hypothetical Mw 9 earthquake along the Nankai trough during 2011 and 2012. The model predicts, as the maximum case, a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most areas of southwest Japan. Next, the Earthquake Research Council revised the long-term earthquake hazard evaluation of earthquakes along the Nankai trough in May 2013, which discarded the characteristic earthquake model and put much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, with current knowledge, it is hard to predict the occurrence of large earthquakes along the Nankai trough using present techniques, given the diversity of earthquake phenomena. These reports created sensations throughout the country, and local governments are struggling to prepare countermeasures. The reports comment on the large uncertainty in their evaluations near their ends, but are these messages transmitted properly to the public? Earthquake scientists, including the authors, are involved in

  1. Geochemical challenge to earthquake prediction.

    PubMed Central

    Wakita, H

    1996-01-01

    The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented. PMID:11607665

  2. The Mw 7.7 Bhuj earthquake: Global lessons for earthquake hazard in intra-plate regions

    USGS Publications Warehouse

    Schweig, E.; Gomberg, J.; Petersen, M.; Ellis, M.; Bodin, P.; Mayrose, L.; Rastogi, B.K.

    2003-01-01

    The Mw 7.7 Bhuj earthquake occurred in the Kachchh District of the State of Gujarat, India on 26 January 2001, and was one of the most damaging intraplate earthquakes ever recorded. This earthquake is in many ways similar to the three great New Madrid earthquakes that occurred in the central United States in 1811-1812. An Indo-US team is studying the similarities and differences of these sequences in order to learn lessons for earthquake hazard in intraplate regions. Herein we present some preliminary conclusions from that study. Both the Kutch and New Madrid regions have rift-type geotectonic settings. In both regions the strain rates are of the order of 10^-9/yr, and attenuation of seismic waves, as inferred from observations of intensity and liquefaction, is low. These strain rates predict recurrence intervals for Bhuj- or New Madrid-sized earthquakes of several thousand years or more. In contrast, intervals estimated from paleoseismic studies and from other independent data are significantly shorter, probably hundreds of years. Together these observations may suggest that earthquakes relax high ambient stresses that are locally concentrated by rheologic heterogeneities, rather than releasing loading by plate-tectonic forces. The latter model generally underlies basic assumptions made in earthquake hazard assessment: that the long-term average rate of energy released by earthquakes is determined by the tectonic loading rate, which thus implies an inherent average periodicity of earthquake occurrence. Interpreting the observations in terms of the former model may therefore require re-examining the basic assumptions of hazard assessment.

  3. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability of targeting the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed previously for the original sandpile. For this critical range of probability values, the model statistics compare remarkably well with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while adding minimal parameters to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
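
    A toy version of the model, as we read it from the abstract, is sketched below: a standard sandpile automaton in which each added grain is directed to the most susceptible (highest) site with probability p and to a random site otherwise. The lattice size, threshold, p, and run length are illustrative assumptions.

    ```python
    import numpy as np

    # Toy sketch of the targeted-triggering idea described above: an
    # ordinary sandpile automaton, except that each added grain goes to the
    # most "susceptible" (highest) site with probability P_TARGET and to a
    # random site otherwise. All parameter values are illustrative.

    rng = np.random.default_rng(0)
    L, Z_CRIT, P_TARGET, N_GRAINS = 32, 4, 0.005, 20000
    grid = rng.integers(0, Z_CRIT, size=(L, L))
    event_sizes = []

    def topple(z):
        """Relax all supercritical sites; return the number of topplings
        (the avalanche/event size)."""
        size = 0
        while True:
            unstable = np.argwhere(z >= Z_CRIT)
            if len(unstable) == 0:
                return size
            for i, j in unstable:
                z[i, j] -= 4
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:  # open boundaries dissipate
                        z[ni, nj] += 1

    for _ in range(N_GRAINS):
        if rng.random() < P_TARGET:
            i, j = np.unravel_index(np.argmax(grid), grid.shape)  # targeted
        else:
            i, j = rng.integers(0, L, size=2)                     # random
        grid[i, j] += 1
        s = topple(grid)
        if s > 0:
            event_sizes.append(s)

    # Event sizes should follow an approximate power-law PDF (GR-like).
    print(f"{len(event_sizes)} events, max size {max(event_sizes)}")
    ```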

  4. Tsunami probability in the Caribbean Region

    USGS Publications Warehouse

    Parsons, T.; Geist, E.L.

    2008-01-01

    We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically-calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerical-modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0-30% regionally. © Birkhäuser 2008.
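
    The Poissonian probability calculation behind such a map reduces, per cell, to one line: if exceedances of 0.5 m runup occur at mean rate lambda, the probability of at least one in T years is 1 - exp(-lambda*T). A minimal sketch with an illustrative rate (not a value from the study):

    ```python
    import math

    # Sketch of the Poissonian runup-probability calculation: if a coastal
    # cell experiences >0.5 m runup at a mean rate lam (events/yr), the
    # probability of at least one exceedance in T years is 1 - exp(-lam*T).
    # The rate below is illustrative, not a value from the study.

    def runup_probability(lam_per_yr, t_years):
        return 1.0 - math.exp(-lam_per_yr * t_years)

    # e.g., one exceedance per 200 years on average -> 30-yr probability ~14%
    print(f"{runup_probability(1.0 / 200.0, 30.0):.2%}")
    ```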

  5. Earthquake prediction: the interaction of public policy and science.

    PubMed Central

    Jones, L M

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656

  6. Earthquake in Hindu Kush Region, Afghanistan

    NASA Image and Video Library

    2015-10-27

    On Oct. 26, 2015, NASA's Terra spacecraft acquired this image of northeastern Afghanistan, where a magnitude 7.5 earthquake struck the Hindu Kush region. The earthquake's hypocenter was at a depth of 130 miles (210 kilometers), on a probable shallowly dipping thrust fault. At this location, the Indian subcontinent moves northward and collides with Eurasia, subducting under the Asian continent and raising the highest mountains in the world. This type of earthquake is common in the area: a similar earthquake occurred 13 years ago about 12 miles (20 kilometers) away. This perspective image from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA's Terra spacecraft, looking southwest, shows the hypocenter with a star. The image was acquired July 8, 2015, and is located near 36.4 degrees north, 70.7 degrees east. http://photojournal.jpl.nasa.gov/catalog/PIA20035

  7. Fractals and Forecasting in Earthquakes and Finance

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with similar ideas as large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that earthquakes and financial crashes can both be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
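
    For the Gutenberg-Richter relation mentioned above, log10 N(>=M) = a - b M, the b-value is commonly estimated by Aki's (1965) maximum-likelihood formula b = log10(e) / (mean(M) - Mc). The sketch below applies it to a synthetic catalog; it is an illustration of the relation, not code from the paper.

    ```python
    import numpy as np

    # Sketch of the Gutenberg-Richter b-value via Aki's (1965) maximum-
    # likelihood estimator, applied to a synthetic catalog (illustration
    # only, not code from the paper).

    def b_value_mle(mags, mc):
        """b = log10(e) / (mean(M) - Mc) for continuous magnitudes >= Mc;
        binned catalogs usually replace Mc by Mc - bin_width/2 (Utsu)."""
        m = np.asarray(mags)
        m = m[m >= mc]
        return np.log10(np.e) / (m.mean() - mc)

    rng = np.random.default_rng(1)
    # GR law with b = 1: magnitudes above Mc are exponentially distributed
    # with scale 1/(b*ln 10) = log10(e).
    synthetic = 4.0 + rng.exponential(scale=np.log10(np.e), size=5000)
    print(f"estimated b = {b_value_mle(synthetic, mc=4.0):.2f}")
    ```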

  8. Preferential attachment in evolutionary earthquake networks

    NASA Astrophysics Data System (ADS)

    Rezaei, Soghra; Moghaddasi, Hanieh; Darooneh, Amir Hossein

    2018-04-01

    Earthquakes, as spatio-temporal complex systems, have recently been studied using complex network theory. Seismic networks are dynamical networks, since the addition of new seismic events over time establishes new nodes and links in the network. Here we have constructed Iran and Italy seismic networks based on the Hybrid Model and tested the preferential attachment hypothesis for the connection of new nodes, which states that a newly added node is more likely to join the highly connected nodes than the less connected ones. We show that preferential attachment is present in the earthquake network and that the attachment rate has a linear relationship with node degree. We have also found the seismically passive points, the points most likely to be influenced by other seismic places, using their preferential attachment values.

  9. Strongest Earthquake-Prone Areas in Kamchatka

    NASA Astrophysics Data System (ADS)

    Dzeboev, B. A.; Agayan, S. M.; Zharkikh, Yu. I.; Krasnoperov, R. I.; Barykina, Yu. V.

    2018-03-01

    The paper continues the series of our works on recognizing the areas prone to the strongest, strong, and significant earthquakes with the use of the Formalized Clustering And Zoning (FCAZ) intellectual clustering system. We recognized the zones prone to the probable emergence of epicenters of the strongest (M ≥ 7.75) earthquakes on the Pacific Coast of Kamchatka. The FCAZ zones are compared to the zones that were recognized in 1984 by the classical recognition method for Earthquake-Prone Areas (EPA) by transferring the criteria of high seismicity from the Andes mountain belt to the territory of Kamchatka. The FCAZ recognition was carried out with two-dimensional and three-dimensional objects of recognition.

  10. Testing an Earthquake Prediction Algorithm: The 2016 New Zealand and Chile Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2017-05-01

    The 13 November 2016, M7.8, 54 km NNE of Amberley, New Zealand and the 25 December 2016, M7.6, 42 km SW of Puerto Quellon, Chile earthquakes happened outside the area of the on-going real-time global testing of the intermediate-term middle-range earthquake prediction algorithm M8, accepted in 1992 for the M7.5+ range. Naturally, over the past two decades, the level of registration of earthquakes worldwide has grown significantly and by now is sufficient for diagnosis of times of increased probability (TIPs) by the M8 algorithm on the entire territory of New Zealand and Southern Chile as far as below 40°S. The mid-2016 update of the M8 predictions determines TIPs in the additional circles of investigation (CIs) where the two earthquakes have happened. Thus, after 50 semiannual updates in the real-time prediction mode, we (1) confirm statistically approved high confidence of the M8-MSc predictions and (2) conclude a possibility of expanding the territory of the Global Test of the algorithms M8 and MSc in an apparently necessary revision of the 1992 settings.

  11. An investigation on seismo-ionospheric precursors in various earthquake zones

    NASA Astrophysics Data System (ADS)

    Su, Y.; Liu, J. G.; Chen, M.

    2011-12-01

    This paper examines the relationships between the ionosphere and earthquakes occurring in different earthquake zones, e.g., the Malaysia area, the Tibet plateau, mid-ocean ridges, the Andes, etc., to reveal possible seismo-ionospheric precursors for these areas. Because the lithology, the focal mechanisms of earthquakes, and the electrodynamics of the ionosphere differ among these areas, diverse ionospheric reactions are probable before large earthquakes in each of them. In addition to statistical analyses of increase or decrease anomalies of the ionospheric electron density a few days before large earthquakes, we focus on the seismo-ionospheric precursors of oceanic and land earthquakes as well as of earthquakes with different focal mechanisms.

  12. Frog Swarms: Earthquake Precursors or False Alarms?

    PubMed Central

    Grant, Rachel A.; Conlan, Hilary

    2013-01-01

    Simple Summary: Media reports linking unusual animal behaviour with earthquakes can potentially create false alarms and unnecessary anxiety among people that live in earthquake risk zones. Recently large frog swarms in China and elsewhere have been reported as earthquake precursors in the media. By examining international media reports of frog swarms since 1850 in comparison to earthquake data, it was concluded that frog swarms are naturally occurring dispersal behaviour of juveniles and are not associated with earthquakes. However, the media in seismic risk areas may be more likely to report frog swarms, and more likely to disseminate reports on frog swarms after earthquakes have occurred, leading to an apparent link between frog swarms and earthquakes. Abstract: In short-term earthquake risk forecasting, the avoidance of false alarms is of utmost importance to preclude the possibility of unnecessary panic among populations in seismic hazard areas. Unusual animal behaviour prior to earthquakes has been reported for millennia but has rarely been scientifically documented. Recently large migrations or unusual behaviour of amphibians have been linked to large earthquakes, and media reports of large frog and toad migrations in areas of high seismic risk such as Greece and China have led to fears of a subsequent large earthquake. However, at certain times of year large migrations are part of the normal behavioural repertoire of amphibians. News reports of “frog swarms” from 1850 to the present day were examined for evidence that this behaviour is a precursor to large earthquakes. It was found that only two of 28 reported frog swarms preceded large earthquakes (Sichuan province, China in 2008 and 2010). All of the reported mass migrations of amphibians occurred in late spring, summer and autumn and appeared to relate to small juvenile anurans (frogs and toads). It was concluded that most reported “frog swarms” are actually normal behaviour, probably caused by

  13. Incubation of Chile's 1960 Earthquake

    NASA Astrophysics Data System (ADS)

    Atwater, B. F.; Cisternas, M.; Salgado, I.; Machuca, G.; Lagos, M.; Eipert, A.; Shishikura, M.

    2003-12-01

    Infrequent occurrence of giant events may help explain how the 1960 Chile earthquake attained M 9.5. Although old documents imply that this earthquake followed great earthquakes of 1575, 1737 and 1837, only three earthquakes of the past 1000 years produced geologic records like those for 1960. These earlier earthquakes include the 1575 event but not 1737 or 1837. Because the 1960 earthquake had nearly twice the seismic slip expected from plate convergence since 1837, much of the strain released in 1960 may have been accumulating since 1575. Geologic evidence for such incubation comes from new paleoseismic findings at the Río Maullín estuary, which indents the Pacific coast at 41.5° S midway along the 1960 rupture. The 1960 earthquake lowered the area by 1.5 m, and the ensuing tsunami spread sand across lowland soils. The subsidence killed forests and changed pastures into sandy tidal flats. Guided by these 1960 analogs, we inferred tsunami and earthquake history from sand sheets, tree rings, and old maps. At Chuyaquen, 10 km upriver from the sea, we studied sand sheets in 31 backhoe pits on a geologic transect 1 km long. Each sheet overlies the buried soil of a former marsh or meadow. The sand sheet from 1960 extends the entire length of the transect. Three earlier sheets can be correlated at least half that far. The oldest one, probably a tsunami deposit, surrounds herbaceous plants that date to AD 990-1160. Next comes a sandy tidal-flat deposit dated by stratigraphic position to about 1000-1500. The penultimate sheet is a tsunami deposit younger than twigs from 1410-1630. It probably represents the 1575 earthquake, whose accounts of shaking, tsunami, and landslides rival those of 1960. In that case, the record excludes the 1737 and 1837 events. The 1737 and 1837 events also appear missing in tree-ring evidence from islands of Misquihue, 30 km upriver from the sea. Here the subsidence in 1960 admitted brackish tidal water that defoliated tens of thousands of

  14. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    NASA Astrophysics Data System (ADS)

    Pasten, D.

    2017-12-01

    Earthquake complex networks have been shown to find specific features in seismic data sets. In space, these networks show scale-free behavior in the probability distribution of connectivity for directed networks, and small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake in the north of Chile (near Iquique) in April 2014. An earthquake complex network is made by dividing three-dimensional space into cubic cells; if a cell contains a hypocenter, we designate that cell as a node. The connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. This yields two different networks, a directed and an undirected network. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: we consider the connectivity k_i of the i-th node to be the number of connections going out of the node plus the self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network; for undirected networks, we consider only whether two nodes are connected or not. We have built a directed complex network and an undirected complex network, before and after the large earthquake in Iquique, using magnitudes greater than Mw = 1.0 and Mw = 3.0. We found that this method can recognize the influence of small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake on this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
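
    Our reading of the construction described above can be sketched in a few lines: bin hypocenters into cubic cells, treat occupied cells as nodes, and link cells visited by successive events, counting self-connections in the directed connectivity. The cell size and the synthetic catalog below are assumptions for illustration.

    ```python
    import numpy as np
    from collections import defaultdict

    # Sketch of the network construction as we read it from the abstract:
    # bin hypocenters into cubic cells, make each occupied cell a node, and
    # connect cells visited by successive events in time; self-connections
    # arise when consecutive events fall in the same cell. The cell size
    # and the synthetic catalog are illustrative assumptions.

    CELL_KM = 10.0
    rng = np.random.default_rng(2)
    # synthetic catalog: columns x, y, z (km), already sorted by origin time
    catalog = rng.uniform(0, 200, size=(1000, 3))

    def cell_of(xyz):
        return tuple((xyz // CELL_KM).astype(int))

    out_degree = defaultdict(int)  # directed connectivity (incl. self-links)
    edges = set()                  # undirected version drops direction/self

    prev = cell_of(catalog[0])
    for event in catalog[1:]:
        cur = cell_of(event)
        out_degree[prev] += 1
        if cur != prev:
            edges.add(frozenset((prev, cur)))
        prev = cur

    degrees = np.array(list(out_degree.values()))
    print(f"{len(out_degree)} nodes, {len(edges)} undirected edges, "
          f"max connectivity {degrees.max()}")
    ```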

  15. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
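
    A standard diagnostic for the kind of correlation discussed above (though not necessarily the authors' method) is the Schuster test, which asks whether event times cluster at a preferred phase of a periodic stress. A minimal sketch with a synthetic catalog and the M2 tidal period:

    ```python
    import numpy as np

    # Schuster test sketch (a standard diagnostic related to the correlation
    # described above, not necessarily the authors' method). Each event gets
    # a tidal phase; the p-value is the chance a random walk of N unit phase
    # steps reaches the observed resultant length R, p = exp(-R^2/N). Small
    # p means the events cluster at a preferred phase. Catalog is synthetic.

    PERIOD_HR = 12.42  # principal lunar semidiurnal (M2) tide, illustrative
    rng = np.random.default_rng(3)
    event_times_hr = np.sort(rng.uniform(0, 24 * 365, size=400))

    phases = 2 * np.pi * (event_times_hr % PERIOD_HR) / PERIOD_HR
    R = np.hypot(np.cos(phases).sum(), np.sin(phases).sum())
    p_value = np.exp(-R**2 / len(phases))
    print(f"Schuster p-value = {p_value:.3f} (uniform times -> p of order 1)")
    ```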

  16. The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.

    2011-12-01

    Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults, so understanding the activity and hazard of active faults is an important subject. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults on the island of Taiwan, some of which are related to earthquakes. Many researchers have investigated these active faults and continuously update new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field-work results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or earthquake relocations for each fault, combine them with the fault trace on land, and establish a 3D fault geometry model in a GIS system. We collect studies of fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the earthquake recurrence interval of each active fault. For the other parameters, we collect previous studies or historical references to complete our parameter table of active faults in Taiwan. WG08 carried out the time-dependent earthquake hazard assessment of active faults in California: they established fault models, deformation models, earthquake rate models, and probability models, and then computed the probabilities for faults in California. Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards in certain
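
    The parameter chain sketched in the abstract (fault dimensions to maximum magnitude to recurrence interval) can be illustrated as below, using the Wells and Coppersmith (1994) surface-rupture-length regression Mw = 5.08 + 1.16 log10(L) and a characteristic-earthquake slip budget. The fault dimensions, slip rate, and rigidity are hypothetical, not values for any mapped Taiwan fault.

    ```python
    import math

    # Hedged sketch of the parameter chain: maximum magnitude from fault
    # dimensions via an empirical scaling law, then a characteristic-
    # earthquake recurrence interval from the slip budget. The fault
    # numbers below are hypothetical, not values for any mapped fault.

    MU = 3.0e10  # assumed crustal rigidity, Pa

    def mw_from_length(length_km):
        """Wells & Coppersmith (1994), all-type surface rupture length."""
        return 5.08 + 1.16 * math.log10(length_km)

    def recurrence_years(length_km, width_km, slip_rate_mm_yr):
        """Mean recurrence if the fault releases its slip budget in
        characteristic events: T = D_char / slip_rate, with D_char from
        the seismic moment of the scaling-law magnitude."""
        mw = mw_from_length(length_km)
        m0 = 10 ** (1.5 * mw + 9.1)        # seismic moment, N*m
        area = length_km * width_km * 1e6  # fault area, m^2
        d_char = m0 / (MU * area)          # average slip per event, m
        return d_char / (slip_rate_mm_yr * 1e-3)

    print(f"Mw = {mw_from_length(40):.1f}, "
          f"T = {recurrence_years(40, 15, 5):.0f} yr")
    ```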

  17. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability

  18. Study on conditional probability of surface rupture: effect of fault dip and width of seismogenic layer

    NASA Astrophysics Data System (ADS)

    Inoue, N.

    2017-12-01

    The conditional probability of surface rupture is affected by various factors, such as shallow material properties, earthquake processes, ground motions, and so on. Toda (2013) pointed out differences in the conditional probability between strike-slip and reverse faults when the fault dip and the width of the seismogenic layer are considered. This study evaluated the conditional probability of surface rupture with the following procedures. Fault geometry was determined from a randomly generated magnitude based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was assigned randomly within the seismogenic layer. A logistic analysis was performed on two data sets: surface displacement calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement indicated a higher probability for reverse faults than for strike-slip faults, and this result coincides with previous similar studies (i.e. Kagawa et al., 2004; Kataoka and Kusakabe, 2005). On the contrary, the probability estimated from the depth of the source fault indicated a higher probability for thrust faults than for strike-slip and reverse faults, and this trend is similar to the conditional probability from PFDHA results (Youngs et al., 2003; Moss and Ross, 2011). The probability of the combined simulated results of thrust and reverse faults is also low. The worldwide compiled reverse-fault data include low-dip-angle earthquakes. On the other hand, for Japanese reverse faults there is a possibility that the conditional probability of reverse faults, with fewer low-dip-angle earthquakes, is low and similar to that of strike-slip faults (i.e. Takao et al., 2013). In the future, numerical simulation by considering failure condition of surface by the source

  19. Scoring annual earthquake predictions in China

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Jiang, Changsheng

    2012-02-01

    The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions that are of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these earthquake predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using as the reference model a Poisson model that is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes, we evaluate the CEA predictions based on (1) a partial score that evaluates whether issuing the alarmed regions is based on information that differs from the reference model (knowledge of the average seismicity level), and (2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but their overall performance is close to that of the reference model.
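
    The scoring scheme can be sketched as follows, under our simplified reading of the gambling score: if the reference model assigns probability p0 to the predicted outcome, a success earns (1 - p0)/p0 points and a failure loses 1 point, so the expected score is zero when the reference model is correct. The numbers below are illustrative.

    ```python
    import math

    # Simplified sketch of the gambling-score idea (our reading, not the
    # paper's exact implementation). The reference model gives probability
    # p0 that the predicted event happens by chance; a success earns
    # (1 - p0)/p0 points, a failure loses 1 point, so betting against the
    # reference has zero expected score if the reference is right.

    def gambling_score(predictions):
        """predictions: iterable of (p0, success) pairs."""
        score = 0.0
        for p0, success in predictions:
            score += (1.0 - p0) / p0 if success else -1.0
        return score

    # Reference probability from a Poisson model with expected count lam in
    # the alarmed space-magnitude-time window: p0 = 1 - exp(-lam).
    p0 = 1.0 - math.exp(-0.05)
    print(f"p0 = {p0:.3f}, "
          f"score = {gambling_score([(p0, True), (p0, False)]):.2f}")
    ```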

  20. Probabilistic Tsunami Hazard Assessment along Nankai Trough (2) a comprehensive assessment including a variety of earthquake source areas other than those that the Earthquake Research Committee, Japanese government (2013) showed

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.

    2016-12-01

    For the forthcoming Nankai earthquake of M8 to M9 class, the Earthquake Research Committee (ERC)/Headquarters for Earthquake Research Promotion, Japanese government (2013) showed 15 examples of earthquake source areas (ESAs) as possible combinations of 18 sub-regions (6 segments along the trough and 3 segments normal to the trough) and assessed the occurrence probability within the next 30 years (from Jan. 1, 2013) at 60% to 70%. Hirata et al. (2015, AGU) presented a Probabilistic Tsunami Hazard Assessment (PTHA) along the Nankai Trough for the case in which the diversity of the next event's ESA is modeled by only those 15 ESAs. In this study, we newly set 70 ESAs in addition to the previous 15 ESAs, so that 85 ESAs in total are considered. By producing tens of fault models, with various slip distribution patterns, for each of the 85 ESAs, we obtain 2500 fault models in addition to the previous 1400, so that 3900 fault models in total are considered to model the diversity of the next Nankai earthquake rupture (Toyama et al., 2015, JpGU). For the PTHA, the occurrence probability of the next Nankai earthquake is distributed over the possible 3900 fault models according to their similarity to the 15 ESAs' extents (Abe et al., 2015, JpGU). The major concept of the occurrence probability distribution is: (i) earthquakes rupturing any of the 15 ESAs that ERC (2013) showed occur most likely; (ii) earthquakes rupturing an ESA whose along-trough extent is the same as one of the 15 ESAs but whose trough-normal extent differs occur second most likely; (iii) earthquakes rupturing an ESA whose along-trough and trough-normal extents both differ from the 15 ESAs occur rarely. The procedures for tsunami simulation and probabilistic tsunami hazard synthesis are the same as in Hirata et al. (2015). A tsunami hazard map, synthesized under the assumption that Nankai earthquakes can be modeled as a renewal process based on a BPT distribution with a mean recurrence interval of 88.2 years (ERC, 2013) and an
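
    The renewal-model probability mentioned above can be sketched directly. The code below evaluates the conditional 30-year probability from a Brownian Passage Time (inverse Gaussian) distribution; the 88.2-year mean recurrence is quoted in the abstract, while the aperiodicity and elapsed time are illustrative assumptions.

    ```python
    import math

    # Sketch of the renewal-model probability: a Brownian Passage Time
    # (inverse Gaussian) distribution with mean mu and aperiodicity alpha.
    # mu = 88.2 yr is quoted above; alpha and the elapsed time are
    # illustrative assumptions.

    def bpt_cdf(t, mu, alpha):
        """CDF of the BPT distribution (Matthews et al., 2002 form).
        0.5*erfc(-x/sqrt(2)) is the standard normal CDF; erfc keeps the
        exp(2/alpha^2) * small-tail product numerically stable."""
        if t <= 0.0:
            return 0.0
        u = math.sqrt(t / mu)
        a = (u - 1.0 / u) / alpha
        b = (u + 1.0 / u) / alpha
        term1 = 0.5 * math.erfc(-a / math.sqrt(2.0))
        term2 = math.exp(2.0 / alpha**2) * 0.5 * math.erfc(b / math.sqrt(2.0))
        return term1 + term2

    def conditional_prob(elapsed, window, mu, alpha):
        """P(event in (elapsed, elapsed+window] | no event by elapsed)."""
        f_t = bpt_cdf(elapsed, mu, alpha)
        return (bpt_cdf(elapsed + window, mu, alpha) - f_t) / (1.0 - f_t)

    # e.g., 70 yr elapsed since the last event, alpha = 0.24:
    print(f"30-yr conditional probability: "
          f"{conditional_prob(70.0, 30.0, 88.2, 0.24):.1%}")
    ```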

  1. The 2007 Mentawai earthquake sequence on the Sumatra megathrust

    NASA Astrophysics Data System (ADS)

    Konca, A.; Avouac, J.; Sladen, A.; Meltzner, A. J.; Kositsky, A. P.; Sieh, K.; Fang, P.; Li, Z.; Galetzka, J.; Genrich, J.; Chlieh, M.; Natawidjaja, D. H.; Bock, Y.; Fielding, E. J.; Helmberger, D. V.

    2008-12-01

    The Sumatra Megathrust has recently produced a flurry of large interplate earthquakes, starting with the giant Mw 9.15 Aceh earthquake of 2004. All of these earthquakes occurred within the area monitored by the Sumatra Geodetic Array (SuGAr), which provided exceptional records of near-field co-seismic and postseismic ground displacements. The most recent of these major earthquakes, an Mw 8.4 earthquake followed twelve hours later by an Mw 7.9 earthquake, occurred in the Mentawai islands area, where devastating historical earthquakes had happened in 1797 and 1833. The 2007 earthquake sequence provides an exceptional opportunity to understand the variability of earthquakes along megathrusts and their relation to interseismic coupling. The InSAR, GPS and teleseismic modeling shows that the 2007 earthquakes ruptured a fraction of the strongly coupled Mentawai patch of the megathrust, which is itself only a fraction of the 1833 rupture area. The sequence also released a much smaller moment than the one released in 1833, or than the deficit of moment that has accumulated since. Both earthquakes of 2007 consist of two sub-events 50 to 100 km apart from each other. On the other hand, the northernmost slip patch of the Mw 8.4 earthquake and the southern slip patch of the Mw 7.9 earthquake abut each other, yet they ruptured 12 hours apart. Sunda megathrust earthquakes of recent years include a rupture of a strongly coupled patch that closely mimics a prior rupture of that patch and is well correlated with the interseismic coupling pattern (Nias-Simeulue section), as well as a rupture sequence of a strongly coupled patch that differs substantially in its details from its most recent predecessors (Mentawai section). We conclude that (1) seismic asperities are probably persistent features which arise from heterogeneous strain build-up in the interseismic period; and (2) the same portion of a megathrust can rupture in different ways depending on whether asperities break as isolated events or cooperate to produce

  2. Modelling the elements of country vulnerability to earthquake disasters.

    PubMed

    Asef, M R

    2008-09-01

    Earthquakes have probably been the most deadly form of natural disaster in the past century. The diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, the diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake of a given character to have different impacts on the affected region. This research focuses on appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for the identification and relative quantification of the severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.

  3. Scientific and non-scientific challenges for Operational Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.

    2015-12-01

    Tracking the time evolution of seismic hazard in time windows shorter than the usual 50 years of long-term hazard models may offer additional opportunities to reduce seismic risk. This is the target of operational earthquake forecasting (OEF). During the development of OEF in Italy we identified several challenges, ranging from pure science to the more practical interface of science with society. From a scientific point of view, although earthquake clustering is the clearest empirical evidence about earthquake occurrence, and OEF clustering models are the most (successfully) tested hazard models in seismology, we note that some seismologists are still reluctant to accept their scientific reliability. After exploring the motivations behind these scientific doubts, we also look into an issue that is often overlooked in this discussion: in any kind of hazard analysis, we do not use a model because it is the true one, but because it is better than anything else we can think of. The non-scientific aspects are mostly related to the fact that OEF usually provides weekly probabilities of large earthquakes smaller than 1%. These probabilities are considered by some seismologists too small to be of interest or useful. However, in a recent collaboration with engineers we show that such earthquake probabilities may lead to intolerable individual risk of death. Interestingly, this debate calls for a better definition of the still fuzzy boundaries among the different kinds of expertise required for the whole risk-mitigation process. The last and probably most pressing challenge is related to communication with the public. In fact, a wrong message could be useless or even counterproductive. Here we show some progress that we have made in this field working with communication experts in Italy.

  4. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples by Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples and Monte Carlo simulation. An example of the application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.

  5. Intermediate-depth earthquakes facilitated by eclogitization-related stresses

    USGS Publications Warehouse

    Nakajima, Junichi; Uchida, Naoki; Shiina, Takahiro; Hasegawa, Akira; Hacker, Bradley R.; Kirby, Stephen H.

    2013-01-01

    Eclogitization of the basaltic and gabbroic layer in the oceanic crust involves a volume reduction of 10%–15%. One consequence of the negative volume change is the formation of a paired stress field as a result of strain compatibility across the reaction front. Here we use waveform analysis of a tiny seismic cluster in the lower crust of the downgoing Pacific plate and reveal new evidence in favor of this mechanism: tensional earthquakes lying 1 km above compressional earthquakes, and earthquakes with highly similar waveforms lying on well-defined planes with complementary rupture areas. The tensional stress is interpreted to be caused by the dimensional mismatch between crust transformed to eclogite and underlying untransformed crust, and the earthquakes are probably facilitated by reactivation of fossil faults extant in the subducting plate. These observations provide seismic evidence for the role of volume change–related stresses and, possibly, fluid-related embrittlement as viable processes for nucleating earthquakes in downgoing oceanic lithosphere.

  6. A test to evaluate the earthquake prediction algorithm, M8

    USGS Publications Warehouse

    Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.

    1992-01-01

    A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction:  1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by  investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm

  7. Earthquakes in Oita triggered by the 2016 M7.3 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Yoshida, Shingo

    2016-11-01

    During the passage of the seismic waves from the M7.3 Kumamoto, Kyushu, earthquake on April 16, 2016, an M5.7 [semiofficial value estimated by the Japan Meteorological Agency (JMA)] event occurred in the central part of Oita prefecture, approximately 80 km away from the mainshock. Although there have been a number of reports that M < 5 earthquakes were remotely triggered during the passage of seismic waves from mainshocks, there has been no evidence for M > 5 triggered events. In this paper, we first confirm that this event is an M6-class event by re-estimating the magnitude using the strong-motion records of K-NET and KiK-net, and crustal deformation data at the Yufuin station observed by the Geospatial Information Authority of Japan. Next, by investigating the aftershocks of 45 mainshocks which occurred over the past 20 years based on the JMA earthquake catalog (JMAEC), we found that the delay time of the 2016 M5.7 event in Oita was the shortest. Therefore, the M5.7 event could be regarded as an exceptional M > 5 event that was triggered by passing seismic waves, unlike the usual triggered events and aftershocks. Moreover, a search of the JMAEC shows that in the 2016 Oita aftershock area, swarm earthquake activity was low over the past 30 years compared with neighboring areas. We also found that in the past, probably or possibly triggered events frequently occurred in the 2016 Oita aftershock area. The Oita area readily responds to remote triggering because of high geothermal activity and young volcanism in the area. The M5.7 Oita event was triggered by passing seismic waves, probably because a large dynamic stress change was generated by the mainshock at a short distance and because the Oita area was already loaded to a critical stress state without a recent energy release, as suggested by the past low swarm activity.

  8. Predicting the spatial extent of liquefaction from geospatial and earthquake specific parameters

    USGS Publications Warehouse

    Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Deodatis, George; Ellingwood, Bruce R.; Frangopol, Dan M.

    2014-01-01

    The spatially extensive damage from the 2010-2011 Christchurch, New Zealand earthquake events is a reminder of the need for liquefaction hazard maps for anticipating damage from future earthquakes. Liquefaction hazard mapping has traditionally relied on detailed geologic mapping and expensive site studies. These traditional techniques are difficult to apply globally for rapid response or loss estimation. We have developed a logistic regression model to predict the probability of liquefaction occurrence in coastal sedimentary areas as a function of simple and globally available geospatial features (e.g., derived from digital elevation models) and standard earthquake-specific intensity data (e.g., peak ground acceleration). Some of the geospatial explanatory variables that we consider are taken from the hydrology community, which has a long tradition of using remotely sensed data as proxies for subsurface parameters. By using high-resolution, remotely sensed, and spatially continuous data as proxies for important subsurface parameters such as soil density and soil saturation, and by using a probabilistic modeling framework, our liquefaction model inherently includes the natural spatial variability of liquefaction occurrence and provides an estimate of the spatial extent of liquefaction for a given earthquake. To provide a quantitative check on how the predicted probabilities relate to the spatial extent of liquefaction, we report the frequency of observed liquefaction features within a range of predicted probabilities. The percentage of liquefaction is the areal extent of observed liquefaction within a given probability contour. The regional model and the results show that there is a strong relationship between the predicted probability and the observed percentage of liquefaction. Visual inspection of the probability contours for each event also indicates that the pattern of liquefaction is well represented by the model.
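
    A minimal sketch of this kind of model is shown below, assuming synthetic data; the feature names (log PGA, a topographic wetness index, log Vs30) are in the spirit of the paper but the exact predictor set, coefficients, and data here are illustrative only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      log_pga = rng.normal(-1.5, 0.5, n)          # log10 peak ground accel. (g)
      cti = rng.normal(5.0, 2.0, n)               # compound topographic index
      log_vs30 = rng.normal(np.log(300), 0.3, n)  # ln 30-m shear-wave velocity

      # Synthetic labels from an assumed "true" relationship, for demo only.
      logit = 2.0 * log_pga + 0.25 * cti - 1.5 * (log_vs30 - np.log(300)) + 2.0
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      X = np.column_stack([log_pga, cti, log_vs30])
      model = LogisticRegression(max_iter=1000).fit(X, y)
      # Predicted probability of liquefaction occurrence per grid cell:
      print(model.predict_proba(X[:3])[:, 1])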

  9. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard with temporal change in Taiwan, we develop a new approach combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress-triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of, and responses to, future emergency scenarios such as victim relocation and sheltering.
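
    The long-term (BPT) part of such a calculation can be sketched compactly. The BPT distribution with mean recurrence mu and aperiodicity alpha is an inverse Gaussian with shape mu/alpha**2, so its CDF has a closed form; the fault parameters below are hypothetical, not TEM values.

      from math import sqrt, exp, erf

      def _phi(x):  # standard normal CDF
          return 0.5 * (1.0 + erf(x / sqrt(2.0)))

      def bpt_cdf(t, mu, alpha):
          """CDF of the Brownian Passage Time distribution with mean
          recurrence mu and aperiodicity alpha (inverse Gaussian with
          shape parameter lam = mu / alpha**2)."""
          lam = mu / alpha**2
          a = sqrt(lam / t)
          return (_phi(a * (t / mu - 1.0))
                  + exp(2.0 * lam / mu) * _phi(-a * (t / mu + 1.0)))

      def conditional_prob(elapsed, window, mu, alpha):
          """P(rupture within `window` yrs | quiet for `elapsed` yrs)."""
          f0 = bpt_cdf(elapsed, mu, alpha)
          return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

      # Hypothetical fault: 200-yr mean recurrence, alpha 0.5, 150 yrs elapsed.
      print(conditional_prob(150.0, 30.0, 200.0, 0.5))

    The short-term Coulomb term would then scale this probability (or the underlying rate) up or down after a nearby large event; that step depends on the stress-transfer calculation and is not sketched here.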

  10. An energy dependent earthquake frequency-magnitude distribution

    NASA Astrophysics Data System (ADS)

    Spassiani, I.; Marzocchi, W.

    2017-12-01

    The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been experimentally well validated in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish, because it takes time to recover the same level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amount of available energy. Our study is therefore aimed at exploring a possible modification of the classical G-R distribution that includes the dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is, when integrating over infinite available energy, the G-R law must be recovered. Finally, we apply the proposed generalization of the G-R law to different seismic catalogs to show how it works and the differences from the classical G-R law.
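
    The abstract does not give the functional form of the generalization. Purely as an illustration of a distribution that satisfies the stated invariance condition, the sketch below tapers the G-R survivor function at a corner moment set by an available-energy parameter e_avail, recovering the pure G-R law as e_avail grows; this is an assumed form, not the authors' law.

      import numpy as np

      def gr_survivor(m, b=1.0, m0=3.0):
          """Classical G-R: P(M >= m) for m >= m0."""
          return 10.0 ** (-b * (m - m0))

      def tapered_survivor(m, e_avail, b=1.0, m0=3.0):
          """Illustrative energy-dependent modification (an assumption):
          taper G-R at a corner moment set by the available energy
          e_avail (J); as e_avail -> inf, pure G-R is recovered."""
          moment = lambda mag: 10.0 ** (1.5 * mag + 9.1)  # seismic moment (N m)
          return gr_survivor(m, b, m0) * np.exp(
              -(moment(m) - moment(m0)) / e_avail)

      m = np.array([4.0, 5.0, 6.0])
      print(gr_survivor(m))
      print(tapered_survivor(m, e_avail=1e17))   # strong deficit of large events
      print(tapered_survivor(m, e_avail=1e25))   # ~ pure G-R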

  11. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first-excursion probability of nonstationary random processes are obtained. In particular, applications of the first-excursion probability to earthquake engineering problems are demonstrated in detail.
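
    A brute-force numerical counterpart to such analytical solutions is direct simulation. The sketch below estimates a first-excursion probability for an amplitude-modulated discrete white-noise sequence; the envelope shape, level, and time step are all assumptions for illustration, not quantities from the paper.

      import numpy as np

      rng = np.random.default_rng(42)

      def first_excursion_prob(level, duration=20.0, dt=0.01, n_sim=2000):
          """Monte Carlo estimate of P[|x(t)| exceeds `level` in [0, T]] for
          a nonstationary, amplitude-modulated white-noise sequence with an
          assumed ramp-and-decay envelope (evolutionary intensity)."""
          t = np.arange(0.0, duration, dt)
          envelope = (t / 2.0).clip(max=1.0) * np.exp(-0.1 * t)
          crossings = 0
          for _ in range(n_sim):
              x = envelope * rng.normal(0.0, 1.0, t.size)
              crossings += np.any(np.abs(x) > level)
          return crossings / n_sim

      print(first_excursion_prob(level=2.5))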

  12. 78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... updates on past topics of discussion, including work with social and behavioral scientists on improving... probabilities; USGS collaborative work with the Collaboratory for Study of Earthquake Predictability (CSEP...

  13. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of Mλ ≥ 7.0 and Mλ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be Mσ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with Mλ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with Mλ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 Mλ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 Mλ ≥ 7.0 in each catalog and
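
    The β statistic described above can be reproduced in a few lines. The sketch below fits a Weibull shape parameter to interevent counts; the counts here are synthetic placeholders, so with real CMT-derived counts one would expect β below 1 if large events cluster in natural time.

      import numpy as np
      from scipy.stats import weibull_min

      # Hypothetical natural-time data: numbers of M >= 5.1 events counted
      # between successive M >= 7.0 earthquakes (stand-ins for CMT counts).
      rng = np.random.default_rng(7)
      counts = rng.exponential(scale=120.0, size=197)  # replace with real data

      # beta is the Weibull shape parameter; beta = 1 for a random
      # (exponential) distribution, beta < 1 indicates temporal clustering.
      beta, loc, scale = weibull_min.fit(counts, floc=0)
      print(f"beta = {beta:.2f}")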

  14. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  15. Quasi-periodic recurrence of great and giant earthquakes in South-Central Chile inferred from lacustrine turbidite records

    NASA Astrophysics Data System (ADS)

    Strasser, M.; Moernaut, J.; Van Daele, M. E.; De Batist, M. A. O.

    2017-12-01

    Coastal paleoseismic records in south-central Chile indicate that giant megathrust earthquakes, such as that of AD 1960 (Mw 9.5), occur on average every 300 yrs. Based on geodetic data, it was postulated that the area already has the potential for an Mw 8 earthquake. However, to estimate the probability of such a great earthquake from a paleo-perspective, one needs to reconstruct the long-term recurrence pattern of megathrust earthquakes. Here, we present two long lacustrine records, comprising up to 35 earthquake-triggered turbidites over the last 4800 yrs. Calibration of turbidite extent with historical earthquake intensity reveals a different macroseismic intensity threshold (≥VII½ vs. ≥VI½) for the generation of turbidites at the coring sites. The strongest earthquakes (≥VII½) have longer recurrence intervals (292 ± 93 yrs) than earthquakes with intensity of ≥VI½ (139 ± 69 yrs). The coefficient of variation (CoV) of inter-event times indicates that the strongest earthquakes recur in a quasi-periodic way (CoV: 0.32) and follow a normal distribution. Including also "smaller" earthquakes (intensity down to VI½) increases the CoV (0.5), which fits best with a Weibull distribution. Regional correlation of our multi-threshold shaking records with coastal records of tsunami and coseismic subsidence suggests that the intensity ≥VII½ events repeatedly ruptured the same part of the megathrust over a distance of at least 300 km and can be assigned to Mw ≥ 8.6. We hypothesize that a zone of high plate locking, identified by GPS data and large slip in AD 1960, acts as a dominant regional asperity, on which elastic strain builds up over several centuries and mostly gets released in quasi-periodic great and giant earthquakes. For the next 110 yrs, we infer an enhanced probability for an Mw 7.7-8.5 earthquake, whereas the probability for an Mw ≥ 8.6 (AD 1960-like) earthquake remains low.

  16. Archaeoseismology in Algeria: observed damages related to probable past earthquakes on archaeological remains on Roman sites (Tel Atlas of Algeria)

    NASA Astrophysics Data System (ADS)

    Roumane, Kahina; Ayadi, Abdelhakim

    2017-04-01

    The seismological catalogue for Algeria exhibits a significant gap for the period before 1365. Some attempts have been made to retrieve ancient earthquakes evidenced by historical documents and archives. Archaeoseismology allows the study of earthquakes that have affected archaeological sites, based on the analysis of damage observed on remains. We have focused on the Antiquity period, which includes the Roman, Vandal, and Byzantine periods from 146 B.C. to A.D. 533. This will contribute significantly to the understanding of the seismic hazard of the Tell Atlas region, known as an earthquake-prone area. The Tell Atlas (Algeria) has experienced many disastrous earthquakes during its history, whose impacts are engraved on the landscape and on archaeological monuments. On Roman sites such as Lambaesis (Lambèse), Thamugadi (Timgad), Thibilis (Salaoua Announa), or Thevest (Tebessa), damage related to seismic events was observed on monuments and remains, following strong shaking or other ground deformation (subsidence, landslides). Examples of observed damage and disorders on several Roman sites are presented as a contribution to archaeoseismology in Algeria, based on the effects of earthquakes on ancient structures and monuments. Keywords: archaeoseismology, Lambaesis, dropped columns, Aspecelium, ancient earthquakes.

  17. Probabilistic tsunami hazard assessment based on the long-term evaluation of subduction-zone earthquakes along the Sagami Trough, Japan

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Ohsumi, T.; Morikawa, N.; Kawai, S.; Maeda, T.; Matsuyama, H.; Toyama, N.; Kito, T.; Murata, Y.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.; Hakamata, T.

    2017-12-01

    For the forthcoming large earthquakes along the Sagami Trough, where the Philippine Sea Plate is subducting beneath the northeast Japan arc, the Earthquake Research Committee (ERC) of the Headquarters for Earthquake Research Promotion, Japanese government (2014a), assessed that M7- and M8-class earthquakes will occur there and defined the possible extent of the earthquake source areas. They assessed occurrence probabilities within the next 30 years (from Jan. 1, 2014) of 70% and 0%-5%, respectively, for the M7- and M8-class earthquakes. First, we set 10 and 920 possible earthquake source areas (ESAs), respectively, for M8- and M7-class earthquakes. Next, we constructed 125 and 938 characterized earthquake fault models (CEFMs), respectively, for M8- and M7-class earthquakes, based on the "tsunami recipe" of ERC (2017) (Kitoh et al., 2016, JpGU). All the CEFMs are allowed to have a large-slip area to express fault slip heterogeneity. For all the CEFMs, we calculate tsunamis by solving the nonlinear long-wave equations with a finite-difference method, including runup calculation, over a nesting grid system with a minimum grid size of 50 meters. Finally, we redistributed the occurrence probability over all CEFMs (Abe et al., 2014, JpGU) and gathered the exceedance probabilities for variable tsunami heights, calculated from all the CEFMs, at every observation point along the Pacific coast to obtain the PTHA. We incorporated aleatory uncertainties inherent in the tsunami calculation and in the earthquake fault slip heterogeneity. We considered two kinds of probabilistic hazard models: one is a "present-time hazard model," under the assumption that earthquake occurrence basically follows a renewal process based on the BPT distribution if the latest faulting time is known; the other is a "long-time-averaged hazard model," under the assumption that earthquake occurrence follows a stationary Poisson process. We fixed our viewpoint, for example, on the probability that the tsunami height will exceed 3 meters at coastal points in next
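
    For the stationary-Poisson ("long-time-averaged") variant, combining scenario rates into a site exceedance probability reduces to summing the rates of the scenarios whose modeled height exceeds the target. A minimal sketch, with hypothetical rates and heights standing in for the redistributed CEFM values:

      import numpy as np

      def prob_exceedance(rates, heights, h, t_years=30.0):
          """Probability that tsunami height exceeds h at a coastal point
          within t_years, assuming a stationary Poisson model: `rates` are
          per-scenario annual occurrence rates and `heights` the maximum
          height each scenario produces at the point."""
          rates = np.asarray(rates)
          heights = np.asarray(heights)
          lam = rates[heights > h].sum()     # total rate of exceedance events
          return 1.0 - np.exp(-lam * t_years)

      # Hypothetical: five characterized fault models, heights in meters.
      rates = [1 / 300, 1 / 300, 1 / 150, 1 / 1000, 1 / 500]
      heights = [2.1, 3.6, 1.2, 5.8, 3.1]
      print(prob_exceedance(rates, heights, h=3.0))  # P[> 3 m in 30 yr]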

  18. Scale-invariant structure of energy fluctuations in real earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Chang, Zhe; Wang, Huanyu; Lu, Hong

    2017-11-01

    Earthquakes are obviously complex phenomena associated with complicated spatiotemporal correlations, and they are generally characterized by two power laws: the Gutenberg-Richter (GR) and the Omori-Utsu laws. However, an important challenge has been to explain two apparently contrasting features: the GR and Omori-Utsu laws are scale-invariant and unaffected by energy or time scales, whereas earthquakes occasionally exhibit a characteristic energy or time scale, such as with asperity events. In this paper, three high-quality earthquake datasets were used to calculate the earthquake energy fluctuations at various spatiotemporal scales, and the results reveal correlations between seismic events regardless of their critical or characteristic features. The probability density functions (PDFs) of the fluctuations exhibit evidence of another scaling that behaves as a q-Gaussian rather than a random process. The scaling behaviors are observed for scales spanning three orders of magnitude. Considering the spatial heterogeneities in a real earthquake fault, we propose an inhomogeneous Olami-Feder-Christensen (OFC) model to describe the statistical properties of real earthquakes. The numerical simulations show that the inhomogeneous OFC model shares the same statistical properties with real earthquakes.
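
    The standard OFC model is a driven sandpile with dissipative stress redistribution; an "inhomogeneous" variant can be obtained by letting the redistribution fraction vary from site to site. The sketch below is one such minimal implementation (grid size, dissipation range, and boundary choice are assumptions, not the paper's configuration):

      import numpy as np

      rng = np.random.default_rng(3)
      L = 32
      alpha = rng.uniform(0.15, 0.25, size=(L, L))   # site-dependent transfer
      stress = rng.uniform(0.0, 1.0, size=(L, L))    # threshold normalized to 1

      def avalanche():
          """One OFC event: drive the most loaded cell to threshold,
          then topple until all cells are below threshold again."""
          global stress
          stress += 1.0 - stress.max()               # uniform drive
          size = 0
          while True:
              over = np.argwhere(stress >= 1.0)
              if over.size == 0:
                  return size
              for i, j in over:
                  s, stress[i, j] = stress[i, j], 0.0
                  size += 1
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < L and 0 <= nj < L:  # open boundaries
                          stress[ni, nj] += alpha[i, j] * s

      sizes = [avalanche() for _ in range(2000)]
      print("mean event size:", np.mean(sizes))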

  19. International Aftershock Forecasting: Lessons from the Gorkha Earthquake

    NASA Astrophysics Data System (ADS)

    Michael, A. J.; Blanpied, M. L.; Brady, S. R.; van der Elst, N.; Hardebeck, J.; Mayberry, G. C.; Page, M. T.; Smoczyk, G. M.; Wein, A. M.

    2015-12-01

    Following the M7.8 Gorkha, Nepal, earthquake of April 25, 2015, the USGS issued a series of aftershock forecasts. The initial impetus for these forecasts was a request from the USAID Office of US Foreign Disaster Assistance to support their Disaster Assistance Response Team (DART), which coordinated US Government disaster response, including search and rescue, with the Government of Nepal. Because of the possible utility of the forecasts to people in the region and other response teams, the USGS released these forecasts publicly through the USGS Earthquake Program web site. The initial forecast used the Reasenberg and Jones (Science, 1989) model with generic parameters developed for active deep continental regions based on the Garcia et al. (BSSA, 2012) tectonic regionalization. These were then updated to reflect a lower productivity and higher decay rate based on the observed aftershocks, although reliance on teleseismic observations, with a high magnitude of completeness, limited the amount of data. After the 12 May M7.3 aftershock, the forecasts used an Epidemic Type Aftershock Sequence model to better characterize the multiple sources of earthquake clustering. This model provided better estimates of aftershock uncertainty. The forecast messages were crafted based on lessons learned from the Christchurch earthquake, along with input from the U.S. Embassy staff in Kathmandu. Challenges included how to balance simple messaging with forecasts over a variety of time periods (week, month, and year), whether to characterize probabilities with words such as those suggested by the IPCC (IPCC, 2010), how to word the messages in a way that would translate accurately into Nepali and not alarm the public, and how to present the probabilities of unlikely but possible large and potentially damaging aftershocks, such as the M7.3 event, which had an estimated probability of only 1-in-200 for the week in which it occurred.
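
    The Reasenberg and Jones (1989) model underlying the initial forecast has a compact closed form. The sketch below uses the generic California parameter values for illustration; the Nepal forecasts used regionalized and then sequence-updated parameters, so the numbers printed here are not the issued probabilities.

      import numpy as np

      def rj_probability(m_main, m_min, t1, t2,
                         a=-1.67, b=0.91, c=0.05, p=1.08):
          """Reasenberg & Jones (1989): probability of at least one
          aftershock with magnitude >= m_min in [t1, t2] days after a
          mainshock of magnitude m_main, from the modified Omori law
          rate k * (t + c) ** -p with k = 10 ** (a + b * (m_main - m_min))."""
          k = 10.0 ** (a + b * (m_main - m_min))
          if abs(p - 1.0) < 1e-9:
              n = k * np.log((t2 + c) / (t1 + c))
          else:
              n = k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
          return 1.0 - np.exp(-n)

      # One-week probability of an M >= 7.3 aftershock, one day after an M7.8:
      print(rj_probability(7.8, 7.3, t1=1.0, t2=8.0))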

  20. A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximillan J.; Thatcher, Wayne R.

    2017-01-01

    Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.

  1. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
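
    A modified G-R distribution with a corner magnitude is commonly written as a G-R law in magnitude with an exponential taper in seismic moment. A sketch of that standard parameterization (zone rate, b-value, and corner magnitude below are hypothetical; the paper's exact form may differ):

      import numpy as np

      def tapered_gr_rate(m, rate0, b, m0, m_corner):
          """Cumulative rate of events >= m for a tapered Gutenberg-Richter
          distribution: pure G-R in magnitude, tapered exponentially in
          seismic moment above a corner magnitude."""
          moment = lambda mag: 10.0 ** (1.5 * mag + 9.1)  # N m
          gr = rate0 * 10.0 ** (-b * (m - m0))
          return gr * np.exp((moment(m0) - moment(m)) / moment(m_corner))

      m = np.arange(5.4, 8.6, 0.4)
      # Hypothetical zone: 0.5 events/yr above M5.4, b = 0.85, corner M8.0.
      print(tapered_gr_rate(m, rate0=0.5, b=0.85, m0=5.4, m_corner=8.0))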

  2. Three-dimensional fluid mapping and earthquake probabilities for induced seismicity sequences

    NASA Astrophysics Data System (ADS)

    Bachmann, C. E.; Wiemer, S.; Woessner, J.

    2010-12-01

    To stimulate the reservoir for a proposed enhanced geothermal system (EGS) project in the City of Basel, approximately 11,500 m3 of water were injected at high pressure into a 5-km-deep well between December 2nd and 8th, 2006. A six-sensor borehole array, installed by Geothermal Explorers Limited at depths between 50 and 2700 meters around the well to monitor the induced seismicity, recorded some 15,000 events during the injection phase, more than 3500 of them locatable. The induced seismicity covers an area of about two square kilometers between 3 and 5 km depth. Water injection was stopped after a widely felt ML 3.4 event that occurred on December 8th. Here, we map in space and time statistical parameters that describe the seismicity, such as the magnitude of completeness Mc, the b- and a-values of the frequency-magnitude distribution, and the local probability of large events. We find that the completeness level varies from Mc = 0.5 to Mc = 0.8, where the lowest completeness is observed for the shallowest seismicity. Higher b-values are located close to the initiation point of the injection at the casing shoe. With time, and with the gradual expansion of the seismicity, the b-values decrease near the edges of the seismicity cloud. The b-values range from 1.0 to above 2.0; large events occur preferentially in regions of previously low b-value. The local earthquake probabilities for a larger (M3+) event, determined from the local a- and b-values, show a clear correlation with the occurrence of events in this magnitude range, suggesting that by mapping the local a- and b-values, large-magnitude events could be forecast with greater accuracy than is possible when using bulk values only. There are several different hypotheses for how to explain induced seismicity, including increasing pore pressure, temperature decrease, volume changes, and chemical alteration of fracture surfaces. All of these are linked to the fluid migration within the rock. Previously, high b-values have been
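
    The two mapped quantities are straightforward to compute per mapping volume. A sketch using the standard Aki (1965) maximum-likelihood b-value with Utsu's binning correction, plus a Poisson extrapolation to M3+ (the synthetic magnitudes and Mc are placeholders for one local subvolume of the cloud):

      import numpy as np

      def aki_b_value(mags, mc, dm=0.1):
          """Maximum-likelihood b-value for events at or above the
          completeness magnitude mc; dm is the catalog's magnitude
          bin width (set dm=0 for unbinned magnitudes)."""
          m = np.asarray(mags)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

      def prob_large_event(mags, mc, m_target=3.0, dm=0.1):
          """Poisson probability (per observation window) of at least one
          M >= m_target event, extrapolated from local a- and b-values."""
          m = np.asarray(mags)
          n = int((m >= mc).sum())
          b = aki_b_value(m, mc, dm)
          a = np.log10(n) + b * mc        # from N(M >= mc) = 10 ** (a - b*mc)
          rate = 10.0 ** (a - b * m_target)
          return 1.0 - np.exp(-rate)

      # Hypothetical magnitudes from one mapping volume (b near 1.5):
      rng = np.random.default_rng(5)
      mags = rng.exponential(1 / (1.5 * np.log(10)), 400) + 0.5
      print(aki_b_value(mags, mc=0.5, dm=0.0),
            prob_large_event(mags, mc=0.5, dm=0.0))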

  3. Building Loss Estimation for Earthquake Insurance Pricing

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.

    2005-12-01

    After the 1999 earthquakes in Turkey, several changes in the insurance sector took place. A compulsory earthquake insurance scheme was introduced by the government. The reinsurance companies increased their rates. Some even suspended operations in the market. And, most important, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an earthquake loss assessment methodology that can be used for insurance pricing and portfolio loss estimation and that is based on our work experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent earthquake hazard, a regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey, and estimates of building replacement costs for different damage levels. Probable maximum and average annualized losses are estimated as the result of the analysis. There is a two-level earthquake insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory earthquake insurance scheme and the private earthquake insurance system. To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent earthquake hazard model that portrays the increased earthquake expectancies in Istanbul is used. Intensity- and spectral-displacement-based vulnerability relationships are incorporated in the analysis. In particular we look at the uncertainty in the loss estimates that arises from the vulnerability relationships, and at the effect of the implemented repair cost ratios.
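
    The two loss metrics named in the abstract can be illustrated with a toy calculation; the event rates, per-scenario losses, and the particular return-period convention for the probable maximum loss are all assumptions here, not the paper's values.

      import numpy as np

      annual_rates = np.array([0.02, 0.01, 0.004, 0.001])  # scenario rates (1/yr)
      portfolio_loss = np.array([1e7, 5e7, 2e8, 8e8])      # loss per scenario

      aal = float(np.sum(annual_rates * portfolio_loss))   # average annualized loss
      # One common PML convention: largest loss among scenarios with a
      # 475-yr (10% in 50 yr) or shorter return period.
      pml_475 = portfolio_loss[annual_rates >= 1 / 475].max()
      print(f"AAL = {aal:,.0f}, PML(475 yr) = {pml_475:,.0f}")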

  4. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate depending on whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may
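
    For reference, the Brownian passage-time density with mean recurrence time \mu and aperiodicity (coefficient of variation) \alpha can be written as

      f(t) = \sqrt{ \mu / (2 \pi \alpha^2 t^3) }
             \exp\!\left( - (t - \mu)^2 / (2 \mu \alpha^2 t) \right), \quad t > 0,

    and the conditional rupture probability after a quiet period of length t_0 is

      P(t_0 < T \le t_0 + \Delta t \mid T > t_0)
          = [ F(t_0 + \Delta t) - F(t_0) ] / [ 1 - F(t_0) ],

    where F is the corresponding cumulative distribution (an inverse Gaussian with shape parameter \mu / \alpha^2).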

  5. Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure: the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  6. Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.

    2013-12-01

    Earthquake forecasts have been computed by a variety of countries world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (JBR et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ('seismicity-based models') to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. This method computes large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space so that global forecasts can be computed in real time presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving up these datasets over the web on web-based platforms such as those at www.quakesim.org, www.e-decider.org, and www.openhazards.com.
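
    The ROC test mentioned above reduces, for a gridded probability forecast and binary outcomes, to sweeping an alarm threshold over the forecast values. A minimal sketch with synthetic cells (the data and the skill built into them are fabricated for illustration):

      import numpy as np

      def roc_points(forecast_prob, observed):
          """Hit rate vs. false-alarm rate as the alarm threshold sweeps
          over the forecast probabilities of all grid cells."""
          order = np.argsort(forecast_prob)[::-1]
          obs = np.asarray(observed, dtype=bool)[order]
          tpr = np.concatenate([[0.0], np.cumsum(obs) / max(obs.sum(), 1)])
          fpr = np.concatenate([[0.0], np.cumsum(~obs) / max((~obs).sum(), 1)])
          return fpr, tpr

      rng = np.random.default_rng(9)
      p = rng.random(1000)                 # hypothetical cell probabilities
      y = rng.random(1000) < 0.1 * p       # synthetic outcomes with some skill
      fpr, tpr = roc_points(p, y)
      auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
      print("area under ROC curve ~", round(auc, 2))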

  7. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    NASA Astrophysics Data System (ADS)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, or the tail of the distributions, in addition to the main distributions. The generalized Pareto distribution family is widely used for modelling events that cross a specified threshold value. The Pareto, truncated Pareto, and tapered Pareto are special cases of the generalized Pareto family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, truncated Pareto, and tapered Pareto distributions. As a case study, the Himalaya, whose orogeny gives rise to large earthquakes and which is one of the most active zones of the world, has been considered. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. Estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the tapered Pareto distribution better describes the seismicity of the seismic source zones in comparison to the other distributions considered in the present study.
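
    A peaks-over-threshold fit of the generalized Pareto family can be sketched as follows; the threshold and the synthetic excesses are placeholders for one source zone, not the paper's data.

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(11)
      threshold = 6.0
      excesses = rng.exponential(0.45, size=80)  # stand-in for M - threshold

      # Fit the generalized Pareto distribution to the threshold excesses.
      # Shape c = 0 reduces to an exponential (pure G-R) tail, c < 0 to a
      # truncated (bounded) tail, and c > 0 to a heavy Pareto tail.
      c, loc, scale = genpareto.fit(excesses, floc=0)
      p_m8 = genpareto.sf(8.0 - threshold, c, loc=0, scale=scale)
      print(f"shape = {c:.2f}, P(M >= 8 | M >= 6) = {p_m8:.3f}")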

  8. Analysis of the Seismicity Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2016-12-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether spatio-temporal variations in seismicity encode some information on the magnitude of future earthquakes. For this purpose, and to verify the universality of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Zaliapin (2013) to distinguish triggered and background earthquakes, using nearest-neighbor clustering analysis in a two-dimensional plane defined by rescaled time and space. In particular, we generalize the metric based on the nearest neighbor to a metric based on k-nearest-neighbors clustering analysis, which allows us to consider the overall space-time-magnitude distribution of the k earthquakes (k foreshocks) that anticipate one target event (the mainshock); we then analyze the statistical properties of the clusters identified in this rescaled space. In essence, the main goal of this study is to verify whether different classes of mainshock magnitude are characterized by distinctive k-foreshock distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
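
    The underlying (k = 1) Zaliapin-style distance can be sketched in a few lines; the b-value, fractal dimension, distance floor, and synthetic catalog below are assumptions for illustration.

      import numpy as np

      def nearest_neighbor_distance(t, x, y, m, b=1.0, df=1.6):
          """For each event j (events assumed time-sorted), the minimum over
          earlier events i of eta_ij = t_ij * r_ij**df * 10**(-b * m_i),
          with t in years, r in km, and df the assumed fractal dimension
          of epicenters; the 0.1-km floor on r is a regularization."""
          n = len(t)
          eta = np.full(n, np.inf)
          parent = np.full(n, -1)
          for j in range(1, n):
              dt = t[j] - t[:j]
              r = np.hypot(x[j] - x[:j], y[j] - y[:j])
              d = dt * np.maximum(r, 0.1) ** df * 10.0 ** (-b * m[:j])
              i = int(np.argmin(d))
              eta[j], parent[j] = d[i], i
          return eta, parent

      rng = np.random.default_rng(2)
      t = np.sort(rng.uniform(0.0, 10.0, 300))
      x, y = rng.uniform(0.0, 500.0, (2, 300))
      m = 2.0 + rng.exponential(1 / np.log(10), 300)
      eta, parent = nearest_neighbor_distance(t, x, y, m)
      print(np.median(np.log10(eta[1:])))   # low eta flags triggered events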

  9. Evidence for large prehistoric earthquakes in the northern New Madrid Seismic Zone, central United States

    USGS Publications Warehouse

    Li, Y.; Schweig, E.S.; Tuttle, M.P.; Ellis, M.A.

    1998-01-01

    We surveyed the area north of New Madrid, Missouri, for prehistoric liquefaction deposits and uncovered two new sites with evidence of pre-1811 earthquakes. At one site, located about 20 km northeast of New Madrid, Missouri, radiocarbon dating indicates that an upper sand blow was probably deposited after A.D. 1510 and a lower sand blow was deposited prior to A.D. 1040. A sand blow at another site about 45 km northeast of New Madrid, Missouri, is dated as likely being deposited between A.D. 55 and A.D. 1620 and represents the northernmost recognized expression of prehistoric liquefaction likely related to the New Madrid seismic zone. This study, taken together with other data, supports the occurrence of at least two earthquakes strong enough to induce liquefaction or faulting before A.D. 1811 and after A.D. 400. One earthquake probably occurred around A.D. 900 and a second around A.D. 1350. The data are not yet sufficient to estimate the magnitudes of the causative earthquakes for these liquefaction deposits, although we conclude that all of the earthquakes are at least moment magnitude M ~6.8, the size of the 1895 Charleston, Missouri, earthquake. A more rigorous estimate of the number and sizes of prehistoric earthquakes in the New Madrid seismic zone awaits evaluation of additional sites.

  10. Intraplate earthquakes and the state of stress in oceanic lithosphere

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.

    1986-01-01

    The dominant sources of stress relieved in oceanic intraplate earthquakes are investigated to examine the usefulness of earthquakes as indicators of stress orientation. The primary data for this investigation are detailed source studies of 58 of the largest of these events, performed with the body-waveform inversion technique of Nabelek (1984). The relationship between the earthquakes and the intraplate stress fields was investigated by studying the rate of seismic moment release as a function of age, the source mechanisms and tectonic associations of the larger events, and the depth dependence of various source parameters. The results indicate that the earthquake focal mechanisms are empirically reliable indicators of stress, probably reflecting the fact that an earthquake will occur most readily on a fault plane oriented in such a way that the resolved shear stress is maximized while the normal stress across the fault is minimized.

  11. Earthquake prediction in seismogenic areas of the Iberian Peninsula based on computational intelligence

    NASA Astrophysics Data System (ADS)

    Morales-Esteban, A.; Martínez-Álvarez, F.; Reyes, J.

    2013-05-01

    A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only very few and very recent studies have been conducted on earthquake prediction. Two kinds of predictions are provided in this study: a) the probability that an earthquake of magnitude equal to or larger than a preset threshold magnitude will occur within the next 7 days; b) the probability that an earthquake within a limited magnitude interval will occur during the next 7 days. First, the physical fundamentals related to earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the configuration chosen is justified. Then, the ANNs were trained in both areas: the Alborán Sea and the Western Azores-Gibraltar fault. Later, the ANNs were tested in both areas for a period of time immediately subsequent to the training period. Statistical tests are provided showing meaningful results. Finally, the ANNs were compared to other well-known classifiers, showing quantitatively and qualitatively better results. The authors expect that the results obtained will encourage researchers to conduct further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; application of ANNs proves particularly reliable for earthquake prediction; use of geophysical information modeling the soil behavior as the ANNs' input data; successful analysis of one region with large seismic activity.
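
    A skeletal version of such a classifier is shown below; the feature set, network size, and synthetic labels are illustrative assumptions, not the paper's configuration (which derives its inputs from b-value and related seismicity indicators).

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # Hypothetical training set: seismicity-derived window features and a
      # binary label for whether an M >= 4.5 event followed within 7 days.
      rng = np.random.default_rng(13)
      X = rng.normal(size=(500, 7))
      y = rng.random(500) < 1.0 / (1.0 + np.exp(-(X[:, 0] - X[:, 1])))

      ann = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000,
                          random_state=0)
      ann.fit(X[:400], y[:400])
      print("P(M >= 4.5 within 7 days):", ann.predict_proba(X[400:403])[:, 1])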

  12. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock

  13. Role of the Internet in Anticipating and Mitigating Earthquake Catastrophes, and the Emergence of Personal Risk Management (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Donnellan, A.; Graves, W.; Tiampo, K. F.; Klein, W.

    2009-12-01

    Risks from natural and financial catastrophes are currently managed by a combination of large public and private institutions. Public institutions usually comprise government agencies that conduct studies, formulate policies and guidelines, enforce regulations, and make “official” forecasts. Private institutions include insurance and reinsurance companies, and financial service companies that underwrite catastrophe (“cat”) bonds and make private forecasts. Although decisions about allocating resources and developing solutions are made by large institutions, the costs of dealing with catastrophes fall for the most part on businesses and the general public. Information on potential risks is generally available to the public for some hazards but not others. For example, in the case of weather, private forecast services are provided by www.weather.com and www.wunderground.com. For earthquakes in California (only), the official forecast is the WGCEP-USGS forecast, but it is provided in a format that is difficult for the public to use. Other privately made forecasts are currently available, for example by the JPL QuakeSim and Russian groups, but these efforts are limited. As more of the world’s population moves increasingly into major seismic zones, new strategies are needed to allow individuals to manage their personal risk from large and damaging earthquakes. Examples include individual mitigation measures such as retrofitting, microinsurance in both developing and developed countries, and other financial strategies. We argue that the “long tail” of the internet offers an ideal, and greatly underutilized, mechanism to reach out to consumers and to provide them with the information and tools they need to confront and manage seismic hazard and risk on an individual, personalized basis. Information of this type includes not only global hazard forecasts, which are now possible, but also global risk estimation. Additionally

  14. Lessons Learned about Best Practices for Communicating Earthquake Forecasting and Early Warning to Non-Scientific Publics

    NASA Astrophysics Data System (ADS)

    Sellnow, D. D.; Sellnow, T. L.

    2017-12-01

    Earthquake scientists are without doubt experts in understanding earthquake probabilities, magnitudes, and intensities, as well as the potential consequences of them to community infrastructures and inhabitants. One critical challenge these scientific experts face, however, rests with communicating what they know to the people they want to help. Helping scientists translate scientific information to non-scientists is something Drs. Tim and Deanna Sellnow have been committed to for decades. As such, they have compiled a host of data-driven best practices for communicating effectively to non-scientific publics about earthquake forecasting, probabilities, and warnings. In this session, they will summarize what they have learned as it may help earthquake scientists, emergency managers, and other key spokespersons share these important messages to disparate publics in ways that result in positive outcomes, the most important of which is saving lives.

  15. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance of less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location information is then a set of points at the stations' locations. For practical purposes, however, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the
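
    The discriminant quoted in the abstract translates directly into code. The units assumed below (cm/s² for Za, cm/s for Hv) are a guess consistent with common strong-motion conventions and should be checked against the original paper before use.

      from math import exp, log10

      def near_source_probability(za, hv):
          """Probability that a station lies within 10 km of the fault
          rupture, from the logistic discriminant in the abstract
          (Yamada et al., 2007); za is the peak vertical acceleration
          and hv the peak horizontal velocity (units as assumed above)."""
          f = 6.046 * log10(za) + 7.885 * log10(hv) - 27.091
          return 1.0 / (1.0 + exp(-f))

      # A strongly shaken station is classified near-source with high
      # probability; a weakly shaken one is almost surely far-source.
      print(near_source_probability(za=400.0, hv=50.0))   # ~0.9
      print(near_source_probability(za=30.0, hv=2.0))     # ~0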

  16. Tectonics of the March 27, 1964, Alaska earthquake: Chapter I in The Alaska earthquake, March 27, 1964: regional effects

    USGS Publications Warehouse

    Plafker, George

    1969-01-01

    The March 27, 1964, earthquake was accompanied by crustal deformation, including warping, horizontal distortion, and faulting, over probably more than 110,000 square miles of land and sea bottom in south-central Alaska. Regional uplift and subsidence occurred mainly in two nearly parallel elongate zones, together about 600 miles long and as much as 250 miles wide, that lie along the continental margin. From the earthquake epicenter in northern Prince William Sound, the deformation extends eastward 190 miles almost to long 142° and southwestward slightly more than 400 miles to about long 155°. It extends across the two zones from the chain of active volcanoes in the Aleutian Range and Wrangell Mountains probably to the Aleutian Trench axis. Uplift that averages 6 feet over broad areas occurred mainly along the coast of the Gulf of Alaska, on the adjacent Continental Shelf, and probably on the continental slope. This uplift attained a measured maximum on land of 38 feet in a northwest-trending narrow belt less than 10 miles wide that is exposed on Montague Island in southwestern Prince William Sound. Two earthquake faults exposed on Montague Island are subsidiary northwest-dipping reverse faults along which the northwest blocks were relatively displaced a maximum of 26 feet, and both blocks were upthrown relative to sea level. From Montague Island, the faults and related belt of maximum uplift may extend southwestward on the Continental Shelf to the vicinity of the Kodiak group of islands. To the north and northwest of the zone of uplift, subsidence forms a broad asymmetrical downwarp centered over the Kodiak-Kenai-Chugach Mountains that averages 2½ feet and attains a measured maximum of 7½ feet along the southwest coast of the Kenai Peninsula. Maximum indicated uplift in the Alaska and Aleutian Ranges to the north of the zone of subsidence was 1½ feet. Retriangulation over roughly 25,000 square miles of the deformed region in and around Prince William Sound

  17. Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region

    NASA Astrophysics Data System (ADS)

    Gülen, L.; Wp2 Team

    2010-12-01

    The Earthquake Model of the Middle East (EMME) Project is a regional project of the umbrella GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey forms a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project will use the PSHA approach, and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project from previous ones will be its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that will permit continuous updating, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw ≥ 5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 fault sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the PDF files of the relevant papers and reports is also being prepared. Another task of WP-2 of the EMME project is to prepare
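
    An illustrative record layout for such a fault-section table is sketched below; the attribute names and the example values are assumptions for demonstration, not the EMME schema itself.

      from dataclasses import dataclass

      @dataclass
      class FaultSection:
          """One record of a WGCEP/UCERF-style Fault Section Database
          (illustrative schema; the project's actual attribute list is
          richer). A new section is defined wherever any parameter
          changes along a fault zone."""
          section_id: str
          fault_zone: str
          trace_wgs84: list          # [(lon, lat), ...] digitized trace
          dip_deg: float
          rake_deg: float
          upper_depth_km: float
          lower_depth_km: float
          slip_rate_mm_yr: float
          slip_rate_sigma: float
          max_magnitude: float       # sections capable of Mw >= 5.5

      section = FaultSection("NAF-001", "North Anatolian Fault",
                             [(30.0, 40.70)], 90.0, 180.0,
                             0.0, 15.0, 20.0, 2.0, 7.6)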

  18. Faith after an Earthquake: A Longitudinal Study of Religion and Perceived Health before and after the 2011 Christchurch New Zealand Earthquake

    PubMed Central

    Sibley, Chris G.; Bulbulia, Joseph

    2012-01-01

    On 22 February 2011, Christchurch New Zealand (population 367,700) experienced a devastating earthquake, causing extensive damage and killing one hundred and eighty-five people. The earthquake and aftershocks occurred between the 2009 and 2011 waves of a longitudinal probability sample conducted in New Zealand, enabling us to examine how a natural disaster of this magnitude affected deeply held commitments and global ratings of personal health, depending on earthquake exposure. We first investigated whether the earthquake-affected were more likely to believe in God. Consistent with the Religious Comfort Hypothesis, religious faith increased among the earthquake-affected, despite an overall decline in religious faith elsewhere. This result offers the first population-level demonstration that secular people turn to religion at times of natural crisis. We then examined whether religious affiliation was associated with differences in subjective ratings of personal health. We found no evidence for superior buffering from having religious faith. Among those affected by the earthquake, however, a loss of faith was associated with significant subjective health declines. Those who lost faith elsewhere in the country did not experience similar health declines. Our findings suggest that religious conversion after a natural disaster is unlikely to improve subjective well-being, yet upholding faith might be an important step on the road to recovery. PMID:23227147

  19. Unusual geologic evidence of coeval seismic shaking and tsunamis shows variability in earthquake size and recurrence in the area of the giant 1960 Chile earthquake

    USGS Publications Warehouse

    Cisternas, M.; Garrett, E.; Wesson, Robert L.; Dura, T.; Ely, L. L.

    2017-01-01

    An uncommon coastal sedimentary record combines evidence for seismic shaking and coincident tsunami inundation since AD 1000 in the region of the largest earthquake recorded instrumentally: the giant 1960 southern Chile earthquake (Mw 9.5). The record reveals significant variability in the size and recurrence of megathrust earthquakes and ensuing tsunamis along this part of the Nazca-South American plate boundary. A 500-m-long coastal outcrop on Isla Chiloé, midway along the 1960 rupture, provides continuous exposure of soil horizons buried locally by debris-flow diamicts and extensively by tsunami sand sheets. The diamicts flattened plants that yield geologically precise ages to correlate with well-dated evidence elsewhere. The 1960 event was preceded by three earthquakes that probably resembled it in their effects, in AD 898-1128, 1300-1398 and 1575, and by five relatively smaller intervening earthquakes. Earthquakes and tsunamis recurred exceptionally often between AD 1300 and 1575. Their average recurrence interval of 85 years only slightly exceeds the time already elapsed since 1960. This inference is of serious concern because no earthquake has been anticipated in the region so soon after the 1960 event, and current plate locking suggests that some segments of the boundary are already capable of producing large earthquakes. This long-term earthquake and tsunami history of one of the world's most seismically active subduction zones provides an example of variable rupture mode, in which earthquake size and recurrence interval vary from one earthquake to the next.

  20. Predicted liquefaction of East Bay fills during a repeat of the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Blair, J.L.; Noce, T.E.; Bennett, M.J.

    2006-01-01

    Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings, using median (50th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M ∼ 7.8 earthquake on the northern San Andreas Fault. © 2006, Earthquake Engineering Research Institute.

  1. Long aftershock sequences within continents and implications for earthquake hazard assessment.

    PubMed

    Stein, Seth; Liu, Mian

    2009-11-05

    One of the most powerful features of plate tectonics is that the known plate motions give insight into both the locations and average recurrence interval of future large earthquakes on plate boundaries. Plate tectonics gives no insight, however, into where and when earthquakes will occur within plates, because the interiors of ideal plates should not deform. As a result, within plate interiors, assessments of earthquake hazards rely heavily on the assumption that the locations of small earthquakes shown by the short historical record reflect continuing deformation that will cause future large earthquakes. Here, however, we show that many of these recent earthquakes are probably aftershocks of large earthquakes that occurred hundreds of years ago. We present a simple model predicting that the length of aftershock sequences varies inversely with the rate at which faults are loaded. Aftershock sequences within the slowly deforming continents are predicted to be significantly longer than the decade typically observed at rapidly loaded plate boundaries. These predictions are in accord with observations. So the common practice of treating continental earthquakes as steady-state seismicity overestimates the hazard in presently active areas and underestimates it elsewhere.

  2. Statistical validation of earthquake related observations

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2011-12-01

    The confirmed fractal nature of earthquakes and of their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term ones) rest on erroneous assumptions of easily tractable or, conversely, delicately designed models. The widespread practice of treating such deceptive modeling as a "reasonable proxy" for the natural seismic process leads to seismic hazard assessments of unknown quality, whose errors propagate non-linearly into the resulting estimates of risk and, eventually, into unexpected societal losses of unacceptable level. Studies aimed at the forecast/prediction of earthquakes must include validation in retrospective (at least) and, eventually, in prospective tests. In the absence of such control, a suggested "precursor/signal" remains a "candidate", whose link to the target seismic events is a model assumption. Predicting in advance is the only decisive test of forecasts/predictions; therefore the score-card of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must prove statistical significance by rejecting the null hypothesis that the alarms coincide with the target earthquakes by random chance. We reiterate the so-called "Seismic Roulette" null hypothesis as the most adequate undisturbed random alternative, since it accounts for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as the number of earthquake locations in a sample catalog representing the seismic locus, one sector per location; (ii) make your bet according to the prediction (i.e., determine which locations are inside the area of alarm, and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate the statistics of wins and losses along with the number of chips spent. If a precursor in charge of prediction exposes an imperfection of Seismic Roulette then, having in mind
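
    The roulette procedure above maps directly onto a binomial significance test. The sketch below is a minimal illustration, not taken from the paper: the catalog, the alarm box, and the hit counts are all invented, and only the scoring logic follows the four steps described.

    ```python
    import numpy as np
    from scipy.stats import binomtest

    rng = np.random.default_rng(42)

    # Hypothetical sample catalog: 500 epicenter locations defining the
    # "roulette wheel" -- one sector per catalog location.
    catalog = rng.uniform(low=[0.0, 0.0], high=[10.0, 10.0], size=(500, 2))

    # Hypothetical alarm area: the prediction covers a box in the southwest.
    in_alarm = (catalog[:, 0] < 4.0) & (catalog[:, 1] < 4.0)
    p_chance = in_alarm.mean()  # chance a random "spin" lands in the alarm area

    # Suppose the tested precursor scored 7 hits out of 10 target earthquakes.
    hits, targets = 7, 10

    # Null hypothesis (Seismic Roulette): targets fall on catalog locations at
    # random, so the hit count is Binomial(targets, p_chance).
    result = binomtest(hits, targets, p_chance, alternative="greater")
    print(f"chance hit rate = {p_chance:.2f}, one-sided p-value = {result.pvalue:.4f}")
    ```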

  3. Loss Estimations due to Earthquakes and Secondary Technological Hazards

    NASA Astrophysics Data System (ADS)

    Frolova, N.; Larionov, V.; Bonnin, J.

    2009-04-01

    Expected loss and damage assessment due to natural and technological disasters is of primary importance for emergency management just after a disaster, as well as for the development and implementation of preventive-measure plans. The paper addresses the procedures and simulation models for loss estimation due to strong earthquakes and secondary technological accidents. The mathematical models for shaking intensity distribution, damage to buildings and structures, debris volume, and the number of fatalities and injuries due to earthquakes and technological accidents at fire- and chemical-hazardous facilities are considered; these models are used in geographical information systems designed for these purposes. The criteria for the occurrence of technological accidents are developed on the basis of engineering analysis of the consequences of past events. The paper provides the results of estimating the consequences of scenario earthquakes and of individual seismic risk assessment taking into account secondary technological hazards at regional and urban levels. The individual risk is understood as the probability of death (or injury) due to a possible hazardous event within one year in a given territory. It is determined through the mathematical expectation of social losses, taking into account the number of inhabitants in the considered settlement and the probability of the natural and/or technological disaster.
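
    Read literally, the stated definition (individual risk as the mathematical expectation of social losses, normalized by population) can be written as below; the notation is ours, introduced for illustration, not the authors'.

    ```latex
    R_{\mathrm{ind}} \;=\; \frac{1}{N}\sum_{k} P_k\, \mathbb{E}[D_k]
    ```

    where $P_k$ is the annual probability of hazardous event $k$ (an earthquake or a triggered technological accident), $\mathbb{E}[D_k]$ is the expected number of deaths (or injuries) given that event, and $N$ is the number of inhabitants of the settlement.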

  4. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations in seismicity in the space-time-magnitude domain encode some information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows: the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest-neighbors that allows us to consider the overall space-time-magnitude distribution of the k earthquakes that are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k-nearest-neighbors. In essence, the main goal of this study is to verify whether different classes of target-event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
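
    For readers unfamiliar with the Baiesi & Paczuski (2004) metric, a minimal sketch follows. The functional form eta_ij = dt * r**d_f * 10**(-b*m_i) is the standard one; the b and d_f values, the toy catalog, and the k-ancestor selection are illustrative assumptions, not the paper's calibrated choices.

    ```python
    import numpy as np

    def bp_distances(t, x, y, m, b=1.0, d_f=1.6):
        """Baiesi-Paczuski space-time-magnitude distance from every earlier
        event i to every later event j: eta_ij = dt_ij * r_ij**d_f * 10**(-b*m_i).
        Pairs with dt <= 0 get +inf (an ancestor must precede its target)."""
        n = len(t)
        eta = np.full((n, n), np.inf)
        for j in range(n):
            for i in range(n):
                dt = t[j] - t[i]
                if dt <= 0.0:
                    continue
                r = max(np.hypot(x[j] - x[i], y[j] - y[i]), 1e-3)  # avoid r = 0
                eta[i, j] = dt * r ** d_f * 10.0 ** (-b * m[i])
        return eta

    def k_nearest_ancestors(eta, j, k=3):
        """Indices of the k most strongly correlated ancestors of event j."""
        order = np.argsort(eta[:, j])[:k]
        return order[np.isfinite(eta[order, j])]

    # Toy catalog: times (days), positions (km), magnitudes.
    t = np.array([0.0, 1.2, 3.5, 3.6, 9.0])
    x = np.array([0.0, 1.0, 0.5, 0.6, 30.0])
    y = np.array([0.0, 0.8, 0.4, 0.5, 25.0])
    m = np.array([4.0, 2.5, 3.0, 2.0, 5.0])
    print(k_nearest_ancestors(bp_distances(t, x, y, m), j=4, k=3))
    ```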

  5. Paleo-earthquake timing on the North Anatolian Fault: Where, when, and how sure are we?

    NASA Astrophysics Data System (ADS)

    Fraser, J.; Vanneste, K.; Hubert-Ferrari, A.

    2009-04-01

    The North Anatolian Fault (NAF) traces from the Karliova Triple Junction in the east 1400 km into the Aegean Sea in the west, forming a northwardly convex arch across northern Turkey. In the 20th century the NAF ruptured in an approximately east-to-west migrating sequence of large, destructive and deadly earthquakes. This migrating sequence suggests a simple relationship between crustal loading and fault rupture. A primary question remains: does the NAF always rupture in episodic bursts? To address this question we have reanalysed selected pre-existing paleoseismic investigations (PIs) from along the NAF, using Bayesian statistical modelling to determine a standardised record of the temporal probability distribution of earthquakes. A wealth of paleoseismic records has accumulated over recent years concerning the NAF, although much of this research remains unpublished. A significant output of this study is a tabulation of results from all existing published paleoseismic studies on the NAF, with recalibration of the radiocarbon ages using a standardised methodology and standardised error reporting, determining the earthquake probability directly rather than using errors associated with individual bounding dates. We followed the approach outlined in Biasi & Weldon (1994) and in Biasi et al. (2002) to calculate the actual probability density distributions for the timing of paleoseismic events and for the recurrence intervals. Our implementation of these algorithms is reasonably fast and yields PDFs that are comparable to, but smoother than, those obtained by Markov Chain Monte Carlo type simulations (e.g., OxCal; Bronk Ramsey, 2007). Additionally we introduce three new earthquake records from PIs we have conducted in spatial gaps in the existing data. By presenting all of this earthquake data we hope to focus further studies and help to define the distribution of earthquake risk. Because of the long historical record of earthquakes in Turkey, we can begin to address some

  6. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep postseismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps, because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study, because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.
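
    As a concrete picture of the modification described, one can simulate a Brownian relaxation oscillator whose loading rate carries an added decaying exponential. This is a sketch under invented parameter values (lam, A, tau, sigma, threshold are all assumptions), not the paper's calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def interevent_times(n_events=500, lam=1.0, A=0.5, tau=0.2,
                         sigma=0.3, threshold=1.0, dt=1e-3):
        """Draw interevent times from a Brownian relaxation oscillator whose
        loading rate is lam + A*exp(-t/tau) -- a uniform tectonic rate plus
        a decaying postseismic transient -- instead of the uniform rate alone."""
        times = []
        for _ in range(n_events):
            state, t = 0.0, 0.0
            while state < threshold:           # "rupture" at the threshold
                rate = lam + A * np.exp(-t / tau)
                state += rate * dt + sigma * rng.normal() * np.sqrt(dt)
                t += dt
            times.append(t)
        return np.asarray(times)

    T = interevent_times()
    print(f"mean interval = {T.mean():.3f}, CoV = {T.std() / T.mean():.3f}")
    ```

    Rerunning with A = 0 recovers the uniform-loading BPT case, so the change in the coefficient of variability attributable to the transient term can be read off directly.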

  7. The Collapse of Ancient Societies by Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Nur, A. M.

    2001-12-01

    Although earthquakes have often been associated with inexplicable past societal disasters, their impact has been thought to be only secondary, for two reasons: inconclusive archaeological interpretation of excavated destruction, and misconceptions about patterns of seismicity. However, new and revised archaeological evidence and a better understanding of the irregularities of the time-space patterns of large earthquakes together suggest that earthquakes (and associated tsunamis) have probably been responsible for some of the great and enigmatic catastrophes of ancient times. The most relevant aspect of seismicity is the episodic time-space clustering of earthquakes, such as during the eastern Mediterranean seismic crisis in the second half of the 4th century AD and the seismicity of the north Anatolian fault during our century. In these clusters, a plate boundary ruptures in a series of large earthquakes that occur over a period of only 50 to 100 years or so, followed by hundreds or even thousands of years of relative inactivity. The extent of the destruction by such rare but powerful earthquake clusters must have been far greater than in similar modern events, owing to poorer construction and the lack of any earthquake preparedness in ancient times. The destruction by very big earthquakes also made ancient societies especially vulnerable because so much of the wealth and power was concentrated in, and protected by, so few. Thus the breaching by an earthquake of the elite's fortified cities must often have led to attacks by (1) external enemies during ongoing wars (e.g., Joshua and Jericho, the Arab attack on Herod's Jerusalem in 31 BCE); (2) neighbors during ongoing conflicts (e.g., Mycenae's fall @1200 BCE, Saul's battle at Michmash @1020 BCE); and (3) uprisings of poor and often enslaved indigenous populations (e.g., Sparta and the Helots @465 BCE, Hattusas @1200 BCE?, Teotihuacan @700 AD). When the devastation was by a local earthquake, during a modest conflict, damage was

  8. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters differ, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimension space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The cumulative distributions of earthquake parameters, in general unknown, are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: finding hierarchically closest neighbours in time-space, and assessing temporal variations of earthquake clustering in a specific 4-D phase space.
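
    A toy version of the transformation, using the plain empirical CDF in place of the paper's kernel-estimated one (a simplification made here for brevity; the data are invented):

    ```python
    import numpy as np

    def to_equivalent_dimension(values):
        """Map a parameter to its equivalent dimension (ED): the empirical
        cumulative distribution function evaluated at each value, so every
        transformed parameter lives on a linear [0, 1] scale."""
        ranks = np.argsort(np.argsort(values))        # 0 .. n-1
        return (ranks + 1) / (len(values) + 1)

    # Euclidean distance between events in the ED space of (time, magnitude):
    t = np.array([0.1, 0.5, 2.0, 2.1, 7.3])   # occurrence times (days)
    m = np.array([2.0, 4.5, 3.1, 2.2, 5.0])   # magnitudes
    ed = np.column_stack([to_equivalent_dimension(t), to_equivalent_dimension(m)])
    print(ed.round(2))
    print("d(event 0, event 1) =", np.linalg.norm(ed[0] - ed[1]).round(3))
    ```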

  9. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisitions. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or are water of seas and lakes. The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using the SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends westward past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul.
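
    The fringe-to-displacement conversion quoted above is simple arithmetic: one fringe is half the radar wavelength along the line of sight, and dividing by the sine of the look angle converts that to horizontal motion. In the sketch below the C-band wavelength is the standard ERS value, but the 23° incidence angle is a nominal assumed number.

    ```python
    import math

    wavelength_mm = 56.6                  # ERS C-band radar wavelength
    los_per_fringe = wavelength_mm / 2.0  # one color contour: ~28 mm toward the satellite
    incidence_deg = 23.0                  # nominal ERS look angle (assumed here)

    horiz_per_fringe = los_per_fringe / math.sin(math.radians(incidence_deg))
    print(f"{los_per_fringe:.0f} mm LOS  ->  ~{horiz_per_fringe:.0f} mm horizontal per fringe")
    ```

    The result, roughly 72 mm of horizontal motion per fringe, matches the "about 70 mm" quoted in the caption.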

  10. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    NASA Astrophysics Data System (ADS)

    Plotkin, V. V.

    A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected an ionospheric perturbation before this earthquake using GPS data received from the CORS network. Systematic decreases of ionospheric total electron content during the two days before the earthquake onset were observed at a set of stations near the earthquake location, and probably within a region of about 1000 km from the epicenter. This result is consistent with those of investigators who studied these phenomena with several observational techniques. However, it is possible that such TEC changes are simultaneously accompanied by changes due to solar wind parameters and the Kp index.

  11. Effects of the March 1964 Alaska earthquake on glaciers: Chapter D in The Alaska earthquake, March 27, 1964: effects on hydrologic regimen

    USGS Publications Warehouse

    Post, Austin

    1967-01-01

    The 1964 Alaska earthquake occurred in a region where there are many hundreds of glaciers, large and small. Aerial photographic investigations indicate that no snow and ice avalanches of large size occurred on glaciers despite the violent shaking. Rockslide avalanches extended onto the glaciers in many localities, seven very large ones occurring in the Copper River region 160 kilometers east of the epicenter. Some of these avalanches traveled several kilometers at low gradients; compressed air may have provided a lubricating layer. If long-term changes in glaciers due to tectonic changes in altitude and slope occur, they will probably be very small. No evidence of large-scale dynamic response of any glacier to earthquake shaking or avalanche loading was found in either the Chugach or Kenai Mountains 16 months after the 1964 earthquake, nor was there any evidence of surges (rapid advances) as postulated by the Earthquake-Advance Theory of Tarr and Martin.

  12. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, caused by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  13. The earthquake prediction experiment at Parkfield, California

    USGS Publications Warehouse

    Roeloffs, E.; Langbein, J.

    1994-01-01

    Since 1985, a focused earthquake prediction experiment has been in progress along the San Andreas fault near the town of Parkfield in central California. Parkfield has experienced six moderate earthquakes since 1857 at average intervals of 22 years, the most recent a magnitude 6 event in 1966. The probability of another moderate earthquake soon appears high, but studies assigning it a 95% chance of occurring before 1993 now appear to have been oversimplified. The identification of a Parkfield fault "segment" was initially based on geometric features in the surface trace of the San Andreas fault, but more recent microearthquake studies have demonstrated that those features do not extend to seismogenic depths. On the other hand, geodetic measurements are consistent with the existence of a "locked" patch on the fault beneath Parkfield that has presently accumulated a slip deficit equal to the slip in the 1966 earthquake. A magnitude 4.7 earthquake in October 1992 brought the Parkfield experiment to its highest level of alert, with a 72-hour public warning that there was a 37% chance of a magnitude 6 event. However, this warning proved to be a false alarm. Most data collected at Parkfield indicate that strain is accumulating at a constant rate on this part of the San Andreas fault, but some interesting departures from this behavior have been recorded. Here we outline the scientific arguments bearing on when the next Parkfield earthquake is likely to occur and summarize geophysical observations to date.

  14. Bi-directional volcano-earthquake interaction at Mauna Loa Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Amelung, F.

    2004-12-01

    At Mauna Loa volcano, Hawaii, large-magnitude earthquakes occur mostly at the west flank (Kona area), at the southeast flank (Hilea area), and at the east flank (Kaoiki area). Eruptions at Mauna Loa occur mostly at the summit region and along fissures at the southwest rift zone (SWRZ) or the northeast rift zone (NERZ). Although historic earthquakes and eruptions in these zones appear to correlate in space and time, the mechanisms and implications of an eruption-earthquake interaction had not been clarified. Our analysis of the available data reveals that eruption-earthquake pairs are statistically highly significant, with a probability of only 5 to 15 percent of arising at random. We clarify this correlation with the help of elastic stress-field models, in which (i) we simulate earthquakes and calculate the resulting normal stress change at the volcanically active zones of Mauna Loa, and (ii) we simulate intrusions in Mauna Loa and calculate the Coulomb stress change at the active fault zones. Our models suggest that Hilea earthquakes encourage dike intrusion in the SWRZ, Kona earthquakes encourage dike intrusion at the summit and in the SWRZ, and Kaoiki earthquakes encourage dike intrusion in the NERZ. Moreover, a dike in the SWRZ encourages earthquakes in the Hilea and Kona areas. A dike in the NERZ may either encourage or discourage earthquakes in the Hilea and Kaoiki areas. The modeled stress change patterns coincide remarkably well with the patterns of several historic eruption-earthquake pairs, clarifying the mechanisms of bi-directional volcano-earthquake interaction at Mauna Loa. The results imply that at Mauna Loa volcanic activity influences the timing and location of earthquakes, and that earthquakes influence the timing, location and volume of eruptions. In combination with near real-time geodetic and seismic monitoring, these findings may improve volcano-tectonic risk assessment.

  15. The Academic Impact of Natural Disasters: Evidence from L'Aquila Earthquake

    ERIC Educational Resources Information Center

    Di Pietro, Giorgio

    2018-01-01

    This paper uses a standard difference-in-differences approach to examine the effect of the L'Aquila earthquake on the academic performance of the students of the local university. The empirical results indicate that this natural disaster reduced students' probability of graduating on-time and slightly increased students' probability of dropping…
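
    The difference-in-differences logic named here is compact enough to state numerically. All numbers below are invented placeholders; only the arithmetic mirrors the design.

    ```python
    # Hypothetical on-time graduation rates (shares), before/after the quake:
    treated_pre, treated_post = 0.62, 0.51   # affected university (assumed numbers)
    control_pre, control_post = 0.60, 0.58   # comparison universities (assumed)

    # Difference-in-differences: the change in the treated group minus the
    # change the control group says would have happened anyway.
    did = (treated_post - treated_pre) - (control_post - control_pre)
    print(f"estimated effect on P(on-time graduation): {did:+.2f}")
    ```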

  16. Earthquake hazard analysis for the different regions in and around Ağrı

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    We investigated earthquake hazard parameters for the eastern part of Turkey by determining the a and b parameters of the Gutenberg–Richter magnitude–frequency relationship. For this purpose, the study area was divided into seven source zones based on their tectonic and seismotectonic regimes. The database used in this work was compiled from different sources and catalogues, such as TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS) and The Scientific and Technological Research Council of Turkey (TUBITAK), for the instrumental period. We calculated the a value and the b value, the latter being the slope of the Gutenberg–Richter frequency–magnitude relationship, by the maximum-likelihood (ML) method. We also estimated the mean return periods, the most probable maximum magnitude in a time period of t years, and the probability of occurrence of an earthquake of magnitude ≥ M during a time span of t years, using the Zmap software. The lowest b value was calculated in Region 1, which covers the Cobandede Fault Zone. We obtained the highest a value in Region 2, which covers the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which shows the largest value (87%) for an earthquake with magnitude greater than or equal to 6.0. The mean return period for such a magnitude is the lowest in this region (49 years). The most probable magnitude in the next 100 years was calculated, and the highest value was determined around the Cobandede Fault Zone. According to these parameters, Region 1, which covers the Cobandede Fault Zone, is the most dangerous area in the eastern part of Turkey.
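
    The quantities named here (ML b value, mean return period, t-year exceedance probability) chain together in a few lines. The sketch below uses the standard Aki (1965) estimator and a Poisson occurrence model; the a and b inputs are illustrative values chosen so the output roughly echoes the 87% / 49-year pairing quoted, not the paper's actual zone parameters.

    ```python
    import numpy as np

    def aki_b_value(mags, m_c, dm=0.1):
        """Maximum-likelihood b value (Aki 1965) with Utsu's half-bin
        correction, for binned magnitudes at or above completeness m_c."""
        m = np.asarray(mags, dtype=float)
        m = m[m >= m_c]
        return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

    def poisson_hazard(a, b, m, t_years):
        """Annual Gutenberg-Richter rate N(>=m) = 10**(a - b*m); returns the
        probability of at least one event in t_years under a Poisson model,
        and the mean return period."""
        rate = 10.0 ** (a - b * m)
        return 1.0 - np.exp(-rate * t_years), 1.0 / rate

    # Recover b ~ 0.8 from synthetic magnitudes binned to 0.1 units:
    rng = np.random.default_rng(7)
    mags = np.round(2.95 + rng.exponential(np.log10(np.e) / 0.8, 5000), 1)
    print(f"b = {aki_b_value(mags, m_c=3.0):.2f}")

    # Illustrative a, b giving roughly the quoted 87% / ~50-yr pair:
    p, tr = poisson_hazard(a=3.1, b=0.8, m=6.0, t_years=100.0)
    print(f"P(M>=6 in 100 yr) = {p:.0%}, return period = {tr:.0f} yr")
    ```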

  17. Earthquake hazard analysis for the different regions in and around Ağrı

    NASA Astrophysics Data System (ADS)

    Bayrak, Erdem; Yilmaz, Şeyda; Bayrak, Yusuf

    2016-04-01

    We investigated earthquake hazard parameters for the eastern part of Turkey by determining the a and b parameters of the Gutenberg-Richter magnitude-frequency relationship. For this purpose, the study area was divided into seven source zones based on their tectonic and seismotectonic regimes. The database used in this work was compiled from different sources and catalogues, such as TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS) and The Scientific and Technological Research Council of Turkey (TUBITAK), for the instrumental period. We calculated the a value and the b value, the latter being the slope of the Gutenberg-Richter frequency-magnitude relationship, by the maximum-likelihood (ML) method. We also estimated the mean return periods, the most probable maximum magnitude in a time period of t years, and the probability of occurrence of an earthquake of magnitude ≥ M during a time span of t years, using the Zmap software. The lowest b value was calculated in Region 1, which covers the Cobandede Fault Zone. We obtained the highest a value in Region 2, which covers the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which shows the largest value (87%) for an earthquake with magnitude greater than or equal to 6.0. The mean return period for such a magnitude is the lowest in this region (49 years). The most probable magnitude in the next 100 years was calculated, and the highest value was determined around the Cobandede Fault Zone. According to these parameters, Region 1, which covers the Cobandede Fault Zone, is the most dangerous area in the eastern part of Turkey.

  18. The use of waveform shapes to automatically determine earthquake focal depth

    USGS Publications Warehouse

    Sipkin, S.A.

    2000-01-01

    Earthquake focal depth is an important parameter for rapidly determining probable damage caused by a large earthquake. In addition, it is significant both for discriminating between natural events and explosions and for discriminating between tsunamigenic and nontsunamigenic earthquakes. For the purpose of notifying emergency management and disaster relief organizations as well as issuing tsunami warnings, potential time delays in determining source parameters are particularly detrimental. We present a method for determining earthquake focal depth that is well suited for implementation in an automated system that utilizes the wealth of broadband teleseismic data that is now available in real time from the global seismograph networks. This method uses waveform shapes to determine focal depth and is demonstrated to be valid for events with magnitudes as low as approximately 5.5.

  19. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  20. Anomalous behavior of the ionosphere before strong earthquakes

    NASA Astrophysics Data System (ADS)

    Peddi Naidu, P.; Madhavi Latha, T.; Madhusudhana Rao, D. N.; Indira Devi, M.

    2017-12-01

    In recent years, seismo-ionospheric coupling has been studied using various ionospheric parameters, such as total electron content, critical frequencies, electron density, and the phase and amplitude of Very Low Frequency (VLF) waves. The present study deals with the behavior of the ionosphere at various stations in the pre-earthquake period of 3-4 days, using the critical frequencies of the Es and F2 layers. Relative phase measurements of 16 kHz VLF transmissions from Rugby (UK), received at Visakhapatnam (India), are utilized to study the D-region during seismically active periods. The results show that f0Es increases a few hours before the time of occurrence of an earthquake, and daytime values of f0F2 are found to be high during the sunlit hours in the pre-earthquake period of 2-3 days. Anomalous VLF phase fluctuations are observed during the sunset hours before an earthquake event. The results are discussed in the light of the probable mechanism proposed by previous investigators.

  1. Dynamic stress changes during earthquake rupture

    USGS Publications Warehouse

    Day, S.M.; Yu, G.; Wald, D.J.

    1998-01-01

    We assess two competing dynamic interpretations that have been proposed for the short slip durations characteristic of kinematic earthquake models derived by inversion of earthquake waveform and geodetic data. The first interpretation would require a fault constitutive relationship in which rapid dynamic restrengthening of the fault surface occurs after passage of the rupture front, a hypothesized mechanical behavior that has been referred to as "self-healing." The second interpretation would require sufficient spatial heterogeneity of stress drop to permit rapid equilibration of elastic stresses with the residual dynamic friction level, a condition we refer to as "geometrical constraint." These interpretations imply contrasting predictions for the time dependence of the fault-plane shear stresses. We compare these predictions with dynamic shear stress changes for the 1992 Landers (M 7.3), 1994 Northridge (M 6.7), and 1995 Kobe (M 6.9) earthquakes. Stress changes are computed from kinematic slip models of these earthquakes, using a finite-difference method. For each event, static stress drop is highly variable spatially, with high stress-drop patches embedded in a background of low, and largely negative, stress drop. The time histories of stress change show predominantly monotonic stress change after passage of the rupture front, settling to a residual level, without significant evidence for dynamic restrengthening. The stress change at the rupture front is usually gradual rather than abrupt, probably reflecting the limited resolution inherent in the underlying kinematic inversions. On the basis of this analysis, as well as recent similar results obtained independently for the Kobe and Morgan Hill earthquakes, we conclude that, at the present time, the self-healing hypothesis is unnecessary to explain earthquake kinematics.

  2. Posttraumatic growth and reduced suicidal ideation among adolescents at month 1 after the Sichuan Earthquake.

    PubMed

    Yu, Xiao-nan; Lau, Joseph T F; Zhang, Jianxin; Mak, Winnie W S; Choi, Kai Chow; Lui, Wacy W S; Chan, Emily Y Y

    2010-06-01

    This study investigated posttraumatic growth (PTG) and reduced suicidal ideation among Chinese adolescents at one month after the occurrence of the Sichuan Earthquake. A cross-sectional survey was administered to 3324 high school students in Chengdu, Sichuan. The revised Posttraumatic Growth Inventory for Children and the Children's Revised Impact of Event Scale assessed PTG and posttraumatic stress disorder (PTSD), respectively. Multivariate analysis showed that being in junior high grade 2, having probable PTSD, visiting affected areas, possessing a perceived sense of security from teachers, and being exposed to touching news reports and encouraging news reports were associated with probable PTG; the reverse was true for students in senior high grade 1 or senior high grade 2 who had experienced prior adversities. Among the 623 students (19.3% of all students) who had suicidal ideation prior to the earthquake, 57.4% self-reported reduced suicidal ideation when the pre-earthquake and post-earthquake situations were compared. Among these 623 students, the multivariate results showed that being females, perceived sense of security obtained from teachers and exposure to encouraging news reports were factors associated with reduced suicidal ideation; the reverse was true for experience of pre-earthquake corporal punishment and worry about severe earthquakes in the future. The study population was not directly hit by the earthquake. This study is cross-sectional and no baseline data were collected prior to the occurrence of the earthquake. The earthquake resulted in PTG and reduced suicidal ideation among adolescents. PTSD was associated with PTG. Special attention should be paid to teachers' support, contents of media reports, and students' experience of prior adversities. Copyright 2009 Elsevier B.V. All rights reserved.

  3. Evidence for Late Holocene earthquakes on the Utsalady Point fault, Northern Puget Lowland, Washington

    USGS Publications Warehouse

    Johnson, S.Y.; Nelson, A.R.; Personius, S.F.; Wells, R.E.; Kelsey, H.M.; Sherrod, B.L.; Okumura, K.; Koehler, R.; Witter, R.C.; Bradley, L.A.; Harding, D.J.

    2004-01-01

    Trenches across the Utsalady Point fault in the northern Puget Lowland of Washington reveal evidence of at least one and probably two late Holocene earthquakes. The "Teeka" and "Duffers" trenches were located along a 1.4-km-long, 1- to 4-m-high, northwest-trending, southwest-facing topographic scarp recognized from Airborne Laser Swath Mapping. Glaciomarine drift exposed in the trenches reveals evidence of about 95 to 150 cm of vertical and 200 to 220 cm of left-lateral slip in the Teeka trench. Radiocarbon ages from a buried soil A horizon and overlying slope colluvium, along with the historical record of earthquakes, suggest that this faulting occurred 100 to 400 calendar years B.P. (A.D. 1550 to 1850). In the Duffers trench, 370 to 450 cm of vertical separation is accommodated by faulting (∼210 cm) and folding (∼160 to 240 cm), with probable but undetermined amounts of lateral slip. Stratigraphic relations and radiocarbon ages from buried soil, colluvium, and fissure fill in the hanging wall suggest the deformation at Duffers is most likely from two earthquakes that occurred between 100 to 500 and 1100 to 2200 calendar years B.P., but deformation during a single earthquake is also possible. For the two-earthquake hypothesis, deformation at the Teeka trench in the first event involved folding but not faulting. Regional relations suggest that the earthquake(s) were M ≥ ∼6.7 and that offshore rupture may have produced tsunamis. Based on this investigation and related recent studies, the maximum recurrence interval for large ground-rupturing crustal-fault earthquakes in the Puget Lowland is about 400 to 600 years or less.

  4. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  5. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who hurry to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivist viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and therefore cannot objectively quantify the performance of a forecast/prediction method. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as of other statistical methods, in practice. The flaw can be avoided by accurate verification of generic probability models against the empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members will do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting, with confidence above 97%, "the generic California clustering model" used in the automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency

  6. Global observation of Omori-law decay in the rate of triggered earthquakes

    NASA Astrophysics Data System (ADS)

    Parsons, T.

    2001-12-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
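
    The Omori-law decay invoked here has a convenient closed-form integral when the decay exponent p equals 1, which makes quick estimates of triggered-event counts easy. The parameters below (K, c) are invented for illustration.

    ```python
    import numpy as np

    def omori_rate(t, K=100.0, c=0.05, p=1.0):
        """Modified Omori law: rate of triggered events t years after the
        main shock, n(t) = K / (t + c)**p."""
        return K / (t + c) ** p

    def expected_count(t1, t2, K=100.0, c=0.05):
        """Expected number of triggered events between t1 and t2 for p = 1
        (the integral of K/(t+c) is K*ln(t+c))."""
        return K * np.log((t2 + c) / (t1 + c))

    # Compare the first post-mainshock year with years 7-11, roughly where
    # the globally observed decay fades back into the background rate:
    print(f"year 0-1:  {expected_count(0.0, 1.0):.0f} events")
    print(f"years 7-11: {expected_count(7.0, 11.0):.0f} events")
    ```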

  7. Earthquakes of Garhwal Himalaya region of NW Himalaya, India: A study of relocated earthquakes and their seismogenic source and stress

    NASA Astrophysics Data System (ADS)

    R, A. P.; Paul, A.; Singh, S.

    2017-12-01

    Since the continent-continent collision 55 Ma, the Himalaya has accommodated 2000 km of convergence along its arc. Strain accumulates at a rate of 37-44 mm/yr and is released at times as earthquakes. The Garhwal Himalaya is located on the western side of a seismic gap where a great earthquake has been overdue for at least 200 years. This seismic gap (the Central Seismic Gap, CSG), with a 52% probability of a future great earthquake, is located between the rupture zones of two significant/great earthquakes, viz. the 1905 Kangra earthquake of M 7.8 and the 1934 Bihar-Nepal earthquake of M 8.0; the most recent one, the 2015 Gorkha earthquake of M 7.8, lies on the eastern side of this seismic gap (CSG). The Garhwal Himalaya is one of the ideal locations in the Himalaya where all the major Himalayan structures and the Himalayan Seismicity Belt (HSB) can be described and studied. In the present study, we present a spatio-temporal analysis of relocated local micro-to-moderate earthquakes recorded by a seismicity monitoring network that has been operational since 2007. The earthquake locations are relocated using the HypoDD (double-difference hypocenter method for earthquake relocations) program. The dataset from July 2007 to September 2015 has been used in this study to estimate spatio-temporal relationships, moment tensor (MT) solutions for the earthquakes of M > 3.0, stress tensors and their interactions. We have also used composite focal mechanism solutions for small earthquakes. The majority of the MT solutions show a thrust-type mechanism and are located near the mid-crustal-ramp (MCR) structure of the detachment surface at 8-15 km depth beneath the outer Lesser Himalaya and Higher Himalaya regions. The prevailing stress has been identified as compressional towards NNE-SSW, which is the direction of relative plate motion between the Indian and Eurasian continental plates. The low friction coefficient estimated along with the stress inversions

  8. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative " a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model
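
    One common way to implement "a Gutenberg-Richter distribution with modifications at higher magnitude" is a taper that is exponential in seismic moment above a corner magnitude. The sketch below is that generic construction with invented parameters; it is not necessarily the exact form used in this study.

    ```python
    import numpy as np

    def moment(mw):
        """Hanks-Kanamori scalar seismic moment in N*m."""
        return 10.0 ** (1.5 * mw + 9.05)

    def tapered_gr_rate(m, a=4.0, b=1.0, m_corner=8.0, m_min=5.4):
        """Cumulative annual rate N(>=m): a Gutenberg-Richter power law
        multiplied by an exponential taper in moment above the corner
        magnitude, so rates fall off rapidly beyond m_corner."""
        gr = 10.0 ** (a - b * m)
        taper = np.exp((moment(m_min) - moment(m)) / moment(m_corner))
        return gr * taper

    for mw in (5.4, 6.5, 7.5, 8.5):
        print(mw, f"{tapered_gr_rate(mw):.3e}")
    ```

    Below the corner magnitude the taper is close to 1 and the ordinary power law survives; above it, the rate collapses, which is the qualitative behavior the abstract describes.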

  9. Shallow slip amplification and enhanced tsunami hazard unravelled by dynamic simulations of mega-thrust earthquakes

    PubMed Central

    Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.

    2016-01-01

    The 2011 Tohoku earthquake produced an unexpectedly large amount of shallow slip, greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies with depth, earthquake size, and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically-derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically-derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733

  10. Offshore Earthquakes Do Not Influence Marine Mammal Stranding Risk on the Washington and Oregon Coasts

    PubMed Central

    Grant, Rachel A.; Savirina, Anna

    2018-01-01

    Simple Summary Marine mammals stranding on coastal beaches is not unusual. However, there appears to be no single cause for this, with several causes being probable, such as starvation, contact with humans (for example boat strike or entanglement with fishing gear), disease, and parasitism. We evaluated marine mammal stranding off the Washington and Oregon coasts and looked at offshore earthquakes as a possible contributing factor. Our analysis showed that offshore earthquakes did not make marine mammals more likely to strand. We also analysed a subset of data from the north of Washington State and found that non-adult animals made up a large proportion of stranded animals, and for dead animals the commonest cause of death was disease, traumatic injury, or starvation. Abstract The causes of marine mammals stranding on coastal beaches are not well understood, but may relate to topography, currents, wind, water temperature, disease, toxic algal blooms, and anthropogenic activity. Offshore earthquakes are a source of intense sound and disturbance and could be a contributing factor to stranding probability. We tested the hypothesis that the probability of marine mammal stranding events on the coasts of Washington and Oregon, USA is increased by the occurrence of offshore earthquakes in the nearby Cascadia subduction zone. The analysis carried out here indicated that earthquakes are at most, a very minor predictor of either single, or large (six or more animals) stranding events, at least for the study period and location. We also tested whether earthquakes inhibit stranding and again, there was no link. Although we did not find a substantial association of earthquakes with strandings in this study, it is likely that there are many factors influencing stranding of marine mammals and a single cause is unlikely to be responsible. Analysis of a subset of data for which detailed descriptions were available showed that most live stranded animals were pups, calves, or

  11. 12 May 2008 M = 7.9 Wenchuan, China, earthquake calculated to increase failure stress and seismicity rate on three major fault systems

    USGS Publications Warehouse

    Toda, S.; Lin, J.; Meghraoui, M.; Stein, R.S.

    2008-01-01

    The Wenchuan earthquake on the Longmen Shan fault zone devastated cities of Sichuan, claiming at least 69,000 lives. We calculate that the earthquake also brought the Xianshuihe, Kunlun and Min Jiang faults 150-400 km from the mainshock rupture in the eastern Tibetan Plateau 0.2-0.5 bars closer to Coulomb failure. Because some portions of these stressed faults have not ruptured in more than a century, the earthquake could trigger or hasten additional M > 7 earthquakes, potentially subjecting regions from Kangding to Daofu and Maqin to Rangtag to strong shaking. We use the calculated stress changes and the observed background seismicity to forecast the rate and distribution of damaging shocks. The earthquake probability in the region is estimated to be 57-71% for M ≥ 6 shocks during the next decade, and 8-12% for M ≥ 7 shocks. These are up to twice the probabilities for the decade before the Wenchuan earthquake struck. Copyright 2008 by the American Geophysical Union.
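
    The 0.2-0.5 bar figures are Coulomb failure stress changes, and the bookkeeping behind them is one line. The effective friction value below is a commonly assumed number, not necessarily the one used in the paper, and the inputs are illustrative.

    ```python
    def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
        """Coulomb failure stress change (bars): dCFF = d_tau + mu' * d_sigma_n,
        with the shear stress change resolved in the slip direction, the
        normal stress change positive for unclamping, and mu' an assumed
        effective friction coefficient."""
        return d_tau + mu_eff * d_sigma_n

    # A receiver fault nudged toward failure, in the range quoted above:
    print(coulomb_stress_change(d_tau=0.3, d_sigma_n=0.25))  # 0.4 bar
    ```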

  12. Decision making biases in the communication of earthquake risk

    NASA Astrophysics Data System (ADS)

    Welsh, M. B.; Steacy, S.; Begg, S. H.; Navarro, D. J.

    2015-12-01

    The L'Aquila trial, in which six scientists were convicted of manslaughter, shocked the scientific community, leading to an urgent re-appraisal of communication methods for low-probability, high-impact events. Before the trial, a commission investigating the earthquake recommended that risk assessment be formalised via operational earthquake forecasts and that social scientists be enlisted to assist in developing communication strategies. Psychological research has identified numerous decision biases relevant to this, including hindsight bias, where people (after the fact) overestimate an event's predictability. This affects experts as well as naïve participants, as it relates to the ability to construct a plausible causal story rather than to the likelihood of the event. Another problem is availability, which causes overestimation of the likelihood of observed rare events due to their greater noteworthiness. This, however, is complicated by the 'description-experience' gap, whereby people underestimate probabilities for events they have not experienced. That is, people who have experienced strong earthquakes judge them more likely, while those who have not judge them less likely, relative to actual probabilities. Finally, format changes alter people's decisions: people treat '1 in 10,000' as different from 0.01% despite their mathematical equivalence. Such effects fall under the broad term framing, which describes how different framings of the same event alter decisions. In particular, people's attitude to risk depends significantly on how scenarios are described. We examine the effect of these biases on the communication of change in risk. South Australian participants gave responses to scenarios describing familiar (bushfire) or unfamiliar (earthquake) risks. While bushfires are rare in any specific location, significant fire events occur each year and are extensively covered. By comparison, our study location (Adelaide) last had an M5 quake in 1954. Preliminary results suggest the description

  13. Time-decreasing hazard and increasing time until the next earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corral, Alvaro

    2005-01-01

    The existence of a slowly but monotonically decreasing probability density for the recurrence times of earthquakes in the stationary case implies that the occurrence of an event at a given instant becomes increasingly unlikely as the time since the previous event increases. Consequently, the expected waiting time to the next earthquake increases with the elapsed time; that is, the expected event recedes rapidly into the future. We have found direct empirical evidence of this counterintuitive behavior in two worldwide catalogs as well as in diverse regional catalogs. Universal scaling functions describe the phenomenon well.
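
    The behavior described above is easy to reproduce numerically. The sketch below, a toy example assuming a gamma recurrence-time distribution with shape < 1 (so the density decreases slowly and monotonically), shows the expected residual waiting time E[T - t | T > t] growing as the elapsed time t grows; the choice of distribution is an illustrative assumption, not the scaling form found in the paper.

        import numpy as np
        from scipy import integrate, stats

        dist = stats.gamma(a=0.7, scale=1.0)   # shape < 1: decreasing density

        def expected_residual(t):
            """E[T - t | T > t] = integral of the survival function / S(t)."""
            tail, _ = integrate.quad(dist.sf, t, np.inf)
            return tail / dist.sf(t)

        for t in [0.0, 1.0, 5.0, 10.0]:
            print(t, round(expected_residual(t), 3))   # grows with t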

  14. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
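
    The Poisson-versus-periodic distinction above is often summarized by the coefficient of variation (COV) of interevent times: COV near 1 for a Poisson process, well below 1 for quasi-periodic recurrence, above 1 for clustering. The sketch below computes it for a hypothetical event series, not the Wrightwood dates.

        import numpy as np

        # Hypothetical paleoearthquake dates (years, negative = BCE)
        event_years = np.array([-950, -820, -700, -610, -490, -380, -260,
                                -150, -40, 60, 170, 280, 400, 510, 620])
        intervals = np.diff(np.sort(event_years))
        cov = intervals.std(ddof=1) / intervals.mean()
        print(f"mean interval {intervals.mean():.0f} yr, COV {cov:.2f}")
        # COV well below 1 favors quasi-periodic over Poisson recurrence.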

  15. Offshore Earthquakes Do Not Influence Marine Mammal Stranding Risk on the Washington and Oregon Coasts.

    PubMed

    Grant, Rachel A; Savirina, Anna; Hoppitt, Will

    2018-01-26

    The causes of marine mammals stranding on coastal beaches are not well understood, but may relate to topography, currents, wind, water temperature, disease, toxic algal blooms, and anthropogenic activity. Offshore earthquakes are a source of intense sound and disturbance and could be a contributing factor to stranding probability. We tested the hypothesis that the probability of marine mammal stranding events on the coasts of Washington and Oregon, USA is increased by the occurrence of offshore earthquakes in the nearby Cascadia subduction zone. The analysis carried out here indicated that earthquakes are at most, a very minor predictor of either single, or large (six or more animals) stranding events, at least for the study period and location. We also tested whether earthquakes inhibit stranding and again, there was no link. Although we did not find a substantial association of earthquakes with strandings in this study, it is likely that there are many factors influencing stranding of marine mammals and a single cause is unlikely to be responsible. Analysis of a subset of data for which detailed descriptions were available showed that most live stranded animals were pups, calves, or juveniles, and in the case of dead stranded mammals, the commonest cause of death was trauma, disease, and emaciation.

  16. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Image and Video Library

    2001-03-30

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisition. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or are water (seas and lakes). The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using the SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul. http://photojournal.jpl.nasa.gov/catalog/PIA00557

  17. Spatial modeling for estimation of earthquakes economic loss in West Java

    NASA Astrophysics Data System (ADS)

    Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma

    2017-07-01

    Indonesia has a high vulnerability to earthquakes, and its low adaptive capacity can turn an earthquake into a disaster. Risk management should therefore be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active sources: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and the Megathrust subduction zone. This research estimates the economic loss from several earthquake sources in West Java. The economic loss is calculated using the HAZUS method, whose components are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier for users to grasp by presenting distribution maps rather than tabular data alone. As a result, West Java could suffer an economic loss of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703 IDR, estimated from six earthquake sources at their maximum possible magnitudes. However, this estimate corresponds to a worst-case earthquake occurrence and is probably over-estimated.
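
    The HAZUS-style combination of hazard, exposure, and vulnerability described above reduces, schematically, to an expected-loss sum over building classes and damage states. The sketch below shows the shape of that sum; the exposure values, damage probabilities, and damage ratios are invented placeholders, not figures from the study.

        import numpy as np

        exposure = {"masonry": 2.0e12, "rc_frame": 5.0e12}      # IDR, assumed
        damage_ratio = np.array([0.02, 0.10, 0.50, 1.00])       # slight..complete
        p_damage = {                                            # P(state | hazard)
            "masonry":  np.array([0.30, 0.25, 0.10, 0.05]),
            "rc_frame": np.array([0.20, 0.15, 0.05, 0.01]),
        }
        loss = sum(exposure[c] * (p_damage[c] * damage_ratio).sum()
                   for c in exposure)
        print(f"estimated economic loss: {loss:,.0f} IDR")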

  18. Probabilistic liquefaction hazard analysis at liquefied sites of 1956 Dunaharaszti earthquake, in Hungary

    NASA Astrophysics Data System (ADS)

    Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor

    2017-04-01

    Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety, or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA; although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of the liquefaction hazard by taking into account the joint probability distribution of PGA and magnitude of earthquake scenarios, both of which are key inputs in the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method using a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. Therefore we have included not only Cetin's method but also the SPT-based procedure of Idriss and Boulanger (2012) and the CPT-based procedure of Boulanger and Idriss (2014) in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, in Hungary. Its epicenter was located
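
    The core performance-based calculation described above combines the PSHA disaggregation with a conditional-probability-of-liquefaction model: the annual rate of liquefaction is the conditional probability for each (PGA, magnitude) bin times the annual rate of that bin, summed over all bins. The sketch below shows this combination with a placeholder logistic model and invented rates, not the Cetin or Idriss-Boulanger relations.

        import numpy as np

        pga_bins = np.array([0.1, 0.2, 0.3, 0.4])     # g
        mag_bins = np.array([5.5, 6.0, 6.5])
        rate = np.array([[3e-3, 1e-3, 4e-4],          # annual rate of each
                         [8e-4, 4e-4, 2e-4],          # (PGA, magnitude) bin,
                         [2e-4, 1e-4, 6e-5],          # from disaggregation
                         [5e-5, 4e-5, 2e-5]])

        def p_liq(pga, mag):
            """Placeholder conditional probability of liquefaction."""
            return 1.0 / (1.0 + np.exp(-(8.0 * pga + 0.5 * (mag - 6.0) - 2.0)))

        annual_rate = sum(p_liq(a, m) * rate[i, j]
                          for i, a in enumerate(pga_bins)
                          for j, m in enumerate(mag_bins))
        print(f"return period of liquefaction: {1.0 / annual_rate:,.0f} yr")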

  19. Integration of paleoseismic data from multiple sites to develop an objective earthquake chronology: Application to the Weber segment of the Wasatch fault zone, Utah

    USGS Publications Warehouse

    DuRoss, Christopher B.; Personius, Stephen F.; Crone, Anthony J.; Olig, Susan S.; Lund, William R.

    2011-01-01

    We present a method to evaluate and integrate paleoseismic data from multiple sites into a single, objective measure of earthquake timing and recurrence on discrete segments of active faults. We apply this method to the Weber segment (WS) of the Wasatch fault zone using data from four fault-trench studies completed between 1981 and 2009. After systematically reevaluating the stratigraphic and chronologic data from each trench site, we constructed time-stratigraphic OxCal models that yield site probability density functions (PDFs) of the times of individual earthquakes. We next qualitatively correlated the site PDFs into a segment-wide earthquake chronology, which is supported by overlapping site PDFs, large per-event displacements, and prominent segment boundaries. For each segment-wide earthquake, we computed the product of the site PDF probabilities in common time bins, which emphasizes the overlap in the site earthquake times, and gives more weight to the narrowest, best-defined PDFs. The product method yields smaller earthquake-timing uncertainties compared to taking the mean of the site PDFs, but is best suited to earthquakes constrained by broad, overlapping site PDFs. We calculated segment-wide earthquake recurrence intervals and uncertainties using a Monte Carlo model. Five surface-faulting earthquakes occurred on the WS at about 5.9, 4.5, 3.1, 1.1, and 0.6 ka. With the exception of the 1.1-ka event, we used the product method to define the earthquake times. The revised WS chronology yields a mean recurrence interval of 1.3 kyr (0.7–1.9-kyr estimated two-sigma [2σ] range based on interevent recurrence). These data help clarify the paleoearthquake history of the WS, including the important question of the timing and rupture extent of the most recent earthquake, and are essential to the improvement of earthquake-probability assessments for the Wasatch Front region.
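
    The product method described above is straightforward to sketch: site PDFs of earthquake time, discretized in common bins, are multiplied bin by bin and renormalized, so the narrowest PDFs dominate; the bin-wise mean is shown for comparison. The Gaussian site PDFs below are invented, not the OxCal output for the Weber segment.

        import numpy as np

        t = np.linspace(0, 2000, 401)                 # common time bins (yr)

        def gauss(mu, sig):
            p = np.exp(-0.5 * ((t - mu) / sig) ** 2)
            return p / np.trapz(p, t)

        site_pdfs = [gauss(1100, 60), gauss(1150, 200), gauss(1080, 90)]

        product = np.prod(site_pdfs, axis=0)
        product /= np.trapz(product, t)               # renormalize to a PDF
        mean_pdf = np.mean(site_pdfs, axis=0)

        print("product-method mode:", t[np.argmax(product)])
        print("mean-method mode:   ", t[np.argmax(mean_pdf)])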

  20. Integration of paleoseismic data from multiple sites to develop an objective earthquake chronology: Application to the Weber segment of the Wasatch fault zone, Utah

    USGS Publications Warehouse

    DuRoss, C.B.; Personius, S.F.; Crone, A.J.; Olig, S.S.; Lund, W.R.

    2011-01-01

    We present a method to evaluate and integrate paleoseismic data from multiple sites into a single, objective measure of earthquake timing and recurrence on discrete segments of active faults. We apply this method to the Weber segment (WS) of the Wasatch fault zone using data from four fault-trench studies completed between 1981 and 2009. After systematically reevaluating the stratigraphic and chronologic data from each trench site, we constructed time-stratigraphic OxCal models that yield site probability density functions (PDFs) of the times of individual earthquakes. We next qualitatively correlated the site PDFs into a segment-wide earthquake chronology, which is supported by overlapping site PDFs, large per-event displacements, and prominent segment boundaries. For each segment-wide earthquake, we computed the product of the site PDF probabilities in common time bins, which emphasizes the overlap in the site earthquake times, and gives more weight to the narrowest, best-defined PDFs. The product method yields smaller earthquake-timing uncertainties compared to taking the mean of the site PDFs, but is best suited to earthquakes constrained by broad, overlapping site PDFs. We calculated segment-wide earthquake recurrence intervals and uncertainties using a Monte Carlo model. Five surface-faulting earthquakes occurred on the WS at about 5.9, 4.5, 3.1, 1.1, and 0.6 ka. With the exception of the 1.1-ka event, we used the product method to define the earthquake times. The revised WS chronology yields a mean recurrence interval of 1.3 kyr (0.7-1.9-kyr estimated two-sigma [2σ] range based on interevent recurrence). These data help clarify the paleoearthquake history of the WS, including the important question of the timing and rupture extent of the most recent earthquake, and are essential to the improvement of earthquake-probability assessments for the Wasatch Front region.

  1. Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps

    NASA Astrophysics Data System (ADS)

    Brooks, E. M.; Stein, S. A.; Spencer, B. D.

    2015-12-01

    The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered by arguing that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well these maps actually perform is unknown. We explore this hotly-contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform, and why, would be valuable in making more effective policy.
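
    The two metrics contrasted above can be written down compactly. The sketch below evaluates both for synthetic predicted and "observed" maximum shaking at a set of sites; the data, target probability, and lognormal scatter are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        predicted = rng.uniform(0.2, 0.8, size=500)          # map values (g)
        observed = predicted * rng.lognormal(0.0, 0.4, 500)  # synthetic record

        p_target = 0.10                    # design exceedance probability
        frac_exceeded = np.mean(observed > predicted)
        metric1 = abs(frac_exceeded - p_target)              # 0 is perfect
        metric2 = np.mean((observed - predicted) ** 2)       # squared misfit

        print(f"fraction of sites exceeded: {frac_exceeded:.3f} "
              f"(target {p_target}); |diff| = {metric1:.3f}")
        print(f"squared misfit: {metric2:.4f}")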

  2. Significant earthquakes on the Enriquillo fault system, Hispaniola, 1500-2010: Implications for seismic hazard

    USGS Publications Warehouse

    Bakun, William H.; Flores, Claudia H.; ten Brink, Uri S.

    2012-01-01

    Historical records indicate frequent seismic activity along the north-east Caribbean plate boundary over the past 500 years, particularly on the island of Hispaniola. We use accounts of historical earthquakes to assign intensities and the intensity assignments for the 2010 Haiti earthquakes to derive an intensity attenuation relation for Hispaniola. The intensity assignments and the attenuation relation are used in a grid search to find source locations and magnitudes that best fit the intensity assignments. Here we describe a sequence of devastating earthquakes on the Enriquillo fault system in the eighteenth century. An intensity magnitude MI 6.6 earthquake in 1701 occurred near the location of the 2010 Haiti earthquake, and the accounts of the shaking in the 1701 earthquake are similar to those of the 2010 earthquake. A series of large earthquakes migrating from east to west started with the 18 October 1751 MI 7.4–7.5 earthquake, probably located near the eastern end of the fault in the Dominican Republic, followed by the 21 November 1751 MI 6.6 earthquake near Port-au-Prince, Haiti, and the 3 June 1770 MI 7.5 earthquake west of the 2010 earthquake rupture. The 2010 Haiti earthquake may mark the beginning of a new cycle of large earthquakes on the Enriquillo fault system after 240 years of seismic quiescence. The entire Enriquillo fault system appears to be seismically active; Haiti and the Dominican Republic should prepare for future devastating earthquakes.

  3. Local seismicity preceding the March 14, 1979, Petatlan, Mexico Earthquake (Ms = 7.6)

    NASA Astrophysics Data System (ADS)

    Hsu, Vindell; Gettrust, Joseph F.; Helsley, Charles E.; Berg, Eduard

    1983-05-01

    aftershocks in their spatial distribution. This suggests that an asperity existing along the Benioff zone may have affected both the pre-main shock activity in the continental lithosphere and the aftershocks along the Benioff zone. Although major thrust earthquakes at trenches occur along Benioff zones, in the present study we find little activity on this interplate boundary before the Petatlan earthquake. The overlying continental block, on the contrary, is very active seismically. Our data suggest that the activity is probably governed by the stress transmitted from below, due to coupling between the two plates, and by heterogeneity within the continental lithosphere. The continental material is probably the better place to look for precursors.

  4. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  5. Antarctic icequakes triggered by the 2010 Maule earthquake in Chile

    NASA Astrophysics Data System (ADS)

    Peng, Zhigang; Walter, Jacob I.; Aster, Richard C.; Nyblade, Andrew; Wiens, Douglas A.; Anandakrishnan, Sridhar

    2014-09-01

    Seismic waves from distant, large earthquakes can almost instantaneously trigger shallow micro-earthquakes and deep tectonic tremor as they pass through Earth's crust. Such remotely triggered seismic activity mostly occurs in tectonically active regions. Triggered seismicity is generally considered to reflect shear failure on critically stressed fault planes and is thought to be driven by dynamic stress perturbations from both Love and Rayleigh types of surface seismic wave. Here we analyse seismic data from Antarctica in the six hours leading up to and following the 2010 Mw 8.8 Maule earthquake in Chile. We identify many high-frequency seismic signals during the passage of the Rayleigh waves generated by the Maule earthquake, and interpret them as small icequakes triggered by the Rayleigh waves. The source locations of these triggered icequakes are difficult to determine owing to sparse seismic network coverage, but the triggered events generate surface waves, so are probably formed by near-surface sources. Our observations are consistent with tensile fracturing of near-surface ice or other brittle fracture events caused by changes in volumetric strain as the high-amplitude Rayleigh waves passed through. We conclude that cryospheric systems can be sensitive to large distant earthquakes.

  6. The spatial distribution of earthquake stress rotations following large subduction zone earthquakes

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2017-01-01

    Rotations of the principal stress axes due to great subduction zone earthquakes have been used to infer low differential stress and near-complete stress drop. The spatial distribution of coseismic and postseismic stress rotation as a function of depth and along-strike distance is explored for three recent M ≥ 8.8 subduction megathrust earthquakes. In the down-dip direction, the largest coseismic stress rotations are found just above the Moho depth of the overriding plate. This zone has been identified as hosting large patches of large slip in great earthquakes, based on the lack of high-frequency radiated energy. The large continuous slip patches may facilitate near-complete stress drop. There is seismological evidence for high fluid pressures in the subducted slab around the Moho depth of the overriding plate, suggesting low differential stress levels in this zone due to high fluid pressure, also facilitating stress rotations. The coseismic stress rotations have similar along-strike extent as the mainshock rupture. Postseismic stress rotations tend to occur in the same locations as the coseismic stress rotations, probably due to the very low remaining differential stress following the near-complete coseismic stress drop. The spatial complexity of the observed stress changes suggests that an analytical solution for finding the differential stress from the coseismic stress rotation may be overly simplistic, and that modeling of the full spatial distribution of the mainshock static stress changes is necessary.

  7. Retrospective validation of renewal-based, medium-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2013-10-01

    In this paper, some methods for scoring the performance of an earthquake forecasting probability model are applied retrospectively for different goals. The time-dependent occurrence probabilities of a renewal process are tested against earthquakes of Mw ≥ 5.3 recorded in Italy during decades of the past century. One aim was to check the capability of the model to reproduce the data by which it was calibrated. The scoring procedures used can be distinguished by whether they require a reference model and probability thresholds. Overall, a rank-based score, information gain, gambling scores, indices used in binary predictions, and their loss functions are considered. Defining various probability thresholds as percentages of the hazard functions allows us to propose the values associated with the best forecasting performance as alarm levels in procedures for seismic risk mitigation. Some improvements are then made to the input data concerning the completeness of the historical catalogue and the consistency of the composite seismogenic sources with the hypotheses of the probability model. Another purpose of this study was thus to obtain hints about which factor is most influential and about the suitability of adopting the consequent changes to the data sets. This is achieved by repeating the estimation procedure of the occurrence probabilities and the retrospective validation of the forecasts obtained under the new assumptions. According to the rank-based score, completeness appears to be the most influential factor, while there are no clear indications of the usefulness of decomposing some composite sources, although in some cases it has led to improvements of the forecast.
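
    Of the scores listed above, the information gain is the simplest to sketch: the log-likelihood of the forecast probabilities relative to a reference model, per target event. The bin probabilities and outcomes below are invented for illustration.

        import numpy as np

        p_model = np.array([0.30, 0.05, 0.20, 0.02, 0.15])  # forecast P(event)
        p_ref   = np.array([0.10, 0.10, 0.10, 0.10, 0.10])  # reference model
        occurred = np.array([1, 0, 1, 0, 0], dtype=bool)    # bin outcomes

        def log_lik(p):
            return np.sum(np.log(np.where(occurred, p, 1.0 - p)))

        gain = (log_lik(p_model) - log_lik(p_ref)) / occurred.sum()
        print(f"information gain per event: {gain:.3f} nats")  # > 0 beats ref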

  8. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block model to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and to provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different

  9. Leveraging geodetic data to reduce losses from earthquakes

    USGS Publications Warehouse

    Murray, Jessica R.; Roeloffs, Evelyn A.; Brooks, Benjamin A.; Langbein, John O.; Leith, William S.; Minson, Sarah E.; Svarc, Jerry L.; Thatcher, Wayne R.

    2018-04-23

    event response products and by expanded use of geodetic imaging data to assess fault rupture and source parameters. Uncertainties in the NSHM, and in regional earthquake models, are reduced by fully incorporating geodetic data into earthquake probability calculations. Geodetic networks and data are integrated into the operations and earthquake information products of the Advanced National Seismic System (ANSS). Earthquake early warnings are improved by more rapidly assessing ground displacement and the dynamic faulting process for the largest earthquakes using real-time geodetic data. Methodology for probabilistic earthquake forecasting is refined by including geodetic data when calculating evolving moment release during aftershock sequences and by better understanding the implications of transient deformation for earthquake likelihood. A geodesy program that encompasses a balanced mix of activities to sustain mission-critical capabilities, grows new competencies through the continuum of fundamental to applied research, and ensures sufficient resources for these endeavors provides a foundation by which the EHP can be a leader in the application of geodesy to earthquake science. With this in mind the following objectives provide a framework to guide EHP efforts: Fully utilize geodetic information to improve key products, such as the NSHM and EEW, and to address new ventures like the USGS Subduction Zone Science Plan. Expand the variety, accuracy, and timeliness of post-earthquake information products, such as PAGER (Prompt Assessment of Global Earthquakes for Response), through incorporation of geodetic observations. Determine if geodetic measurements of transient deformation can significantly improve estimates of earthquake probability. Maintain an observational strategy aligned with the target outcomes of this document that includes continuous monitoring, recording of ephemeral observations, focused data collection for use in research, and application-driven data processing and

  10. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, restricted growth of the down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture areas compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to the slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctive non-Gaussian slip
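
    As a toy illustration of the slip-distribution finding summarized above, the sketch below draws per-subfault slip values from an exponential law truncated at a maximum slip via rejection sampling; the scale and truncation values are arbitrary stand-ins, not parameters fitted to the SRCMOD models.

        import numpy as np

        rng = np.random.default_rng(6)

        def truncated_exponential(scale, smax, size):
            """Rejection-sample an exponential law truncated at smax."""
            out = []
            while len(out) < size:
                draw = rng.exponential(scale, size)
                out.extend(draw[draw <= smax][:size - len(out)])
            return np.array(out)

        slip = truncated_exponential(scale=1.0, smax=4.0, size=10000)  # meters
        print(f"mean slip {slip.mean():.2f} m, max slip {slip.max():.2f} m")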

  11. The earthquake potential of the New Madrid seismic zone

    USGS Publications Warehouse

    Tuttle, Martitia P.; Schweig, Eugene S.; Sims, John D.; Lafferty, Robert H.; Wolf, Lorraine W.; Haynes, Marion L.

    2002-01-01

    The fault system responsible for New Madrid seismicity has generated temporally clustered very large earthquakes in A.D. 900 ± 100 years and A.D. 1450 ± 150 years as well as in 1811–1812. Given the uncertainties in dating liquefaction features, the time between the past three New Madrid events may be as short as 200 years and as long as 800 years, with an average of 500 years. This advance in understanding the Late Holocene history of the New Madrid seismic zone and thus, the contemporary tectonic behavior of the associated fault system was made through studies of hundreds of earthquake-induced liquefaction features at more than 250 sites across the New Madrid region. We have found evidence that prehistoric sand blows, like those that formed during the 1811–1812 earthquakes, are probably compound structures resulting from multiple earthquakes closely clustered in time or earthquake sequences. From the spatial distribution and size of sand blows and their sedimentary units, we infer the source zones and estimate the magnitudes of earthquakes within each sequence and thereby characterize the detailed behavior of the fault system. It appears that fault rupture was complex and that the central branch of the seismic zone produced very large earthquakes during the A.D. 900 and A.D. 1450 events as well as in 1811–1812. On the basis of a minimum recurrence rate of 200 years, we are now entering the period during which the next 1811–1812-type event could occur.

  12. Assessing the location and magnitude of the 20 October 1870 Charlevoix, Quebec, earthquake

    USGS Publications Warehouse

    Ebel, John E.; Dupuy, Megan; Bakun, William H.

    2013-01-01

    The Charlevoix, Quebec, earthquake of 20 October 1870 caused damage to several towns in Quebec and was felt throughout much of southeastern Canada and along the U.S. Atlantic seaboard from Maine to Maryland. Site‐specific damage and felt reports from Canadian and U.S. cities and towns were used in analyses of the location and magnitude of the earthquake. The macroseismic center of the earthquake was very close to Baie‐St‐Paul, where the greatest damage was reported, and the intensity magnitude MI was found to be 5.8, with a 95% probability range of 5.5–6.0. After corrections for epicentral‐distance differences are applied, the modified Mercalli intensity (MMI) data for the 1870 earthquake and for the moment magnitude M 6.2 Charlevoix earthquake of 1925 at common sites show that on average, the MMI readings are about 0.8 intensity units smaller for the 1870 earthquake than for the 1925 earthquake, suggesting that the 1870 earthquake was MI 5.7. A similar comparison of the MMI data for the 1870 earthquake with the corresponding data for the M 5.9 1988 Saguenay event suggests that the 1870 earthquake was MI 6.0. These analyses all suggest that the magnitude of the 1870 Charlevoix earthquake is between MI 5.5 and MI 6.0, with a best estimate of MI 5.8.
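
    The grid-search idea described above can be sketched as follows: over trial epicenters and magnitudes, predict the intensity at each observation site from an intensity-attenuation relation and keep the (location, MI) pair minimizing the misfit. The attenuation coefficients, site coordinates, and intensity values below are placeholders, not the relation or data of the paper.

        import numpy as np

        sites = np.array([[47.4, -70.5], [46.8, -71.2], [45.5, -73.6]])
        obs_mmi = np.array([7.0, 6.0, 4.5])        # hypothetical intensities

        def predict_mmi(mag, src, site):
            d = np.hypot(*(111.0 * (site - src)))  # rough km distance
            return 1.7 + 1.5 * mag - 3.0 * np.log10(max(d, 10.0))  # assumed

        best = (np.inf, None, None)
        for m in np.arange(5.0, 6.6, 0.1):
            for la in np.arange(46.5, 48.01, 0.25):
                for lo in np.arange(-71.5, -69.49, 0.25):
                    src = np.array([la, lo])
                    rms = np.sqrt(np.mean([(predict_mmi(m, src, x) - o) ** 2
                                           for x, o in zip(sites, obs_mmi)]))
                    if rms < best[0]:
                        best = (rms, m, src)
        print(f"best-fit MI {best[1]:.1f} at {best[2]} (rms {best[0]:.2f})")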

  13. The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Chen, M.; Wang, X.; Dou, A.; Wu, X.

    2018-04-01

    The seismic damage information of buildings extracted from remote sensing (RS) imagery is meaningful for supporting relief efforts and effectively reducing losses caused by earthquakes. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information: pixel-based methods cannot make full use of the contextual information of objects, while object-oriented methods face the problems that image segmentation is rarely ideal and that the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea of this method consists of two steps: first, use the CNN to predict a damage probability for each pixel; then integrate the probabilities within each segmentation region. The method is tested by extracting collapsed and uncollapsed buildings from aerial imagery acquired over Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province, earthquake. The results show the effectiveness of the proposed method in extracting building damage information after an earthquake.
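
    A schematic of the two-step strategy, with random numbers standing in for the CNN's per-pixel damage probabilities and a regular grid standing in for the image segmentation; the integration step, averaging probabilities within each segment before thresholding, is the part the abstract describes.

        import numpy as np

        rng = np.random.default_rng(1)
        prob = rng.random((100, 100))       # stand-in for CNN pixel output
        # Stand-in segmentation: a 5 x 5 grid of square segments
        segments = np.arange(100)[:, None] // 20 * 5 + np.arange(100) // 20

        collapsed = {}
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            collapsed[seg_id] = prob[mask].mean() > 0.5  # integrate, threshold
        print(sum(collapsed.values()), "of", len(collapsed),
              "segments flagged as collapsed")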

  14. Earthquake Drill using the Earthquake Early Warning System at an Elementary School

    NASA Astrophysics Data System (ADS)

    Oki, Satoko; Yazaki, Yoshiaki; Koketsu, Kazuki

    2010-05-01

    Japan frequently suffers from many kinds of disasters such as earthquakes, typhoons, floods, volcanic eruptions, and landslides. On average, we have lost about 120 people a year to natural hazards in this decade. Above all, earthquakes are noteworthy, since one may kill thousands of people in a moment, as in Kobe in 1995. People know that we may have "a big one" some day as long as we live on this land, and they know what to do: retrofit houses, anchor heavy furniture to walls, add latches to kitchen cabinets, and prepare emergency packs. Yet most of them do not take action, and this results in the loss of many lives. Only the victims learn something from an earthquake, and the lesson has never become part of the national lore. One of the most essential ways to reduce the damage is to educate the general public so they can make sound decisions about what to do at the moment an earthquake hits. This requires knowledge of the background of the ongoing phenomenon. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) therefore issued a public call to choose several model areas in which to bring scientific education to local elementary schools. This presentation reports on a year and a half of courses at the model elementary school in the Tokyo Metropolitan Area. The tectonic setting of this area is very complicated; the Pacific and Philippine Sea plates subduct beneath the North America and Eurasia plates. The subduction of the Philippine Sea plate causes mega-thrust earthquakes such as the 1923 Kanto earthquake (M 7.9), which caused 105,000 fatalities. A magnitude 7 or greater earthquake beneath this area has recently been evaluated to occur with a probability of 70% in 30 years. This is of immediate concern for the devastating loss of life and property, because the Tokyo urban region now has a population of 42 million and is the center of approximately 40% of the nation's activities, which may cause great global

  15. Earthquake Risk Mitigation in the Tokyo Metropolitan area

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Sakai, S.; Kasahara, K.; Nakagawa, S.; Nanjo, K.; Panayotopoulos, Y.; Tsuruoka, H.

    2010-12-01

    Seismic disaster risk mitigation in urban areas constitutes a challenge requiring collaboration across scientific, engineering, and social-science fields. Examples of collaborative efforts include research on detailed plate structure with identification of all significant faults; developing dense seismic networks; strong ground motion prediction, which uses information on near-surface seismic site effects and fault models; earthquake-resistant and earthquake-proof structures; and cross-discipline infrastructure for effective risk mitigation just after catastrophic events. The risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because this plate boundary produced past mega-thrust earthquakes, such as the 1703 Genroku earthquake (magnitude M8.0) and the 1923 Kanto earthquake (M7.9), which caused 105,000 fatalities. An M7 or greater (M7+) earthquake in this area at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that such an M7+ earthquake would cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) in economic loss. This earthquake is evaluated to occur with a probability of 70% in 30 years by the Earthquake Research Committee of Japan. In order to mitigate the disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area (2007-2011) was launched in collaboration with scientists, engineers, and social scientists at institutions nationwide. The results obtained in the respective fields will be integrated by project termination to improve information on strategy assessment for seismic risk mitigation in the Tokyo metropolitan area. In this talk, we give an outline of our project as an example of collaborative research on earthquake risk mitigation. Discussion is extended to our effort in progress and

  16. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes vary significantly in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust to play a significant role in modifying crustal conditions over the long and short term. The preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties; however, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  17. Urban seismic risk assessment: statistical repair cost data and probable structural losses based on damage scenario—correlation analysis

    NASA Astrophysics Data System (ADS)

    Eleftheriadou, Anastasia K.; Baltzopoulou, Aikaterini D.; Karabinis, Athanasios I.

    2016-06-01

    The current seismic risk assessment is based on two discrete approaches, actual and probable, with the produced results validated against each other afterwards. In the first part of this research, the seismic risk is evaluated from the available data on the mean statistical repair/strengthening or replacement cost for the total number of damaged structures (180,427 buildings) after the 7/9/1999 Parnitha (Athens) earthquake. The actual evaluated seismic risk is then compared to the estimated probable structural losses, presented in the second part of the paper, based on a damage scenario for the same earthquake. The applied damage scenario is based on recently developed damage probability matrices (DPMs) from the Athens (Greece) damage database. The seismic risk estimation refers to 750,085 buildings situated in the extended urban region of Athens. The building exposure is categorized into five typical structural types and represents 18.80% of the entire building stock in Greece. The latter information is provided by the National Statistics Service of Greece (NSSG) according to the 2000-2001 census. The seismic input is characterized by the ratio a_g/a_o, where a_g is the regional peak ground acceleration (PGA), evaluated from the earlier estimated research macroseismic intensities, and a_o is the PGA according to the hazard map of the 2003 Greek Seismic Code. Finally, the collected financial data derived from the different national services responsible for post-earthquake crisis management, concerning the repair/strengthening or replacement costs and other categories of costs for the rehabilitation of earthquake victims (construction and operation of settlements for the earthquake homeless, rent support, demolitions, shorings), are used to determine the final total seismic risk factor.

  18. Predicted liquefaction in the greater Oakland area and northern Santa Clara Valley during a repeat of the 1868 Hayward Fault (M6.7-7.0) earthquake

    USGS Publications Warehouse

    Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.

    2010-01-01

    Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by young Holocene levee deposits along major drainages where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906.

  19. Earthquakes of Loihi submarine volcano and the Hawaiian hot spot.

    USGS Publications Warehouse

    Klein, F.W.

    1982-01-01

    Loihi is an active submarine volcano located 35 km south of the island of Hawaii and may eventually grow to be the next, and southernmost, island in the Hawaiian chain. The Hawaiian Volcano Observatory recorded two major earthquake swarms located there, in 1971-1972 and 1975, which were probably associated with submarine eruptions or intrusions. The swarms were located very close to Loihi's bathymetric summit, except for earthquakes during the second stage of the 1971-1972 swarm, which occurred well onto Loihi's SW flank. The flank earthquakes appear to have been triggered by the preceding activity and possible rifting along Loihi's long axis, similar to the rift-flank relationship at Kilauea volcano. Other changes accompanied the shift in locations from Loihi's summit to its flank, including a shift from burst to continuous seismicity, a rise in maximum magnitude, a change from small earthquake clusters to a larger elongated zone, a drop in b value, and a presumed shift from concentrated volcanic stresses to a more diffuse tectonic stress on Loihi's flank.

  20. Fluid‐driven seismicity response of the Rinconada fault near Paso Robles, California, to the 2003 M 6.5 San Simeon earthquake

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2012-01-01

    The 2003 M 6.5 San Simeon, California, earthquake caused significant damage in the city of Paso Robles and a persistent cluster of aftershocks close to Paso Robles near the Rinconada fault. Given the importance of secondary aftershock triggering in sequences of large events, a concern is whether this cluster of events could trigger another damaging earthquake near Paso Robles. An epidemic‐type aftershock sequence (ETAS) model is fit to the Rinconada seismicity, and multiple realizations indicate a 0.36% probability of at least one M≥6.0 earthquake during the next 30 years. However, this probability estimate is only as good as the projection into the future of the ETAS model. There is evidence that the seismicity may be influenced by fluid pressure changes, which cannot be forecasted using ETAS. The strongest evidence for fluids is the delay between the San Simeon mainshock and a high rate of seismicity in mid to late 2004. This delay can be explained as having been caused by a pore pressure decrease due to an undrained response to the coseismic dilatation, followed by increased pore pressure during the return to equilibrium. Seismicity migration along the fault also suggests fluid involvement, although the migration is too slow to be consistent with pore pressure diffusion. All other evidence, including focal mechanisms and b‐value, is consistent with tectonic earthquakes. This suggests a model where the role of fluid pressure changes is limited to the first seven months, while the fluid pressure equilibrates. The ETAS modeling adequately fits the events after July 2004 when the pore pressure stabilizes. The ETAS models imply that while the probability of a damaging earthquake on the Rinconada fault has approximately doubled due to the San Simeon earthquake, the absolute probability remains low.
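
    The quoted 0.36% figure is the outcome of a Monte Carlo calculation of roughly the following shape: simulate many 30-year catalogs from the fitted model and count those containing an M ≥ 6 event. The sketch below does this for a simplified ETAS process with invented parameters, not the values fitted to the Rinconada seismicity.

        import numpy as np

        rng = np.random.default_rng(2)
        T, M0, b = 30.0, 2.5, 1.0                       # years, min magnitude, b
        mu, K, alpha, c, p = 5.0, 0.02, 0.8, 0.01, 1.2  # ETAS (assumed)
        beta = b * np.log(10)

        def aftershocks(t0, m0):
            """Direct aftershocks of one event: (times, magnitudes)."""
            n = rng.poisson(K * 10 ** (alpha * (m0 - M0)))
            u = rng.random(n)
            dt = c * ((1 - u) ** (1 / (1 - p)) - 1)     # invert the Omori CDF
            return t0 + dt, M0 + rng.exponential(1 / beta, n)

        def simulate():
            n_bg = rng.poisson(mu * T)
            mags = list(M0 + rng.exponential(1 / beta, n_bg))
            queue = list(zip(rng.uniform(0, T, n_bg), mags))
            while queue:                  # cascades of secondary triggering
                t0, m0 = queue.pop()
                for t, m in zip(*aftershocks(t0, m0)):
                    if t < T:
                        queue.append((t, m))
                        mags.append(m)
            return max(mags, default=0.0)

        hits = sum(simulate() >= 6.0 for _ in range(2000))
        print(f"P(at least one M>=6 in {T:.0f} yr) ~ {hits / 2000:.2%}")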

  1. Stress Interactions and Transfer Between the Pawnee M5.8 Earthquake and Surrounding Faults in Oklahoma

    NASA Astrophysics Data System (ADS)

    Ghouse, N.; Hu, J.; Chang, J. C.

    2016-12-01

    The Pawnee M5.8 event is the largest earthquake in Oklahoma's instrumented history. How this earthquake affects known seismogenic areas in the state is a key issue for seismic hazard probability studies. In this study, we quantify stress loading and unloading on seismicity-delineated faults from the Oklahoma Geological Survey relocated-earthquake catalog. Our modeling indicates that areas in Noble, Pawnee, and Payne Counties are more prone to triggered seismicity, while areas in Alfalfa, Grant, Garfield, Logan, Major, Oklahoma, and Woods Counties are less prone to seismic triggering.

  2. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step, using the same procedure as the second step but with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply the method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and a maximum slip of 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
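
    A toy-problem sketch of the first two steps, with scipy's dual_annealing standing in for adaptive simulated annealing and a random-walk Metropolis sampler for the constrained Monte Carlo stage; the linear forward model, noise level, and bounds are all invented, and the third (artifact-removal) step and the nonlinear fault-geometry parameters are omitted.

        import numpy as np
        from scipy.optimize import dual_annealing

        rng = np.random.default_rng(3)
        G = rng.random((40, 4))                   # toy Green's functions
        s_true = np.array([1.2, 0.0, 0.8, 0.3])   # "true" slip (m)
        d = G @ s_true + 0.05 * rng.standard_normal(40)

        def neg_log_post(s):           # Gaussian likelihood, flat prior
            return np.sum((d - G @ s) ** 2) / (2 * 0.05 ** 2)

        # Step 1: global optimization for the posterior mode.
        res = dual_annealing(neg_log_post, bounds=[(0.0, 2.0)] * 4, seed=1)

        # Step 2: Metropolis sampling with positivity, started at the mode.
        s, samples = res.x.copy(), []
        for _ in range(5000):
            prop = s + 0.02 * rng.standard_normal(4)
            if np.all(prop >= 0) and \
               np.log(rng.random()) < neg_log_post(s) - neg_log_post(prop):
                s = prop
            samples.append(s.copy())
        print("mode:", np.round(res.x, 2),
              "posterior mean:", np.round(np.mean(samples, axis=0), 2))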

  3. Ionospheric Anomalies on the day of the Devastating Earthquakes during 2000-2012

    NASA Astrophysics Data System (ADS)

    Su, Fanfan; Zhou, Yiyan; Zhu, Fuying

    2013-04-01

    The study of abnormal ionospheric changes during large earthquakes has attracted much attention for many years. Many papers have reported deviations of Total Electron Content (TEC) around the epicenter, and statistical analyses conclude that anomalous TEC behavior is related to earthquakes with high probability[1], although individual cases show different features[2][3]. In this study, we carry out a new statistical analysis to investigate the nature of ionospheric anomalies during devastating earthquakes. To examine the abnormal changes of ionospheric TEC, we use the TEC database from the Global Ionosphere Map (GIM). The GIM ( ftp://cddisa.gsfc.nasa.gov/pub/gps/products/ionex ) draws on about 200 worldwide ground-based GPS receivers; TEC data with a resolution of 5° in longitude and 2.5° in latitude are routinely published at a 2-h interval. The information on earthquakes is obtained from the USGS ( http://earthquake.usgs.gov/earthquakes/eqarchives/epic/ ). To avoid interference from magnetic storms, days with Dst ≤ -20 nT are excluded. Finally, a total of 13 M ≥ 8.0 earthquakes in the global area during 2000-2012 are selected. The 27 days before each main shock are treated as the background days. Here, the 27-day TEC median (Me) and standard deviation (σ) are used to detect TEC variations. We set the upper bound BU = Me + 3σ and the lower bound BL = Me - 3σ, so the probability that a new TEC value falls in the interval (BL, BU) is approximately 99.7%. If TEC stays between BL and BU, the deviation (DTEC) equals zero; otherwise, the deviation is computed relative to whichever bound is exceeded (DTEC = BU - TEC or DTEC = BL - TEC). From these deviations, the positive and negative abnormal changes of TEC can be evaluated. We investigate temporal and spatial signatures of the ionospheric anomalies on the day of the devastating earthquakes (M ≥ 8.0). The results show that the occurrence rates of positive anomaly and negative
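
    The detection rule above is a short computation. The sketch below applies it to synthetic daily TEC values; note the sign convention here is flipped relative to the abstract's formula (DTEC = TEC minus the exceeded bound), so that positive anomalies come out positive.

        import numpy as np

        rng = np.random.default_rng(4)
        tec = 20 + 2 * rng.standard_normal(60)   # synthetic daily TEC (TECU)
        tec[45] = 35.0                           # injected positive anomaly

        dtec = np.zeros_like(tec)
        for i in range(27, len(tec)):
            window = tec[i - 27:i]               # 27 background days
            me, sig = np.median(window), window.std(ddof=1)
            bu, bl = me + 3 * sig, me - 3 * sig
            if tec[i] > bu:
                dtec[i] = tec[i] - bu            # positive anomaly
            elif tec[i] < bl:
                dtec[i] = tec[i] - bl            # negative anomaly
        print("anomalous days:", np.nonzero(dtec)[0],
              "DTEC:", dtec[np.nonzero(dtec)].round(2))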

  4. Source parameters of microearthquakes on an interplate asperity off Kamaishi, NE Japan over two earthquake cycles

    USGS Publications Warehouse

    Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira

    2012-01-01

    We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~ 4.9 repeating earthquakes. The M~ 4.9 earthquake sequence is composed of nine events that occurred since 1957 which have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~ 4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~ 4.9 main shock is co-located with those of the 1995 and 2001 M~ 4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edge of the slip areas and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edge. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~ 4.9 main shock. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~ 4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. These results show that the stress drop is probably determined by the fault properties and does not change
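
    The conversion from corner frequency and seismic moment to the stress drops quoted above conventionally runs through Brune-type relations; the sketch below uses assumed values for the k constant and shear-wave speed, and does not reproduce the paper's stacked spectral-ratio inversion.

        def stress_drop_mpa(m0_nm, fc_hz, beta_ms=3500.0, k=0.32):
            """Source radius r = k * beta / fc; stress drop = 7/16 * M0 / r^3.

            m0_nm   : seismic moment (N m)
            fc_hz   : corner frequency (Hz)
            beta_ms : shear-wave speed (m/s, assumed)
            k       : source-model constant (assumed)
            """
            r = k * beta_ms / fc_hz                     # source radius (m)
            return 7.0 / 16.0 * m0_nm / r ** 3 / 1e6    # Pa -> MPa

        # Hypothetical M ~ 4.9 event: M0 ~ 2.8e16 N m, corner frequency 1.5 Hz
        print(f"{stress_drop_mpa(2.8e16, 1.5):.0f} MPa")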

  5. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    for mega-earthquakes of M9.0+. The monitoring at regional scales may require application of a recently proposed scheme for the spatial stabilization of intermediate-term middle-range predictions. The scheme guarantees a more objective and reliable diagnosis of times of increased probability and is less restrictive on input seismic data. It makes feasible the re-establishment of seismic monitoring aimed at predicting large-magnitude earthquakes in the Caucasus and Central Asia, which, to our regret, was discontinued in 1991. The first results of that monitoring (1986-1990) were encouraging, at least for M6.5+.

  6. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  7. Response and recovery lessons from the 2010-2011 earthquake sequence in Canterbury, New Zealand

    USGS Publications Warehouse

    Pierepiekarz, Mark; Johnston, David; Berryman, Kelvin; Hare, John; Gomberg, Joan S.; Williams, Robert A.; Weaver, Craig S.

    2014-01-01

    The impacts and opportunities that result when low-probability moderate earthquakes strike an urban area similar to many throughout the US were vividly conveyed in a one-day workshop in which social and Earth scientists, public officials, engineers, and an emergency manager shared their experiences of the earthquake sequence that struck the city of Christchurch and surrounding Canterbury region of New Zealand in 2010-2011. Without question, the earthquake sequence has had unprecedented impacts in all spheres of New Zealand society, locally to nationally--10% of the country's population was directly impacted and losses total 8-10% of the nation's GDP. The following paragraphs present a few lessons from Christchurch.

  8. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J.; Petersen, M.D.

    2004-01-01

    The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
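
    As a sketch of how such fitted curves are used, assuming hypothetical gamma parameters produced by the regression at some ground-motion level, the conditional probability that loss exceeds a threshold is just the gamma survival function:

    ```python
    from scipy import stats

    def p_loss_exceeds(threshold, shape, scale):
        """P(loss > threshold | ground motion) for a fitted gamma distribution."""
        return stats.gamma.sf(threshold, a=shape, scale=scale)

    # Hypothetical regression output at PGA = 0.4 g: shape 0.8 and a mean loss
    # of 5% of insured value (scale = mean / shape).
    shape, mean_loss = 0.8, 0.05
    print(p_loss_exceeds(0.10, shape, scale=mean_loss / shape))
    ```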

  9. 8 January 2013 Mw=5.7 North Aegean Sea Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Kürçer, Akın; Yalçın, Hilal; Gülen, Levent; Kalafat, Doǧan

    2014-05-01

    The deformation of the North Aegean Sea is mainly controlled by the westernmost segments of the North Anatolian Fault Zone (NAFZ). On January 8, 2013, a moderate earthquake (Mw=5.7) occurred in the North Aegean Sea on what may be considered part of the westernmost splay of the NAFZ. A series of aftershocks with magnitudes ranging from 1.9 to 5.0 occurred within four months of the mainshock. In this study, a total of 23 earthquake moment tensor solutions belonging to the 2013 earthquake sequence have been obtained using KOERI and AFAD seismic data. The widely used Gephart & Forsyth (1984) and Michael (1987) methods were applied to carry out stress tensor inversions. Based on the earthquake moment tensor solutions, the distribution of epicenters and the seismotectonic setting, the source of this earthquake sequence is a N75°E-trending pure dextral strike-slip fault. The temporal and spatial distribution of earthquakes indicates that the rupture propagated unilaterally from SW to NE. The length of the fault is calculated as approximately 12 km using the aftershock distribution and the empirical relations of Wells and Coppersmith (1994). The stress tensor analysis indicates that the dominant faulting type in the region is strike-slip and that the direction of the regional compressive stress is WNW-ESE. The 1968 Aghios earthquake (Ms=7.3; Ambraseys and Jackson, 1998) and the 2013 North Aegean Sea earthquake sequence clearly show that the regional stress has been transferred from SW to NE in this region. The last historical earthquake in the area, the 1672 Bozcaada earthquake (M=7.05), occurred northeast of the 2013 earthquake sequence. The elapsed time (342 years) and the regional stress transfer suggest that the 1672 earthquake segment is probably a seismic gap. According to the empirical equations, the surface rupture length of the 1672 earthquake segment was about 47 km, with a maximum displacement of 170 cm and average displacement
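
    The scaling step can be reproduced with the published Wells and Coppersmith (1994) strike-slip regressions for surface rupture length and maximum displacement; the sketch below recovers the ~47 km and ~170 cm figures quoted above for the 1672 event.

    ```python
    def wc94_strike_slip(m):
        """Wells & Coppersmith (1994) strike-slip regressions."""
        srl_km = 10 ** (-3.55 + 0.74 * m)   # surface rupture length, km
        md_m = 10 ** (-7.03 + 1.03 * m)     # maximum displacement, m
        return srl_km, md_m

    srl, md = wc94_strike_slip(7.05)        # the 1672 Bozcaada event
    print(f"SRL ~ {srl:.0f} km, max displacement ~ {md:.2f} m")
    ```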

  10. Reconnaissance engineering geology of the Metlakatla area, Annette Island, Alaska, with emphasis on evaluation of earthquakes and other geologic hazards

    USGS Publications Warehouse

    Yehle, Lynn A.

    1977-01-01

    A program to study the engineering geology of most larger Alaska coastal communities and to evaluate their earthquake and other geologic hazards was started following the 1964 Alaska earthquake; this report about the Metlakatla area, Annette Island, is a product of that program. Field-study methods were of a reconnaissance nature, and thus the interpretations in the report are tentative. The landscape of the Metlakatla Peninsula, on which the city of Metlakatla is located, is characterized by muskeg-covered terrane of very low relief. In contrast, most of the rest of Annette Island is composed of mountainous terrane with steep valleys and numerous lakes. During the Pleistocene Epoch the Metlakatla area was presumably covered by ice several times; glaciers smoothed the present Metlakatla Peninsula and deeply eroded valleys on the rest of Annette Island. The last major deglaciation was completed probably before 10,000 years ago. Rebound of the earth's crust, believed to be related to glacial melting, has caused land emergence at Metlakatla of at least 50 ft (15 m) and probably more than 200 ft (61 m) relative to present sea level. Bedrock in the Metlakatla area is composed chiefly of hard metamorphic rocks: greenschist and greenstone with minor hornfels and schist. Strike and dip of beds are generally variable and minor offsets are common. Bedrock is of late Paleozoic to early Mesozoic age. Six types of surficial geologic materials of Quaternary age were recognized: firm diamicton; emerged shore deposits; modern shore and delta deposits; alluvial deposits; very soft muskeg and other organic deposits; and firm to soft artificial fill. A combination map unit is composed of bedrock or diamicton. Geologic structure in southeastern Alaska is complex because, since at least early Paleozoic time, there have been several cycles of tectonic deformation that affected different parts of the region. Southeastern Alaska is transected by numerous faults and possible faults that attest to major

  11. Long-term persistence of subduction earthquake segment boundaries - evidence from Mejillones Peninsula, N-Chile

    NASA Astrophysics Data System (ADS)

    Victor, P.; Sobiesiak, M.; Nielsen, S.; Glodny, J.; Oncken, O.

    2010-12-01

    The Mejillones Peninsula in N-Chile is a strong anomaly in coastline morphology along the Chilean convergent margin. The location of the Peninsula coincides with the northern limit of the 1995 Mw=8.0 Antofagasta earthquake and the southern limit of the 2007 Mw=7.8 Tocopilla earthquake and, probably, also with the southern limit of the 1877 Mw=8.5 Iquique earthquake. Although it is tempting to recognise the Mejillones Peninsula as the surface expression of a major segment boundary for large subduction earthquakes, so far evidence for its stability over multiple seismic cycles is lacking. We introduce a detailed analysis of the aftershock sequences in combination with new age data of the surface uplift evolution since the late Pliocene to test the hypothesis whether earthquake rupture propagation is limited at the latitude of Mejillones Peninsula since a longer time period. If the Peninsula really is linked to a persistent segment boundary, then the surface deformation of the Peninsula in fact holds the record about a deep-seated mechanism revealing the interaction between the subduction process and near-surface deformation. In our study we present new chronostratigraphic and structural data that allow reconstructing the evolution of the Peninsula at the surface and correlation of the latter with seismic cycle deformation on the interface. We investigated sets of paleo-strandlines preserved in beach ridges and uplifted cliffs to reconstruct the uplift history of the Peninsula. Our results show that the central graben area on the Peninsula started uplifting above sea level as an anticlinal hinge zone prior to 400 ky ago, most probably 790 ky ago. The resulting E-W trending hinge exactly overlies the limit between the rupture planes of the Antofagasta and Tocopilla earthquakes. By correlating the uplift data with the slip distribution of the Antofagasta and Tocopilla earthquakes, we demonstrate that deformation and uplift is focussed during the postseismic and

  12. An application of synthetic seismicity in earthquake statistics - The Middle America Trench

    NASA Technical Reports Server (NTRS)

    Ward, Steven N.

    1992-01-01

    We show how seismicity calculations that are based on the concept of fault segmentation, and that incorporate the physics of faulting through static dislocation theory, can improve earthquake recurrence statistics and hone the probabilities of hazard. For the Middle America Trench, the spread parameters of the best-fitting lognormal or Weibull distributions (about 0.75) are much larger than the 0.21 intrinsic spread proposed in the Nishenko-Buland (1987) hypothesis. Stress interaction between fault segments disrupts time or slip predictability and causes earthquake recurrence to be far more aperiodic than has been suggested.
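
    A small sketch of how a spread parameter is measured, assuming synthetic lognormal recurrence intervals in place of the Middle America Trench catalog: the fitted sigma of log recurrence times is compared against the 0.21 intrinsic spread of the Nishenko-Buland hypothesis.

    ```python
    import numpy as np

    def lognormal_spread(intervals):
        """Maximum-likelihood spread: standard deviation of log intervals."""
        return np.log(intervals).std(ddof=0)

    rng = np.random.default_rng(1)
    # Hypothetical recurrence intervals (years) with strong aperiodicity.
    intervals = rng.lognormal(mean=np.log(75.0), sigma=0.75, size=200)
    print(f"fitted spread ~ {lognormal_spread(intervals):.2f} (vs 0.21 intrinsic)")
    ```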

  13. Study on safety level of RC beam bridges under earthquake

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Lin, Junqi; Liu, Jinlong; Li, Jia

    2017-08-01

    Based on reliability theory, this study considers uncertainties in material strengths and in modeling, both of which have important effects on structural resistance. After analyzing the failure mechanism of an RC bridge, structural limit-state functions and the corresponding reliability are given; the safety level under earthquake loading of the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters is then analyzed. Using the response surface method to calculate the failure probabilities of the bridge piers under high-level earthquakes, their seismic reliability for different damage states within the design reference period is computed using two-stage design, which describes, to some extent, the seismic safety level of built bridges.
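
    The underlying reliability calculation can be sketched with a crude Monte Carlo estimate of P(R - S < 0); the lognormal capacity and demand distributions below are placeholders, not the paper's calibrated response-surface model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    resistance = rng.lognormal(mean=np.log(1.8), sigma=0.15, size=n)  # capacity R
    demand = rng.lognormal(mean=np.log(1.0), sigma=0.35, size=n)      # demand S
    p_fail = np.mean(resistance - demand < 0.0)   # failure: limit state R - S < 0
    print(f"P_f ~ {p_fail:.4f}")
    ```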

  14. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  15. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which intervals between consecutive events are independently and identically distributed, are frequently used to describe the repeating earthquake mechanism and forecast the next earthquakes. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have few, or only one, observed earthquakes, often with poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield a high probability for the next event in a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend on average on the long-term slip rate driven by tectonic motion. In addition, recurrence times fluctuate owing to nearby earthquakes or fault activities that encourage or discourage the surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. This paper therefore introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it using spatial statistics. Spatial variations of the mean and variance parameters of recurrence times are estimated in a Bayesian framework, and the next earthquakes are forecasted by Bayesian predictive distributions. The proposed model is applied to a catalog of recurrent earthquakes in Japan and its results are compared with the current forecast adopted by the Earthquake Research Committee of Japan.
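
    The forecasting step of a renewal model reduces to a conditional probability; a minimal sketch under an assumed lognormal interevent distribution, with a hypothetical mean recurrence and spread, is:

    ```python
    import math
    from scipy import stats

    def conditional_prob(t_elapsed, dt, mu, sigma):
        """P(next event in (t, t+dt] | no event by t) under a lognormal model."""
        dist = stats.lognorm(s=sigma, scale=math.exp(mu))
        return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

    # Hypothetical segment: median recurrence 150 yr, spread 0.5, 120 yr elapsed;
    # probability of rupture in the next 30 years.
    print(conditional_prob(120.0, 30.0, mu=math.log(150.0), sigma=0.5))
    ```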

  16. Fatality rates of the Mw ~8.2, 1934, Bihar-Nepal earthquake and comparison with the April 2015 Gorkha earthquake

    NASA Astrophysics Data System (ADS)

    Sapkota, Soma Nath; Bollinger, Laurent; Perrier, Frédéric

    2016-03-01

    Large Himalayan earthquakes expose rapidly growing populations of millions of people to high levels of seismic hazard, in particular in northeast India and Nepal. Calibrating vulnerability models specific to this region of the world is therefore crucial to the development of reliable mitigation measures. Here, we reevaluate the >15,700 casualties (8500 in Nepal and 7200 in India) from the Mw ~8.2, 1934, Bihar-Nepal earthquake and calculate the fatality rates for this earthquake using an estimate of the population derived from two censuses held in 1921 and 1942. Values reach 0.7-1 % in the epicentral region, located in eastern Nepal, and 2-5 % in the urban areas of the Kathmandu valley. Assuming constant vulnerability, we estimate that, had the same earthquake recurred in 2011, it would have caused 33,000 fatalities in Nepal and 50,000 in India. The fast-growing population in India must indeed unavoidably lead to increased casualty levels compared with Nepal, where population growth is smaller. Aside from that probably robust fact, extrapolations have to be taken with great caution. Among other effects, building and life vulnerability could depend on population concentration and the evolution of construction methods. Indeed, fatalities of the April 25, 2015, Mw 7.8 Gorkha earthquake indicated, on average, a reduction in building vulnerability in urban areas, while rural areas remained highly vulnerable. While effective scaling laws, as functions of the building stock, seem to describe these differences adequately, vulnerability in the case of an Mw >8.2 earthquake remains largely unknown. Further research should be carried out urgently so that better prevention strategies can be implemented and building codes reevaluated, adequately combining detailed ancient and modern data.

  17. Near-simultaneous great earthquakes at Tongan megathrust and outer rise in September 2009.

    PubMed

    Beavan, J; Wang, X; Holden, C; Wilson, K; Power, W; Prasetya, G; Bevis, M; Kautoke, R

    2010-08-19

    The Earth's largest earthquakes and tsunamis are usually caused by thrust-faulting earthquakes on the shallow part of the subduction interface between two tectonic plates, where stored elastic energy due to convergence between the plates is rapidly released. The tsunami that devastated the Samoan and northern Tongan islands on 29 September 2009 was preceded by a globally recorded magnitude-8 normal-faulting earthquake in the outer-rise region, where the Pacific plate bends before entering the subduction zone. Preliminary interpretation suggested that this earthquake was the source of the tsunami. Here we show that the outer-rise earthquake was accompanied by a nearly simultaneous rupture of the shallow subduction interface, equivalent to a magnitude-8 earthquake, that also contributed significantly to the tsunami. The subduction interface event was probably a slow earthquake with a rise time of several minutes that triggered the outer-rise event several minutes later. However, we cannot rule out the possibility that the normal fault ruptured first and dynamically triggered the subduction interface event. Our evidence comes from displacements of Global Positioning System stations and modelling of tsunami waves recorded by ocean-bottom pressure sensors, with support from seismic data and tsunami field observations. Evidence of the subduction earthquake in global seismic data is largely hidden because of the earthquake's slow rise time or because its ground motion is disguised by that of the normal-faulting event. Earthquake doublets where subduction interface events trigger large outer-rise earthquakes have been recorded previously, but this is the first well-documented example where the two events occur so closely in time and the triggering event might be a slow earthquake. As well as providing information on strain release mechanisms at subduction zones, earthquakes such as this provide a possible mechanism for the occasional large tsunamis generated at the Tonga

  18. Segmentation of the Himalayan megathrust around the Gorkha earthquake (25 April 2015) in Nepal

    NASA Astrophysics Data System (ADS)

    Mugnier, Jean-Louis; Jouanne, François; Bhattarai, Roshan; Cortes-Aranda, Joaquim; Gajurel, Ananta; Leturmy, Pascale; Robert, Xavier; Upreti, Bishal; Vassallo, Riccardo

    2017-06-01

    We put the 25 April 2015 earthquake of Nepal (Mw 7.9) into its structural geological context in order to specify the role of the segmentation of the Himalayan megathrust. The rupture is mainly located NW of Kathmandu, at a depth of 13-15 km on a flat portion of the Main Himalayan Thrust (MHT) that dips towards the N-NE by 7-10°. The northern bound of the main rupture corresponds to the transition towards a steeper crustal ramp. This ramp, which is partly coupled during the interseismic period, is only locally affected by the earthquake. The southern bound of the rupture was near the leading edge of the Lesser Himalaya antiformal duplex and near the frontal footwall ramp of the upper Nawakot duplex. The rupture has been affected by transversal structures: on the western side, the Judi lineament separates the main rupture zone from the nucleation area; on the eastern side, the Gaurishankar lineament separates the 25 April 2015 rupture from the 12 May 2015 (Mw 7.2) rupture. The origin of these lineaments is very complex: they are probably linked to pre-Himalayan faults that extend into the Indian shield beneath the MHT. These inherited faults induce transverse warping of the upper lithosphere beneath the MHT, control the location of lateral ramps of the thrust system and concentrate the hanging wall deformation at the lateral edge of the ruptures. The MHT is therefore segmented by stable barriers that define at least five patches in Central Nepal. These barriers influence the extent of the earthquake ruptures. For the last two centuries: the 1833 (Mw 7.6) earthquake was rather similar in extent to the 2015 event but its rupture propagated south-westwards from an epicentre located NE of Kathmandu; the patch south of Kathmandu was probably affected by at least three earthquakes of Mw ⩾ 7 that followed the 1833 event a few days later or 33 years (1866 event, Mw 7.2) later; the 1934 earthquake (Mw 8.4) had an epicentre ∼170 km east of Kathmandu, may have propagated

  19. Tokyo Metropolitan Earthquake Preparedness Project - A Progress Report

    NASA Astrophysics Data System (ADS)

    Hayashi, H.

    2010-12-01

    Munich Re once ranked the Tokyo metropolitan region, the capital of Japan, as the world's most vulnerable area to earthquake disasters, followed by the San Francisco Bay Area, US, and Osaka, Japan. Seismologists also predict that the Tokyo metropolitan region may experience at least one near-field earthquake with a probability of 70% within the next 30 years. Given this prediction, the Japanese Government conducted damage estimations which revealed that, in the worst-case scenario of a magnitude 7.3 earthquake striking under heavy winds (as shown in fig. 1), a total of 11,000 people would be killed and combined direct and indirect losses would amount to 112,000,000,000,000 yen (about $1,300,000,000,000 at $1 = 85 yen). In addition to mortality and financial losses, a total of 25 million people in four prefectures would be severely impacted by this earthquake. If this earthquake occurs, 300,000 elevators will stop suddenly, and 12,500 persons would be confined in them for a long time. Seven million people would come to use over 20,000 public shelters spread over the impacted area. Over one million temporary housing units would have to be built to accommodate the 4.6 million people who would lose their dwellings, and 2.5 million people would relocate outside the damaged area. In short, an unprecedented scale of earthquake disaster is expected, and we must prepare for it. Even though disaster mitigation is undoubtedly the best solution, it is more realistic to expect that the earthquake will hit before mitigation is complete. In other words, we must consider another solution: making the people and assets in this region more resilient to the Tokyo metropolitan earthquake. This is the question we have been tackling for the last four years. To increase societal resilience to a Tokyo metropolitan earthquake, we adopted a holistic approach that integrates both emergency response and long-term recovery. There are three goals for long-term recovery, which consist of Physical recovery, Economic

  20. Earthquakes for Kids

    MedlinePlus

    ... across a fault to learn about past earthquakes. A GPS instrument measures slow movements of the ground. ...

  1. Effects of Strike-Slip Fault Segmentation on Earthquake Energy and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; Cooke, M. L.; Savage, H. M.; McBeck, J.

    2014-12-01

    Many major strike-slip faults are segmented along strike, including those along plate boundaries in California and Turkey. Failure of distinct fault segments at depth may be the source of multiple pulses of seismic radiation observed for single earthquakes. However, how and when segmentation affects fault behavior and energy release is the basis of many outstanding questions related to the physics of faulting and seismic hazard. These include the probability for a single earthquake to rupture multiple fault segments and the effects of segmentation on earthquake magnitude, radiated seismic energy, and ground motions. Using numerical models, we quantify components of the earthquake energy budget, including the tectonic work acting externally on the system, the energy of internal rock strain, the energy required to overcome fault strength and initiate slip, the energy required to overcome frictional resistance during slip, and the radiated seismic energy. We compare the energy budgets of systems of two en echelon fault segments with various spacing that include both releasing and restraining steps. First, we allow the fault segments to fail simultaneously and capture the effects of segmentation geometry on the earthquake energy budget and on the efficiency with which applied displacement is accommodated. Assuming that higher efficiency correlates with higher probability for a single, larger earthquake, this approach has utility for assessing the seismic hazard of segmented faults. Second, we nucleate slip along a weak portion of one fault segment and let the quasi-static rupture propagate across the system. Allowing fractures to form near faults in these models shows that damage develops within releasing steps and promotes slip along the second fault, while damage develops outside of restraining steps and can prohibit slip along the second fault. Work is consumed in both the propagation of and frictional slip along these new fractures, impacting the energy available

  2. A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.

    2014-12-01

    A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
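
    The Monte Carlo step can be sketched by inverse-CDF sampling of a truncated Pareto moment distribution (the moment-domain form of the Gutenberg-Richter law) and summing the draws; the b-value, magnitude bounds, and event count here are placeholders, not Groningen calibrations.

    ```python
    import numpy as np

    def sample_moments(n, b=1.0, m_min=1.5, m_max=6.5, rng=None):
        """Draw n seismic moments (N*m) from a truncated Pareto (G-R) law."""
        if rng is None:
            rng = np.random.default_rng()
        beta = (2.0 / 3.0) * b                 # Pareto index in the moment domain
        mo_min = 10 ** (1.5 * m_min + 9.1)     # Hanks-Kanamori moment bounds
        mo_max = 10 ** (1.5 * m_max + 9.1)
        c = 1.0 - (mo_min / mo_max) ** beta    # truncation normalization
        u = rng.random(n)
        return mo_min * (1.0 - u * c) ** (-1.0 / beta)   # inverse-CDF sampling

    rng = np.random.default_rng(3)
    totals = [sample_moments(500, rng=rng).sum() for _ in range(1000)]
    print(f"median total moment ~ {np.median(totals):.2e} N*m")
    ```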

  3. Width of surface rupture zone for thrust earthquakes: implications for earthquake fault zoning

    NASA Astrophysics Data System (ADS)

    Boncio, Paolo; Liberi, Francesca; Caldarella, Martina; Nurminen, Fiia-Charlotta

    2018-01-01

    The criteria for zoning the surface fault rupture hazard (SFRH) along thrust faults are defined by analysing the characteristics of the areas of coseismic surface faulting in thrust earthquakes. Normal and strike-slip faults have been studied in depth by other authors with regard to the SFRH, while thrust faults have not been studied with comparable attention. Surface faulting data were compiled for 11 well-studied historic thrust earthquakes that occurred globally (5.4 ≤ M ≤ 7.9). Several different types of coseismic fault scarps characterize the analysed earthquakes, depending on the topography, fault geometry and near-surface materials (simple and hanging wall collapse scarps, pressure ridges, fold scarps and thrust or pressure ridges with bending-moment or flexural-slip fault ruptures due to large-scale folding). For all the earthquakes, the distance of distributed ruptures from the principal fault rupture (r) and the width of the rupture zone (WRZ) were compiled directly from the literature or measured systematically in GIS-georeferenced published maps. Overall, surface ruptures can occur up to large distances from the main fault ( ˜ 2150 m on the footwall and ˜ 3100 m on the hanging wall). Most of the ruptures occur on the hanging wall, preferentially in the vicinity of the principal fault trace ( > ˜ 50 % at distances < ˜ 250 m). The widest WRZs are recorded where sympathetic slip (Sy) on distant faults occurs, and/or where bending-moment (B-M) or flexural-slip (F-S) fault ruptures, associated with large-scale folds (hundreds of metres to kilometres in wavelength), are present. A positive relation between the earthquake magnitude and the total WRZ is evident, while a clear correlation between the vertical displacement on the principal fault and the total WRZ is not found. The distribution of surface ruptures is fitted with probability density functions in order to define a criterion for removing outliers (e.g. 90 % probability of the cumulative distribution

  4. A spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an operational earthquake forecast

    USGS Publications Warehouse

    Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximillan J.

    2017-01-01

    We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as
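
    At the core of any ETAS component is a conditional intensity of the form sketched below: a background rate plus Omori-Utsu aftershock contributions from all prior events, with productivity scaling exponentially in magnitude. The parameter values are generic illustrations, not UCERF3-ETAS calibrations.

    ```python
    import numpy as np

    def etas_rate(t, event_times, event_mags, mu=0.2, k=0.05, alpha=1.0,
                  c=0.01, p=1.2, m_ref=2.5):
        """Events per day at time t (days), given past event times and magnitudes."""
        past = event_times < t
        dt = t - event_times[past]
        trig = k * 10 ** (alpha * (event_mags[past] - m_ref)) / (dt + c) ** p
        return mu + trig.sum()                # background plus triggered rate

    times = np.array([0.0, 1.5, 2.0])         # days since a hypothetical mainshock
    mags = np.array([5.8, 3.2, 4.1])
    print(f"rate at t = 3 d: {etas_rate(3.0, times, mags):.2f} events/day")
    ```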

  5. Compilation of Surface Creep on California Faults and Comparison of WGCEP 2007 Deformation Model to Pacific-North American Plate Motion

    USGS Publications Warehouse

    Wisely, Beth A.; Schmidt, David A.; Weldon, Ray J.

    2008-01-01

    This Appendix contains 3 sections that 1) documents published observations of surface creep on California faults, 2) constructs line integrals across the WG-07 deformation model to compare to the Pacific ? North America plate motion, and 3) constructs strain tensors of volumes across the WG-07 deformation model to compare to the Pacific ? North America plate motion. Observation of creep on faults is a critical part of our earthquake rupture model because if a fault is observed to creep the moment released as earthquakes is reduced from what would be inferred directly from the fault?s slip rate. There is considerable debate about how representative creep measured at the surface during a short time period is of the whole fault surface through the entire seismic cycle (e.g. Hudnut and Clark, 1989). Observationally, it is clear that the amount of creep varies spatially and temporally on a fault. However, from a practical point of view a single creep rate is associated with a fault section and the reduction in seismic moment generated by the fault is accommodated in seismic hazard models by reducing the surface area that generates earthquakes or by reducing the slip rate that is converted into seismic energy. WG-07 decided to follow the practice of past Working Groups and the National Seismic Hazard Map and used creep rate (where it was judged to be interseismic, see Table P1) to reduce the area of the fault surface that generates seismic events. In addition to following past practice, this decision allowed the Working Group to use a reduction of slip rate as a separate factor to accommodate aftershocks, post seismic slip, possible aseismic permanent deformation along fault zones and other processes that are inferred to affect the entire surface area of a fault, and thus are better modeled as a reduction in slip rate. C-zones are also handled by a reduction in slip rate, because they are inferred to include regions of widely distributed shear that is not completely

  6. Land-level changes from a late Holocene earthquake in the northern Puget lowland, Washington

    USGS Publications Warehouse

    Kelsey, H.M.; Sherrod, B.; Johnson, S.Y.; Dadisman, S.V.

    2004-01-01

    An earthquake, probably generated on the southern Whidbey Island fault zone, caused 1-2 m of ground-surface uplift on central Whidbey Island ∼2800-3200 yr ago. The cause of the uplift is a fold that grew coseismically above a blind fault that was the earthquake source. Both the fault and the fold at the fault's tip are imaged on multichannel seismic reflection profiles in Puget Sound immediately east of the central Whidbey Island site. Uplift is documented through contrasting histories of relative sea level at two coastal marshes on either side of the fault. Late Holocene shallow-crustal earthquakes of Mw = 6.5-7 pose substantial seismic hazard to the northern Puget Lowland. © 2004 Geological Society of America.

  7. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  8. Surface slip during large Owens Valley earthquakes

    USGS Publications Warehouse

    Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
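
    The COPD construction described in both records can be sketched by stacking per-measurement offset PDFs (simplified here to Gaussians with hypothetical offsets and uncertainties) and reading peaks off the summed curve:

    ```python
    import numpy as np

    def copd(offsets_m, sigmas_m, x=None):
        """Sum per-measurement Gaussian PDFs on a common displacement axis."""
        x = np.linspace(0.0, 20.0, 2001) if x is None else x
        stack = np.zeros_like(x)
        for mu, s in zip(offsets_m, sigmas_m):
            stack += np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        return x, stack

    # Hypothetical offsets clustering near the ~3.3 m single-event value and
    # the ~7.1 m two-event value reported above.
    offs = [3.1, 3.4, 3.5, 6.9, 7.2]
    sigs = [0.4, 0.5, 0.4, 0.7, 0.6]
    x, stack = copd(offs, sigs)
    print(f"COPD peak near {x[np.argmax(stack)]:.1f} m")
    ```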

  9. Prospective earthquake forecasts at the Himalayan Front after the 25 April 2015 M 7.8 Gorkha Mainshock

    USGS Publications Warehouse

    Segou, Margaret; Parsons, Thomas E.

    2016-01-01

    When a major earthquake strikes, the resulting devastation can be compounded or even exceeded by the subsequent cascade of triggered seismicity. As the Nepalese recover from the 25 April 2015 shock, knowledge of what comes next is essential. We calculate the redistribution of crustal stresses and implied earthquake probabilities for different periods, from daily to 30 years into the future. An initial forecast was completed before an M 7.3 earthquake struck on 12 May 2015 that enables a preliminary assessment; postforecast seismicity has so far occurred within a zone of fivefold probability gain. Evaluation of the forecast performance, using two months of seismic data, reveals that stress‐based approaches present improved skill in higher‐magnitude triggered seismicity. Our results suggest that considering the total stress field, rather than only the coseismic one, improves the spatial performance of the model based on the estimation of a wide range of potential triggered faults following a mainshock.

  10. Late Holocene liquefaction features in the Dominican Republic: A powerful tool for earthquake hazard assessment in the northeastern Caribbean

    USGS Publications Warehouse

    Tuttle, M.P.; Prentice, C.S.; Dyer-Williams, K.; Pena, L.R.; Burr, G.

    2003-01-01

    Several generations of sand blows and sand dikes, indicative of significant and recurrent liquefaction, are preserved in the late Holocene alluvial deposits of the Cibao Valley in northern Dominican Republic. The Cibao Valley is structurally controlled by the Septentrional fault, an onshore section of the North American-Caribbean strike-slip plate boundary. The Septentrional fault was previously studied in the central part of the valley, where it sinistrally offsets Holocene terrace risers and soil horizons. In the eastern and western parts of the valley, the Septentrional fault is buried by Holocene alluvial deposits, making direct study of the structure difficult. Liquefaction features that formed in these Holocene deposits as a result of strong ground shaking provide a record of earthquakes in these areas. Liquefaction features in the eastern Cibao Valley indicate that at least one historic earthquake, probably the moment magnitude, M 8, 4 August 1946 event, and two to four prehistoric earthquakes of M 7 to 8 struck this area during the past 1100 yr. The prehistoric earthquakes appear to cluster in time and could have resulted from rupture of the central and eastern sections of the Septentrional fault circa A.D. 1200. Liquefaction features in the western Cibao Valley indicate that one historic earthquake, probably the M 8, 7 May 1842 event, and two prehistoric earthquakes of M 7-8 struck this area during the past 1600 yr. Our findings suggest that rupture of the Septentrional fault circa A.D. 1200 may have extended beyond the central Cibao Valley and generated an earthquake of M 8. Additional information regarding the age and size distribution of liquefaction features is needed to reconstruct the prehistoric earthquake history of Hispaniola and to define the long-term behavior and earthquake potential of faults associated with the North American-Caribbean plate boundary.

  11. Risk and return: evaluating Reverse Tracing of Precursors earthquake predictions

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2010-09-01

    In 2003, the Reverse Tracing of Precursors (RTP) algorithm attracted the attention of seismologists and international news agencies when researchers claimed two successful predictions of large earthquakes. These researchers had begun applying RTP to seismicity in Japan, California, the eastern Mediterranean and Italy; they have since applied it to seismicity in the northern Pacific, Oregon and Nevada. RTP is a pattern recognition algorithm that uses earthquake catalogue data to declare alarms, and these alarms indicate that RTP expects a moderate to large earthquake in the following months. The spatial extent of alarms is highly variable and each alarm typically lasts 9 months, although the algorithm may extend alarms in time and space. We examined the record of alarms and outcomes since the prospective application of RTP began, and in this paper we report on the performance of RTP to date. To analyse these predictions, we used a recently developed approach based on a gambling score, and we used a simple reference model to estimate the prior probability of target earthquakes for each alarm. Formally, we believe that RTP investigators did not rigorously specify the first two 'successful' predictions in advance of the relevant earthquakes; because this issue is contentious, we consider analyses with and without these alarms. When we included contentious alarms, RTP predictions demonstrate statistically significant skill. Under a stricter interpretation, the predictions are marginally unsuccessful.

  12. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To talk about this phenomenon, seismologists coined three terms: foreshock, mainshock, and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks while those that occur after the mainshock are called aftershocks. A mainshock will be redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be
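
    The kind of calculation alluded to here is commonly done with the Reasenberg and Jones (1989) formulation: an Omori-law aftershock rate integrated over a forecast window. The sketch below uses oft-quoted generic California parameter values; treat them as placeholders.

    ```python
    import math

    def aftershock_probability(m_main, m, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
        """P(>=1 aftershock with magnitude >= m in days (t1, t2] after m_main)."""
        rate_coeff = 10 ** (a + b * (m_main - m))
        if abs(p - 1.0) < 1e-9:                       # Omori integral, p = 1
            n = rate_coeff * (math.log(t2 + c) - math.log(t1 + c))
        else:                                         # Omori integral, p != 1
            n = rate_coeff * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
        return 1.0 - math.exp(-n)                     # Poisson occurrence probability

    # Chance of an M>=6 aftershock in the first week after an M7.0 mainshock.
    print(f"{aftershock_probability(7.0, 6.0, 0.0, 7.0):.2f}")
    ```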

  13. Empirical estimation of the conditional probability of natech events within the United States.

    PubMed

    Santella, Nicholas; Steinberg, Laura J; Aguirra, Gloria Andrea

    2011-06-01

    Natural disasters are the cause of a sizeable number of hazmat releases, referred to as "natechs." An enhanced understanding of natech probability, allowing for predictions of natech occurrence, is an important step in determining how industry and government should mitigate natech risk. This study quantifies the conditional probabilities of natechs at TRI/RMP and SIC 1311 facilities given the occurrence of hurricanes, earthquakes, tornadoes, and floods. During hurricanes, a higher probability of releases was observed due to storm surge (7.3 releases per 100 TRI/RMP facilities exposed vs. 6.2 for SIC 1311) compared to category 1-2 hurricane winds (5.6 TRI, 2.6 SIC 1311). Logistic regression confirms the statistical significance of the greater propensity for releases at RMP/TRI facilities, and during some hurricanes, when controlling for hazard zone. The probability of natechs at TRI/RMP facilities during earthquakes increased from 0.1 releases per 100 facilities at MMI V to 21.4 at MMI IX. The probability of a natech at TRI/RMP facilities within 25 miles of a tornado was small (∼0.025 per 100 facilities), reflecting the limited area directly affected by tornadoes. Areas inundated during flood events had a probability of 1.1 releases per 100 facilities but demonstrated widely varying natech occurrence during individual events, indicating that factors not quantified in this study such as flood depth and speed are important for predicting flood natechs. These results can inform natech risk analysis, aid government agencies responsible for planning response and remediation after natural disasters, and should be useful in raising awareness of natech risk within industry. © 2011 Society for Risk Analysis.

  14. Characterization of intermittency in renewal processes: Application to earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji

    2010-03-15

    We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalog. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.
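
    A minimal sketch of the tail-fitting step, with synthetic interevent times standing in for the JMA/NEIC catalogs and a plain Weibull fit to the upper half of the distribution:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # Synthetic interevent times (days); shape < 1 indicates temporal clustering.
    interevent = rng.weibull(0.7, size=5000) * 30.0
    tail = interevent[interevent > np.quantile(interevent, 0.5)]
    shape, loc, scale = stats.weibull_min.fit(tail, floc=0.0)
    print(f"Weibull shape ~ {shape:.2f}, scale ~ {scale:.1f} days")
    ```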

  15. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  16. Global Review of Induced and Triggered Earthquakes

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Wilson, M.; Gluyas, J.; Julian, B. R.; Davies, R. J.

    2016-12-01

    Natural processes associated with very small incremental stress changes can modulate the spatial and temporal occurrence of earthquakes. These processes include tectonic stress changes, the migration of fluids in the crust, Earth tides, surface ice and snow loading, heavy rain, atmospheric pressure, sediment unloading and groundwater loss. It is thus unsurprising that large anthropogenic projects which may induce stress changes of a similar size also modulate seismicity. As human development accelerates and industrial projects become larger in scale and more numerous, the number of such cases is increasing. That mining and water-reservoir impoundment can induce earthquakes has been accepted for several decades. Now, concern is growing about earthquakes induced by activities such as hydraulic fracturing for shale-gas extraction and waste-water disposal via injection into boreholes. As hydrocarbon reservoirs enter their tertiary phases of production, seismicity may also increase there. The full extent of human activities thought to induce earthquakes is, however, much wider than generally appreciated. We have assembled as near complete a catalog as possible of cases of earthquakes postulated to have been induced by human activity. Our database contains a total of 705 cases and is probably the largest compilation made to date. We include all cases where reasonable arguments have been made for anthropogenic induction, even where these have been challenged in later publications. Our database presents the results of our search but leaves judgment about the merits of individual cases to the user. We divide anthropogenic earthquake-induction processes into: a) Surface operations, b) Extraction of mass from the subsurface, c) Introduction of mass into the subsurface, and d) Explosions. Each of these categories is divided into sub-categories. In some cases, categorization of a particular case is tentative because more than one anthropogenic activity may have preceded or been

  17. Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults.

    PubMed

    McGuire, Jeffrey J; Boettcher, Margaret S; Jordan, Thomas H

    2005-03-24

    East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion.

  18. The "exceptional" earthquake of 3 January 1117 in the Verona area (northern Italy): A critical time review and detection of two lost earthquakes (lower Germany and Tuscany)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Comastri, Alberto; Boschi, Enzo

    2005-12-01

    In the seismological literature the 3 January 1117 earthquake represents an interesting case study, both for the sheer size of the area in which that event is recorded by the monastic sources of the 12th century, and for the amount of damage mentioned. The 1117 event has been added to the earthquake catalogues of up to five European countries (Italy, France, Belgium, Switzerland, the Iberian peninsula), and it is the largest historical earthquake for northern Italy. We have analyzed the monastic time system in the 12th century and, by means of a comparative analysis of the sources, have correlated the two shocks mentioned (in the night and in the afternoon of 3 January) to territorial effects, seeking to make the overall picture reported for Europe more consistent. The connection between the linguistic indications and the localization of the effects has allowed us to shed light, with a reasonable degree of approximation, upon two previously little known earthquakes, probably generated by a sequence of events. A first earthquake in lower Germany (I0 (epicentral intensity) VII-VIII MCS (Mercalli, Cancani, Sieberg), M 6.4) preceded the far more violent one in northern Italy (Verona area) by about 12-13 hours. The second event is the one reported in the literature. We have put forward new parameters for this Veronese earthquake (I0 IX MCS, M 7.0). A third earthquake is independently recorded in the northwestern area of Tuscany (Imax VII-VIII MCS), but for the latter event the epicenter and magnitude cannot be evaluated.

  19. Steam explosions, earthquakes, and volcanic eruptions -- what's in Yellowstone's future?

    USGS Publications Warehouse

    Lowenstern, Jacob B.; Christiansen, Robert L.; Smith, Robert B.; Morgan, Lisa A.; Heasler, Henry

    2005-01-01

    Yellowstone, one of the world's largest active volcanic systems, has produced several giant volcanic eruptions in the past few million years, as well as many smaller eruptions and steam explosions. Although no eruptions of lava or volcanic ash have occurred for many thousands of years, future eruptions are likely. In the next few hundred years, hazards will most probably be limited to ongoing geyser and hot-spring activity, occasional steam explosions, and moderate to large earthquakes. To better understand Yellowstone's volcano and earthquake hazards and to help protect the public, the U.S. Geological Survey, the University of Utah, and Yellowstone National Park formed the Yellowstone Volcano Observatory, which continuously monitors activity in the region.

  20. Analog earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  1. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    NASA Astrophysics Data System (ADS)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
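    Schematically, a mean recurrence time for events above a target magnitude can be derived from Gutenberg-Richter parameters estimated in a moving window of the catalog. The sketch below uses the standard Aki maximum-likelihood b-value; the bin width and window handling are assumptions for illustration, not the published MRT implementation.

        import numpy as np

        def mean_recurrence_time(mags, m_target, window_days, m_c, bin_width=0.1):
            """Estimate the mean recurrence time (days) of events >= m_target
            from the magnitudes in one moving window, via Gutenberg-Richter.
            Schematic only; not the published MRT algorithm."""
            m = np.asarray(mags, float)
            m = m[m >= m_c]
            # Aki (1965) maximum-likelihood b-value with bin-width correction
            b = np.log10(np.e) / (m.mean() - (m_c - bin_width / 2.0))
            a = np.log10(len(m)) + b * m_c      # a-value from the window count
            daily_rate = 10.0 ** (a - b * m_target) / window_days
            return 1.0 / daily_rate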

  2. Impact of average household income and damage exposure on post-earthquake distress and functioning: A community study following the February 2011 Christchurch earthquake.

    PubMed

    Dorahy, Martin J; Rowlands, Amy; Renouf, Charlotte; Hanna, Donncha; Britt, Eileen; Carter, Janet D

    2015-08-01

    Post-traumatic stress, depression and anxiety symptoms are common outcomes following earthquakes, and may persist for months and years. This study systematically examined the impact of neighbourhood damage exposure and average household income on psychological distress and functioning in 600 residents of Christchurch, New Zealand, 4-6 months after the fatal February 2011 earthquake. Participants were from highly affected and relatively unaffected suburbs in low, medium and high average household income areas. The assessment battery included the Acute Stress Disorder Scale, the depression module of the Patient Health Questionnaire (PHQ-9), and the Generalized Anxiety Disorder Scale (GAD-7), along with single-item measures of substance use, earthquake damage and impact, and disruptions in daily life and relationship functioning. Controlling for age, gender and social isolation, participants from low income areas were more likely to meet diagnostic cut-offs for depression and anxiety, and had more severe anxiety symptoms. Higher probabilities of acute stress, depression and anxiety diagnoses were evident in affected versus unaffected areas, and those in affected areas had more severe acute stress, depression and anxiety symptoms. An interaction between income and earthquake effect was found for depression, with participants from the low and medium income affected suburbs being more depressed. Those from low income areas were more likely, post-earthquake, to start psychiatric medication and to increase smoking. There was a uniform increase in alcohol use across participants. Those from the low income affected suburb had greater general and relationship disruption post-quake. Average household income and damage exposure made unique contributions to earthquake-related distress and dysfunction.

  3. Numerical Simulation of Stress evolution and earthquake sequence of the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Dong, Peiyu; Hu, Caibo; Shi, Yaolin

    2015-04-01

    lower than a certain value. For locations where large earthquakes occurred during the 110 years, the initial stresses can be inverted if the strength is estimated and the tectonic loading is assumed constant. Therefore, although the initial stress state is unknown, we can estimate a plausible range for it. In this study, we estimated a reasonable range of initial stress and then used the Coulomb-Mohr criterion to regenerate the earthquake sequence, starting from the Daofu earthquake of 1904. We calculated the stress field evolution of the sequence, considering both the tectonic loading and the interaction between the earthquakes, and ultimately obtained a sketch of the present stress. Of course, a single model with a particular initial stress is just one possible model, so a potential seismic hazard distribution based on a single model is not convincing. We tested hundreds of possible initial stress states, all of which can reproduce the historical earthquake sequence, and summarized the resulting range of calculated probabilities of future seismic activity. Although we cannot provide the exact future state, we can narrow the estimate of the regions at high probability of risk. Our primary results indicate that the Xianshuihe fault and the adjacent area form one such zone, with higher risk than other regions in the future. During 2014, six earthquakes (M > 5.0) occurred in this region, which corresponds with our result to some degree. We emphasize the importance of the initial stress field for the earthquake sequence and provide a probabilistic assessment of future seismic hazards. This study may bring new insights to the estimation of the initial stress, earthquake triggering, and stress field evolution.
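    For a stress perturbation resolved onto a receiver fault, the Coulomb-Mohr style criterion used in such modelling reduces to a change in Coulomb failure stress. A minimal sketch, with an assumed effective friction coefficient rather than the paper's calibrated value:

        def coulomb_failure_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
            """Change in Coulomb failure stress on a receiver fault:
            dCFS = d_tau + mu' * d_sigma_n, with shear stress change resolved
            in the slip direction and normal stress change positive in
            extension (unclamping). mu_eff = 0.4 is a commonly assumed
            effective friction, an assumption here."""
            return d_tau + mu_eff * d_sigma_n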

  4. The influence of one earthquake on another

    NASA Astrophysics Data System (ADS)

    Kilb, Deborah Lyman

    1999-12-01

    Part one of my dissertation examines the initiation of earthquake rupture. We study the initial subevent (ISE) of the Mw 6.7 1994 Northridge, California earthquake to distinguish between two end-member hypotheses: an organized and predictable earthquake rupture initiation process or, alternatively, a random process. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both end-member models and do not allow us to distinguish between them. However, further tests show the ISE's waveform characteristics are similar to those of typical nearby small earthquakes (i.e., dynamic ruptures). The second part of my dissertation examines aftershocks of the M 7.1 1989 Loma Prieta, California earthquake to determine whether theoretical models of static Coulomb stress changes correctly predict the fault plane geometries and slip directions of Loma Prieta aftershocks. Our work shows individual aftershock mechanisms cannot be successfully predicted, because a similar degree of predictability can be obtained using a randomized catalogue. This result is probably a function of combined errors in the models of the mainshock slip distribution, the background stress field, and the aftershock locations. In the final part of my dissertation, we test the idea that earthquake triggering occurs when properties of a fault and/or its loading are modified by Coulomb failure stress changes that may be transient and oscillatory (i.e., dynamic) or permanent (i.e., static). We propose that a triggering threshold in failure stress change exists, above which the earthquake nucleation process begins, although failure need not occur instantaneously. We test these ideas using data from the 1992 M 7.4 Landers earthquake and its aftershocks. Stress changes can be categorized as either dynamic (generated during the passage of seismic waves), static (associated with permanent fault offsets

  5. Anomalous fluctuations of vertical velocity of Earth and their possible implications for earthquakes.

    PubMed

    Manshour, Pouya; Ghasemi, Fatemeh; Matsumoto, T; Gómez, J; Sahimi, Muhammad; Peinke, J; Pacheco, A F; Tabar, M Reza Rahimi

    2010-09-01

    High-quality measurements of seismic activities around the world provide a wealth of data and information that are relevant to understanding when earthquakes may occur. If viewed as complex stochastic time series, such data may be analyzed by methods that provide deeper insights into their nature, hence leading to better understanding of the data and their possible implications for earthquakes. In this paper, we provide further evidence for our recent proposal [P. Manshour et al., Phys. Rev. Lett. 102, 014101 (2009); doi:10.1103/PhysRevLett.102.014101] for the existence of a transition in the shape of the probability density function (PDF) of the successive detrended increments of the stochastic fluctuations of Earth's vertical velocity V_z, collected by broadband stations before moderate and large earthquakes. To demonstrate the transition, we carried out extensive analysis of the data for V_z for 12 earthquakes in several regions around the world, including the recent catastrophic one in Haiti. The analysis supports the hypothesis that before and near the time of an earthquake, the shape of the PDF undergoes significant and discernible changes, which can be characterized quantitatively. The typical time over which the PDF undergoes the transition is about 5-10 h prior to a moderate or large earthquake.

  6. The Quanzhou large earthquake: environment impact and deep process

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Gao*, R.; Ye, Z.; Wang, C.

    2017-12-01

    The Quanzhou earthquake is the largest historical earthquake along China's southeast coast. The ancient city of Quanzhou and its adjacent areas suffered serious damage. Analysis of the impact of the Quanzhou earthquake on human activities, the ecological environment and social development will provide an example for research on the interaction between environment and humans. According to historical records, on the night of December 29, 1604, a Ms 8.0 earthquake occurred in the sea area east of Quanzhou (25.0°N, 119.5°E) with a focal depth of 25 kilometers. Its effects extended to a maximum distance of 220 kilometers from the epicenter and caused serious damage. Quanzhou, known as one of the world's largest trade ports during the Song and Yuan periods, was heavily damaged by this earthquake. The destruction of the ancient city was serious and widespread: city walls collapsed in Putian, Nanan, Tongan and other places, and the East and West Towers of Kaiyuan Temple, famous in history for their magnificent architecture, were seriously damaged. An enormous earthquake can therefore exert devastating effects on human activities and social development. It is estimated that an earthquake of more than Ms 5.0 in the economically developed coastal areas of China can directly cause economic losses of more than one hundred million yuan. This devastating large earthquake that severely destroyed the Quanzhou city was triggered in a tectonically extensional setting. In this coastal area of Fujian Province, the crust gradually thins eastward from inland to coast (less than 29 km thick beneath the coast), the lithosphere is also rather thin (60-70 km), and the Poisson's ratio of the crust here appears relatively high. The historical Quanzhou earthquake was probably correlated with the NE-striking Littoral Fault Zone, which is characterized by right-lateral slip and exhibits the most active seismicity in the coastal area of Fujian. Meanwhile, tectonic

  7. Are seismic hazard assessment errors and earthquake surprises unavoidable?

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2013-04-01

    demonstrated and sufficient justification of hazard assessment protocols; (b) a more complete learning of the actual range of earthquake hazards to local communities and populations, and (c) a more ethically responsible control over how seismic hazard and seismic risk assessments are implemented to protect public safety. It follows that the international project GEM is on the wrong track if it continues to base seismic risk estimates on the standard method of assessing seismic hazard. The situation is not hopeless and could be improved dramatically thanks to available geological, geomorphologic, seismic, and tectonic evidence and data combined with deterministic pattern recognition methodologies, specifically when intending to PREDICT THE PREDICTABLE, though not the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized systems of blocks-and-faults has already led to methodologies of neo-deterministic seismic hazard analysis and intermediate-term, middle- to narrow-range earthquake prediction algorithms tested in real-time applications over recent decades. This proves that contemporary science can do a better job in disclosing natural hazards, assessing risks, and delivering such information in advance of extreme catastrophes, which are LOW PROBABILITY EVENTS THAT HAPPEN WITH CERTAINTY. Geoscientists must begin shifting community thinking from pessimistic disbelief to the optimistic challenge of neo-deterministic hazard predictability.

  8. The seismicity in the L'Aquila area (Italy) with particular regard to 1985 earthquake

    NASA Astrophysics Data System (ADS)

    Bernardi, Fabrizio; Grazia Ciaccio, Maria; Palombo, Barbara

    2010-05-01

    We study moderate-magnitude earthquakes (Ml ≥ 3.5) that occurred in the L'Aquila region from 1981 to 2009, recorded by the Istituto Nazionale di Geofisica e Vulcanologia (CSI, Castello et al., 2006 - http://www.ingv.it/CSI/ ; and ISIDe, http://iside.rm.ingv.it/iside/standard/index.jsp) as well as by local temporary seismic networks. We identify three major sequences (1985, 1994, 1996) occurring before the 6 April 2009 Mw=6.3 earthquake. The 1985 earthquake (Ml=4.2) is the largest earthquake to have occurred in the investigated region before April 2009. The 1994 (Ml=3.9) and 1996 (Ml=4.1) events occurred in the Campotosto area (NE of L'Aquila). We computed the source moment tensor using surface waves (Giardini et al., 1993) for the main shocks of the 1985 (Mw=4.7) and 1996 (Mw=4.4) sequences. The solutions show normal-fault ruptures. We do not find a reliable solution for the largest earthquake of the 1994 sequence, which suggests that the magnitude of this event is probably below Mw≈4.2, the minimum magnitude threshold for this method.

  9. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    NASA Astrophysics Data System (ADS)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard of northeastern Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach, using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the earthquake hazard evaluation procedure developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
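    The Cornell-type hazard calculation in step (iv) integrates each source's magnitude distribution against a ground-motion attenuation model. The sketch below shows the idea for a single point source; the GMPE form, its coefficients and sigma are toy assumptions for illustration, not the attenuation relation or Kijko-Sellevoll parameters used in the study.

        import numpy as np
        from scipy.stats import norm

        def annual_exceedance_rate(pga_g, rate, b, m_min, m_max, dist_km, n_m=200):
            """Annual rate of exceeding pga_g (in g) at a site from one point
            source, via a discretized Cornell hazard integral."""
            mags = np.linspace(m_min, m_max, n_m)
            beta = b * np.log(10.0)
            # truncated exponential (Gutenberg-Richter) magnitude density
            pdf = beta * np.exp(-beta * (mags - m_min)) \
                  / (1.0 - np.exp(-beta * (m_max - m_min)))
            # toy GMPE: ln PGA = -1.0 + 1.0*M - 1.5*ln(R + 10), sigma = 0.6
            ln_median = -1.0 + 1.0 * mags - 1.5 * np.log(dist_km + 10.0)
            p_exceed = norm.sf(np.log(pga_g), loc=ln_median, scale=0.6)
            dm = mags[1] - mags[0]
            return rate * float(np.sum(pdf * p_exceed) * dm)

        # 50-year exceedance probability under the Poisson assumption:
        # p50 = 1 - np.exp(-annual_exceedance_rate(0.2, 0.05, 1.0, 5.0, 7.5, 20.0) * 50)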

  10. Understanding earthquake from the granular physics point of view — Causes of earthquake, earthquake precursors and predictions

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    We treat the Earth's crust and mantle as large-scale discrete matter based on the principles of granular physics and existing experimental observations. The main outcomes are: a granular model of the structure and movement of the Earth's crust and mantle is established; the formation mechanism of the tectonic forces that cause earthquakes, and a model of propagation for precursory information, are proposed; properties of the seismic precursory information and its relevance to earthquake occurrence are illustrated, and principles for detecting effective seismic precursors are elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena that were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Due to the discrete nature of the Earth's crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and a new understanding, different from the traditional seismological viewpoint, is obtained.

  11. Redefining Earthquakes and the Earthquake Machine

    ERIC Educational Resources Information Center

    Hubenthal, Michael; Braile, Larry; Taber, John

    2008-01-01

    The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…

  12. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.
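    Schematically, magnitude estimation from intensities inverts an intensity prediction equation of the form I = c0 + c1*M + c2*log10(R). The coefficients below are placeholders for illustration, not the values derived in the paper.

        import numpy as np

        def magnitude_from_intensities(intensities, dists_km, c0=1.5, c1=1.8, c2=-2.5):
            """Least-squares magnitude from observed intensities under an
            assumed IPE I = c0 + c1*M + c2*log10(R); placeholder coefficients.
            Returns the mean estimate and its standard error."""
            I = np.asarray(intensities, float)
            logR = np.log10(np.asarray(dists_km, float))
            per_obs_m = (I - c0 - c2 * logR) / c1   # one estimate per observation
            return per_obs_m.mean(), per_obs_m.std(ddof=1) / np.sqrt(len(I))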

  13. Probabilistic safety analysis of earth retaining structures during earthquakes

    NASA Astrophysics Data System (ADS)

    Grivas, D. A.; Souflis, C.

    1982-07-01

    A procedure is presented for determining the probability of failure of earth-retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. As earth-retaining structures may fail in four distinct modes, a system analysis can provide a single estimate for the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure of the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred by a retaining wall during an earthquake to provide an improved estimate of its probability of failure during future seismic events.
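    If the four failure modes are treated as independent for simplicity (the paper's combinatorial analysis handles their combined effect more carefully), a single system-level failure probability follows directly:

        def system_failure_probability(p_modes):
            """Combine per-mode failure probabilities (overturning, base
            sliding, bearing capacity, overall sliding) into one system
            estimate, under an assumed independence of the modes."""
            p_survive = 1.0
            for p in p_modes:
                p_survive *= 1.0 - p
            return 1.0 - p_survive

        # e.g. system_failure_probability([0.01, 0.02, 0.005, 0.001]) -> ~0.036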

  14. Upper and lower plate controls on the great 2011 Tohoku-oki earthquake

    PubMed Central

    2018-01-01

    The great 2011 Tohoku-oki earthquake [moment magnitude (Mw) 9.0] is the best-documented megathrust earthquake in the world, but its causal mechanism is still in controversy because of the poor state of knowledge of the nature of the megathrust zone. We constrain the structure of the Tohoku forearc using seismic tomography, residual topography, and gravity data, which reveal a close relationship between structural heterogeneities in and around the megathrust zone and the rupture processes of the 2011 Tohoku-oki earthquake. Its mainshock nucleated in an area with high seismic velocity, low seismic attenuation, and strong seismic coupling, probably indicating a large asperity (or a cluster of asperities) in the megathrust zone. Strong coseismic high-frequency radiation also occurred in high-velocity patches, whereas large afterslip took place in low-velocity areas, differences that may reflect changes in fault friction and lithological variations. These structural heterogeneities in and around the Tohoku megathrust originate from both the overriding and subducting plates, which controlled the nucleation and rupture processes of the 2011 Tohoku-oki earthquake.

  15. Three-dimensional Probabilistic Earthquake Location Applied to 2002-2003 Mt. Etna Eruption

    NASA Astrophysics Data System (ADS)

    Mostaccio, A.; Tuve', T.; Zuccarello, L.; Patane', D.; Saccorotti, G.; D'Agostino, M.

    2005-12-01

    Seismicity recorded at Mt. Etna volcano during the 2002-2003 eruption has been relocated using a probabilistic, non-linear earthquake location approach. We used the software package NonLinLoc (Lomax et al., 2000), adopting the 3D velocity model obtained by Cocina et al., 2005. We processed our data with three different algorithms: (1) a grid search; (2) a Metropolis-Gibbs sampler; and (3) an Oct-tree search. The Oct-tree algorithm gives efficient, fast and accurate mapping of the PDF (Probability Density Function) of the earthquake location problem. More than 300 seismic events were analyzed in order to compare the non-linear location results with the ones obtained by using a traditional, linearized earthquake location algorithm such as Hypoellipse, and a 3D linearized inversion (Thurber, 1983). Moreover, we compare 38 focal mechanisms, chosen following strict selection criteria, with the ones obtained from the 3D and 1D results. Although the presented approach is more of a traditional relocation application, probabilistic earthquake location could also be used in routine survey operations.
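    Of the three samplers named, the grid search is the simplest to sketch: score every trial hypocentre by its travel-time residual. The homogeneous velocity below is a simplifying assumption; a real application would use the 3D model and the full PDF machinery of NonLinLoc.

        import numpy as np

        def grid_search_locate(stations, t_obs, v_kms=3.5, half_extent=20.0, n=40):
            """Brute-force hypocentre search minimizing demeaned travel-time
            residuals under an assumed homogeneous velocity v_kms.
            stations: (n, 3) coordinates in km; t_obs: (n,) arrival times in s."""
            best_xyz, best_rms = None, np.inf
            for x in np.linspace(-half_extent, half_extent, n):
                for y in np.linspace(-half_extent, half_extent, n):
                    for z in np.linspace(0.0, 15.0, n // 2):
                        t_pred = np.linalg.norm(stations - [x, y, z], axis=1) / v_kms
                        r = t_obs - t_pred
                        r -= r.mean()            # absorb the unknown origin time
                        rms = np.sqrt((r ** 2).mean())
                        if rms < best_rms:
                            best_xyz, best_rms = (x, y, z), rms
            return best_xyz, best_rms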

  16. Earthquake probabilities for the Wasatch Front region in Utah, Idaho, and Wyoming

    USGS Publications Warehouse

    Wong, Ivan G.; Lund, William R.; Duross, Christopher; Thomas, Patricia; Arabasz, Walter; Crone, Anthony J.; Hylland, Michael D.; Luco, Nicolas; Olig, Susan S.; Pechmann, James; Personius, Stephen; Petersen, Mark D.; Schwartz, David P.; Smith, Robert B.; Rowman, Steve

    2016-01-01

    In a letter to The Salt Lake Daily Tribune in September 1883, U.S. Geological Survey (USGS) geologist G.K. Gilbert warned local residents about the implications of observable fault scarps along the western base of the Wasatch Range. The scarps were evidence that large surface-rupturing earthquakes had occurred in the past and more would likely occur in the future. The main actor in this drama is the 350-km-long Wasatch fault zone (WFZ), which extends from central Utah to southernmost Idaho. The modern Wasatch Front urban corridor, which follows the valleys on the WFZ’s hanging wall between Brigham City and Nephi, is home to nearly 80% of Utah’s population of 3 million. Adding to this circumstance of “lots of eggs in one basket,” more than 75% of Utah’s economy is concentrated along the Wasatch Front in Utah’s four largest counties, literally astride the five central and most active segments of the WFZ.

  17. Repeating Marmara Sea earthquakes: indication for fault creep

    NASA Astrophysics Data System (ADS)

    Bohnhoff, Marco; Wollin, Christopher; Domigall, Dorina; Küperkoch, Ludger; Martínez-Garzón, Patricia; Kwiatek, Grzegorz; Dresen, Georg; Malin, Peter E.

    2017-07-01

    Discriminating between a creeping and a locked state of active faults is of central relevance for characterizing potential rupture scenarios of future earthquakes and the associated seismic hazard for nearby population centres. In this respect, highly similar earthquakes that repeatedly activate the same patch of an active fault are an important diagnostic tool to identify, and possibly even quantify, the amount of fault creep. Here, we present a refined hypocentre catalogue for the Marmara region in northwestern Turkey, where an earthquake of magnitude up to M 7.4 is expected in the near future. Based on waveform cross-correlation for selected spatial seismicity clusters, we identify two M ∼ 2.8 repeater pairs. These repeaters were identified as indicative of fault creep based on the selection criteria applied to the waveforms. They are located below the western part of the Marmara section of the North Anatolian Fault Zone and are the largest reported repeaters for the larger Marmara region. While the eastern portion of the Marmara seismic gap has been identified as locked, only sparse information on the deformation state has been reported for its western part. Our findings indicate that the western Marmara section deforms aseismically to a substantial extent, which reduces the probability that this region hosts the nucleation point of the pending Marmara earthquake. This is relevant because nucleation of the Marmara event in the west, with subsequent eastward rupture propagation towards the Istanbul metropolitan region, would result in a substantially higher seismic hazard and risk than if the earthquake were to nucleate in the east and thus propagate westward, away from the population centre of Istanbul.
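    Repeater identification of the kind described rests on waveform similarity. A minimal normalized cross-correlation check is sketched below; the ~0.9 threshold is a common convention assumed here, not a value taken from the paper.

        import numpy as np

        def is_repeater_candidate(w1, w2, threshold=0.9):
            """Flag two equal-length event waveforms as a repeater candidate
            if their peak normalized cross-correlation exceeds the threshold."""
            a = (w1 - w1.mean()) / (w1.std() * len(w1))
            b = (w2 - w2.mean()) / w2.std()
            cc_max = np.abs(np.correlate(a, b, mode="full")).max()
            return cc_max >= threshold, cc_max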

  18. Role of H2O in Generating Subduction Zone Earthquakes

    NASA Astrophysics Data System (ADS)

    Hasegawa, A.

    2017-03-01

    A dense nationwide seismic network and high seismic activity in Japan have provided a large volume of high-quality data, enabling high-resolution imaging of the seismic structures defining the Japanese subduction zones. Here, the role of H2O in generating earthquakes in subduction zones is discussed, based mainly on recent seismic studies in Japan using these high-quality data. The locations of intermediate-depth intraslab earthquakes, together with seismic velocity and attenuation structures within the subducted slab, provide evidence that strongly supports a role for slab dehydration in generating intermediate-depth intraslab earthquakes, although the details leading to earthquake rupture are still poorly understood. Coseismic rotations of the principal stress axes observed after great megathrust earthquakes demonstrate that the plate interface is very weak, which is probably caused by overpressured fluids. Detailed tomographic imaging of the seismic velocity structure in and around plate boundary zones suggests that interplate coupling is affected by local fluid overpressure. Seismic tomography studies also show the presence of inclined sheet-like seismic low-velocity, high-attenuation zones in the mantle wedge. These may correspond to the upwelling flow portion of subduction-induced secondary convection in the mantle wedge. The upwelling flows reach the arc Moho directly beneath the volcanic areas, suggesting a direct relationship. H2O originally liberated from the subducted slab is transported by this upwelling flow to the arc crust. The H2O that reaches the crust is overpressured above hydrostatic values, weakening the surrounding crustal rocks and decreasing the shear strength of faults, thereby inducing shallow inland earthquakes. These observations suggest that H2O expelled from the subducting slab plays an important role in generating subduction zone earthquakes both within the subduction zone itself and within the magmatic arc occupying its hanging wall.

  19. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
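    The co-earthquake phase then reduces to a table lookup over the pre-calculated grid. The sketch below assumes, for illustration, a per-cell array of losses indexed by intensity level; the actual data layout used by the system is not specified here.

        import numpy as np

        def co_earthquake_loss(precomputed_losses, intensity_index):
            """Sum pre-calculated losses over all 30'' x 30'' cells, selecting
            each cell's loss by the intensity the theoretical isoseismal map
            assigns to that cell.
            precomputed_losses: (n_cells, n_intensity_levels) array.
            intensity_index: (n_cells,) integer intensity level per cell."""
            rows = np.arange(len(intensity_index))
            return precomputed_losses[rows, intensity_index].sum()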

  20. OMG Earthquake! Can Twitter improve earthquake response?

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 to 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information

  1. A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges

    PubMed Central

    Wang, Xu; Sun, Baitao

    2014-01-01

    Load combinations of earthquakes and heavy trucks are an important aspect of multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with their own characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castaneda model, which considers load duration and occurrence probability, describes well the conversion of random processes to random variables and their combination, but it imposes strict constraints on time-interval selection to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with another load at its instantaneous (or mean) value, which appears more rational, but the results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castaneda model and Turkstra's rule. The modified model is based on conditional probability, which can convert random processes to random variables relatively easily and account for the non-maximum factor in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
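    Turkstra's rule as described reduces to taking each load in turn at its lifetime maximum while the other acts at its point-in-time value, and keeping the worse case. A one-line sketch, with the load effects supplied by the caller in consistent units:

        def turkstra_combination(eq_max, truck_inst, eq_inst, truck_max):
            """Turkstra's rule for two loads: pair each load's lifetime
            maximum with the other's instantaneous value, keep the worse."""
            return max(eq_max + truck_inst, eq_inst + truck_max)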

  2. Keeping the History in Historical Seismology: The 1872 Owens Valley, California Earthquake

    NASA Astrophysics Data System (ADS)

    Hough, Susan E.

    2008-07-01

    The importance of historical earthquakes is being increasingly recognized. Careful investigations of key pre-instrumental earthquakes can provide critical information and insights not only for seismic hazard assessment but also for earthquake science. In recent years, with the explosive growth in computational sophistication in the Earth sciences, researchers have developed increasingly sophisticated methods to analyze macroseismic data quantitatively. These methodological developments can be extremely useful for fully exploiting the temporally and spatially rich information source that seismic intensities often represent. For example, the exhaustive and painstaking investigations of early Himalayan earthquakes done by Ambraseys and his colleagues provide information that can be used to map out site response in the Ganges basin. In any investigation of macroseismic data, however, one must stay mindful that intensity values are not data but rather interpretations. The results of any subsequent analysis, regardless of the degree of sophistication of the methodology, will be only as reliable as the interpretations of available accounts, and only as complete as the research done to ferret out, and in many cases translate, these accounts. When intensities are assigned without an appreciation of historical setting and context, seemingly careful subsequent analysis can yield grossly inaccurate results. As a case study, I report here on the results of a recent investigation of the 1872 Owens Valley, California earthquake. Careful consideration of macroseismic observations reveals that this event was probably larger than the great San Francisco earthquake of 1906, and possibly the largest historical earthquake in California. The results suggest that some large earthquakes in California will generate significantly larger ground motions than San Andreas fault events of comparable magnitude.

  3. Keeping the History in Historical Seismology: The 1872 Owens Valley, California Earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hough, Susan E.

    2008-07-08

    The importance of historical earthquakes is being increasingly recognized. Careful investigations of key pre-instrumental earthquakes can provide critical information and insights not only for seismic hazard assessment but also for earthquake science. In recent years, with the explosive growth in computational sophistication in the Earth sciences, researchers have developed increasingly sophisticated methods to analyze macroseismic data quantitatively. These methodological developments can be extremely useful for fully exploiting the temporally and spatially rich information source that seismic intensities often represent. For example, the exhaustive and painstaking investigations of early Himalayan earthquakes done by Ambraseys and his colleagues provide information that can be used to map out site response in the Ganges basin. In any investigation of macroseismic data, however, one must stay mindful that intensity values are not data but rather interpretations. The results of any subsequent analysis, regardless of the degree of sophistication of the methodology, will be only as reliable as the interpretations of available accounts, and only as complete as the research done to ferret out, and in many cases translate, these accounts. When intensities are assigned without an appreciation of historical setting and context, seemingly careful subsequent analysis can yield grossly inaccurate results. As a case study, I report here on the results of a recent investigation of the 1872 Owens Valley, California earthquake. Careful consideration of macroseismic observations reveals that this event was probably larger than the great San Francisco earthquake of 1906, and possibly the largest historical earthquake in California. The results suggest that some large earthquakes in California will generate significantly larger ground motions than San Andreas fault events of comparable magnitude.

  4. Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Landslides

    USGS Publications Warehouse

    Keefer, David K.

    1998-01-01

    Central California, in the vicinity of San Francisco and Monterey Bays, has a history of fatal and damaging landslides, triggered by heavy rainfall, coastal and stream erosion, construction activity, and earthquakes. The great 1906 San Francisco earthquake (MS=8.2-8.3) generated more than 10,000 landslides throughout an area of 32,000 km2; these landslides killed at least 11 people and caused substantial damage to buildings, roads, railroads, and other civil works. Smaller numbers of landslides, which caused more localized damage, have also been reported from at least 20 other earthquakes that have occurred in the San Francisco Bay-Monterey Bay region since 1838. Conditions that make this region particularly susceptible to landslides include steep and rugged topography, weak rock and soil materials, seasonally heavy rainfall, and active seismicity. Given these conditions and history, it was no surprise that the 1989 Loma Prieta earthquake generated thousands of landslides throughout the region. Landslides caused one fatality and damaged at least 200 residences, numerous roads, and many other structures. Direct damage from landslides probably exceeded $30 million; additional, indirect economic losses were caused by long-term landslide blockage of two major highways and by delays in rebuilding brought about by concern over the potential long-term instability of some earthquake-damaged slopes.

  5. Twitter earthquake detection: Earthquake monitoring in a social world

    USGS Publications Warehouse

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided (very) short first-impression narratives from people who experienced the shaking.
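    The detector described can be sketched as a ratio of short- and long-term means over the per-minute tweet-count series. The window lengths and trigger threshold below are illustrative assumptions, not the tuned USGS operating values.

        import numpy as np

        def sta_lta_triggers(counts_per_min, sta=2, lta=120, threshold=10.0):
            """Return indices where the short-term average of tweet counts
            exceeds `threshold` times the long-term average of the preceding
            window."""
            c = np.asarray(counts_per_min, float)
            hits = []
            for i in range(lta, len(c)):
                lta_mean = c[i - lta:i].mean() + 1e-9   # guard against zero background
                sta_mean = c[i - sta + 1:i + 1].mean()
                if sta_mean / lta_mean >= threshold:
                    hits.append(i)
            return hits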

  6. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, the gambling score, for evaluating the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of those points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains on success, according to a fair rule, and takes away the points bet by the forecaster on failure. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
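    In the discrete case, the scheme amounts to simple reputation bookkeeping against the reference model's probabilities. The sketch below shows the fair-payoff rule; the continuous point-process extension is not reproduced here.

        def gambling_score(bets):
            """Update reputation over a sequence of bets (r, p0, success):
            r points staked on an event the reference model assigns
            probability p0. The fair payoff r * (1 - p0) / p0 gives zero
            expected gain under the reference model, so positive scores
            indicate skill beyond it."""
            reputation = 0.0
            for r, p0, success in bets:
                reputation += r * (1.0 - p0) / p0 if success else -r
            return reputation

        # e.g. one successful bet on a p0 = 0.1 event:
        # gambling_score([(1, 0.1, True)]) -> 9.0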

  7. Detection of change points in underlying earthquake rates, with application to global mega-earthquakes

    NASA Astrophysics Data System (ADS)

    Touati, Sarah; Naylor, Mark; Main, Ian

    2016-02-01

    The recent spate of mega-earthquakes since 2004 has led to speculation of an underlying change in the global `background' rate of large events. At a regional scale, detecting changes in background rate is also an important practical problem for operational forecasting and risk calculation, for example due to volcanic processes, seismicity induced by fluid injection or withdrawal, or redistribution of Coulomb stress after natural large events. Here we examine the general problem of detecting changes in background rate in earthquake catalogues with and without correlated events, for the first time using the Bayes factor as a discriminant for models of varying complexity. First we use synthetic Poisson (purely random) and Epidemic-Type Aftershock Sequence (ETAS) models (which also allow for earthquake triggering) to test the effectiveness of many standard methods of addressing this question. These fall into two classes: those that evaluate the relative likelihood of different models, for example using Information Criteria or the Bayes factor; and those that evaluate the probability of the observations (including extreme events or clusters of events) under a single null hypothesis, for example by applying the Kolmogorov-Smirnov and `runs' tests, and a variety of Z-score tests. The results demonstrate that the effectiveness of these tests varies widely. Information Criteria worked at least as well as the more computationally expensive Bayes factor method, and the Kolmogorov-Smirnov and runs tests proved to be relatively ineffective in reliably detecting a change point. We then apply the tested methods to events at different thresholds above magnitude M ≥ 7 in the global earthquake catalogue since 1918, after first declustering the catalogue. This is most effectively done by removing likely correlated events using a much lower magnitude threshold (M ≥ 5), where triggering is much more obvious. We find no strong evidence that the background rate of large
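    Among the model-selection routes tested, an Information Criterion comparison of a constant-rate Poisson model against a single-change-point alternative is easy to sketch for a declustered catalog. The scan below places candidate change points at event times and assumes 0 < t[i] < t_end; it is a schematic, not the paper's code.

        import numpy as np

        def change_point_aic(event_times, t_end):
            """Compare homogeneous-Poisson AIC against the best
            one-change-point AIC for event times on [0, t_end];
            lower AIC is preferred."""
            t = np.sort(np.asarray(event_times, float))
            n = len(t)
            # log-likelihood of a homogeneous Poisson with MLE rate n/t_end
            aic0 = 2 * 1 - 2 * (n * np.log(n / t_end) - n)
            best_aic1 = np.inf
            for i in range(1, n):                  # split after the i-th event
                tc = t[i]
                n1, n2 = i, n - i
                ll = (n1 * np.log(n1 / tc) - n1) \
                     + (n2 * np.log(n2 / (t_end - tc)) - n2)
                best_aic1 = min(best_aic1, 2 * 3 - 2 * ll)
            return aic0, best_aic1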

  8. Complex rupture during the 12 January 2010 Haiti earthquake

    USGS Publications Warehouse

    Hayes, G.P.; Briggs, R.W.; Sladen, A.; Fielding, E.J.; Prentice, C.; Hudnut, K.; Mann, P.; Taylor, F.W.; Crone, A.J.; Gold, R.; Ito, T.; Simons, M.

    2010-01-01

    Initially, the devastating Mw 7.0, 12 January 2010 Haiti earthquake seemed to involve straightforward accommodation of oblique relative motion between the Caribbean and North American plates along the Enriquillo-Plantain Garden fault zone. Here, we combine seismological observations, geologic field data and space geodetic measurements to show that, instead, the rupture process may have involved slip on multiple faults. Primary surface deformation was driven by rupture on blind thrust faults with only minor, deep, lateral slip along or near the main Enriquillo-Plantain Garden fault zone; thus the event only partially relieved centuries of accumulated left-lateral strain on a small part of the plate-boundary system. Together with the predominance of shallow off-fault thrusting, the lack of surface deformation implies that remaining shallow shear strain will be released in future surface-rupturing earthquakes on the Enriquillo-Plantain Garden fault zone, as occurred in inferred Holocene and probable historic events. We suggest that the geological signature of this earthquake (broad warping and coastal deformation rather than surface rupture along the main fault zone) will not be easily recognized by standard palaeoseismic studies. We conclude that similarly complex earthquakes in tectonic environments that accommodate both translation and convergence, such as the San Andreas fault through the Transverse Ranges of California, may be missing from the prehistoric earthquake record.

  9. Determining the Probability that a Small Event in Brazil (magnitude 3.5 to 4.5 mb) will be Followed by a Larger Event

    NASA Astrophysics Data System (ADS)

    Assumpcao, M.

    2013-05-01

    A typical earthquake story in Brazil: a swarm of small earthquakes starts to occur near a small town, reaching magnitude 3.5, causing some alarm but no damage. The frightened population, not used to feeling earthquakes, calls in the seismology experts, who set up a local network to study the seismicity. To the usual and inevitable question "Are we going to have a larger earthquake?" comes the usual and standard answer: "It is not possible to predict earthquakes; larger earthquakes are possible". Fearing unnecessary panic, seismologists often add that "large earthquakes are, however, not very likely". This vague answer has proven quite inadequate. "Not very likely" is interpreted by the population and authorities as "not going to happen, and there is no need to do anything". Before L'Aquila 2009, one case of magnitude 3.8 in eastern Brazil was followed seven months later by a magnitude 4.9 that caused serious damage to poorly built houses. One child died and the affected population felt deceived by the seismologists. In order to provide better answers than a vague "not likely", we examined the Brazilian earthquake catalog for all cases of moderate magnitude (3.4 mb or larger) that were followed, up to one year later, by a larger event. We found that the chance of an event with magnitude 3.4 or larger being the foreshock of a larger one is roughly 1/6. The probability of an event being a foreshock varies with magnitude, from about 20% for a 3.5 mb to about 5% for a 4.5 mb. Also, given that an event in the range 3.4 to 4.3 is a foreshock, the probability that the mainshock will be 4.7 or larger is 1/6. The probability of a larger event decreases with time after the occurrence of the possible foreshock, with a time constant of ~70 days. Perhaps giving the population and civil defense a more quantitative answer (such as "the chance of a larger event is like rolling a six on a die") may help the decision to reinforce poor houses or even evacuate people from
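    The quoted numbers suggest a simple magnitude- and time-dependent answer that could be handed to civil defense. The interpolation below is an illustrative reading of those figures, not an operational model; the linear interpolation in magnitude is an assumption.

        import numpy as np

        def foreshock_probability(mb, days_since_event=0.0):
            """Probability that an event of magnitude mb is a foreshock of a
            larger one within a year, interpolated between ~20% at mb 3.5 and
            ~5% at mb 4.5, and decaying with the ~70-day time constant quoted
            above."""
            p_now = np.interp(mb, [3.5, 4.5], [0.20, 0.05])
            return p_now * np.exp(-days_since_event / 70.0)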

  10. The Ordered Network Structure and Prediction Summary for M≥7 Earthquakes in Xinjiang Region of China

    NASA Astrophysics Data System (ADS)

    Men, Ke-Pei; Zhao, Kai

    2014-12-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China, and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Under the guidance of the information forecasting theory of Wen-Bo Weng, and based on previous research results, we combine ordered network structure analysis with complex network technology and focus on a prediction summary of M ≥ 7 earthquakes using the ordered network structure, adding new information to further optimize the network and constructing the 2D and 3D ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region during the past 210 years. On this basis, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction is presented: the next two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- to long-term prediction of M ≥ 7 earthquakes.

  11. An Earthquake Rupture Forecast model for central Italy submitted to CSEP project

    NASA Astrophysics Data System (ADS)

    Pace, B.; Peruzza, L.

    2009-04-01

    We defined a seismogenic source model for central Italy and computed the relative forecast scenario, in order to submit the results to the CSEP (Collaboratory for the Study of Earthquake Predictability, www.cseptesting.org) project. The goal of the CSEP project is to develop a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three different layers of sources: the first collects the individual faults liable to generate major earthquakes (M > 5.5); the second layer is given by the instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ~< 5.0); the third layer utilizes all the instrumental earthquakes and the historical events not correlated to known structures (M ≥ 4.5). The probability of occurrence of characteristic earthquakes is computed by a Brownian passage time distribution. Besides the original model, updated earthquake rupture forecasts for individual sources only are released too, in the light of recent analyses (Peruzza et al., 2008; Zoeller et al., 2008). We computed forecasts based on the LaSS-CI model for two time windows: 5 and 10 years. Each model to be tested defines a forecast earthquake rate in magnitude bins of 0.1 unit steps in the range M5-9, for the periods 1 April 2009 to 1 April 2014, and 1 April 2009 to 1 April 2019. B. Pace, L. Peruzza, G. Lavecchia, and P. Boncio (2006) Layered Seismogenic Source
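    The Brownian passage time renewal calculation mentioned can be sketched with scipy's inverse Gaussian distribution. The mean recurrence and aperiodicity values are the caller's assumptions, not those of the LaSS-CI sources.

        import numpy as np
        from scipy.stats import invgauss

        def bpt_conditional_probability(mean_rec, aperiodicity, t_elapsed, dt):
            """Probability that a characteristic earthquake occurs within the
            next dt years, given t_elapsed years of quiescence, under a
            Brownian passage time (inverse Gaussian) renewal model.
            scipy's invgauss(shape, scale=s) has mean shape*s and coefficient
            of variation sqrt(shape), so shape = aperiodicity**2 and
            scale = mean_rec / aperiodicity**2."""
            dist = invgauss(aperiodicity ** 2, scale=mean_rec / aperiodicity ** 2)
            survive = dist.sf
            return (survive(t_elapsed) - survive(t_elapsed + dt)) / survive(t_elapsed)

        # e.g. mean recurrence 140 yr, aperiodicity 0.5, 100 yr elapsed, next 10 yr:
        # bpt_conditional_probability(140.0, 0.5, 100.0, 10.0)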

  12. Modeling And Economics Of Extreme Subduction Earthquakes: Two Case Studies

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Emerson, D.; Perea, N.; Moulinec, C.

    2008-05-01

    The destructive effects of large-magnitude, thrust subduction superficial (TSS) earthquakes on Mexico City (MC) and Guadalajara (G) have been shown in recent centuries. For example, on 7/04/1845 and 19/09/1985, two TSS earthquakes occurred on the coasts of the states of Guerrero and Michoacan, with Ms 7+ and 8.1; the economic losses for the latter were about 7 billion US dollars. Also, the largest instrumentally observed TSS earthquake in Mexico, Ms 8.2, occurred in the Colima-Jalisco region on 3/06/1932, and on 9/10/1995 another similar, Ms 7.4 event occurred in the same region; the latter produced economic losses of hundreds of thousands of US dollars. The frequency of occurrence of large TSS earthquakes in Mexico is poorly known, but it might vary from decades to centuries [1]. There is therefore a lack of strong ground motion records for extreme TSS earthquakes in Mexico, which, as mentioned above, recently had an important economic impact on MC and potentially could have one on G. In this work we obtained samples of the broadband synthetics [2,3] expected in MC and G, associated with extreme (plausible) magnitude Mw 8.5 TSS scenario earthquakes with epicenters in the so-called Guerrero gap and in the Colima-Jalisco zone, respectively. The economic impacts of the proposed extreme TSS earthquake scenarios were considered as follows: for MC, by using a risk acceptability criterion, the probabilities of exceedance of the maximum seismic responses of its construction stock under the assumed scenarios, and the economic losses estimated for the 19/09/1985 earthquake; and for G, by estimating the expected economic losses based on a seismic vulnerability assessment of its construction stock under the extreme seismic scenario considered. ----------------------- [1] Nishenko S.P. and Singh SK, BSSA 77, 6, 1987 [2] Cabrera E., Chavez M., Madariaga R., Mai M, Frisenda M., Perea N., AGU, Fall Meeting, 2005 [3] Chavez M., Olsen K

  13. Late Holocene earthquakes on the Toe Jam Hill fault, Seattle fault zone, Bainbridge Island, Washington

    USGS Publications Warehouse

    Nelson, A.R.; Johnson, S.Y.; Kelsey, H.M.; Wells, R.E.; Sherrod, B.L.; Pezzopane, S.K.; Bradley, L.A.; Koehler, R. D.; Bucknam, R.C.

    2003-01-01

    Five trenches across a Holocene fault scarp yield the first radiocarbon-measured earthquake recurrence intervals for a crustal fault in western Washington. The scarp, the first to be revealed by laser imagery, marks the Toe Jam Hill fault, a north-dipping backthrust to the Seattle fault. Folded and faulted strata, liquefaction features, and forest soil A horizons buried by hanging-wall-collapse colluvium record three, or possibly four, earthquakes between 2500 and 1000 yr ago. The most recent earthquake is probably the 1050-1020 cal. (calibrated) yr B.P. (A.D. 900-930) earthquake that raised marine terraces and triggered a tsunami in Puget Sound. Vertical deformation estimated from stratigraphic and surface offsets at trench sites suggests late Holocene earthquake magnitudes near M7, corresponding to surface ruptures >36 km long. Deformation features recording poorly understood latest Pleistocene earthquakes suggest that they were smaller than late Holocene earthquakes. Postglacial earthquake recurrence intervals based on 97 radiocarbon ages, most on detrital charcoal, range from ~12,000 yr to as little as a century or less; corresponding fault-slip rates are 0.2 mm/yr for the past 16,000 yr and 2 mm/yr for the past 2500 yr. Because the Toe Jam Hill fault is a backthrust to the Seattle fault, it may not have ruptured during every earthquake on the Seattle fault. But the earthquake history of the Toe Jam Hill fault is at least a partial proxy for the history of the rest of the Seattle fault zone.

  14. Education for Earthquake Disaster Prevention in the Tokyo Metropolitan Area

    NASA Astrophysics Data System (ADS)

    Oki, S.; Tsuji, H.; Koketsu, K.; Yazaki, Y.

    2008-12-01

    Japan frequently suffers from all types of disasters such as earthquakes, typhoons, floods, volcanic eruptions, and landslides. In the first half of this year, we already had three big earthquakes and heavy rainfall, which killed more than 30 people. This is not just the case for Japan: Asia is the most disaster-afflicted region in the world, accounting for about 90% of all those affected by disasters and more than 50% of the total fatalities and economic losses. One of the most essential ways to reduce the damage of natural disasters is to educate the general public so that they understand what is going on during those disasters. This leads individuals to make sound decisions on what to do to prevent or reduce the damage. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) therefore called for proposals to choose several model areas in which to bring scientific education to local elementary schools, and ERI, the Earthquake Research Institute, was selected to develop education for earthquake disaster prevention in the Tokyo metropolitan area. The tectonic setting of this area is very complicated; the Pacific and Philippine Sea plates subduct beneath the North American and Eurasian plates. The subduction of the Philippine Sea plate causes mega-thrust earthquakes such as the 1703 Genroku earthquake (M 8.0) and the 1923 Kanto earthquake (M 7.9), which caused 105,000 fatalities. A magnitude 7 or greater earthquake beneath this area has recently been evaluated to occur with a probability of 70% in 30 years. This is of immediate concern because of the potential for devastating loss of life and property: the Tokyo urban region now has a population of 42 million and is the center of approximately 40% of the nation's activities, so such an event could have great global economic repercussions. To better understand earthquakes in this region, the "Special Project for Earthquake Disaster Mitigation in Tokyo Metropolitan Area" has been conducted mainly by ERI. It is a 4-year

  15. Mega-thrust and Intra-slab Earthquakes Beneath Tokyo Metropolitan Area

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Sato, H.; Koketsu, K.; Hagiwara, H.; Wu, F.; Okaya, D.; Iwasaki, T.; Kasahara, K.

    2006-12-01

    In central Japan, the Philippine Sea plate (PSP) subducts beneath the Tokyo metropolitan area, the Kanto region, where it causes mega-thrust earthquakes, such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9), which had 105,000 fatalities. The vertical proximity of this downgoing lithospheric plate is of concern because the greater Tokyo urban region has a population of 42 million and is the center of approximately 40% of the nation's economic activities. A M7+ earthquake in this region at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The M7+ earthquake is evaluated to occur with a probability of 70% in 30 years by the Earthquake Research Committee of Japan. In 2002, a consortium of universities and government agencies in Japan started the Special Project for Earthquake Disaster Mitigation in Urban Areas, a project to improve the information needed for seismic hazard analyses of the largest urban centers. Assessment in Kanto of the seismic hazard produced by PSP mega-thrust earthquakes requires identification of all significant faults and possible earthquake scenarios and rupture behavior, regional characterization of PSP geometry and the physical properties of the overlying Honshu arc (e.g., seismic wave velocities, densities, attenuation), and local near-surface seismic site effects. Our study addresses (1) improved regional characterization of the PSP geometry based on new deep seismic reflection profiles (Sato et al., 2005), reprocessed offshore profiles (Kimura et al., 2005), and a dense seismic array in the Boso peninsula (Hagiwara et al., 2006), and (2) identification of asperities of the mega-thrust at the top of the PSP. We qualitatively examine the relationship between seismic reflections and asperities inferred from reflection physical properties. We also discuss the relation between deformation of the PSP and intra-slab M7+ earthquakes: the

  16. Great earthquakes of variable magnitude at the Cascadia subduction zone

    USGS Publications Warehouse

    Nelson, A.R.; Kelsey, H.M.; Witter, R.C.

    2006-01-01

    Comparison of histories of great earthquakes and accompanying tsunamis at eight coastal sites suggests plate-boundary ruptures of varying length, implying great earthquakes of variable magnitude at the Cascadia subduction zone. Inference of rupture length relies on the degree of overlap of radiocarbon age ranges for earthquakes and tsunamis, and on relative amounts of coseismic subsidence and heights of tsunamis. Written records of a tsunami in Japan provide the most conclusive evidence for rupture of much of the plate boundary during the earthquake of 26 January 1700. Cascadia stratigraphic evidence dating from about 1600 cal yr B.P., similar to that for the 1700 earthquake, implies a similarly long rupture with substantial subsidence and a high tsunami. Correlations are consistent with other long ruptures about 1350 cal yr B.P., 2500 cal yr B.P., 3400 cal yr B.P., 3800 cal yr B.P., 4400 cal yr B.P., and 4900 cal yr B.P. A rupture about 700-1100 cal yr B.P. was limited to the northern and central parts of the subduction zone, and a northern rupture about 2900 cal yr B.P. may have been similarly limited. Times of probable short ruptures in southern Cascadia include about 1100 cal yr B.P., 1700 cal yr B.P., 3200 cal yr B.P., 4200 cal yr B.P., 4600 cal yr B.P., and 4700 cal yr B.P. Rupture patterns suggest that the plate boundary in northern Cascadia usually breaks in long ruptures during the greatest earthquakes. Ruptures in southernmost Cascadia vary in length and recurrence intervals more than ruptures in northern Cascadia.

  17. Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling

    NASA Astrophysics Data System (ADS)

    Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea

    2017-12-01

    The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km), and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, taking into account the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties for the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as well as possible. Seismicity patterns of repeated earthquakes in space, time, and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes of the Vrancea subcrustal seismogenic source.

  18. Impact of earthquakes on sex ratio at birth: Eastern Marmara earthquakes

    PubMed Central

    Doğer, Emek; Çakıroğlu, Yiğit; Köpük, Şule Yıldırım; Ceylan, Yasin; Şimşek, Hayal Uzelli; Çalışkan, Eray

    2013-01-01

    Objective: Previous reports suggest that maternal exposure to acute stress related to earthquakes affects the sex ratio at birth. Our aim was to examine the change in the sex ratio at birth after the Eastern Marmara earthquake disasters. Material and Methods: This study was performed using the official birth statistics from January 1997 to December 2002 – before and after 17 August 1999, the date of the Golcuk Earthquake – supplied by the Turkish Statistical Institute. The secondary sex ratio was expressed as the male proportion at birth, and the ratios in both affected and unaffected areas were calculated and compared on a monthly basis using the Chi-square test. Results: We observed significant decreases in the secondary sex ratio in the 4th and 8th months following the earthquake in the affected region compared to the unaffected region (p = 0.001 and p = 0.024). In the earthquake region, the decrease observed in the secondary sex ratio during the 8th month after the earthquake was specific to the period after the earthquake. Conclusion: Our study indicated a significant reduction in the secondary sex ratio after an earthquake. Given these findings, events that cause sudden intense stress, such as earthquakes, can have an effect on the sex ratio at birth. PMID:24592082

  19. Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikitake, T.

    1979-08-07

    The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area to be an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister will be empowered to take various actions which cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurrence. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster in June 1979. An extremely dense observation network has been constructed over the area.

  20. Earthquake Hazard Assessment: Basics of Evaluation

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2016-04-01

    Seismic hazard assessment (SHA) is not an easy task: it implies a delicate application of statistics to data of limited size and differing accuracy. Earthquakes follow the Unified Scaling Law, which generalizes the Gutenberg-Richter relationship by taking into account the naturally fractal distribution of their sources. Moreover, earthquakes, including the great and mega events, are clustered in time, and their sequences have irregular recurrence intervals. Furthermore, earthquake-related observations are limited to the most recent decades (or centuries in just a few rare cases). Evidently, all this complicates reliable assessment of seismic hazard and the associated risks. Making SHA claims, either termless or time-dependent (so-called t-DASH), quantitatively probabilistic within the most popular objectivist viewpoint on probability requires a long series of "yes/no" trials, which cannot be obtained without extended rigorous testing of the method's predictions against real observations. Therefore, we reiterate the necessity and possibility of applying the modified tools of Earthquake Prediction Strategies, in particular the Error Diagram, introduced by G.M. Molchan in the early 1990s for the evaluation of SHA, and the Seismic Roulette null hypothesis as a measure of the alerted space. The set of errors, i.e. the rates of failure and of the alerted space-time volume, compared to those obtained in the same number of random-guess trials, permits evaluating the effectiveness of an SHA method and determining the optimal choice of its parameters with regard to specified cost-benefit functions. This and other information obtained in such testing supplies us with a realistic estimate of confidence in SHA results and related recommendations on the level of risk for decision making with regard to engineering design, insurance, and emergency management. These basics of SHA evaluation are exemplified briefly with a few examples, which are analysed in more detail in a poster of
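
    The Error Diagram mentioned above reduces a forecasting strategy to two numbers: the fraction of space-time alerted and the fraction of target events missed. A minimal sketch, assuming a discretized alarm series and purely synthetic data (none of it from this study), might look as follows.

        # Sketch: one point (tau, nu) of a Molchan error diagram on synthetic data.
        import numpy as np

        def molchan_point(alarm, events):
            """alarm, events: boolean arrays on the same space-time grid."""
            tau = alarm.mean()                           # alerted fraction
            hits = np.logical_and(alarm, events).sum()   # events inside alarms
            nu = 1.0 - hits / events.sum()               # fraction of events missed
            return tau, nu

        rng = np.random.default_rng(0)
        alarm = rng.random(1000) < 0.2    # a strategy alerting ~20% of space-time
        events = rng.random(1000) < 0.01  # rare target events, independent here
        tau, nu = molchan_point(alarm, events)
        # Random guessing lies near the diagonal nu = 1 - tau ("Seismic Roulette");
        # a skillful method plots significantly below that line.
        print(f"tau = {tau:.2f}, nu = {nu:.2f}")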

  1. Jumping over the hurdles to effectively communicate the Operational Earthquake Forecast

    NASA Astrophysics Data System (ADS)

    McBride, S.; Wein, A. M.; Becker, J.; Potter, S.; Tilley, E. N.; Gerstenberger, M.; Orchiston, C.; Johnston, D. M.

    2016-12-01

    Probabilities, uncertainties, statistics, science, and threats are notoriously difficult topics to communicate to members of the public. The Operational Earthquake Forecast (OEF) is designed to provide an understanding of the potential numbers and sizes of earthquakes, and communicating it must address all of those challenges. Furthermore, there are other barriers to effective communication of the OEF. These barriers include the erosion of trust in scientists and experts, oversaturation of messages, fear and threat messages magnified by media sensationalisation, fractured media environments, and online echo chambers. Given the complexities and challenges of the OEF, how can we overcome barriers to effective communication? Crisis and risk communication research can inform the development of communication strategies to increase public understanding and use of the OEF, when applied to the opportunities and challenges of practice. We explore ongoing research regarding how the OEF can be communicated more effectively, including the channels, tools, and message composition needed to engage with a variety of publics. We also draw on past experience and a study of OEF communication during the Canterbury Earthquake Sequence (CES). We demonstrate how research and experience have guided OEF communications during subsequent events in New Zealand, including the M5.7 Valentine's Day earthquake in 2016 (CES), the M6.0 Wilberforce earthquake in 2015, and the Cook Strait/Lake Grassmere earthquakes in 2013. We identify the successes of, and lessons learned from, practical communication of the OEF. Finally, we present future projects and directions in the communication of the OEF, informed by both practice and research.

  2. Risk Management in Earthquakes, Financial Markets, and the Game of 21: The role of Forecasting, Nowcasting, and Timecasting

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.

    2017-12-01

    Earthquakes and financial markets share surprising similarities [1]. For example, the well-known VIX index, which by definition is the implied volatility of the Standard and Poor's 500 index, behaves in a quantitatively very similar fashion to time series of earthquake rates. Both display sudden increases at the time of an earthquake or an announcement of the US Federal Reserve Open Market Committee [2], and both decay as an inverse power of time. Both can be regarded as examples of first-order phase transitions [1], and display fractal and scaling behavior associated with critical transitions, such as power-law magnitude-frequency relations in the tails of the distributions. Early quantitative investors such as Edward Thorpe and John Kelly invented novel methods to mitigate or manage risk in games of chance such as blackjack, and in markets using hedging techniques that are still in widespread use today. The basic idea is the concept of proportional betting, where the gambler/investor bets a fraction of the bankroll whose size is determined by the "edge", or inside knowledge of the real (and changing) odds. For earthquake systems, the "edge" over nature can only exist in the form of a forecast (probability of a future earthquake), a nowcast (knowledge of the current state of an earthquake fault system), or a timecast (statistical estimate of the waiting time until the next major earthquake). In our terminology, a forecast is a model, while the nowcast and timecast are analysis methods using observed data only (no model). We also focus on defined geographic areas rather than on faults, thereby eliminating the need to consider specific fault data or fault interactions. Data used are online earthquake catalogs, generally since 1980. Forecasts are based on the Weibull (1952) probability law, and only a handful of parameters are needed. These methods allow the development of real-time hazard and risk estimation using cloud-based technologies, and permit the application of
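
    Since the abstract names the Weibull probability law as the basis of its forecasts, a minimal sketch of a Weibull conditional waiting-time calculation follows; the scale and shape parameters and the times are hypothetical placeholders, not fitted values from this work.

        # Sketch: Weibull conditional probability of the next large event.
        from math import exp

        def weibull_conditional(t, dt, tau, beta):
            """P(next event within dt | t elapsed), Weibull interevent times."""
            surv = lambda x: exp(-((x / tau) ** beta))   # survival function
            return 1.0 - surv(t + dt) / surv(t)

        # Placeholder parameters: scale 40 yr, shape 1.4; 25 yr elapsed, 5-yr window.
        print(weibull_conditional(25.0, 5.0, 40.0, 1.4))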

  3. The northeastern Ohio earthquake of 31 January 1986: was it induced? ( USA).

    USGS Publications Warehouse

    Nicholson, C.; Roeloffs, E.; Wesson, R.L.

    1988-01-01

    On 31 January 1986, at 11:46 EST, an earthquake of mb = 5.0 occurred about 40 km E of Cleveland, Ohio, and about 17 km S of the Perry Nuclear Power Plant. The earthquake was felt over a broad area, including 11 states, the District of Columbia, and parts of Ontario, Canada, caused intensity VI-VII at distances of 15 km, and generated relatively high accelerations (0.18 g) of short duration at the Perry plant. Thirteen aftershocks were detected as of 15 April, with 6 occurring within the first 8 days. Three deep waste disposal wells are currently operating within 15 km of the epicentral region and have been responsible for the injection of nearly 1.2 billion liters of fluid at pressures reaching 112 bars above ambient at a nominal depth of 1.8 km. The relative distance to the main shock epicenter and its aftershocks (about 12 km), the lack of large numbers of small earthquakes typical of many induced sequences, the history of small to moderate earthquakes in the region prior to the initiation of injection, and the attenuation of the pressure field with distance from the injection wells, however, all argue for a 'natural' origin for the 1986 earthquakes. In contrast, the proximity to failure conditions at the bottom of the well and the probable spatial association of at least one earthquake suggest that triggering by well activities cannot be precluded.- from Authors

  4. Discrimination between induced, triggered, and natural earthquakes close to hydrocarbon reservoirs: A probabilistic approach based on the modeling of depletion-induced stress changes and seismological source parameters

    NASA Astrophysics Data System (ADS)

    Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank

    2015-04-01

    Earthquakes occurring close to hydrocarbon fields under production are often critically scrutinized as possibly being induced or triggered. However, clear and testable rules to discriminate between the different kinds of events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases in question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are accounted for by integration over probability density functions. The probability that events have been human-triggered or induced is derived from the modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei-of-strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in the case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is an advantage for the method. If the rupture plane has been estimated, discrimination between induced and merely triggered events is theoretically possible if the probability functions are convolved with a rupture fault filter. We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6
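
    The core of the reported discrimination is a comparison of modeled induced and natural occurrence rates, integrated over the event's location uncertainty. A minimal one-dimensional sketch with assumed toy rates (not the paper's boundary element or rate-and-state model) is given below.

        # Sketch: P(induced) from induced vs. natural rate densities (1-D toy).
        import numpy as np

        def prob_induced(rate_induced, rate_natural, loc_pdf):
            """Integrate R_ind / (R_ind + R_nat) over the location PDF."""
            loc_pdf = loc_pdf / loc_pdf.sum()            # normalize the PDF
            frac = rate_induced / (rate_induced + rate_natural)
            return float((loc_pdf * frac).sum())

        x = np.linspace(-10, 10, 201)                    # km along a profile
        rate_ind = 0.5 * np.exp(-x**2 / 8.0)             # elevated near the field
        rate_nat = np.full_like(x, 0.05)                 # uniform tectonic background
        loc_pdf = np.exp(-(x - 2.0)**2 / 2.0)            # event located near x = 2 km
        print(prob_induced(rate_ind, rate_nat, loc_pdf))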

  5. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  6. Surface rupture of the 1933 M 7.5 Diexi earthquake in eastern Tibet: implications for seismogenic tectonics

    NASA Astrophysics Data System (ADS)

    Ren, Junjie; Xu, Xiwei; Zhang, Shimin; Yeats, Robert S.; Chen, Jiawei; Zhu, Ailan; Liu, Shao

    2018-03-01

    The 1933 M 7.5 Diexi earthquake was another catastrophic event in eastern Tibet, with the loss of over 10,000 lives, comparable to the 2008 Mw 7.9 Wenchuan earthquake. Because its surface rupture was unknown, the seismogenic tectonics of the 1933 earthquake has remained controversial. We collected unpublished reports, literature, and old photos associated with the 1933 earthquake and conducted field investigations based on high-resolution Google Earth imagery. Combined with palaeoseismological analysis, radiocarbon dating, and relocated earthquakes, our results demonstrate that the source of the 1933 earthquake is the northwest-trending Songpinggou fault. The quake produced a >30-km-long normal-faulting surface rupture with coseismic offsets of 0.9-1.7 m. Its moment magnitude (Mw) is ~6.8. The Songpinggou fault has an average vertical slip rate of ~0.25 mm/yr and a recurrence interval of ~6700 yr for large earthquakes. The normal-faulting surface rupture of this quake probably represents reactivation of the Mesozoic Jiaochang tectonic belt during gravitational adjustment of eastern Tibet. Besides the major boundary faults, minor structures within continental blocks may play a role in strain partitioning of eastern Tibet and have the potential to produce large earthquakes. This study contributes to a full understanding of the seismotectonics of large earthquakes and strain partitioning in eastern Tibet.

  7. The Earthquake Closet: Making Early-Warning Useful

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Trendafiloski, G.

    2009-12-01

    Early warning of approaching strong shaking that could have fatal consequences is a research field that has made great progress. It makes it possible to reduce the impact on dangerous processes in critical facilities and on trains. However, its potential to save lives has a serious Achilles heel: the time for getting to safety is only five to 10 seconds in many cities. Occupants of upper floors cannot get out of their buildings, and narrow streets are not a safe place in strong earthquakes for people who might be able to exit. Thus, only about 10% of a city's population can benefit from early warnings, unless they have access to their own earthquake closet that is strong enough to remain intact in a collapsing building. Such an Earthquake Protection Unit (EPU) may be installed in the structurally strongest part of an existing apartment at low cost. In new constructions, we propose that an earthquake shelter be constructed for each floor, large enough to accommodate all occupants of that floor. These EPUs should be constructed on top of each other, forming a strong tower next to the elevator shaft and the staircase at the center of the building. If an EPU with structural properties equivalent to an E-class building is placed into a B-class building in South America, for example, we estimate that the chances of surviving shaking of intensity VII are about 30,000 times better inside the closet. The probability of escaping injury inside, compared to outside, we estimate as about 1,500 times better. Educating the population regarding the usefulness of EPUs will be essential, and P waves can be used as the early warning signal. The owner of an earthquake closet can easily be motivated to take protective measures when these involve simply stepping into the closet, rather than attempting to exit the building by running down many flights of stairs. Our intention is to start a discussion on how best to construct EPUs and how to introduce legislation that will

  8. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake

    USGS Publications Warehouse

    Hayes, Gavin P.; Herman, Matthew W.; Barnhart, William D.; Furlong, Kevin P.; Riquelme, Sebástian; Benz, Harley M.; Bergman, Eric; Barrientos, Sergio; Earle, Paul S.; Samsonov, Sergey

    2014-01-01

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile which had not ruptured in a megathrust earthquake since a M ~8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March–April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  9. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake.

    PubMed

    Hayes, Gavin P; Herman, Matthew W; Barnhart, William D; Furlong, Kevin P; Riquelme, Sebástian; Benz, Harley M; Bergman, Eric; Barrientos, Sergio; Earle, Paul S; Samsonov, Sergey

    2014-08-21

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which had not ruptured in a megathrust earthquake since a M ∼8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March-April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  10. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, Susan E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Later, some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: a moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  11. Do aftershock probabilities decay with time?

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    So, do aftershock probabilities decay with time? Consider a thought experiment in which we are at the time of the mainshock and ask how many aftershocks will occur a day, week, month, year, or even a century from now. First we must decide how large a window to use around each point in time. Let's assume that, as we go further into the future, we are asking a less precise question. Perhaps a day from now means 1 day ± 10% of a day, a week from now means 1 week ± 10% of a week, and so on. If we ignore c because it is a small fraction of a day (e.g., Reasenberg and Jones, 1989, hereafter RJ89), and set p = 1 because it is usually close to 1 (its value in the original Omori law), then the rate of earthquakes (K/t) decays as 1/t. If the length of the windows being considered increases proportionally to t, then the number of earthquakes at any time from now is the same, because the rate decrease is canceled by the increase in window duration. Under these conditions we should never think "It's a bit late for this to be an aftershock."
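
    The window arithmetic in this thought experiment is easy to verify numerically: for a rate K/t, the expected count in the window [t(1 - f), t(1 + f)] is K ln((1 + f)/(1 - f)), independent of t. A short check with illustrative values of K and f:

        # Numeric check of the windowed-count argument above (illustrative values).
        import math

        K = 100.0     # Omori productivity (aftershocks per day at t = 1 day)
        frac = 0.10   # window half-width as a fraction of the elapsed time t

        for t in (1.0, 7.0, 30.0, 365.0, 36525.0):   # day, week, month, year, century
            # Integral of K/t' over [t(1-frac), t(1+frac)]:
            n = K * (math.log(t * (1 + frac)) - math.log(t * (1 - frac)))
            print(f"t = {t:8.0f} days: expected count = {n:.3f}")  # same every time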

  12. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  13. Chapter F. The Loma Prieta, California, Earthquake of October 17, 1989 - Tectonic Processes and Models

    USGS Publications Warehouse

    Simpson, Robert W.

    1994-01-01

    If there is a single theme that unifies the diverse papers in this chapter, it is the attempt to understand the role of the Loma Prieta earthquake in the context of the earthquake 'machine' in northern California: as the latest event in a long history of shocks in the San Francisco Bay region, as an incremental contributor to the regional deformation pattern, and as a possible harbinger of future large earthquakes. One of the surprises generated by the earthquake was the rather large amount of uplift that occurred as a result of the reverse component of slip on the southwest-dipping fault plane. Preearthquake conventional wisdom had been that large earthquakes in the region would probably be caused by horizontal, right-lateral, strike-slip motion on vertical fault planes. In retrospect, the high topography of the Santa Cruz Mountains and the elevated marine terraces along the coast should have provided some clues. With the observed ocean retreat and the obvious uplift of the coast near Santa Cruz that accompanied the earthquake, Mother Nature was finally caught in the act. Several investigators quickly saw the connection between the earthquake uplift and the long-term evolution of the Santa Cruz Mountains and realized that important insights were to be gained by attempting to quantify the process of crustal deformation in terms of Loma Prieta-type increments of northward transport and fault-normal shortening.

  14. Earthquake focal mechanism forecasting in Italy for PSHA purposes

    NASA Astrophysics Data System (ADS)

    Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola

    2018-01-01

    In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future earthquakes. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming to reduce epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity rates, the forecast focal mechanism of the next large earthquakes as input data. The data set used for this purpose consists of focal mechanisms taken from the latest stress map release for Italy, containing 392 well-constrained solutions for events from 1908 to 2015 with Mw ≥ 4 and depths from 0 down to 40 km. The data set includes polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015 it takes into account only the Centroid Moment Tensor (CMT)-like earthquake focal solutions, for data homogeneity. The forecasting model is rooted in the Total Weighted Moment Tensor concept, which weights the information from past focal mechanisms evenly distributed in space according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the model estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large earthquakes, the expected moment tensor, and the related maximum horizontal stress orientation. These results will be available for the new PSHA model for Italy under development. Finally, to evaluate the reliability of the forecasts, we test them with an independent data set consisting of some of the strongest earthquakes (Mw ≥ 3.9) that occurred during 2016 in different Italian tectonic provinces.
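
    As a rough illustration of the weighted-counting idea behind such a model (the kernel, weights, and data below are assumptions for the sketch, not the paper's exact scheme), the style probabilities for one grid cell might be computed as follows.

        # Sketch: distance- and magnitude-weighted style probabilities for one cell.
        import numpy as np

        def style_probabilities(cell_xy, event_xy, mags, styles, h=30.0):
            """h: assumed Gaussian smoothing distance in km."""
            d = np.linalg.norm(event_xy - np.asarray(cell_xy, float), axis=1)
            w = np.exp(-d**2 / (2 * h**2)) * 10 ** (1.5 * mags)  # moment-like weight
            totals = {s: w[styles == s].sum() for s in ("N", "R", "SS")}
            norm = sum(totals.values())
            return {s: v / norm for s, v in totals.items()}

        event_xy = np.array([[10, 5], [12, 7], [40, 40], [8, 2]], float)  # km
        mags = np.array([5.0, 4.2, 6.1, 4.5])
        styles = np.array(["N", "N", "R", "SS"])   # normal / reverse / strike-slip
        print(style_probabilities((10, 5), event_xy, mags, styles))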

  15. Unusual Childhood Waking as a Possible Precursor of the 1995 Kobe Earthquake

    PubMed Central

    Ikeya, Motoji; Whitehead, Neil E.

    2013-01-01

    Simple Summary: The paper investigates whether young children may waken before earthquakes through a cause other than foreshocks. It concludes that there is statistical evidence for this, and that the mechanism best supported is anxiety produced by Ultra Low Frequency (ULF) electromagnetic waves. Abstract: Nearly 1,100 young students living in Japan at distances of up to 500 km from the 1995 Kobe M7 earthquake were interviewed. A statistically significant abnormal rate of early wakening before the earthquake was found, decreasing exponentially with distance with a half value approaching 100 km, but decreasing much more slowly than would be expected from a point source such as an epicentre; it instead originated from an extended area more than 100 km in diameter. Because an improbably high amount of variance is explained, this effect is unlikely to be simply psychological and must reflect another mechanism—perhaps Ultra-Low Frequency (ULF) electromagnetic waves creating anxiety—but probably not 222Rn excess. Other work reviewed suggests these conclusions may be valid for animals in general, not just children, but would be very difficult to apply to practical earthquake prediction. PMID:26487316

  16. Probabilistic Seismic Hazard Assessment for Himalayan-Tibetan Region from Historical and Instrumental Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui

    2018-02-01

    The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating incomplete historical earthquake records along with instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records extending back more than 1000 years and an updated, homogenized, and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude Mmax, are estimated using the maximum likelihood algorithm, allowing for the incompleteness of the catalog. To compute the hazard values, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for the epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years, assuming bedrock conditions. The resulting PGA and SA maps show significant spatio-temporal variation in the hazard values. In general, hazard values are found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of the historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
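
    The maximum likelihood b-value estimate referred to above is conventionally the Aki (1965) estimator. A minimal sketch on a synthetic catalog (not the Himalayan-Tibetan data set):

        # Sketch: Aki (1965) maximum-likelihood b-value on a synthetic catalog.
        import numpy as np

        def b_value_mle(mags, m_c, dm=0.0):
            """b = log10(e) / (mean(M) - (Mc - dm/2)); use dm > 0 for binned magnitudes."""
            m = mags[mags >= m_c]
            return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

        rng = np.random.default_rng(1)
        true_b = 1.0   # Gutenberg-Richter slope used to generate the catalog
        # Under Gutenberg-Richter, M - Mc is exponential with scale log10(e)/b:
        mags = 4.0 + rng.exponential(scale=np.log10(np.e) / true_b, size=5000)
        print(b_value_mle(mags, m_c=4.0))   # should recover roughly 1.0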

  17. Injection-induced earthquakes

    USGS Publications Warehouse

    Ellsworth, William L.

    2013-01-01

    Earthquakes in unusual locations have become an important topic of discussion in both North America and Europe, owing to the concern that industrial activity could cause damaging earthquakes. It has long been understood that earthquakes can be induced by impoundment of reservoirs, surface and underground mining, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground formations. Injection-induced earthquakes have, in particular, become a focus of discussion as the application of hydraulic fracturing to tight shale formations is enabling the production of oil and gas from previously unproductive formations. Earthquakes can be induced as part of the process to stimulate the production from tight shale formations, or by disposal of wastewater associated with stimulation and production. Here, I review recent seismic activity that may be associated with industrial activity, with a focus on the disposal of wastewater by injection in deep wells; assess the scientific understanding of induced earthquakes; and discuss the key scientific challenges to be met for assessing this hazard.

  18. Earthquakes; January-February 1982

    USGS Publications Warehouse

    Person, W.J.

    1982-01-01

    In the United States, a number of earthquakes occurred, but only minor damage was reported. Arkansas experienced a swarm of earthquakes beginning on January 12. Canada experienced one of its strongest earthquakes in a number of years on January 9; this earthquake caused slight damage in Maine. 

  19. New Field Observations About 19 August 1966 Varto earthquake, Eastern Turkey

    NASA Astrophysics Data System (ADS)

    Gurboga, S.

    2013-12-01

    Some destructive earthquakes, past and even recent, retain several mysteries: the magnitude, epicenter location, faulting type, or source fault of an earthquake may never have been determined. One of these mysterious events is the 19 August 1966 Varto earthquake in Turkey. The 19 August 1966 Varto earthquake (Ms = 6.8) was an extraordinary event located 40 km east of the junction between the NAFS and the EAFS, the two seismogenic systems and active structures shaping the tectonics of Turkey. This earthquake was sourced from the Varto fault zone, which is approximately 4 km wide and 43 km long. It consists of parallel to sub-parallel, closely spaced, north- and south-dipping faults with dips of up to 85°-88°. Although this event had a magnitude (Ms 6.8) big enough to create a surface rupture, no clear surface deformation had been detected. This created a controversial issue about the source fault and the mechanism of the earthquake. According to Wallace (1968), the type of faulting is right-lateral. On the other hand, McKenzie (1972) proposed right-lateral movement with a thrust component by using the focal mechanism solution. Recent work by Sançar et al. (2011) claimed that the faulting is pure right-lateral strike-slip and that there was no surface rupture during the earthquake. Furthermore, they suggested that the Varto segment of the Varto Fault Zone was most probably not broken in the 1966 earthquake. This study is focused purely on field geology and trenching surveys for the investigation of the 1966 Varto earthquake. Four fault segments have been mapped along the Varto fault zone: the Varto, Sazlica, Leylekdağ and Çayçati segments. Because of the thick volcanic cover in the area around Varto, surface rupture has only been detected by trenching. Two trenching surveys have been carried out along the Yayikli and Ağaçalti faults in the Varto fault zone. Consequently, detailed geological work in the field and the trenching surveys indicate that

  20. People's perspectives and expectations on preparedness against earthquakes: Tehran case study.

    PubMed

    Jahangiri, Katayoun; Izadkhah, Yasamin Ostovar; Montazeri, Ali; Hosseinip, Mahmood

    2010-06-01

    Public education is one of the most important elements of earthquake preparedness. The present study identifies methods and appropriate strategies for public awareness and education on preparedness for earthquakes, based on people's opinions in the city of Tehran. This was a cross-sectional study and a door-to-door survey of residents from 22 municipal districts in Tehran, the capital city of Iran. It involved a total of 1,211 individuals aged 15 and above. People were asked about different methods of public information and education, as well as the type of information needed for earthquake preparedness. "Enforcing the building contractors' compliance with the construction codes and regulations" was ranked as the first priority by 33.4% of the respondents. Over 70% of the participants (71.7%) regarded TV as the most appropriate means of media communication to prepare people for an earthquake. This was followed by radio, which was selected by 51.6% of respondents. Slightly over 95% of the respondents believed that there would soon be an earthquake in the country, and 80% reported that they obtained this information from "the general public". Seventy percent of the study population felt that news of an earthquake should be communicated through the media. However, over half (58%) of the participants believed that governmental officials and agencies are best qualified to disseminate information about the risk of an imminent earthquake. Just over half (50.8%) of the respondents argued that the authorities do not usually provide enough information to people about earthquakes and the probability of their occurrence. Besides seismologists, respondents thought astronauts (32%), fortunetellers (32.3%), religious figures (34%), meteorologists (23%), and paleontologists (2%) could correctly predict the occurrence of an earthquake. Furthermore, 88.6% listed aid centers, mosques, newspapers and TV as the most important sources of information during the aftermath of an earthquake

  1. Induced earthquake during the 2016 Kumamoto earthquake (Mw7.0): Importance of real-time shake monitoring for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2016-12-01

    The sequence of the 2016 Kumamoto earthquakes (Mw6.2 on April 14, Mw7.0 on April 16, and many aftershocks) caused devastating damage in Kumamoto and Oita prefectures, Japan. During the Mw7.0 event, just after the direct S waves passed central Oita, another M6-class event occurred there, more than 80 km away from the Mw7.0 event. The M6 event is interpreted as an induced earthquake, but it brought stronger shaking to central Oita than the Mw7.0 event did. We will discuss the induced earthquake from the viewpoint of Earthquake Early Warning. In terms of ground shaking such as PGA and PGV, the Mw7.0 event is much smaller at central Oita than the M6 induced earthquake (for example, about 1/8 in PGA at station OIT009), so it is easy to discriminate the two events. However, the PGD of the Mw7.0 event is larger than that of the induced earthquake, and it appears just before the occurrence of the induced earthquake. It is quite difficult to recognize the induced earthquake from displacement waveforms alone, because the displacement is strongly contaminated by that of the preceding Mw7.0 event. In many EEW methods (including the current JMA EEW system), magnitude is used for prediction of ground shaking through a Ground Motion Prediction Equation (GMPE), and the magnitude is often estimated from displacement. However, displacement magnitude is not necessarily the best quantity for predicting ground shaking such as PGA and PGV. In the case of the induced earthquake during the Kumamoto sequence, a displacement magnitude could not be estimated because of the strong contamination; indeed, the JMA EEW system could not recognize the induced earthquake. One of the important lessons we learned from eight years' operation of EEW is the issue of multiple simultaneous earthquakes, such as the aftershocks of the 2011 Mw9.0 Tohoku earthquake. Based on this lesson, we have proposed enhancement of real-time monitoring of ground shaking itself instead of rapid estimation of

  2. Future of Earthquake Early Warning: Quantifying Uncertainty and Making Fast Automated Decisions for Applications

    NASA Astrophysics Data System (ADS)

    Wu, Stephen

    Earthquake early warning (EEW) systems have been developing rapidly over the past decade. The Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, and this increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges to becoming practical, the availability of shorter-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention in activating mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theories from economics, to improve different aspects of EEW operation, including extending it to engineering applications. Existing EEW systems are often based on a deterministic approach. Often, they assume that only a single event occurs within a short period of time, which led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, based on an existing deterministic model, to extend the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake. To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that

  3. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila quake highlighted the significance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulated a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in the case of an earthquake is probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast the seismic risk on the short term, we translate aftershock probabilities into time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data, which allow spatial forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41,000 times higher than the long-term average; but the absolute value remains minor, at 0.04%. The final cost-benefit analysis adds value beyond a pure statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but do in certain districts afterwards. The ability to forecast the short-term seismic risk at any time, and with sufficient data anywhere, is the first step of personal decision-making and raising risk
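
    The cost-benefit step reduces to comparing expected avoided loss against the cost of acting. A minimal sketch with hypothetical monetary values follows (the study's actual thresholds and loss model are more detailed; every number here is a placeholder).

        # Sketch: evacuate-or-not via expected avoided loss vs. evacuation cost.
        def should_evacuate(p_death_24h, value_per_life, evac_cost_per_person):
            """True when expected avoided loss exceeds the cost of evacuating."""
            return p_death_24h * value_per_life > evac_cost_per_person

        # Hypothetical numbers: a 41,000-fold probability gain on a tiny baseline
        # still yields a small absolute probability, so the threshold matters.
        p_baseline = 1e-8                 # assumed long-term daily probability
        p_now = 41_000 * p_baseline       # short-term forecast after the mainshock
        print(should_evacuate(p_now, value_per_life=5e6, evac_cost_per_person=100.0))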

  4. Earthquakes, September-October 1986

    USGS Publications Warehouse

    Person, W.J.

    1987-01-01

    There was one great earthquake (8.0 and above) during this reporting period, in the South Pacific in the Kermadec Islands. There were no major earthquakes (7.0-7.9), but earthquake-related deaths were reported in Greece and in El Salvador. There were no destructive earthquakes in the United States.

  5. Liquefaction Hazard Maps for Three Earthquake Scenarios for the Communities of San Jose, Campbell, Cupertino, Los Altos, Los Gatos, Milpitas, Mountain View, Palo Alto, Santa Clara, Saratoga, and Sunnyvale, Northern Santa Clara County, California

    USGS Publications Warehouse

    Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.

    2008-01-01

    Maps showing the probability of surface manifestations of liquefaction in the northern Santa Clara Valley were prepared with liquefaction probability curves. The area includes the communities of San Jose, Campbell, Cupertino, Los Altos, Los Gatos, Milpitas, Mountain View, Palo Alto, Santa Clara, Saratoga, and Sunnyvale. The probability curves were based on complementary cumulative frequency distributions of the liquefaction potential index (LPI) for surficial geologic units in the study area. LPI values were computed from extensive cone penetration test soundings. Maps were developed for three earthquake scenarios: an M7.8 on the San Andreas Fault comparable to the 1906 event, an M6.7 on the Hayward Fault comparable to the 1868 event, and an M6.9 on the Calaveras Fault. Ground motions were estimated with the Boore and Atkinson (2008) attenuation relation. Liquefaction is predicted for all three events in young Holocene levee deposits along the major creeks. Liquefaction probabilities are highest for the M7.8 earthquake, ranging from 0.33 to 0.37 if a 1.5-m-deep water table is assumed, and 0.10 to 0.14 if a 5-m-deep water table is assumed. Liquefaction probabilities of the other surficial geologic units are less than 0.05. Probabilities for the scenario earthquakes are generally consistent with observations during historical earthquakes.
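
    The probability-curve construction described above amounts to reading off a complementary cumulative frequency of LPI at a threshold. A minimal sketch with synthetic LPI values (the LPI >= 5 threshold is a commonly used convention, assumed here rather than taken from the report):

        # Sketch: probability of surface liquefaction for one geologic map unit.
        import numpy as np

        def prob_surface_liquefaction(lpi_values, threshold=5.0):
            """Complementary cumulative frequency of LPI at the threshold."""
            return float((np.asarray(lpi_values) >= threshold).mean())

        # Hypothetical LPI values from CPT soundings in one unit, one scenario:
        lpi = [0.0, 1.2, 3.4, 5.1, 6.8, 7.7, 2.2, 0.4, 9.3, 4.9]
        print(prob_surface_liquefaction(lpi))   # -> 0.4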

  6. Post earthquake recovery in natural gas systems--1971 San Fernando Earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, W.T. Jr.

    1983-01-01

    In this paper a concise summary of the post-earthquake investigations for the 1971 San Fernando earthquake is presented. The effects of the earthquake on buildings and other above-ground structures are briefly discussed. Then the damage and subsequent repairs in the natural gas systems are reported.

  7. Earthquakes; July-August, 1978

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    Earthquake activity during this period was about normal. Deaths from earthquakes were reported from Greece and Guatemala. Three major earthquakes (magnitude 7.0-7.9) occurred in Taiwan, Chile, and Costa Rica. In the United States, the most significant earthquake was a magnitude 5.6 on August 13 in southern California. 

  8. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014 the Earthworm Based Earthquake Alarm Reporting (eBEAR) system has been operated and used to issue warnings to schools. In 2015 the system started to provide warnings to the public in Taiwan via television and cell phones. Online performance of the eBEAR system indicated that the average reporting times afforded by the system are approximately 15 and 28 s for inland and offshore earthquakes, respectively. On average, the eBEAR system can provide more warning time than the current EEW system (3.2 s and 5.5 s for inland and offshore earthquakes, respectively). However, offshore earthquakes were usually located poorly because only P-wave arrivals were used in the eBEAR system. Additionally, in the early stage of an earthquake early warning, only a few stations are available; the poor station coverage may explain why offshore earthquakes are difficult to locate accurately. In Geiger's inversion procedure for earthquake location, an initial hypocenter and origin time must be supplied to the location program. For the initial hypocenter, we defined test locations in the offshore area instead of using the average of the locations of triggered stations. We ran 20 instances of Geiger's method concurrently, each with a different pre-defined initial position. We assume that if the pre-defined initial position of an instance is close to the true earthquake location, its processing time during the iteration procedure of Geiger's method should be less than the others. The results show that using pre-defined trial hypocenters in the inversion procedure improves the location accuracy of offshore earthquakes. For an EEW system in particular, using only 3 or 5 stations in its initial stage to locate earthquakes may lead to poor results because of poor station coverage. In this study, the pre-defined trial locations provide a feasible way to improve the estimations of
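
    A simplified sketch of the multi-start strategy described above: run a Geiger-type linearized least-squares location from several pre-defined trial hypocenters and keep the solution that converges in the fewest iterations. A uniform-velocity half-space and a known origin time are simplifying assumptions here; the operational eBEAR system is more sophisticated, and all station and source coordinates below are synthetic.

    ```python
    import numpy as np

    V = 6.0  # assumed uniform P velocity, km/s

    def travel_time(src, stations):
        return np.linalg.norm(stations - src, axis=1) / V

    def geiger_locate(trial, stations, obs_tt, max_iter=50, tol=1e-4):
        """Geiger-style Gauss-Newton relocation; returns (solution, iterations)."""
        x = trial.astype(float).copy()
        for it in range(1, max_iter + 1):
            d = stations - x
            r = np.linalg.norm(d, axis=1)
            res = obs_tt - r / V                 # travel-time residuals
            G = -d / (V * r[:, None])            # partial derivatives dT/dx
            dx, *_ = np.linalg.lstsq(G, res, rcond=None)
            x += dx
            if np.linalg.norm(dx) < tol:
                return x, it
        return x, max_iter

    # Synthetic geometry: land stations to the west of an offshore source
    stations = np.array([[0., 0., 0.], [10., 40., 0.], [-15., 25., 0.], [5., -30., 0.]])
    true_src = np.array([80., 10., 15.])         # offshore, poor azimuthal coverage
    obs_tt = travel_time(true_src, stations)

    trials = [np.array([60., 0., 10.]), np.array([120., 50., 20.]), np.array([-50., 0., 10.])]
    best = min((geiger_locate(t, stations, obs_tt) for t in trials), key=lambda s: s[1])
    print("best solution:", best[0].round(1), "in", best[1], "iterations")
    ```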

  9. Megathrust earthquakes in Central Chile: What is next after the Maule 2010 earthquake?

    NASA Astrophysics Data System (ADS)

    Madariaga, R.

    2013-05-01

    The 27 February 2010 Maule earthquake occurred in a well-identified gap in the Chilean subduction zone. The event has now been studied in detail using far-field and near-field seismic and geodetic data; we will review the information gathered so far. The event broke a region that was much longer along strike than the gap left by the 1835 Concepcion earthquake, sometimes called the Darwin earthquake because Darwin was in the area when it occurred and made many observations. Recent studies of contemporary documents by Udias et al. indicate that the area broken by the Maule earthquake in 2010 had previously broken in a similar earthquake in 1751; several events in the magnitude 8 range also occurred in the area, principally in 1835 as already mentioned and, more recently, on 1 December 1928 to the north and on 21 May 1960 (1 1/2 days before the great Chilean earthquake of 1960). Currently the area of the 2010 earthquake and the region immediately to the north are undergoing a very large increase in seismicity, with numerous clusters of seismicity that move along the plate interface. Examination of the seismicity of Chile in the 18th and 19th centuries shows that the region immediately to the north of the 2010 earthquake broke in a very large megathrust event in July 1730. This is the largest known earthquake in central Chile. The region where this event occurred has broken on many occasions with M 8 range earthquakes, in 1822, 1880, 1906, 1971 and 1985. Is it preparing for a new very large megathrust event? The 1906 earthquake of Mw 8.3 filled the central part of the gap but has broken again on several occasions, in 1971, 1973 and 1985. The main question is whether the 1906 earthquake relieved enough stress from the 1730 rupture zone. Geodetic data show that most of the region that broke in 1730 is currently almost fully locked, from the northern end of the Maule earthquake at 34.5°S to 30°S, near the southern end of the Mw 8.5 Atacama earthquake of 11

  10. Evidence for New Madrid earthquakes in A.D. 300 and 2350 B.C

    USGS Publications Warehouse

    Tuttle, M.P.; Schweig, E. S.; Campbell, J.; Thomas, P.M.; Sims, J.D.; Lafferty, R. H.

    2005-01-01

    Six episodes of earthquake-induced liquefaction are associated with soil horizons containing artifacts of the Late Archaic (3000-500 B.C.) and Early to Middle Woodland (500 B.C.-A.D. 400) cultural periods at the Burkett archaeological site in the northern part of the New Madrid seismic zone, where little information about prehistoric earthquakes has been available. Radiocarbon dating of organic material and analysis of artifacts are used to estimate the ages of the liquefaction features and times of the causative earthquakes. The most recent episode of liquefaction occurred after A.D. 1670, produced small sand dikes, and is probably related to the 1895 Charleston, Missouri earthquake. The preceding episode struck the area in A.D. 300 ± 200 years and generated a sand blow that contains Late Woodland artifacts and buries an Early to Middle Woodland cultural horizon. Four older episodes of liquefaction occurred in 2350 B.C. ± 200 years and may have been produced by a sequence of closely timed earthquakes. The four earlier episodes produced graben structures, sand dikes, and associated sand blows on which a cultural mound was constructed. The Burkett liquefaction features that formed about 2350 B.C. and A.D. 300 are relatively large and similar in age to other liquefaction features in northeastern Arkansas and southeastern Missouri, respectively. If the prehistoric features at the Burkett site and those of similar age elsewhere in the region are the result of the same earthquakes, then this suggests that they were similar in size to the three largest (M 7-8) 1811-1812 New Madrid earthquakes. A New Madrid-type earthquake in A.D. 300 ± 200 years would support an average recurrence time of 500 years. Although this study extends the earthquake chronology back to 2500 B.C., it is uncertain that the record of New Madrid events is complete for the period between 2350 B.C. and A.D. 300. As demonstrated by this study, information about other prehistoric earthquakes may be

  11. Depth dependence of earthquake frequency-magnitude distributions in California: Implications for rupture initiation

    USGS Publications Warehouse

    Mori, J.; Abercrombie, R.E.

    1997-01-01

    Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more smaller earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
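
    The probability argument sketched above follows directly from the Gutenberg-Richter relation: the chance that a small initiation grows to M ≥ 5.5 scales as 10^(-b(5.5-2.0)), so the ratio between depth ranges depends only on the difference in b. The b values below are illustrative assumptions chosen to reproduce the quoted factor of about 18, not the paper's fitted values.

    ```python
    # Relative probability that an M2 rupture initiation grows to M >= 5.5
    # under Gutenberg-Richter scaling, for two assumed b values.

    def growth_probability(b: float, m_small: float = 2.0, m_large: float = 5.5) -> float:
        """Relative probability that an initiation at m_small reaches m_large."""
        return 10.0 ** (-b * (m_large - m_small))

    b_shallow, b_deep = 1.00, 0.64   # assumed: b decreases with depth
    ratio = growth_probability(b_deep) / growth_probability(b_shallow)
    print(f"deep initiation is ~{ratio:.0f}x more likely to grow to M5.5+")  # ~18x
    ```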

  12. Hovsgol earthquake 5 December 2014, M W = 4.9: seismic and acoustic effects

    NASA Astrophysics Data System (ADS)

    Dobrynina, Anna A.; Sankov, Vladimir A.; Tcydypova, Larisa R.; German, Victor I.; Chechelnitsky, Vladimir V.; Ulzibat, Munkhuu

    2018-03-01

    A moderate shallow earthquake occurred on 5 December 2014 (M W = 4.9) in the north of Lake Hovsgol (northern Mongolia). An infrasonic signal with a duration of 140 s was recorded for this earthquake by the "Tory" infrasound array (Institute of Solar-Terrestrial Physics of the Siberian Branch of the Russian Academy of Science, Russia). Source parameters of the earthquake (seismic moment, geometrical size, displacement amplitudes in the focus) were determined using spectral analysis of direct body P and S waves. The spectral analysis of seismograms and amplitude variations of the surface waves allows us to determine the effect of the propagation of the rupture in the earthquake focus, the azimuth of the rupture propagation direction, and the velocity of displacement in the earthquake focus. The results of modelling of the surface displacements caused by the Hovsgol earthquake and the high effective propagation velocity of the infrasound signal (about 625 m/s) indicate that its occurrence is not caused by the downward movement of the Earth's surface in the epicentral region but by the effect of a secondary source. The position of the secondary source of the infrasound signal is located on the northern slopes of the Khamar-Daban ridge, according to the data on the azimuth and arrival time of the acoustic wave at the Tory station. The interaction of surface waves with the regional topography is proposed as the most probable mechanism of formation of the infrasound signal.

  13. Seismicity associated with the Sumatra-Andaman Islands earthquake of 26 December 2004

    USGS Publications Warehouse

    Dewey, J.W.; Choy, G.; Presgrave, B.; Sipkin, S.; Tarr, A.C.; Benz, H.; Earle, P.; Wald, D.

    2007-01-01

    The U.S. Geological Survey/National Earthquake Information Center (USGS/NEIC) had computed origins for 5000 earthquakes in the Sumatra-Andaman Islands region in the first 36 weeks after the Sumatra-Andaman Islands mainshock of 26 December 2004. The cataloging of earthquakes of mb (USGS) 5.1 and larger is essentially complete for the time period except for the first half-day following the 26 December mainshock, a period of about two hours following the Nias earthquake of 28 March 2005, and occasionally during the Andaman Sea swarm of 26-30 January 2005. Moderate and larger (mb ≥ 5.5) aftershocks are absent from most of the deep interplate thrust faults of the segments of the Sumatra-Andaman Islands subduction zone on which the 26 December mainshock occurred, which probably reflects nearly complete release of elastic strain on the seismogenic interplate-thrust during the mainshock. An exceptional thrust-fault source offshore of Banda Aceh may represent a segment of the interplate thrust that was bypassed during the mainshock. The 26 December mainshock triggered a high level of aftershock activity near the axis of the Sunda trench and the leading edge of the overthrust Burma plate. Much near-trench activity is intraplate activity within the subducting plate, but some shallow-focus, near-trench, reverse-fault earthquakes may represent an unusual seismogenic release of interplate compressional stress near the tip of the overriding plate. The interplate-thrust Nias earthquake of 28 March 2005, in contrast to the 26 December aftershock sequence, was followed by many interplate-thrust aftershocks along the length of its inferred rupture zone.

  14. Comparison of the sand liquefaction estimated based on codes and practical earthquake damage phenomena

    NASA Astrophysics Data System (ADS)

    Fang, Yi; Huang, Yahong

    2017-12-01

    Sand liquefaction assessment based on design codes is an important part of geotechnical design. However, the results sometimes fail to match actual earthquake damage. Based on the damage of the Tangshan earthquake and engineering geological conditions, three typical sites were chosen, and the sand liquefaction probability was evaluated at the three sites using the method in the Code for Seismic Design of Buildings; the results were compared with the sand liquefaction observed in the earthquake. The comparison shows that the difference between code-based liquefaction assessment and actual earthquake damage is mainly attributable to two aspects. The primary reasons include the disparity between the seismic fortification intensity and the actual seismic motion, changes in groundwater level, the thickness of the overlying non-liquefied soil layer, local site effects, and personal error. Meanwhile, although the judgment methods in the codes are broadly applicable, they are a further source of the difference, owing to the limitations of the underlying data and qualitative anomalies in the judgment formulas.

  15. Earthquakes; March-April 1975

    USGS Publications Warehouse

    Person, W.J.

    1975-01-01

    There were no major earthquakes (magnitude 7.0-7.9) in March or April; however, there were earthquake fatalities in Chile, Iran, and Venezuela, and approximately 35 earthquake-related injuries were reported around the world. In the United States, a magnitude 6.0 earthquake struck the Idaho-Utah border region. Damage was estimated at about a million dollars. The shock was felt over a wide area and was the largest to hit the continental United States since the San Fernando earthquake of February 1971.

  16. Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA

    NASA Astrophysics Data System (ADS)

    Lorito, S.

    2013-05-01

    The occurrence of several mega-thrust tsunamigenic earthquakes in the last decade, including but not limited to the 2004 Sumatra-Andaman, the 2010 Maule, and the 2011 Tohoku earthquakes, has been a dramatic reminder of the limitations in our capability of assessing earthquake and tsunami hazard and risk. However, increasingly high-quality geophysical observational networks have allowed the retrieval of more accurate models than ever of the rupture process of mega-thrust earthquakes, paving the way for future improved hazard assessments. Probabilistic Tsunami Hazard Analysis (PTHA) methodology, in particular, is less mature than its seismic counterpart, PSHA. Recent research efforts of the worldwide tsunami science community have started to fill this gap and to define some best practices that are being progressively employed in PTHA for different regions and coasts under threat. In the first part of my talk, I will briefly review some rupture models of recent mega-thrust earthquakes and highlight some of their surprising features, which likely result in bigger error bars associated with PTHA results. More specifically, recent events of unexpected size at a given location, and with unexpected rupture process features, posed first-order open questions which prevent the definition of a heterogeneous rupture probability along a subduction zone, despite several recent promising results on the subduction zone seismic cycle. In the second part of the talk, I will dig a bit more into a specific ongoing effort to improve PTHA methods, in particular as regards the determination of epistemic and aleatory uncertainties, and the computational feasibility of PTHA when considering the full assumed source variability. Usually only logic trees are made explicit in PTHA studies, accounting for different possible assumptions on the source zone properties and behavior. The selection of the earthquakes to be actually modelled is then in general made on a qualitative basis or remains implicit

  17. Protecting your family from earthquakes: The seven steps to earthquake safety

    USGS Publications Warehouse

    Developed by American Red Cross, Asian Pacific Fund

    2007-01-01

    This book is provided here because of the importance of preparing for earthquakes before they happen. Experts say it is very likely there will be a damaging San Francisco Bay Area earthquake in the next 30 years and that it will strike without warning. It may be hard to find the supplies and services we need after this earthquake. For example, hospitals may have more patients than they can treat, and grocery stores may be closed for weeks. You will need to provide for your family until help arrives. To keep our loved ones and our community safe, we must prepare now. Some of us come from places where earthquakes are also common. However, the dangers of earthquakes in our homelands may be very different than in the Bay Area. For example, many people in Asian countries die in major earthquakes when buildings collapse or from big sea waves called tsunami. In the Bay Area, the main danger is from objects inside buildings falling on people. Take action now to make sure your family will be safe in an earthquake. The first step is to read this book carefully and follow its advice. By making your home safer, you help make our community safer. Preparing for earthquakes is important, and together we can make sure our families and community are ready. English version p. 3-13 Chinese version p. 14-24 Vietnamese version p. 25-36 Korean version p. 37-48

  18. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus earthquakes follow a period of controlled, constant rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface is induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ~25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability, which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998) and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets. The gradually accelerating precursory aseismic slip observed in the region of
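
    A sketch of the recurrence statistics discussed above: fit a log-normal distribution to synthetic loading times (mean ~25 min and coefficient of variation ~10%, taken from the abstract; the samples themselves are simulated) and compute a conditional failure probability given the time elapsed since the last event, as in time-dependent forecasting.

    ```python
    import numpy as np
    from scipy import stats

    # Fit a log-normal recurrence model to synthetic loading times and
    # compute P(failure in the next minute | survived t minutes).

    rng = np.random.default_rng(0)
    mean, cov = 25.0, 0.10
    sigma = np.sqrt(np.log(1 + cov**2))          # log-normal shape from CoV
    mu = np.log(mean) - 0.5 * sigma**2
    times = rng.lognormal(mu, sigma, size=200)   # synthetic recurrence times

    shape, loc, scale = stats.lognorm.fit(times, floc=0)
    dist = stats.lognorm(shape, loc, scale)

    t = 24.0  # minutes elapsed since the last event (assumed)
    p_next = (dist.cdf(t + 1) - dist.cdf(t)) / dist.sf(t)
    print(f"P(failure in next minute | survived {t} min) = {p_next:.3f}")
    ```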

  19. The evaluation of the earthquake hazard using the exponential distribution method for different seismic source regions in and around Ağrı

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr; Türker, Tuğba, E-mail: tturker@ktu.edu.tr

    The aim of this study is to determine the earthquake hazard for different seismic source regions in and around Ağrı using the exponential distribution method. A homogeneous earthquake catalog covering 1900-2015 (the instrumental period), with 456 earthquakes for Ağrı and vicinity, has been examined. The catalog was compiled from several sources, including Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Center (ISC), and the Incorporated Research Institutions for Seismology (IRIS). Ağrı and vicinity are divided into 7 seismic source regions on the basis of the epicenter distribution of instrumental-period earthquakes, focal mechanism solutions, and existing tectonic structures. The average magnitude value is calculated for specified magnitude ranges in each of the 7 seismic source regions, and the largest difference between observed and expected cumulative probabilities across the magnitude classes is determined. The recurrence period and the number of earthquake occurrences per year are estimated for earthquakes in Ağrı and vicinity. As a result, for the 7 seismic source regions, occurrence probabilities of earthquakes of magnitude 3.2 and above are determined: greater than magnitude 6.7 for Region 1, greater than 4.7 for Region 2, greater than 5.2 for Region 3, greater than 6.2 for Region 4, greater than 5.7 for Region 5, greater than 7.2 for Region 6, and greater than 6.2 for Region 7. The highest observed magnitude among the 7 seismic source regions of Ağrı and vicinity is magnitude 7, in Region 6. For Region 6, occurrence years of future earthquakes are estimated for the determined magnitudes, respectively, 7.2 magnitude

  20. The evaluation of the earthquake hazard using the exponential distribution method for different seismic source regions in and around Ağrı

    NASA Astrophysics Data System (ADS)

    Bayrak, Yusuf; Türker, Tuğba

    2016-04-01

    The aim of this study is to determine the earthquake hazard for different seismic source regions in and around Ağrı using the exponential distribution method. A homogeneous earthquake catalog covering 1900-2015 (the instrumental period), with 456 earthquakes for Ağrı and vicinity, has been examined. The catalog was compiled from several sources, including Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Center (ISC), and the Incorporated Research Institutions for Seismology (IRIS). Ağrı and vicinity are divided into 7 seismic source regions on the basis of the epicenter distribution of instrumental-period earthquakes, focal mechanism solutions, and existing tectonic structures. The average magnitude value is calculated for specified magnitude ranges in each of the 7 seismic source regions, and the largest difference between observed and expected cumulative probabilities across the magnitude classes is determined. The recurrence period and the number of earthquake occurrences per year are estimated for earthquakes in Ağrı and vicinity. As a result, for the 7 seismic source regions, occurrence probabilities of earthquakes of magnitude 3.2 and above are determined: greater than magnitude 6.7 for Region 1, greater than 4.7 for Region 2, greater than 5.2 for Region 3, greater than 6.2 for Region 4, greater than 5.7 for Region 5, greater than 7.2 for Region 6, and greater than 6.2 for Region 7. The highest observed magnitude among the 7 seismic source regions of Ağrı and vicinity is magnitude 7, in Region 6. For Region 6, occurrence years of future earthquakes are estimated for the determined magnitudes, respectively, 7.2 magnitude was in 158
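
    A minimal sketch of the exponential-distribution hazard calculation the study describes: given a recurrence period T for earthquakes above a magnitude threshold in a source region, the probability of at least one such event within t years is 1 - exp(-t/T). The recurrence period below is an assumed, illustrative value, not one of the study's estimates.

    ```python
    import math

    # Exceedance probability under an exponential (Poisson) recurrence model.

    def exceedance_probability(recurrence_period_yr: float, window_yr: float) -> float:
        return 1.0 - math.exp(-window_yr / recurrence_period_yr)

    T = 100.0   # assumed recurrence period for M >= 7.2 in a region like Region 6
    for t in (10, 50, 100):
        print(f"P(M>=7.2 within {t} yr) = {exceedance_probability(T, t):.2f}")
    ```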

  1. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite-fault total-slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extending > 150 km NW from the hypocenter, longer than the slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  2. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  3. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the M ≈ 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  4. Statistical analysis of the ionospheric ion density recorded by DEMETER in the epicenter areas of earthquakes as well as in their magnetically conjugate point areas

    NASA Astrophysics Data System (ADS)

    Li, Mei; Parrot, Michel

    2018-02-01

    Results of a statistical analysis of the variation of total ion density observed in the vicinity of earthquake epicenters, as well as around their magnetically conjugate points, are presented in this paper. Two data sets are used: the ion density measured by DEMETER during about 6.5 years, and the list of strong earthquakes (MW ≥ 4.8) occurring globally during this period (14,764 earthquakes in total). First, ionospheric perturbations with 23-120 s observation time, corresponding to spatial scales of 160-840 km, are automatically detected by software (64,287 anomalies in total). Second, it is checked whether a perturbation could be associated either with the epicenter of an earthquake or with its magnetically conjugate point (distance < 1500 km and time < 15 days before the earthquake). Only periods with the index Kp < 3 are considered, in order to reduce the effect of geomagnetic activity on the ionosphere. The results show that it is possible to detect variations of the ionospheric parameters above the epicenter areas as well as above their conjugate points. About one third of the earthquakes are detected with ionospheric influence on both sides of the Earth. The perturbation length tends to increase with the magnitude of the detected earthquakes, a trend that is more obvious at large magnitudes. The probability that a perturbation appears is highest on the day of the earthquake and gradually decreases as the time before the earthquake increases. The spatial distribution of perturbations shows that the probability of a perturbation appearing southeast of the epicenter before an earthquake is slightly higher, and that there is a clear trend for perturbations to appear west of the conjugate point of an earthquake.

  5. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid casualty estimation after an event for humanitarian response. Both of these events resulted in surprisingly high numbers of deaths, casualties, and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people seriously or moderately injured and 100,000 people left homeless in this mountainous region of China. In such events relief efforts can benefit significantly from the availability of rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level, and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
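
    A sketch of the semi-empirical casualty chain described above: damage rates per building class at a given intensity, combined with fatality rates given damage. All rates and class names below are invented placeholders; the actual model calibrates such parameters against the CEQID database.

    ```python
    # Expected deaths = population * sum over classes of
    #   (fraction in class) * P(collapse | class) * P(death | collapse, class).

    building_stock = {          # assumed fraction of exposed population per class
        "adobe": 0.3, "unreinforced_masonry": 0.5, "reinforced_concrete": 0.2,
    }
    p_collapse = {              # assumed P(heavy damage/collapse | class, intensity)
        "adobe": 0.25, "unreinforced_masonry": 0.15, "reinforced_concrete": 0.05,
    }
    fatality_given_collapse = { # assumed P(death | occupant of collapsed building)
        "adobe": 0.06, "unreinforced_masonry": 0.10, "reinforced_concrete": 0.20,
    }

    def expected_deaths(population: int) -> float:
        return population * sum(
            frac * p_collapse[c] * fatality_given_collapse[c]
            for c, frac in building_stock.items()
        )

    print(f"expected deaths: {expected_deaths(100_000):.0f}")  # 1400 here
    ```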

  6. Identification of Deep Earthquakes

    DTIC Science & Technology

    2010-09-01

    discriminants that will reliably separate small, crustal earthquakes (magnitudes less than about 4 and depths less than about 40 to 50 km) from small...characteristics on discrimination plots designed to separate nuclear explosions from crustal earthquakes. Thus, reliably flagging these small, deep events is...Further, reliably identifying subcrustal earthquakes will allow us to eliminate deep events (previously misidentified as crustal earthquakes) from

  7. Relocation of micro-earthquakes in the Yeongdeok offshore area, Korea using local and Ocean bottom seismometers

    NASA Astrophysics Data System (ADS)

    HAN, M.; Kim, K. H.; Park, S. C.; Lin, P. P.; Chen, P.; Chang, H.; Jang, J. P.; Kuo, B. Y.; Liao, Y. C.

    2016-12-01

    Seismicity in the East Sea of Korea has been relatively high during the last four decades of the instrumental earthquake observation period. The Yeongdeok offshore area is probably the most seismically active area in the East Sea. This study analyzes seismic signals to detect micro-earthquakes and determine their precise hypocenters in the Yeongdeok offshore area, using data recorded by the Korea National Seismic Network (KNSN) and a temporary ocean bottom seismographic network (OBSN-PNU), operated by the Korea Meteorological Administration and Pusan National University, respectively. Continuous waveform data recorded between January 2007 and July 2016 at four KNSN stations in the study area are inspected for repeating earthquakes by applying a waveform cross-correlation detector. More than 1,600 events are triggered. Events outside the study area or with poor waveform quality are removed from further analysis. Approximately 500 earthquakes are selected, most of which have gone unreported because their magnitudes are below the detection threshold of routine earthquake monitoring. Events in the study area also suffer from poor azimuthal coverage, because all stations are located on land and thus biased to the west. OBSN-PNU comprised three ocean bottom seismometers and operated between February and August 2016 to observe micro-earthquakes in the study area. The same technique applied to the KNSN data has been applied to the OBSN-PNU data to detect micro-earthquakes. Precise hypocenters are determined using phase arrival times and waveform similarities. The resulting hypocenters cluster to form a few lineaments, which are compared to local geological and geophysical features to understand micro-earthquake activity in the area.

  8. Relative Contributions of Geothermal Pumping and Long-Term Earthquake Rate to Seismicity at California Geothermal Fields

    NASA Astrophysics Data System (ADS)

    Weiser, D. A.; Jackson, D. D.

    2015-12-01

    In a tectonically active area, a definitive discrimination between geothermally-induced and tectonic earthquakes is difficult to achieve. We focus our study on California's 11 major geothermal fields: Amedee, Brawley, Casa Diablo, Coso, East Mesa, The Geysers, Heber, Litchfield, Salton Sea, Susanville, and Wendel. The Geysers geothermal field is the world's largest geothermal energy producer. California's Division of Oil, Gas, and Geothermal Resources provides field-wide monthly injection and production volumes for each of these sites, which allows us to study the relationship between geothermal pumping activities and seismicity. Since many of the geothermal fields began injecting and producing before nearby seismic stations were installed, we use smoothed seismicity since 1932 from the ANSS catalog as a proxy for tectonic earthquake rate. We examine both geothermal pumping and long-term earthquake rate as factors that may control earthquake rate. Rather than focusing only on the largest earthquake, which is essentially a random occurrence in time, we examine how M≥4 earthquake rate density (probability per unit area, time, and magnitude) varies for each field. We estimate relative contributions to the observed earthquake rate of M≥4 from both a long-term earthquake rate (Kagan and Jackson, 2010) and pumping activity. For each geothermal field, the respective earthquake catalogs (NCEDC and SCSN) are complete above at least M3 during the test period (which we tailor to each site). We test the hypothesis that the observed earthquake rate at a geothermal site during the test period is a linear combination of the long-term seismicity and pumping rates. We use a grid search to determine the confidence interval of the weighting parameters.
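
    A sketch of the hypothesis test described above: model the observed M≥4 rate as a linear combination of a long-term tectonic rate and a pumping-driven rate, and grid-search the two weights under a Poisson likelihood. The synthetic series below are placeholders; the study uses smoothed seismicity and field-wide injection/production volumes.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_months = 120
    tectonic = np.full(n_months, 0.02)                 # events/month, long-term proxy
    pumping = 0.001 * rng.gamma(2.0, 50.0, n_months)   # scaled pumping volumes (synthetic)
    observed = rng.poisson(0.6 * tectonic + 0.4 * pumping)  # assumed "true" weights

    def poisson_loglike(rate, counts):
        rate = np.clip(rate, 1e-12, None)              # guard against log(0)
        return np.sum(counts * np.log(rate) - rate)

    # Grid search over the two weighting parameters
    weights = np.linspace(0.0, 1.5, 151)
    ll = np.array([[poisson_loglike(a * tectonic + b * pumping, observed)
                    for b in weights] for a in weights])
    ia, ib = np.unravel_index(np.argmax(ll), ll.shape)
    print(f"best-fit weights: tectonic={weights[ia]:.2f}, pumping={weights[ib]:.2f}")
    ```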

  9. A new 1649-1884 catalog of destructive earthquakes near Tokyo and implications for the long-term seismic process

    USGS Publications Warehouse

    Grunewald, E.D.; Stein, R.S.

    2006-01-01

    In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M ≥ 6.7; the largest earthquake in the catalog is the 1703 M ≈ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ~7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 ≤ M ≤ 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimations of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10^26 dyn cm yr^-1, with considerable uncertainty at the 1σ level: (-0.11, +0.20) × 10^26 dyn cm yr^-1. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M ≥ 7.9 earthquakes is 7-11%.
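
    The time-averaged probability quoted above follows from a Poisson model: P(at least one M ≥ 7.9 event in t years) = 1 - exp(-t/T). The recurrence intervals below are assumed round numbers that happen to bracket the 7-11% figure; the study derives the underlying rate from its catalog and frequency-magnitude fit.

    ```python
    import math

    # Poisson (time-averaged) probability of at least one event in a window.

    def poisson_probability(recurrence_yr: float, window_yr: float = 30.0) -> float:
        return 1.0 - math.exp(-window_yr / recurrence_yr)

    for T in (400.0, 260.0):   # assumed recurrence intervals for M >= 7.9
        print(f"T = {T:.0f} yr -> 30-yr probability = {poisson_probability(T):.0%}")
    # 7% and 11%, bracketing the quoted range
    ```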

  10. Initiatives to Reduce Earthquake Risk of Developing Countries

    NASA Astrophysics Data System (ADS)

    Tucker, B. E.

    2008-12-01

    The seventeen-year-and-counting history of the Palo Alto-based nonprofit organization GeoHazards International (GHI) is the story of many initiatives within a larger initiative to increase the societal impact of geophysics and civil engineering. GHI's mission is to reduce death and suffering due to earthquakes and other natural hazards in the world's most vulnerable communities through preparedness, mitigation and advocacy. GHI works by raising awareness in these communities about their risk and about affordable methods to manage it, identifying and strengthening institutions in these communities to manage their risk, and advocating improvement in natural disaster management. Some of GHI's successful initiatives include: (1) creating an earthquake scenario for Quito, Ecuador that describes in lay terms the consequences for that city of a probable earthquake; (2) improving the curricula of Pakistani university courses about seismic retrofitting; (3) training employees of the Public Works Department of Delhi, India on assessing the seismic vulnerability of critical facilities such as a school, a hospital, a police headquarters, and city hall; (4) assessing the vulnerability of the Library of Tibetan Works and Archives in Dharamsala, India; (5) developing a seismic hazard reduction plan for a nonprofit organization in Kathmandu, Nepal that works to manage Nepal's seismic risk; and (6) assisting in the formulation of a resolution by the Council of the Organization for Economic Cooperation and Development (OECD) to promote school earthquake safety among OECD member countries. GHI's most important resource, in addition to its staff and Board of Trustees, is its members and volunteer advisors, who include some of the world's leading earth scientists, earthquake engineers, urban planners and architects, from the academic, public, private and nonprofit sectors. GHI is planning several exciting initiatives in the near future. One would oversee the design and construction of

  11. Tests of remote aftershock triggering by small mainshocks using Taiwan's earthquake catalog

    NASA Astrophysics Data System (ADS)

    Peng, W.; Toda, S.

    2014-12-01

    To understand earthquake interaction and forecast time-dependent seismic hazard, it is essential to evaluate which stress transfer, static or dynamic, plays the major role in triggering aftershocks and subsequent mainshocks. Felzer and Brodsky focused on small mainshocks (2≤M<3) and their aftershocks and argued that only dynamic stress change produces earthquake-to-earthquake triggering, whereas Richards-Dinger et al. (2010) claimed that those selected small mainshock-aftershock pairs reflected not earthquake-to-earthquake triggering but the simultaneous occurrence of independent aftershocks following a larger earthquake or during a significant swarm sequence. We test these hypotheses using Taiwan's earthquake catalog, taking advantage of the lack of any larger event and the absence of the significant seismic swarms typically seen with active volcanoes. Using Felzer and Brodsky's method and their standard parameters, we found only 14 mainshock-aftershock pairs within 20 km distance in Taiwan's catalog from 1994 to 2010. Although Taiwan's catalog has a similar number of earthquakes to California's, the number of pairs is about 10% of that in the California catalog. This may reflect the absence of large earthquakes and significant seismic swarms in the catalog. To better understand the properties of Taiwan's catalog, we loosened the screening parameters to obtain more pairs and found a linear aftershock density with a power-law decay of -1.12±0.38, very similar to that of Felzer and Brodsky. However, none of those mainshock-aftershock pairs were associated with an M7 rupture event or M6 events. To find what mechanism controlled the aftershock density triggered by small mainshocks in Taiwan, we randomized earthquake magnitudes and locations. We found that the short-time density decay behaves more like randomized behavior than mainshock-aftershock triggering. Moreover, 5 out of 6 pairs were found in a swarm-like temporal seismicity rate increase

  12. A newly discovered historical earthquake or merely a chronological mistake? Report of Mary Eliza Rogers from October 10, 1856

    NASA Astrophysics Data System (ADS)

    Zohar, Motti

    2017-09-01

    In her nineteenth-century book "Domestic Life in Palestine", Mary Eliza Rogers describes the earthquake she experienced in Haifa in the middle of the night between the 10th and 11th of October 1856. Although she reports that the earthquake was well felt in northern Palestine and even slightly damaged structures in Haifa, she is the only one who reported the event. However, on the following night, a destructive earthquake that occurred close to Crete was reported by many contemporary sources. Rogers' report is not cited in any of the existing earthquake catalogs or literature. Thus, the question arises whether she reports an earthquake that was unknown until now, or whether she merely dated incorrectly the event she experienced. To resolve this question, I interpreted her description of the moon phase at the time of the earthquake and concluded that Rogers incorrectly dated the event: what she had experienced was in fact the Crete earthquake. The rest of her description is probably reliable and reinforces reports of damaged localities along the coast of northern Palestine and southern Lebanon.

  13. A N-S fossil transform fault reactivated by the March 2, 2016 Mw7.8 southwest of Sumatra, Indonesia earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, H.; van der Lee, S.

    2016-12-01

    The Wharton Basin (WB) is characterized by N-S striking fossil transform faults and E-W trending extinct ridges. The 2016 Mw 7.8 southwest of Sumatra earthquake, near the WB's center, was first imaged by back-projecting P-waves from three regional seismic networks in Europe, Japan, and Australia. Next, the rupture direction of the earthquake was further determined by applying rupture directivity analysis to P-waves from the Global Seismographic Network (GSN). Finally, we inverted these GSN waveforms on a defined N-S striking vertical fault for a kinematic source model. The results show that the earthquake reactivated a vertical fossil transform fault striking 190° (roughly N-S) and asymmetrically bilaterally ruptured a 65 km by 30 km asperity over 35 s. Specifically, the earthquake first ruptured bilaterally northward and southward at a speed of 1.0 km/s over the first 12 s, and then mainly ruptured northward at a speed of 1.6 km/s. Compared with two previous M≥7.8 WB earthquakes, the 2000 southern WB earthquake and the 2012 Mw 8.6 Sumatra earthquake, the lower seismic energy radiation efficiency and slower rupture velocity of the 2016 earthquake indicate that its rupture was probably controlled by the warmer ambient slab and the tectonic stress regime.

  14. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    PubMed

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.

  15. Lessons of L'Aquila for Operational Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2012-12-01

    and failures-to-predict. The best way to achieve this separation is to use probabilistic rather than deterministic statements in characterizing short-term changes in seismic hazards. The ICEF recommended establishing OEF systems that can provide the public with open, authoritative, and timely information about the short-term probabilities of future earthquakes. Because the public needs to be educated into the scientific conversation through repeated communication of probabilistic forecasts, this information should be made available at regular intervals, during periods of normal seismicity as well as during seismic crises. In an age of nearly instant information and high-bandwidth communication, public expectations regarding the availability of authoritative short-term forecasts are rapidly evolving, and there is a greater danger that information vacuums will spawn informal predictions and misinformation. L'Aquila demonstrates why the development of OEF capabilities is a requirement, not an option.

  16. The Occurrence of the Recent Deadly Mexico Earthquakes was not that Unexpected

    NASA Astrophysics Data System (ADS)

    Flores-Marquez, L.; Sarlis, N. V.; Skordas, E. S.; Varotsos, P.; Ramírez-Rojas, A.

    2017-12-01

    Most big Mexican earthquakes occur right along the interface between the colliding Cocos and North American plates, but the two recent deadly Mexico earthquakes, i.e., the magnitude 8.2 earthquake that struck Mexico's Chiapas state on 7 September 2017 and the magnitude 7.1 earthquake that struck central Mexico almost 12 days later, killing more than 400 people and reducing buildings to rubble in several states, happened at two different spots in the flat slab in the middle of the Cocos tectonic plate, which is considered a geologically surprising area [1]. Here, upon considering a new type of analysis termed natural time, we show that their occurrence should not in principle puzzle scientists. Earthquakes may be considered critical phenomena (see Ref. [2] and references therein), and natural time analysis [3] uncovers an order parameter for seismicity. It has been shown [2] that the fluctuations of this order parameter exhibit a universal behavior with a probability density function (pdf) which is non-Gaussian, having a left exponential tail [3]. Natural time analysis of seismicity in various tectonic regions of the Mexican Pacific Coast has been made in Ref. [4]. The study of the order parameter pdf for the Chiapas area as well as for the Guerrero area shows that the occurrence of large earthquakes in these two areas was not unexpected. References: [1] A. Witze, Deadly Mexico quakes not linked, Nature 549, 442 (2017). [2] P. Varotsos, N. Sarlis, E. Skordas, Natural Time Analysis: The New View of Time. Precursory Seismic Electric Signals, Earthquakes and other Complex Time-Series (Springer-Verlag, Berlin Heidelberg, 2011). [3] P. Varotsos et al., Similarity of fluctuations in correlated systems: the case of seismicity. Phys. Rev. E 72, 041103 (2005). [4] A. Ramírez-Rojas and E.L. Flores-Márquez, Order parameter analysis of seismicity of the Mexican Pacific coast. Physica A 392, 2507 (2013).

  17. Effect of Sediments on Rupture Dynamics of Shallow Subduction Zone Earthquakes and Tsunami Generation

    NASA Astrophysics Data System (ADS)

    Ma, S.

    2011-12-01

    Low-velocity fault zones have long been recognized for crustal earthquakes by using fault-zone trapped waves and geodetic observations on land. However, the most pronounced low-velocity fault zones are probably in subduction zones, where sediments on the seafloor are being continuously subducted. In this study I focus on shallow subduction zone earthquakes; these earthquakes pose a serious threat to human society through their ability to generate large tsunamis. Numerous observations indicate that these earthquakes have unusually long rupture durations, low rupture velocities, and/or small stress drops near the trench. However, the underlying physics is unclear. I will use dynamic rupture simulations with a finite-element method to investigate the dynamic stress evolution on faults induced by both sediments and the free surface, and its relation to rupture velocity and slip. I will also explore the effect of off-fault yielding of sediments on the rupture characteristics and seafloor deformation. As shown in Ma and Beroza (2008), the more compliant hanging wall combined with the free surface greatly increases the strength drop and slip near the trench. Sediments in the subduction zone likely have a significant role in the rupture dynamics of shallow subduction zone earthquakes and tsunami generation.

  18. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    Metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically-based algorithm from the available Metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with the on-going initiatives to build a European earthquake record database. In this context, Grid enables collaboration analysis over shared seismic data and results among different institutions.

  19. Crowdsourced earthquake early warning.

    PubMed

    Minson, Sarah E; Brooks, Benjamin A; Glennie, Craig L; Murray, Jessica R; Langbein, John O; Owen, Susan E; Heaton, Thomas H; Iannucci, Robert A; Hauser, Darren L

    2015-04-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an M w (moment magnitude) 7 earthquake on California's Hayward fault, and real data from the M w 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  20. Crowdsourced earthquake early warning

    PubMed Central

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing. PMID:26601167

  1. Earthquakes, November-December 1992

    USGS Publications Warehouse

    Person, W.J.

    1993-01-01

    There were two major earthquakes (7.0≤M<8.0) during the last two months of the year, a magnitude 7.5 earthquake on December 12 in the Flores region, Indonesia, and a magnitude 7.0 earthquake on December 20 in the Banda Sea. Earthquakes caused fatalities in China and Indonesia. The greatest number of deaths (2,500) for the year occurred in Indonesia. In Switzerland, six people were killed by an accidental explosion recorded by seismographs. In the United States, a magnitude 5.3 earthquake caused slight damage at Big Bear in southern California.

  2. Crowdsourced earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  3. Statistical analysis of low-rise building damage caused by the San Fernando earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholl, R.E.

    1974-02-01

    An empirical investigation of damage to low-rise buildings in two selected control areas within Glendale, California, caused by the ground motion precipitated by the San Fernando earthquake of February 9, 1971 is summarized. The procedures for obtaining the appropriate data and the methodology used in deriving ground motion-damage relationships are described. Motion-damage relationships are derived for overall damage and for the most frequently damaged building components. Overall motion-damage relationships are expressed in terms of damage incidence (damage ratio) and damage cost (damage cost factor). The motion-damage relationships derived from the earthquake data are compared with similar data obtained for low-rise buildings subjected to ground motion generated by an underground nuclear explosion. Overall comparison results show that for the same spectral acceleration, the earthquake caused slightly more damage. Differences in ground-motion characteristics for the two types of disturbances provide the most probable explanation for this discrepancy. (auth)

  4. The Bay Area Earthquake Cycle:A Paleoseismic Perspective

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Seitz, G.; Lienkaemper, J. J.; Dawson, T. E.; Hecker, S.; William, L.; Kelson, K.

    2001-12-01

    /SH/RC/NC/(SG?) sequence likely occurred subsequent to the penultimate San Andreas event. Although offset data, which reflect M, are limited, observations indicate that the penultimate SA event ruptured essentially the same fault length as 1906 (Schwartz et al., 1998). In addition, measured point-specific slip (RC, 1.8-2.3 m; SG, 3.5-5 m) and modeled average slip (SH, 1.9 m) for the MREs indicate large-magnitude earthquakes on the other regional faults. The major observation from the new paleoseismic data is that during a maximum interval of 176 years (1600 to 1776), significant seismic moment was released in the SFBR by large (M ≥ 6.7) surface-faulting earthquakes on the SA, RC, SH, NH, NC and possibly SG faults. This places an upper limit on the duration of San Andreas interaction effects (stress shadow) on the regional fault system. In fact, the interval between the penultimate San Andreas rupture and large earthquakes on other SFBR faults could have been considerably shorter. We are now 95 years out from the 1906 earthquake, and the SFBR Working Group 99 probability time window extends to 2030, an interval of 124 years. The paleoearthquake data allow that within this amount of time following the penultimate San Andreas event one or more large earthquakes may have occurred on Bay Area faults. Longer paleoearthquake chronologies with more precise event dating in the SFBR and other locales provide the exciting potential for defining regional earthquake cycles and modeling long-term fault interactions.

  5. Prototype operational earthquake prediction system

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce, into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

  6. The effect of different intensity measures and earthquake directions on the seismic assessment of skewed highway bridges

    NASA Astrophysics Data System (ADS)

    Bayat, M.; Daneshjoo, F.; Nisticò, N.

    2017-01-01

    In this study the probable seismic behavior of skewed bridges with continuous decks under earthquake excitations from different directions is investigated. A 45° skewed bridge is studied. A suite of 20 records is used to perform an Incremental Dynamic Analysis (IDA) for fragility curves. Four different earthquake directions have been considered: -45°, 0°, 22.5°, 45°. A sensitivity analysis on different spectral intensity measures is presented; the efficiency and practicality of different intensity measures have been studied. The fragility curves obtained indicate that the critical directions for skewed bridges are the skew direction as well as the longitudinal direction. The study shows the importance of finding the most critical earthquake direction in understanding and predicting the behavior of skewed bridges.

  7. Operational Earthquake Forecasting of Aftershocks for New England

    NASA Astrophysics Data System (ADS)

    Ebel, J.; Fadugba, O. I.

    2015-12-01

    Although the forecasting of mainshocks is not possible, recent research demonstrates that probabilistic forecasts of expected aftershock activity following moderate and strong earthquakes are possible. Previous work has shown that aftershock sequences in intraplate regions behave similarly to those in California, and thus the operational aftershock forecasting methods currently employed in California can be adopted for use in areas of the eastern U.S. such as New England. In our application, immediately after a felt earthquake in New England, a forecast of expected aftershock activity for the next 7 days will be generated based on a generic aftershock activity model. Approximately 24 hours after the mainshock, the parameters of the aftershock model will be updated using the aftershock activity observed to that point in time, and a new forecast of expected aftershock activity for the next 7 days will be issued. The forecast will estimate the average number of weak, felt aftershocks and the average expected number of aftershocks based on the aftershock statistics of past New England earthquakes. The forecast also will estimate the probability that an earthquake stronger than the mainshock will take place during the next 7 days. The aftershock forecast will specify the expected aftershock locations as well as the areas over which aftershocks of different magnitudes could be felt. The system will use web pages, email and text messages to distribute the aftershock forecasts. For protracted aftershock sequences, new forecasts will be issued on a regular basis, such as weekly. Initially, the distribution system of the aftershock forecasts will be limited, but later it will be expanded as experience with and confidence in the system grows.
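
    Generic aftershock forecasts of this kind are commonly built on a Reasenberg-Jones-type rate model (Gutenberg-Richter magnitude scaling combined with modified Omori decay). The sketch below is a minimal illustration of that calculation, assuming the published generic California parameter values (a, b, c, p); the abstract implies New England would use locally calibrated statistics, which this sketch does not attempt.

        # Sketch of a generic Reasenberg-Jones aftershock forecast, assuming the
        # generic California parameters (a, b, c, p); these would need to be
        # recalibrated for New England as the abstract describes.
        import numpy as np

        def expected_aftershocks(mag_main, mag_min, t1, t2,
                                 a=-1.67, b=0.91, c=0.05, p=1.08):
            """Expected number of aftershocks with M >= mag_min in [t1, t2] days
            after a mainshock of magnitude mag_main (Reasenberg & Jones, 1989)."""
            rate_coeff = 10.0 ** (a + b * (mag_main - mag_min))
            # Integrate the modified Omori decay (t + c)^(-p) from t1 to t2.
            if abs(p - 1.0) < 1e-9:
                time_term = np.log(t2 + c) - np.log(t1 + c)
            else:
                time_term = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
            return rate_coeff * time_term

        # Probability of at least one M >= 3 aftershock in the next 7 days,
        # assuming a Poisson process (hypothetical M 5.0 mainshock).
        n = expected_aftershocks(mag_main=5.0, mag_min=3.0, t1=0.0, t2=7.0)
        print(n, 1.0 - np.exp(-n))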

  8. Perception of earthquake risk in Taiwan: effects of gender and past earthquake experience.

    PubMed

    Kung, Yi-Wen; Chen, Sue-Huei

    2012-09-01

    This study explored how individuals in Taiwan perceive the risk of earthquakes and the relationship of past earthquake experience and gender to risk perception. Participants (n = 1,405), including earthquake survivors and those in the general population without prior direct earthquake exposure, were selected and interviewed through a computer-assisted telephone interviewing procedure using a random sampling and stratification method covering all 24 regions of Taiwan. A factor analysis of the interview data yielded a two-factor structure of risk perception in regard to earthquakes. The first factor, "personal impact," encompassed perception of threat and fear related to earthquakes. The second factor, "controllability," encompassed a sense of efficacy of self-protection in regard to earthquakes. The findings indicated that earthquake survivors and females reported higher scores on the personal impact factor than males and those with no prior direct earthquake experience, although there were no group differences on the controllability factor. The findings support that risk perception has multiple components, and suggest that past experience (survivor status) and gender (female) affect the perception of risk. Exploration of potential contributions of other demographic factors such as age, education, and marital status to personal impact, especially for females and survivors, is discussed. Future research and intervention programs with regard to risk perception are suggested accordingly. © 2012 Society for Risk Analysis.

  9. People’s perspectives and expectations on preparedness against earthquakes: Tehran case study

    PubMed Central

    Jahangiri, Katayoun; Izadkhah, Yasamin O; Montazeri, Ali; Hosseini, Mahmood

    2010-01-01

    Background: Public education is one of the most important elements of earthquake preparedness. The present study identifies methods and appropriate strategies for public awareness and education on preparedness for earthquakes based on people's opinions in the city of Tehran. Methods: This was a cross-sectional study and a door-to-door survey of residents from 22 municipal districts in Tehran, the capital city of Iran. It involved a total of 1,211 individuals aged 15 and above. People were asked about different methods of public information and education, as well as the type of information needed for earthquake preparedness. Results: "Enforcing the building contractors' compliance with the construction codes and regulations" was ranked as the first priority by 33.4% of the respondents. Over 70% of the participants (71.7%) regarded TV as the most appropriate means of media communication to prepare people for an earthquake. This was followed by "radio," which was selected by 51.6% of respondents. Slightly over 95% of the respondents believed that there would soon be an earthquake in the country, and 80% reported that they obtained this information from "the general public". Seventy percent of the study population felt that news of an earthquake should be communicated through the media. However, over half (58%) of the participants believed that governmental officials and agencies are best qualified to disseminate information about the risk of an imminent earthquake. Just over half (50.8%) of the respondents argued that the authorities do not usually provide enough information to people about earthquakes and the probability of their occurrence. Besides seismologists, respondents thought astrologers (32%), fortunetellers (32.3%), religious figures (34%), meteorologists (23%), and paleontologists (2%) can correctly predict the occurrence of an earthquake. Furthermore, 88.6% listed aid centers, mosques, newspapers and TV as the most important sources of

  10. Earthquakes in Alaska

    USGS Publications Warehouse

    Haeussler, Peter J.; Plafker, George

    1995-01-01

    Earthquake risk is high in much of the southern half of Alaska, but it is not the same everywhere. This map shows the overall geologic setting in Alaska that produces earthquakes. The Pacific plate (darker blue) is sliding northwestward past southeastern Alaska and then dives beneath the North American plate (light blue, green, and brown) in southern Alaska, the Alaska Peninsula, and the Aleutian Islands. Most earthquakes are produced where these two plates come into contact and slide past each other. Major earthquakes also occur throughout much of interior Alaska as a result of collision of a piece of crust with the southern margin.

  11. Seismic quiescence in a frictional earthquake model

    NASA Astrophysics Data System (ADS)

    Braun, Oleg M.; Peyrard, Michel

    2018-04-01

    We investigate the origin of seismic quiescence with a generalized version of the Burridge-Knopoff model for earthquakes and show that quiescence can be generated by a multipeaked probability distribution of the thresholds at which contacts break. Such a distribution is not assumed a priori but naturally results from the aging of the contacts. We show that the model can exhibit quiescence as well as enhanced foreshock activity, depending on the values of some parameters. This provides a generic understanding of seismic quiescence, which encompasses earlier specific explanations and could provide a pathway toward a classification of faults.
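
    To make the ingredients concrete, here is a deliberately simplified, quasi-static spring-block toy in the Burridge-Knopoff spirit: blocks are loaded slowly, slip when their force exceeds a breaking threshold drawn from a two-peaked distribution, and pass part of the dropped force to their neighbors. Everything here (loading rate, redistribution factor, threshold peaks, renewal rule) is an arbitrary illustrative assumption, not the authors' generalized model.

        # Toy 1D quasi-static spring-block model with a bimodal (multipeaked)
        # distribution of breaking thresholds; purely illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        n_blocks = 1000

        def draw_thresholds(size):
            peak = rng.choice([1.0, 2.0], size=size)        # two threshold peaks
            return rng.normal(loc=peak, scale=0.1, size=size)

        force = np.zeros(n_blocks)
        threshold = draw_thresholds(n_blocks)
        event_sizes = []

        for _ in range(5000):
            force += 0.001                                   # slow tectonic loading
            unstable = np.flatnonzero(force >= threshold)
            size = 0
            while unstable.size:                             # cascade of slips
                size += unstable.size
                for i in unstable:
                    drop = force[i]
                    force[i] = 0.0
                    threshold[i] = draw_thresholds(1)[0]     # contact renewal/aging
                    for j in (i - 1, i + 1):                 # pass stress to neighbors
                        if 0 <= j < n_blocks:
                            force[j] += 0.4 * drop
                unstable = np.flatnonzero(force >= threshold)
            if size:
                event_sizes.append(size)

        print(len(event_sizes), max(event_sizes))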

  12. Coseismic Stress Changes of the 2016 Mw 7.8 Kaikoura, New Zealand, Earthquake and Its Implication for Seismic Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Shan, B.; LIU, C.; Xiong, X.

    2017-12-01

    increments would raise the probability of earthquake occurrence on these faults.

  13. Earthquakes, November-December 1973

    USGS Publications Warehouse

    Person, W.J.

    1974-01-01

    Other parts of the world suffered fatalities and significant damage from earthquakes. In Iran, an earthquake killed one person, injured many, and destroyed a number of homes. Earthquake fatalities also occurred in the Azores and in Algeria. 

  14. FORESHOCK AND AFTERSHOCK SEQUENCES OF SOME LARGE EARTHQUAKES IN THE REGION OF GREECE,

    DTIC Science & Technology

    or more foreshocks of magnitude larger than 3.8 occurred in forty per cent of the cases. The probability for an earthquake to be preceded by a large... foreshock not much smaller than the main shock is 10%. It is shown that some properties of the earth’s material in the aftershock region can be

  15. Sensitivity of Coulomb stress changes to slip models of source faults: A case study for the 2011 Mw 9.0 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.

    2017-12-01

    Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, realistic, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models. Based on those slip models, we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that nearby Coulomb stress changes for each slip model can be quite different, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nankai, Tonankai and Tokai are insignificant, only approximately 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress-triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress-triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.
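
    For reference, the quantity being mapped is the Coulomb failure stress change resolved on a receiver fault. A minimal sketch, assuming the common sign convention (unclamping positive) and an effective-friction treatment of Skempton's coefficient; the stress-change inputs would come from an elastic dislocation model of a given slip distribution:

        # dCFS = d_tau + mu' * d_sigma_n, with mu' = mu * (1 - B).
        def coulomb_stress_change(d_shear, d_normal, friction=0.4, skempton=0.5):
            """d_shear : shear stress change resolved in the receiver slip direction
            d_normal: normal stress change (positive = unclamping)"""
            mu_eff = friction * (1.0 - skempton)
            return d_shear + mu_eff * d_normal

        # Example: 0.5 bar of shear loading partly offset by 1 bar of clamping.
        print(coulomb_stress_change(d_shear=0.5, d_normal=-1.0))  # in bars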

  16. A Bayesian Approach to Real-Time Earthquake Phase Association

    NASA Astrophysics Data System (ADS)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel Bayesian association algorithm, which treats the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one-to-many relations (one earthquake, many phases), during the association process the situation is quite different. Both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on the estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.
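
    The probabilistic core of any such associator is scoring how well a trial source explains a set of picks. The toy below grid-searches candidate epicenters and origin times against P arrivals under a Gaussian pick-error model; the uniform 6 km/s velocity, 0.5 s pick error, station layout and picks are all invented for illustration and bear no relation to the authors' graph-database implementation.

        # Toy probabilistic phase association by grid search.
        import numpy as np

        V_P, SIGMA = 6.0, 0.5   # assumed P velocity (km/s) and pick error (s)

        def log_likelihood(src_xy, src_t, stations, picks):
            dist = np.hypot(*(stations - src_xy).T)   # epicentral distances (km)
            pred = src_t + dist / V_P                  # predicted arrival times
            return -0.5 * np.sum(((picks - pred) / SIGMA) ** 2)

        stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
        picks = np.array([4.4, 8.9, 8.9, 11.4])        # synthetic P arrivals (s)

        # Grid search over candidate epicenters (km) and origin times (s).
        best = max(((x, y, t) for x in range(0, 51, 2)
                              for y in range(0, 51, 2)
                              for t in np.arange(0.0, 8.0, 0.25)),
                   key=lambda c: log_likelihood(np.array(c[:2]), c[2],
                                                stations, picks))
        print("best (x, y, t0):", best)   # should recover roughly (10, 10, 2.0)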

  17. Foreshock occurrence before large earthquakes

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  18. Comparative study of earthquake-related and non-earthquake-related head traumas using multidetector computed tomography

    PubMed Central

    Chu, Zhi-gang; Yang, Zhi-gang; Dong, Zhi-hui; Chen, Tian-wu; Zhu, Zhi-yu; Shao, Heng

    2011-01-01

    OBJECTIVE: The features of earthquake-related head injuries may be different from those of injuries obtained in daily life because of differences in circumstances. We aim to compare the features of head traumas caused by the Sichuan earthquake with those of other common head traumas using multidetector computed tomography. METHODS: In total, 221 patients with earthquake-related head traumas (the earthquake group) and 221 patients with other common head traumas (the non-earthquake group) were enrolled in our study, and their computed tomographic findings were compared. We focused on the differences between fractures and intracranial injuries and on the relationships between extracranial and intracranial injuries. RESULTS: More earthquake-related cases had only extracranial soft tissue injuries (50.7% vs. 26.2%, RR = 1.9), and fewer cases had intracranial injuries (17.2% vs. 50.7%, RR = 0.3) compared with the non-earthquake group. For patients with fractures and intracranial injuries, there were fewer cases with craniocerebral injuries in the earthquake group (60.6% vs. 77.9%, RR = 0.8), and the earthquake-injured patients had fewer fractures and intracranial injuries overall (1.5±0.9 vs. 2.5±1.8; 1.3±0.5 vs. 2.1±1.1). Compared with the non-earthquake group, the incidences of soft tissue injuries and of cranial fractures combined with intracranial injuries in the earthquake group were significantly lower (9.8% vs. 43.7%, RR = 0.2; 35.1% vs. 82.2%, RR = 0.4). CONCLUSION: As depicted with computed tomography, the severity of earthquake-related head traumas in survivors was milder, and isolated extracranial injuries were more common in earthquake-related head traumas than in non-earthquake-related injuries, which may have been the result of different injury causes, mechanisms and settings. PMID:22012045

  19. Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie

    2014-05-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions, and find significant variation in the relative performance of the models depending upon the input data.

  20. Earthquakes, May-June 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    In the United States, a magnitude 5.8 earthquake in southern California on June 28 killed two people and caused considerable damage. Strong earthquakes hit Alaska on May 1 and May 30; the May 1 earthquake caused some minor damage. 

  1. The 1906 earthquake and a century of progress in understanding earthquakes and their hazards

    USGS Publications Warehouse

    Zoback, M.L.

    2006-01-01

    The 18 April 1906 San Francisco earthquake killed nearly 3000 people and left 225,000 residents homeless. Three days after the earthquake, an eight-person Earthquake Investigation Commission composed of 25 geologists, seismologists, geodesists, biologists and engineers, as well as some 300 others, started work under the supervision of Andrew Lawson to collect and document physical phenomena related to the quake. On 31 May 1906, the commission published a preliminary 17-page report titled "The Report of the State Earthquake Investigation Commission". The report included the bulk of the geological and morphological descriptions of the faulting, detailed reports on shaking intensity, as well as an impressive atlas of 40 oversized maps and folios. Nearly 100 years after its publication, the Commission Report remains a model for post-earthquake investigations. Because the diverse data sets were so complete and carefully documented, researchers continue to apply modern analysis techniques to learn from the 1906 earthquake. While the earthquake marked a seminal event in the history of California, it served as impetus for the birth of modern earthquake science in the United States.

  2. Earthquakes; January-February, 1979

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    The first major earthquake (magnitude 7.0 to 7.9) of the year struck in southeastern Alaska in a sparsely populated area on February 28. On January 16, Iran experienced the first destructive earthquake of the year, causing a number of casualties and considerable damage. Peru was hit by a destructive earthquake on February 16 that left casualties and damage. A number of earthquakes were experienced in parts of the United States, but only minor damage was reported.

  3. Preliminary results on earthquake triggered landslides for the Haiti earthquake (January 2010)

    NASA Astrophysics Data System (ADS)

    van Westen, Cees; Gorum, Tolga

    2010-05-01

    This study presents the first results of an analysis of the landslides triggered by the Ms 7.0 Haiti earthquake that occurred on January 12, 2010 in the boundary region of the Caribbean plate and the North American plate. The fault is a left-lateral strike-slip fault with a clear surface expression. According to the USGS earthquake information, the Enriquillo-Plantain Garden fault system has not produced any major earthquake in the last 100 years, and historical earthquakes are known from 1860, 1770, 1761, 1751, 1684, 1673, and 1618, though none of these has been confirmed in the field as associated with this fault. We used high-resolution satellite imagery available for the pre- and post-earthquake situations, which was made freely available for the response and rescue operations. We made an interpretation of all co-seismic landslides in the epicentral area. We conclude that the earthquake mainly triggered landslides on the northern slope of the fault-related valley and in a number of isolated areas. The earthquake apparently didn't trigger many visible landslides within the slum areas on the slopes in the southern part of Port-au-Prince and Carrefour. We also used ASTER DEM information to relate the landslide occurrences to DEM derivatives.

  4. How citizen seismology is transforming rapid public earthquake information and interactions between seismologists and society

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Steed, Robert; Mazet-Roux, Gilles; Roussel, Fréderic; Caroline, Etivant

    2015-04-01

    Historical earthquakes are only known to us through written recollections, and so seismologists have long experience of interpreting the reports of eyewitnesses, which probably explains why seismology has been a pioneer in crowdsourcing and citizen science. Today, the Internet is transforming this situation; it can be considered a digital nervous system comprising digital veins and intertwined sensors that capture the pulse of our planet in near real-time. How can both seismology and the public benefit from this new monitoring system? This paper will present the strategy implemented at the Euro-Mediterranean Seismological Centre (EMSC) to leverage this new nervous system to detect and diagnose the impact of earthquakes within minutes rather than hours, and how it has transformed information systems and interactions with the public. We will show how social network monitoring and flashcrowds (massive traffic increases on the EMSC website) are used to automatically detect felt earthquakes before seismic detections, how damaged areas can be mapped through the concomitant loss of Internet sessions (visitors being disconnected), and the benefit of collecting felt reports and geolocated pictures to further constrain rapid impact assessment of global earthquakes. We will also describe how public expectations within tens of seconds of ground shaking are the basis of improved, diversified information tools that integrate this user-generated content. Special attention will be given to LastQuake, the most complex and sophisticated Twitter QuakeBot, smartphone application and browser add-on, which deals with the only earthquakes that matter to the public: the felt and damaging earthquakes. In conclusion we will demonstrate that eyewitnesses are today real-time earthquake sensors and active actors in rapid earthquake information.

  5. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project, the government-sponsored Turkish Catastrophe Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (mostly from small events; 226 earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in southeast Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about USD 90 for a 100-square-meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake-engineering-related shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation, and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which the expected earthquake performance of a unit (and consequently its insurance premium) can be assessed on the basis of its location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP could in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.

  6. Limitation of the Predominant-Period Estimator for Earthquake Early Warning and the Initial Rupture of Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, T.; Ide, S.

    2007-12-01

    Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly we can estimate the final size of an earthquake after we observe the ground motion. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P-wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes, whose source durations are longer than TW, the values of τpmax have an upper limit which depends on TW. On the other hand, the values for smaller earthquakes have a lower limit which is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not suggest that the earthquake rupture is deterministic. This is because τpmax does not always have a direct relation to the physical quantities of an earthquake.
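
    The τp parameter discussed here is conventionally computed recursively from ground velocity (after Nakamura, 1988; Allen and Kanamori, 2003). A minimal sketch, assuming a simple smoothing constant and a synthetic test signal rather than the mine recordings used in the study:

        # Recursive predominant-period estimator: tau_p = 2*pi*sqrt(X/D),
        # where X and D are smoothed running sums of v^2 and (dv/dt)^2.
        import numpy as np

        def tau_p_max(velocity, dt, alpha=0.99):
            """Return the maximum of tau_p(t) over the record."""
            x_smooth = d_smooth = 1e-12     # avoid division by zero at startup
            deriv = np.gradient(velocity, dt)
            tp_max = 0.0
            for v, dv in zip(velocity, deriv):
                x_smooth = alpha * x_smooth + v * v
                d_smooth = alpha * d_smooth + dv * dv
                tp_max = max(tp_max, 2.0 * np.pi * np.sqrt(x_smooth / d_smooth))
            return tp_max

        # A 1 Hz sinusoid should give tau_p close to its 1 s period.
        dt = 0.01
        t = np.arange(0.0, 4.0, dt)
        print(tau_p_max(np.sin(2 * np.pi * 1.0 * t), dt))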

  7. Sun, Moon and Earthquakes

    NASA Astrophysics Data System (ADS)

    Kolvankar, V. G.

    2013-12-01

    During a study conducted to find the effect of Earth tides on the occurrence of earthquakes, for small areas [typically 1000 km x 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of Moon's foot print on earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45 degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences for magnitude 2.0 and above. This study conclusively proves how Sun and the Moon govern all earthquakes. (Fig. 12 [A+B]: the left-hand panel provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.98, Mb 9.0, EQ count 376); the right-hand panel provides an earthquake plot of (EMD+SEM) vs GMT timings for the same data. All 376 events, including the main event, faithfully follow the straight-line curve.)

  8. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results on earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant-stress and constant-strain-rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed by the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any trends of dependency between the magnitudes of aftershocks and the earthquakes that trigger them.
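
    The ETAS conditional intensity at the heart of the stochastic-declustering step has a standard closed form; the sketch below evaluates it and the resulting background (mainshock) probability for one event. All parameter values are illustrative placeholders, not values fitted to any catalog or to the AE data.

        # lambda(t) = mu + sum_i k * exp(alpha*(M_i - m0)) * (t - t_i + c)^(-p)
        import numpy as np

        def etas_intensity(t, past_times, past_mags, mu=0.1, k=0.05,
                           alpha=1.0, c=0.01, p=1.1, m0=3.0):
            past_times = np.asarray(past_times)
            past_mags = np.asarray(past_mags)
            earlier = past_times < t
            dt = t - past_times[earlier]
            trig = k * np.exp(alpha * (past_mags[earlier] - m0)) * (dt + c) ** (-p)
            return mu + trig.sum()

        # Probability that an event at time t belongs to the background --
        # the quantity used in stochastic declustering (mu / lambda).
        times, mags = [0.0, 0.5, 0.6], [5.5, 4.0, 3.8]
        lam = etas_intensity(1.0, times, mags)
        print(lam, 0.1 / lam)   # total intensity and background probability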

  9. Putting down roots in earthquake country-Your handbook for earthquakes in the Central United States

    USGS Publications Warehouse

    Contributors: Dart, Richard; McCarthy, Jill; McCallister, Natasha; Williams, Robert A.

    2011-01-01

    This handbook provides information to residents of the Central United States about the threat of earthquakes in that area, particularly along the New Madrid seismic zone, and explains how to prepare for, survive, and recover from such events. It explains the need for concern about earthquakes among those residents and describes what one can expect during and after an earthquake. Much is known about the threat of earthquakes in the Central United States, including where they are likely to occur and what can be done to reduce losses from future earthquakes, but not enough has been done to prepare for future earthquakes. The handbook describes preparations that individual residents can take before an earthquake to be safe and protect their property.

  10. Important Earthquake Engineering Resources

    Science.gov Websites

    Pacific Earthquake Engineering Research Center (PEER) listing of important earthquake engineering resources, including the American Concrete Institute, the Consortium of Organizations for Strong-Motion Observation Systems (COSMOS), and the Consortium of Universities for Research in Earthquake Engineering.

  11. Repeating Earthquakes Following an Mw 4.4 Earthquake Near Luther, Oklahoma

    NASA Astrophysics Data System (ADS)

    Clements, T.; Keranen, K. M.; Savage, H. M.

    2015-12-01

    An Mw 4.4 earthquake on April 16, 2013 near Luther, OK was one of the earliest M4+ earthquakes in central Oklahoma following the Prague sequence in 2011. A network of four local broadband seismometers deployed within a day of the Mw 4.4 event, along with six Oklahoma NetQuakes stations, recorded more than 500 aftershocks in the two weeks following the Luther earthquake. Here we use HypoDD (Waldhauser & Ellsworth, 2000) and waveform cross-correlation to obtain precise aftershock locations. The location uncertainty, calculated using the SVD method in HypoDD, is ~15 m horizontally and ~35 m vertically. The earthquakes define a near-vertical, NE-SW-striking fault plane. Events occur at depths from 2 km to 3.5 km within the granitic basement, with a small fraction of events shallower, near the sediment-basement interface. Earthquakes occur within a zone of ~200 m thickness on either side of the best-fitting fault surface. We use an equivalence-class algorithm to identify clusters of repeating events, defined as event pairs with median three-component correlation > 0.97 across common stations (Aster & Scott, 1993). Repeating events occur as doublets of only two events in over 50% of cases; overall, 41% of the earthquakes recorded occur as repeating events. The recurrence intervals for the repeating events range from minutes to days, with common recurrence intervals of less than two minutes. While clusters occur within tight dimensions, commonly ~80 m x 200 m, aftershocks occur in 3 distinct ~2 km x 2 km patches along the fault. Our analysis suggests that with rapidly deployed local arrays, the plethora of ~Mw 4 earthquakes occurring in Oklahoma and southern Kansas can be used to investigate the earthquake rupture process and the role of damage zones.
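
    The equivalence-class grouping mentioned above amounts to forming connected components over the graph of highly correlated event pairs. A minimal sketch with union-find, using a made-up median correlation matrix in place of real three-component waveform correlations:

        # Group repeating earthquakes: link event pairs whose median
        # cross-correlation exceeds 0.97, then take connected components.
        import numpy as np

        def repeating_clusters(median_cc, threshold=0.97):
            n = median_cc.shape[0]
            parent = list(range(n))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path compression
                    i = parent[i]
                return i

            for i in range(n):
                for j in range(i + 1, n):
                    if median_cc[i, j] > threshold:
                        parent[find(i)] = find(j)   # union the two classes

            clusters = {}
            for i in range(n):
                clusters.setdefault(find(i), []).append(i)
            return [c for c in clusters.values() if len(c) > 1]

        cc = np.array([[1.00, 0.98, 0.30, 0.20],
                       [0.98, 1.00, 0.25, 0.22],
                       [0.30, 0.25, 1.00, 0.99],
                       [0.20, 0.22, 0.99, 1.00]])
        print(repeating_clusters(cc))   # e.g. two doublets: [[0, 1], [2, 3]]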

  12. Source area of the 1858 earthquake swarm in the central Ryukyu Islands revealed by the observations of Father Louis Furet

    NASA Astrophysics Data System (ADS)

    Oda, Takuma; Nakamura, Mamoru

    2017-09-01

    We estimated the location and magnitude of earthquakes constituting the 1858 earthquake swarm in the central Ryukyu Islands using the felt earthquakes recorded by Father Louis Furet who lived in Naha, Okinawa Island, in the middle of the nineteenth century. First, we estimated the JMA seismic intensity of the earthquakes by interpreting the words used to describe the shaking. Next, using the seismic intensity and shaking duration of the felt earthquakes, we estimated the epicentral distance and magnitude range of three earthquakes in the swarm. The results showed that the epicentral distances of the earthquakes were 20-250 km and that magnitudes ranged between 4.5 and 6.5, with a strong correlation between epicentral distance and magnitude. Since the rumblings accompanying some earthquakes in the swarm were heard from a northward direction, the swarm probably occurred to the north of Naha. The most likely source area for the 1858 swarm is the central Okinawa Trough, where a similar swarm event occurred in 1980. If the 1858 swarm occurred in the central Okinawa Trough, the estimated maximum magnitude would have reached 6-7. In contrast, if the 1858 swarm occurred in the vicinity of Amami Island, which is the second most likely candidate area, it would have produced a cluster of magnitude 7-8 earthquakes.

  13. Earthquake Hazards.

    ERIC Educational Resources Information Center

    Donovan, Neville

    1979-01-01

    Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)

  14. Earthquakes, May-June 1981

    USGS Publications Warehouse

    Person, W.J.

    1981-01-01

    The months of May and June were somewhat quiet, seismically speaking. There was one major earthquake (7.0-7.9) off the west coast of South Island, New Zealand. The most destructive earthquake during this reporting period was in southern Iran on June 11 which caused fatalities and extensive damage. Peru also experienced a destructive earthquake on June 22 which caused fatalities and damage. In the United States, a number of earthquakes were experienced, but none caused significant damage. 

  15. ViscoSim Earthquake Simulator

    USGS Publications Warehouse

    Pollitz, Fred

    2012-01-01

    Synthetic seismicity simulations have been explored by the Southern California Earthquake Center (SCEC) Earthquake Simulators Group in order to guide long-term forecasting efforts related to the Uniform California Earthquake Rupture Forecast (Tullis et al., 2012a). In this study I describe the viscoelastic earthquake simulator (ViscoSim) of Pollitz (2009). Recapitulating to a large extent material previously presented by Pollitz (2009, 2011), I describe its implementation of synthetic ruptures and how it differs from other simulators being used by the group.

  16. Application of a long-range forecasting model to earthquakes in the Japan mainland testing region

    NASA Astrophysics Data System (ADS)

    Rhoades, David A.

    2011-03-01

    The Every Earthquake a Precursor According to Scale (EEPAS) model is a long-range forecasting method which has been previously applied to a number of regions, including Japan. The Collaboratory for the Study of Earthquake Predictability (CSEP) forecasting experiment in Japan provides an opportunity to test the model at lower magnitudes than previously and to compare it with other competing models. The model sums contributions to the rate density from past earthquakes based on predictive scaling relations derived from the precursory scale increase phenomenon. Two features of the earthquake catalogue in the Japan mainland region create difficulties in applying the model, namely magnitude-dependence in the proportion of aftershocks and in the Gutenberg-Richter b-value. To accommodate these features, the model was fitted separately to earthquakes in three different target magnitude classes over the period 2000-2009. There are some substantial unexplained differences in parameters between classes, but the time and magnitude distributions of the individual earthquake contributions are such that the model is suitable for three-month testing at M ≥ 4 and for one-year testing at M ≥ 5. In retrospective analyses, the mean probability gain of the EEPAS model over a spatially smoothed seismicity model increases with magnitude. The same trend is expected in prospective testing. The Proximity to Past Earthquakes (PPE) model has been submitted to the same testing classes as the EEPAS model. Its role is that of a spatially-smoothed reference model, against which the performance of time-varying models can be compared.

  17. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    The earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence-interval estimation. However, they have certain shortcomings, so it is worth searching for alternative, more flexible distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate its scope as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its more complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e., 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively used in earthquake recurrence studies.
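
    The conditional probability quoted above is the standard renewal-model quantity P(T ≤ t + Δ | T > t), evaluated here with the exponentiated exponential CDF F(t) = (1 - exp(-λ(t - μ)))^α. A minimal sketch; the parameter values are invented for illustration and are not the MLE/MOME estimates from the catalogue:

        # Conditional recurrence probability with a three-parameter
        # exponentiated (generalized) exponential distribution.
        import numpy as np

        def cdf(t, shape, lam, loc):
            base = np.clip(1.0 - np.exp(-lam * (np.asarray(t, float) - loc)),
                           0.0, None)      # zero below the location parameter
            return base ** shape

        def conditional_prob(elapsed, horizon, shape, lam, loc):
            """P(event within `horizon` yrs | quiet for `elapsed` yrs)."""
            f_t = cdf(elapsed, shape, lam, loc)
            f_th = cdf(elapsed + horizon, shape, lam, loc)
            return (f_th - f_t) / (1.0 - f_t)

        # E.g., probability of an event in the next 10 years given a 17-year
        # elapsed time (illustrative parameters only).
        print(conditional_prob(elapsed=17.0, horizon=10.0,
                               shape=1.5, lam=0.12, loc=0.0))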

  18. Earthquake damage orientation to infer seismic parameters in archaeological sites and historical earthquakes

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel

    2018-01-01

    Studies that provide information on the seismic parameters and seismic sources of historical and archaeological seismic events are used to better evaluate the seismic hazard of a region. This is of special interest when no surface rupture is recorded or the seismogenic fault cannot be identified. The orientation pattern of the earthquake damage (ED) (e.g., fallen columns, dropped keystones) that affected architectural elements of cities after earthquakes has traditionally been used in historical and archaeoseismological studies to infer seismic parameters. However, the parameters that different authors claim can be obtained from it are contradictory (the epicenter location, the orientation of the P waves, the orientation of the compressional strain and the fault kinematics have all been proposed), and some authors even question these relations with the earthquake damage. The earthquakes of Lorca in 2011, Christchurch in 2011 and Emilia Romagna in 2012 present an opportunity to measure systematically a large number and wide variety of earthquake damage in historical buildings (the same structures that are used in historical and archaeological studies). The damage pattern orientation has been compared with modern instrumental data, which is not possible in historical and archaeoseismological studies. From measurements and quantification of the orientation patterns in the studied earthquakes, it is observed that there is a systematic pattern of earthquake damage orientation (EDO) in the proximity of the seismic source (fault trace) (<10 km). The EDO in these earthquakes is normal to the fault trend (±15°). This orientation can be generated by a pulse of motion that, in the near-fault region, has a distinguishable acceleration normal to the fault due to the polarization of the S-waves. Therefore, the earthquake damage orientation could be used to estimate the seismogenic fault trend in studies of historical earthquakes for which no instrumental data are available.

  19. Earthquakes, March-April, 1993

    USGS Publications Warehouse

    Person, Waverly J.

    1993-01-01

    Worldwide, only one major earthquake (7.0≤M<8.0) occurred during this period: a magnitude 7.2 shock that struck the Santa Cruz Islands region in the South Pacific on March 6. Earthquake-related deaths occurred in the Fiji Islands, China, and Peru.

  20. 2016 National Earthquake Conference

    Science.gov Websites

    What's New? What's Next? What's Your Role in Building a National Strategy? The National Earthquake Conference (NEC) is a gathering of state government leaders, social science practitioners, and U.S. State and Territorial Earthquake Managers. The presenting sponsor of the 2016 conference was the California Earthquake Authority.

  1. The music of earthquakes and Earthquake Quartet #1

    USGS Publications Warehouse

    Michael, Andrew J.

    2013-01-01

    Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.

  2. Earthquake triggering in southeast Africa following the 2012 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Neves, Miguel; Custódio, Susana; Peng, Zhigang; Ayorinde, Adebayo

    2018-02-01

    In this paper we present evidence of earthquake dynamic triggering in southeast Africa. We analysed seismic waveforms recorded at 53 broad-band and short-period stations in order to identify possible increases in the rate of microearthquakes and tremor due to the passage of teleseismic waves generated by the Mw 8.6 2012 Indian Ocean earthquake. We found evidence of triggered local earthquakes and no evidence of triggered tremor in the region. We assessed the statistical significance of the increase in the number of local earthquakes using β-statistics. Statistically significant dynamic triggering of local earthquakes was observed at 7 of the 53 analysed stations. Two of these stations are located on the northeast coast of Madagascar and the other five are located in the Kaapvaal Craton, southern Africa. We found no evidence of dynamically triggered seismic activity at stations located near the structures of the East African Rift System. Hydrothermal activity exists close to the stations that recorded dynamic triggering; however, it also exists near the East African Rift System structures where no triggering was observed. Our results suggest that factors other than tectonic regime and geothermalism alone are needed to explain the mechanisms that underlie earthquake triggering.
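
    The β-statistic used for this kind of significance test compares the observed event count in a window after the teleseism with the count expected from the long-term rate (e.g., Matthews & Reasenberg, 1988). A minimal sketch, with made-up counts and windows rather than the study's data:

        # beta = (Na - N*f) / sqrt(N*f*(1 - f)), where f = t_after / t_total.
        import numpy as np

        def beta_statistic(n_after, n_total, t_after, t_total):
            """Compare post-arrival event count with the long-term expectation."""
            f = t_after / t_total
            expected = n_total * f
            return (n_after - expected) / np.sqrt(n_total * f * (1.0 - f))

        # 8 local events in the day after the surface waves, against 30 events
        # over a 30-day observation period (hypothetical numbers):
        print(beta_statistic(n_after=8, n_total=30, t_after=1.0, t_total=30.0))
        # Values of beta >~ 2 are usually taken as statistically significant.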

  3. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  4. The most recent large earthquake on the Rodgers Creek fault, San Francisco bay area

    USGS Publications Warehouse

    Hecker, S.; Pantosti, D.; Schwartz, D.P.; Hamilton, J.C.; Reidy, L.M.; Powers, T.J.

    2005-01-01

    The Rodgers Creek fault (RCF) is a principal component of the San Andreas fault system north of San Francisco. No evidence appears in the historical record of a large earthquake on the RCF, implying that the most recent earthquake (MRE) occurred before 1824, when a Franciscan mission was built near the fault at Sonoma, and probably before 1776, when a mission and presidio were built in San Francisco. The first appearance of nonnative pollen in the stratigraphic record at the Triangle G Ranch study site on the south-central reach of the RCF confirms that the MRE occurred before local settlement and the beginning of livestock grazing. Chronological modeling of earthquake age using radiocarbon-dated charcoal from near the top of a faulted alluvial sequence at the site indicates that the MRE occurred no earlier than A.D. 1690 and most likely occurred after A.D. 1715. With these age constraints, we know that the elapsed time since the MRE on the RCF is more than 181 years and less than 315 years and is probably between 229 and 290 years. This elapsed time is similar to published recurrence-interval estimates of 131 to 370 years (preferred value of 230 years) and 136 to 345 years (mean of 205 years), calculated from geologic data and a regional earthquake model, respectively. Importantly, then, the elapsed time may have reached or exceeded the average recurrence time for the fault. The age of the MRE on the RCF is similar to the age of prehistoric surface rupture on the northern and southern sections of the Hayward fault to the south. This suggests possible rupture scenarios that involve simultaneous rupture of the Rodgers Creek and Hayward faults. A buried channel is offset 2.2 (+ 1.2, - 0.8) m along one side of a pressure ridge at the Triangle G Ranch site. This provides a minimum estimate of right-lateral slip during the MRE at this location. Total slip at the site may be similar to, but is probably greater than, the 2 (+ 0.3, - 0.2) m measured previously at the

  5. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical anomalies not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  6. Historical earthquake research in Austria

    NASA Astrophysics Data System (ADS)

    Hammerl, Christa

    2017-12-01

    Austria has moderate seismicity; on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. An earthquake causing light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently; the average recurrence period is about 75 years. For this reason, historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly along with the most important studies, and, as an example of a recently completed case study, one of the strongest earthquakes in Austria's past, the earthquake of 17 July 1670, is presented. The research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.

  7. Unraveling earthquake stresses: Insights from dynamically triggered and induced earthquakes

    NASA Astrophysics Data System (ADS)

    Velasco, A. A.; Alfaro-Diaz, R. A.

    2017-12-01

    Induced seismicity, earthquakes caused by anthropogenic activity, has more than doubled in the last several years, resulting from practices related to oil and gas production. Furthermore, large earthquakes have been shown to promote the triggering of other events within two fault lengths (static triggering), due to static stresses caused by physical movement along the fault, and also remotely from the passage of seismic waves (dynamic triggering). Thus, in order to understand the mechanisms for earthquake failure, we investigate regions where natural, induced, and dynamically triggered events occur, and specifically target Oklahoma. We first analyze data from EarthScope's USArray Transportable Array (TA) and local seismic networks, implementing an optimized (STA/LTA) detector in order to develop local detection and earthquake catalogs. We then identify triggered events through statistical analysis and perform a stress analysis to gain insight into the stress states leading to triggered earthquake failure. We use our observations to determine the role of different transient stresses in contributing to natural and induced seismicity by comparing these stresses to regional stress orientation. We also delineate critically stressed regions of triggered seismicity that may indicate areas susceptible to earthquake hazards associated with sustained fluid injection in provinces of induced seismicity. Anthropogenic injection and extraction activity can alter the stress state and fluid flow within production basins. By analyzing the stress release of these ancient faults caused by dynamic stresses, we may be able to determine if fluids are solely responsible for increased seismic activity in induced regions.
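
    The STA/LTA detector mentioned above triggers when a short-term average of signal energy rises sharply relative to a long-term average. A minimal sketch, assuming made-up window lengths and threshold (the study's optimized detector settings are not given in the abstract):

        import numpy as np

        def sta_lta(trace, sta_n=50, lta_n=1000):
            """STA/LTA ratio of a 1-D waveform's energy (trace: numpy array)."""
            energy = trace.astype(float) ** 2
            csum = np.cumsum(energy)
            sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n   # short-term average
            lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n   # long-term average
            n = min(len(sta), len(lta))                    # align window end points
            return sta[-n:] / np.maximum(lta[-n:], 1e-20)  # guard divide-by-zero

        # A detection is declared where the ratio exceeds a threshold, e.g. 5:
        # triggers = np.nonzero(sta_lta(waveform) > 5.0)[0]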

  8. Hiding earthquakes from scrupulous monitoring eyes of dense local seismic networks

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Kiser, E.

    2012-12-01

    Accurate and complete cataloguing of aftershocks is essential for a variety of purposes, including the estimation of the mainshock rupture area, the identification of seismic gaps, and seismic hazard assessment. However, immediately following large earthquakes, the seismograms recorded by local networks are noisy, with energy arriving from hundreds of aftershocks, in addition to different seismic phases interfering with one another. This causes deterioration in the performance of detection and location of earthquakes using conventional methods such as the S-P approach. This is demonstrated by results of back-projection analysis of teleseismic data showing that a significant number of events went undetected by the Japan Meteorological Agency within the first twenty-four hours after the Mw9.0 Tohoku-oki, Japan, earthquake. The spatial distribution of the hidden events is not arbitrary. Most of these earthquakes are located close to the trench, while some are located at the outer rise. Furthermore, there is a relatively sharp trench-parallel boundary separating the detected and undetected events. We investigate the cause of these hidden earthquakes using forward modeling. The calculation of raypaths for various source locations and takeoff angles with the "shooting" method suggests that this phenomenon is a consequence of the complexities associated with the subducting slab. Laterally varying velocity structure defocuses the seismic energy from shallow earthquakes located near the trench and makes the observation of P and S arrivals difficult at stations situated on mainland Japan. Full waveform simulations confirm these results. Our forward calculations also show that the probability of detection is sensitive to the depth of the event. Shallower events near the trench are more difficult to detect than deeper earthquakes that are located inside the subducting plate, for which the shadow-zone effect diminishes. The modeling effort is expanded to include three

  9. Causes of earthquake spatial distribution beneath the Izu-Bonin-Mariana Arc

    NASA Astrophysics Data System (ADS)

    Kong, Xiangchao; Li, Sanzhong; Wang, Yongming; Suo, Yanhui; Dai, Liming; Géli, Louis; Zhang, Yong; Guo, Lingli; Wang, Pengcheng

    2018-01-01

    Statistics on the occurrence frequency of earthquakes (1973-2015) at shallow, intermediate, and great depths along the Izu-Bonin-Mariana (IBM) Arc are presented, and a percent perturbation relative to the P-wave mean value (LLNL-G3Dv3) is adopted to show the deep structure. The correlation coefficient between the subduction rate and the frequency of shallow seismic events along the IBM is 0.605, indicating that the subduction rate is an important factor for shallow seismic events. The relationship between relief amplitudes of the seafloor and earthquake occurrences implies that some seamount chains riding on the Pacific seafloor may have an effect on intermediate-depth seismic events along the IBM. A plausible hypothesis is that the seamounts, or surrounding seafloor with a high degree of fracturing, may bring numerous hydrous minerals to depth and may result in a different thermal structure compared to seafloor where no seamounts are subducted. Fluids from the seamounts or surrounding seafloor are released, triggering earthquakes at intermediate depths. Deep events in the northern and southern Mariana arc are likely affected by a horizontally propagating tear parallel to the trench.

  10. Tectonic Implications of Intermediate-depth Earthquakes Beneath the Northeast Caribbean

    NASA Astrophysics Data System (ADS)

    Mejia, H.; Pulliam, J.; Huerfano, V.; Polanco Rivera, E.

    2016-12-01

    The Caribbean-North American plate boundary transitions from normal subduction beneath the Lesser Antilles to oblique subduction at Hispaniola before becoming exclusively transform at Cuba. In the Greater Antilles, large earthquakes occur all along the plate boundary at shallow depths, but intermediate-depth earthquakes (50-200 km focal depth) occur almost uniquely beneath eastern Hispaniola. Previous studies have suggested that regional tectonics may be dominated by, for example, opposing subducting slabs, tearing of the subducting North American slab, or "slab push" by the NA slab. In addition, the Bahamas Platform, located north of Hispaniola, is likely causing compressive stresses and clockwise rotation of the island. A careful examination of focal mechanisms of intermediate-depth earthquakes could clarify regional tectonics, but seismic stations in the region have historically been sparse, so constraints on earthquake depths and focal mechanisms have been poor. In response, fifteen broadband sensors were deployed in the Dominican Republic in 2014, increasing the number of stations to twenty-two. To determine the roles earthquakes play in regional tectonics, an event catalog was created by joining data from our stations and other regional stations, retaining events with depths greater than 50 km and magnitudes greater than 3.5. All events have been relocated, and focal mechanisms are presented for as many events as possible. Multiple probable fault planes are computed for each event. Compressive (P) and tensional (T) axes, from fault planes, are plotted in three dimensions, with density distribution contours determined for each axis. Examining relationships between axis distributions and events helps constrain tectonic stresses at intermediate depths beneath eastern Hispaniola. A majority of events show primary compressive axes oriented in a north-south direction, likely produced by collision with the Bahamas Platform.

  11. Earthquake triggering by seismic waves following the landers and hector mine earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.A.; Bodin, P.; Harris, R.A.

    2001-01-01

    The proximity and similarity of the 1992, magnitude 7.3 Landers and 1999, magnitude 7.1 Hector Mine earthquakes in California permit testing of earthquake triggering hypotheses not previously possible. The Hector Mine earthquake confirmed inferences that transient, oscillatory 'dynamic' deformations radiated as seismic waves can trigger seismicity rate increases, as proposed for the Landers earthquake [1-6]. Here we quantify the spatial and temporal patterns of the seismicity rate changes [7]. The seismicity rate increase was to the north for the Landers earthquake and primarily to the south for the Hector Mine earthquake. We suggest that rupture directivity results in elevated dynamic deformations north and south of the Landers and Hector Mine faults, respectively, as evident in the asymmetry of the recorded seismic velocity fields. Both dynamic and static stress changes seem important for triggering in the near field with dynamic stress changes dominating at greater distances. Peak seismic velocities recorded for each earthquake suggest the existence of, and place bounds on, dynamic triggering thresholds. These thresholds vary from a few tenths to a few MPa in most places, depend on local conditions, and exceed inferred static thresholds by more than an order of magnitude. At some sites, the onset of triggering was delayed until after the dynamic deformations subsided. Physical mechanisms consistent with all these observations may be similar to those that give rise to liquefaction or cyclic fatigue.
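
    The MPa-scale thresholds quoted above can be related to recorded peak velocities through the plane-wave approximation commonly used in dynamic-triggering studies, sigma ~ G*v/c (shear modulus times peak particle velocity divided by phase velocity). This relation and the numbers below are illustrative assumptions, not values taken from the paper:

        # Plane-wave estimate of peak dynamic stress from peak ground velocity.
        # All three values are assumptions chosen to show the order of magnitude.
        G = 30e9       # shear modulus, Pa (typical crustal value)
        c = 3500.0     # shear-wave phase velocity, m/s
        v = 0.1        # peak ground velocity, m/s

        sigma = G * v / c
        print(f"peak dynamic stress ~ {sigma / 1e6:.2f} MPa")  # ~0.86 MPa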

  12. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements, and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow the study of extreme events and of the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed along with possibilities for the mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

  13. Extending earthquakes' reach through cascading.

    PubMed

    Marsan, David; Lengliné, Olivier

    2008-02-22

    Earthquakes, whatever their size, can trigger other earthquakes. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-lasting difficulty is to determine which earthquakes are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori model or parameterization. Large regional earthquakes are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small earthquakes collectively have a greater effect on triggering. Hence, cascade triggering is a key component in earthquake interactions.

  14. Earthquake Catalogue of the Caucasus

    NASA Astrophysics Data System (ADS)

    Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.

    2016-12-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms˜7.0, Io=9); the Lechkhumi-Svaneti earthquake of 1350 (Ms˜7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms˜6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms˜6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms˜6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia (˜25 stations), Azerbaijan (˜35 stations), Armenia (˜14 stations)). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus." The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records. We improved locations of the events and recalculated moment magnitudes in order to obtain a unified magnitude

  15. Earthquakes, July-August 1992

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    There were two major earthquakes (7.0≤M<8.0) during this reporting period. A magnitude 7.5 earthquake occurred in Kyrgyzstan on August 19 and a magnitude 7.0 quake struck the Ascension Island region on August 28. In southern California, aftershocks of the magnitude 7.6 earthquake on June 28, 1992, continued. One of these aftershocks caused damage and injuries, and at least one other aftershock caused additional damage. Earthquake-related fatalities were reported in Kyrgyzstan and Pakistan.

  16. Earthquakes: Recurrence and Interoccurrence Times

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.

    2008-04-01

    The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
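
    As a sketch of the comparison the authors describe, a set of recurrence times can be fitted with a Weibull distribution and its hazard function h(t) = (beta/eta) * (t/eta)**(beta - 1) evaluated; the power-law form of this hazard is the scale invariance referred to above. The recurrence times below are made-up placeholders, not the Parkfield or Wrightwood data:

        import numpy as np
        from scipy.stats import weibull_min

        # Made-up recurrence intervals (years) standing in for a real sequence.
        recurrence_yr = np.array([24.0, 20.0, 21.0, 12.0, 32.0, 38.0, 17.0])

        # Maximum-likelihood fit with the location parameter fixed at zero.
        beta, _, eta = weibull_min.fit(recurrence_yr, floc=0)  # shape, loc, scale

        def weibull_hazard(t, beta, eta):
            """Instantaneous failure rate at time t for Weibull(beta, eta)."""
            return (beta / eta) * (t / eta) ** (beta - 1.0)

        print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} yr")
        print(f"hazard at t = 25 yr: {weibull_hazard(25.0, beta, eta):.3f} per yr")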

  17. Geologic Evidence of Earthquakes and Tsunamis in the Mexican Subduction zone - Guerrero

    NASA Astrophysics Data System (ADS)

    Ramirez-Herrera, M.; Lagos, M.; Hutchinson, I.; Ruiz-Fernández, A.; Machain, M.; Caballero, M.; Rangel, V.; Nava, H.; Corona, N.; Bautista, F.; Kostoglodov, V.; Goguitchaichrili, A.; Morales, J.; Quintana, P.

    2010-12-01

    A study of large historic and prehistoric earthquakes and their tsunamis using a multiproxy approach (geomorphic features, sediment deposits, microfossils, sediment geochemistry, and more recently the use of magnetic properties) has provided valuable information for assessing the earthquake and tsunami record. The Pacific coast of Mexico is located over an active subduction zone (~1000 km) that has experienced numerous large-magnitude earthquakes in historical time (Mw>7.5), and more than 50 documented tsunamis since 1732. Geomorphic and stratigraphic studies through test pits at 13 sites on the Guerrero coast reveal distinct stratigraphic changes with depth, indicating clear, rapid changes in depositional environments over time. Microfossil ecology (diatoms and foraminifera), sediment geochemistry (concentration increments in elements such as Sr, Ba, Ca, P, Si, K), stratigraphy, sediment magnetic properties (magnetic susceptibility anisotropy, applied here for the first time to tsunami deposit identification) and other proxies are indicative of sudden changes in land level and of tsunami deposits. Buried evidence of liquefaction confirms the occurrence of a large earthquake at Barra de Potosi and Ixtapa, Guerrero. Preliminary 210Pb analysis suggests a sedimentation rate of ca. 0.1±0.01 cm/year and an estimated minimum age of ~100 years (maximum age at ca. 450 years?) for the most recent earthquake. At least three large events can be recognized by sharp contacts and sand layers in the sedimentary record. Ongoing C14, OSL and 210Pb dating will constrain the timing of these events. Deposits from three marine inwash events (tsunamis) dating from the past 4600 years have been identified on the Guerrero coast. A near-surface sand bed with a sharp basal contact overlying soil at sites near Ixtapa and Barra de Potosi most probably marks the tsunami following the 1985 Mw 8.2 earthquake. Interviews with Barra de Potosi fishermen and locals corroborate that these sites were

  18. Revealing the deformational anomalies based on GNSS data in relation to the preparation and stress release of large earthquakes

    NASA Astrophysics Data System (ADS)

    Kaftan, V. I.; Melnikov, A. Yu.

    2018-01-01

    The results of Global Navigational Satellite System (GNSS) observations in the regions of large earthquakes are analyzed. The characteristics of the Earth's surface deformations before, during, and after the earthquakes are considered. The obtained results demonstrate the presence of anomalous deformations close to the epicenters of the events. Statistical estimates of the anomalous strains and their relationship with measurement errors are obtained. Conclusions are drawn about the potential use of local GNSS networks to assess the risk of the occurrence of strong seismic events.

  19. Seismic potential for large and great interplate earthquakes along the Chilean and Southern Peruvian Margins of South America: A quantitative reappraisal

    NASA Astrophysics Data System (ADS)

    Nishenko, Stuart P.

    1985-04-01

    The seismic potential of the Chilean and southern Peruvian margins of South America is reevaluated to delineate those areas or segments of the margin that may be expected to experience large or great interplate earthquakes within the next 20 years (1984-2004). Long-term estimates of seismic potential (or the conditional probability of recurrence within a specified period of time) are based on (1) statistical analysis of historic repeat time data using Weibull distributions and (2) deterministic estimates of recurrence times based on the time-predictable model of earthquake recurrence. Both methods emphasize the periodic nature of large and great earthquake recurrence, and are compared with estimates of probability based on the assumption of Poisson-type behavior. The estimates of seismic potential presented in this study are long-term forecasts only, as the temporal resolution (or standard deviation) of both methods is taken to range from ±15% to ±25% of the average or estimated repeat time. At present, the Valparaiso region of central Chile (32°-35°S) has a high potential or probability of recurrence in the next 20 years. Coseismic uplift data associated with previous shocks in 1822 and 1906 suggest that this area may have already started to rerupture in 1971-1973. Average repeat times also suggest this area is due for a great shock within the next 20 years. Flanking segments of the Chilean margin, Coquimbo-Illapel (30°-32°S) and Talca-Concepcion (35°-38°S), presently have poorly constrained but possibly quite high potentials for a series of large or great shocks within the next 20 years. In contrast, the rupture zone of the great 1960 earthquake (37°-46°S) has the lowest potential along the margin and is not expected to rerupture in a great earthquake within the next 100 years. In the north, the seismic potentials of the Mollendo-Arica (17°-18°S) and Arica-Antofagasta (18°-24°S) segments (which last ruptured during great earthquakes in 1868 and 1877
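
    The "conditional probability of recurrence within a specified period of time" used here takes the standard renewal form (our restatement, not a formula quoted from the paper): with F the cumulative distribution of recurrence times (Weibull in this study), t_e the elapsed time since the last event, and Delta T the forecast window (20 years above),

        P(t_e < T \le t_e + \Delta T \mid T > t_e) = \frac{F(t_e + \Delta T) - F(t_e)}{1 - F(t_e)}

    so the probability grows with elapsed time for distributions, such as a Weibull with shape greater than 1, whose hazard rate increases.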

  20. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  1. Earthquake and Tsunami booklet based on two Indonesia earthquakes

    NASA Astrophysics Data System (ADS)

    Hayashi, Y.; Aci, M.

    2014-12-01

    Many destructive earthquakes occurred during the last decade in Indonesia. These experiences are very important precepts for people around the world who live in earthquake and tsunami countries. We are collecting the testimonies of tsunami survivors to clarify successful evacuation processes and to characterize the physical behavior of tsunamis near the coast. We studied two tsunami events: the 2004 Indian Ocean tsunami and the 2010 Mentawai slow earthquake tsunami. Many videos and photographs were taken at some places during the 2004 Indian Ocean tsunami disaster; nevertheless, these covered only a few restricted points, and tsunami behavior elsewhere remained unknown. In this study, we tried to collect extensive information about tsunami behavior, not only at many places but also over a wide time range after the strong shaking. In the Mentawai case, the earthquake occurred at night, so there are no striking photographs. To collect detailed information about the evacuation process, we devised an interview method that involves drawing pictures of the tsunami experience from the scenes described in victims' stories. In the 2004 Aceh case, none of the survivors knew about tsunami phenomena: because no large tsunamigenic earthquake had struck the Sumatra region for a hundred years, the public had no knowledge of tsunamis. This situation had greatly improved by the 2010 Mentawai case; TV programs and NGO or governmental public education programs about tsunami evacuation are now widespread in Indonesia, and many people have fundamental knowledge of earthquake and tsunami disasters. We made a drill book based on victims' stories and painted impressive scenes from the two events. We used the drill book in a disaster education event in a school committee of West Java. About 80% of students and teachers evaluated the contents of the drill book as useful for correct understanding.

  2. Limiting the effects of earthquakes on gravitational-wave interferometers

    USGS Publications Warehouse

    Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew

    2017-01-01

    Ground-based gravitational wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The downtime can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system, with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early warning system for modern gravitational-wave observatories. The system relies on near real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low-latency hypocenter and magnitude information is generally available within 5 to 20 min of a significant earthquake, depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. By using a machine learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that by using detector control configuration changes, we could prevent interruption of operation by 40 to 100 earthquake events in a 6-month period.
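
    The classifier described, which predicts whether an earthquake will knock a detector out of observing mode, could in its simplest form be a logistic regression on source and ground-motion features. A hedged sketch (the feature choice, training rows, and model family below are our illustrative assumptions, not the paper's actual machine learning model):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical training rows: magnitude, epicentral distance (km),
        # predicted peak ground velocity at the site (m/s); label 1 = lockloss.
        X_train = np.array([
            [6.0, 8000.0, 5.0e-7],
            [7.5, 3000.0, 9.0e-6],
            [8.2, 1500.0, 4.0e-5],
            [5.5, 9000.0, 2.0e-7],
        ])
        y_train = np.array([0, 1, 1, 0])

        model = LogisticRegression().fit(X_train, y_train)
        p = model.predict_proba([[7.0, 5000.0, 3.0e-6]])[0, 1]
        print(f"P(detector drops out of observing) = {p:.2f}")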

  3. Limiting the effects of earthquakes on gravitational-wave interferometers

    NASA Astrophysics Data System (ADS)

    Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew

    2017-02-01

    Ground-based gravitational wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The downtime can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system, with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early warning system for modern gravitational-wave observatories. The system relies on near real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low-latency hypocenter and magnitude information is generally available within 5 to 20 min of a significant earthquake, depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. By using a machine learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that by using detector control configuration changes, we could prevent interruption of operation by 40 to 100 earthquake events in a 6-month period.

  4. Relocalizing a historical earthquake using recent methods: The 10 November 1935 Earthquake near Montserrat, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Niemz, P.; Amorèse, D.

    2016-03-01

    This study investigates the hypothesis of Feuillet et al. (2011) that the hypocenter of the seismic event on November 10, 1935, near Montserrat, Lesser Antilles (MS 6 1/4) (Gutenberg and Richter, 1954) was mislocated by other authors and is actually located in the Montserrat-Havers fault zone. While this proposal was based both on a Ground Motion Prediction Equation and on the assumption that earthquakes in this region are bound to prominent fault systems, our study relies on earthquake localization methods using arrival times of the International Seismological Summary (ISS). Results of our methodology suggest that the hypocenter was actually located at 16.90° N, 62.53° W. This solution is about 25 km north-west of the location proposed by Feuillet et al. (2011), within the Redonda fault system, northward of the Montserrat-Havers fault zone. Because depth phases, which provide valuable constraints on focal depth, are not included in the ISS data set and their reassociation is difficult, the uncertainty in depth is large. Taking into account tectonic constraints and the vertical extent of NonLinLoc's uncertainty region for the preferred solution, we assume that the focus is most probably in the lower crust, between 20 km and the Moho. Our approach shows that the information of the ISS can lead to a reliable solution even without an exhaustive search for seismograms and station bulletins. This is encouraging for a better assessment of seismic and tsunami hazard in the Caribbean, Mexico, South and Central America, where many moderate to large earthquakes occurred in the first half of the 20th century. The limitations of this early phase of seismology, which complicate such relocations, are described in detail in this study.

  5. Earthquakes: hydrogeochemical precursors

    USGS Publications Warehouse

    Ingebritsen, Steven E.; Manga, Michael

    2014-01-01

    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  6. Earthquakes, September-October 1993

    USGS Publications Warehouse

    Person, W.J.

    1993-01-01

    The fatalities in the United States were caused by two earthquakes in southern Oregon on September 21. These earthquakes, both with magnitude 6.0 and separated in time by about 2 hrs, led to the deaths of two people. One of these deaths was apparently due to a heart attack induced by the earthquake.

  7. Children's Ideas about Earthquakes

    ERIC Educational Resources Information Center

    Simsek, Canan Lacin

    2007-01-01

    Earthquake, a natural disaster, is among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced correspondingly. In particular, good training regarding earthquakes, received in primary schools, is considered…

  8. Do Earthquakes Shake Stock Markets?

    PubMed

    Ferreira, Susana; Karali, Berna

    2015-01-01

    This paper examines how major earthquakes affected the returns and volatility of aggregate stock market indices in thirty-five financial markets over the last twenty years. Results show that global financial markets are resilient to shocks caused by earthquakes even if these are domestic. Our analysis reveals that, in a few instances, some macroeconomic variables and earthquake characteristics (gross domestic product per capita, trade openness, bilateral trade flows, earthquake magnitude, a tsunami indicator, distance to the epicenter, and number of fatalities) mediate the impact of earthquakes on stock market returns, resulting in a zero net effect. However, the influence of these variables is market-specific, indicating no systematic pattern across global capital markets. Results also demonstrate that stock market volatility is unaffected by earthquakes, except for Japan.

  9. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  10. Sedimentary Signatures of Submarine Earthquakes: Deciphering the Extent of Sediment Remobilization from the 2011 Tohoku Earthquake and Tsunami and 2010 Haiti Earthquake

    NASA Astrophysics Data System (ADS)

    McHugh, C. M.; Seeber, L.; Moernaut, J.; Strasser, M.; Kanamatsu, T.; Ikehara, K.; Bopp, R.; Mustaque, S.; Usami, K.; Schwestermann, T.; Kioka, A.; Moore, L. M.

    2017-12-01

    The 2004 Sumatra-Andaman Mw9.3 and the 2011 Tohoku (Japan) Mw9.0 earthquakes and tsunamis were huge geological events with major societal consequences. Both were along subduction boundaries and ruptured portions of these boundaries that had been deemed incapable of such events. Submarine strike-slip earthquakes, such as the 2010 Mw7.0 in Haiti, are smaller but may be closer to population centers and can be similarly catastrophic. Both classes of earthquakes remobilize sediment and leave distinct signatures in the geologic record through a wide range of processes that depend on both the environment and earthquake characteristics. Understanding them has the potential of greatly expanding the record of past earthquakes, which is critical for geohazard analysis. Recent events offer precious ground truth about the earthquakes, and short-lived radioisotopes offer invaluable tools to identify the sediments they remobilized. For the 2011 Mw9.0 Japan earthquake, these tools document the spatial extent of remobilized sediment from water depths of 626 m on the forearc slope to trench depths of 8000 m. Subbottom profiles, multibeam bathymetry, and 40 piston cores collected by the R/V Natsushima and R/V Sonne expeditions to the Japan Trench document multiple turbidites and high-density flows. Core tops enriched in excess 210Pb, 137Cs, and 134Cs reveal sediment deposited by the 2011 Tohoku earthquake and tsunami. The thickest deposits (2 m) were documented on a mid-slope terrace and in the trench (4000-8000 m). Sediment was deposited on some terraces (600-3000 m), but shed from the steep forearc slope (3000-4000 m). The 2010 Haiti mainshock ruptured along the southern flank of Canal du Sud and triggered multiple nearshore sediment failures, generated turbidity currents, and stirred fine sediment into suspension throughout this basin. A tsunami was modeled to stem from both sediment failures and tectonics. Remobilized sediment was tracked with short-lived radioisotopes from the nearshore, slope, in fault basins including the

  11. Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones

    USGS Publications Warehouse

    Choy, G.L.; Kirby, S.H.

    2004-01-01

    earthquakes occurring on mature faults. We have identified earthquake pairs in which an interplate-thrust and an intraslab-normal earthquake occurred remarkably close in space and time. The intraslab-normal member of each pair radiated anomalously high amounts of energy compared to its thrust-fault counterpart. These intraslab earthquakes probably ruptured intact slab mantle and are dramatic examples in which Mc (an energy magnitude) is shown to be a far better estimate of the potential for earthquake damage than Mw. This discovery may help explain why loss of life as a result of intraslab earthquakes was greater in the 20th century in Latin America than the fatalities associated with interplate-thrust events that represented much higher total moment release. © 2004 RAS.

  12. Estimating the empirical probability of submarine landslide occurrence

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events, by performing a statistical test on age dates representing the main failure episode of the Holocene Storegga landslide complex.
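
    The Poisson-Gamma conjugate model named above has a closed-form update: with a Gamma(a0, b0) prior on the rate λ and n events observed over T years, the posterior is Gamma(a0 + n, b0 + T), and the chance of at least one event in the next t years follows analytically. A sketch with assumed prior values and made-up counts (not the Santa Barbara Channel or Port Valdez numbers):

        # Empirical-Bayes style Poisson-Gamma update for an event rate.
        a0, b0 = 0.5, 0.0      # assumed weakly informative (improper) prior
        n, T = 4, 10000.0      # e.g., 4 dated landslides in a 10,000 yr record

        a, b = a0 + n, b0 + T  # posterior Gamma(shape=a, rate=b)

        t = 100.0                            # forecast window, years
        p_event = 1.0 - (b / (b + t)) ** a   # P(>=1 event), lambda integrated out
        print(f"posterior mean rate = {a / b:.2e} per yr")
        print(f"P(>=1 event in {t:.0f} yr) = {p_event:.3f}")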

  13. Earthquakes, September-October 1978

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    The months of September and October were somewhat quiet seismically speaking. One major earthquake, magnitude (M) 7.7, occurred in Iran on September 16. In Germany, a magnitude 5.0 earthquake caused damage and considerable alarm to many people in parts of that country. In the United States, the largest earthquake occurred along the California-Nevada border region.

  14. Earthquakes, March-April 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    Two major earthquakes (7.0-7.9) occurred during this reporting period: a magnitude 7.6 in Costa Rica on April 22 and a magnitude 7.0 in the USSR on April 29. Destructive earthquakes hit northern Peru on April 4 and 5. There were no destructive earthquakes in the United States during this period.

  15. Can We Predict Earthquakes?

    ScienceCinema

    Johnson, Paul

    2018-01-16

    The only thing we know for sure about earthquakes is that one will happen again very soon. Earthquakes pose a vital yet puzzling set of research questions that have confounded scientists for decades, but new ways of looking at seismic information and innovative laboratory experiments are offering tantalizing clues to what triggers earthquakes — and when.

  16. Earthquake and Schools. [Videotape].

    ERIC Educational Resources Information Center

    Federal Emergency Management Agency, Washington, DC.

    Designing schools to make them more earthquake resistant and to protect children from the catastrophic collapse of the school building is discussed in this videotape. It reveals that 44 of the 50 U.S. states are vulnerable to earthquakes, but most schools are structurally unprepared to take on the stresses that earthquakes exert. The cost to the…

  17. Earthquakes, September-October 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    There were two major earthquakes (7.0-7.9) during this reporting period. The first was in the Solomon Islands on October 14 and the second was in India on October 19. Earthquake-related deaths were reported in Guatemala and India. There were no significant earthquakes in the United States during the period covered in this report.

  18. Earthquakes in Arkansas and vicinity 1699-2010

    USGS Publications Warehouse

    Dart, Richard L.; Ausbrooks, Scott M.

    2011-01-01

    This map summarizes approximately 300 years of earthquake activity in Arkansas. It is one in a series of similar State earthquake history maps. Work on the Arkansas map was done in collaboration with the Arkansas Geological Survey. The earthquake data plotted on the map are from several sources: the Arkansas Geological Survey, the Center for Earthquake Research and Information, the National Center for Earthquake Engineering Research, and the Mississippi Department of Environmental Quality. In addition to earthquake locations, other materials presented include seismic hazard and isoseismal maps and related text. Earthquakes are a legitimate concern in Arkansas and parts of adjacent states. Arkansas has undergone a number of significant felt earthquakes since 1811. At least two of these events caused property damage: a magnitude 4.7 earthquake in 1931, and a magnitude 4.3 earthquake in 1967. The map shows all historical and instrumentally located earthquakes in Arkansas and vicinity between 1811 and 2010. The largest historic earthquake in the vicinity of the State was an intensity XI event on December 16, 1811, the first earthquake in the New Madrid sequence. This violent event and the earthquakes that followed caused considerable damage to the then sparsely settled region.

  19. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not the common understanding in China when the M 7.9 Wenchuan earthquake struck Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue an imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save lives in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  20. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria: one, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and the response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lend themselves to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
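
    The thresholds quoted above translate directly into a small lookup. In this sketch the two criteria are combined by taking the more severe alert, which is our illustrative choice rather than PAGER's documented logic:

        def eis_alert(fatalities=0, losses_usd=0.0):
            """Map estimated fatalities / dollar losses to an EIS alert color."""
            def tier(x, yellow, orange, red):
                if x >= red:
                    return 3
                if x >= orange:
                    return 2
                if x >= yellow:
                    return 1
                return 0
            worst = max(tier(fatalities, 1, 100, 1000),
                        tier(losses_usd, 1e6, 1e8, 1e9))
            return ["green", "yellow", "orange", "red"][worst]

        print(eis_alert(fatalities=250))    # orange
        print(eis_alert(losses_usd=2.5e9))  # red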

  1. Theory of earthquakes interevent times applied to financial markets

    NASA Astrophysics Data System (ADS)

    Jagielski, Maciej; Kutner, Ryszard; Sornette, Didier

    2017-10-01

    We analyze the probability density function (PDF) of waiting times between financial loss exceedances. The empirical PDFs are fitted with the self-excited Hawkes conditional Poisson process with a long power law memory kernel. The Hawkes process is the simplest extension of the Poisson process that takes into account how past events influence the occurrence of future events. By analyzing the empirical data for 15 different financial assets, we show that the formalism of the Hawkes process used for earthquakes can successfully model the PDF of interevent times between successive market losses.
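
    A self-excited Hawkes process with a power-law memory kernel, as used above, has conditional intensity lambda(t) = mu + sum over past events t_i of K / (t - t_i + c)**(1 + theta). A minimal sketch with illustrative, unfitted parameter values:

        import numpy as np

        # Background rate and kernel parameters (assumed, not fitted values).
        mu, K, c, theta = 0.02, 0.1, 0.5, 0.2

        def hawkes_intensity(t, event_times):
            """Conditional intensity at time t given past event times."""
            past = event_times[event_times < t]
            return mu + np.sum(K / (t - past + c) ** (1.0 + theta))

        events = np.array([3.0, 7.0, 7.5, 20.0])  # past loss-exceedance times (days)
        print(hawkes_intensity(21.0, events))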

  2. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the

  3. Post-earthquake building safety inspection: Lessons from the Canterbury, New Zealand, earthquakes

    USGS Publications Warehouse

    Marshall, J.; Jaiswal, Kishor; Gould, N.; Turner, F.; Lizundia, B.; Barnes, J.

    2013-01-01

    The authors discuss some of the unique aspects and lessons of the New Zealand post-earthquake building safety inspection program that was implemented following the Canterbury earthquake sequence of 2010–2011. The post-event safety assessment program was one of the largest and longest programs undertaken in recent times anywhere in the world. The effort engaged hundreds of engineering professionals throughout the country, and also sought expertise from outside, to perform post-earthquake structural safety inspections of more than 100,000 buildings in the city of Christchurch and the surrounding suburbs. While the building safety inspection procedure implemented was analogous to the ATC 20 program in the United States, many modifications were proposed and implemented in order to assess the large number of buildings that were subjected to strong and variable shaking during a period of two years. This note discusses some of the key aspects of the post-earthquake building safety inspection program and summarizes important lessons that can improve future earthquake response.

  4. Probabilistic Appraisal of Earthquake Hazard Parameters Deduced from a Bayesian Approach in the Northwest Frontier of the Himalayas

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Tsapanos, T. M.; Bayrak, Yusuf; Koravos, G. Ch.

    2013-03-01

    A straightforward Bayesian statistical method is applied in five broad seismogenic source zones of the northwest frontier of the Himalayas to estimate the earthquake hazard parameters (maximum regional magnitude Mmax, β value of the G-R relationship, and seismic activity rate or intensity λ). For this purpose, a reliable earthquake catalogue, homogeneous for MW ≥ 5.0 and complete for the period 1900 to 2010, is compiled. The Hindukush-Pamir Himalaya zone is further divided into two seismic zones of shallow (h ≤ 70 km) and intermediate (h > 70 km) depth according to the variation of seismicity with depth in the subduction zone. The earthquake hazard parameters estimated by the Bayesian approach are more stable and reliable, with lower standard deviations, than those from other approaches, although the technique is more time-consuming. In this study, quantiles of the distributions of true and apparent magnitudes for future time intervals of 5, 10, 20, 50, and 100 years are calculated, with confidence limits, for probability levels of 50, 70, and 90% in all seismogenic source zones. The zones with estimated Mmax greater than 8.0 are the Sulaiman-Kirthar ranges, the Hindukush-Pamir Himalaya, and the Himalayan Frontal Thrust belt, suggesting that these are the most seismically hazardous regions in the examined area. The lowest Mmax (6.44) is calculated for the Northern-Pakistan and Hazara syntaxis zone, which also has the lowest estimated activity rate (0.0023 events/day) of all the zones. The Himalayan Frontal Thrust belt exhibits the highest magnitude (8.01) not to be exceeded, at the 90% probability level, in the next 100 years, indicating that this zone is the most vulnerable to the occurrence of a great earthquake. The results of this study are directly useful for probabilistic seismic hazard assessment in the examined region of the Himalaya.
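
    For readers unfamiliar with the quantile calculation referred to above, the sketch below combines a doubly truncated Gutenberg-Richter magnitude distribution with Poissonian occurrences; the rate, beta, and magnitude bounds are generic stand-ins, not the zone-specific values estimated in the paper.

      import numpy as np

      def magnitude_quantile(p, t_years, rate, m_min, m_max, beta):
          # P(largest magnitude in t_years <= m) = exp(-rate * t * (1 - F(m))),
          # with F the doubly truncated Gutenberg-Richter CDF; solve for m.
          target = 1.0 + np.log(p) / (rate * t_years)  # required value of F(m)
          if target <= 0.0:
              return m_min  # quantile saturates at the lower magnitude bound
          d = 1.0 - np.exp(-beta * (m_max - m_min))
          return m_min - np.log(1.0 - target * d) / beta

      # magnitude not exceeded with 90% probability in the next 100 years
      print(magnitude_quantile(p=0.90, t_years=100.0, rate=0.84,
                               m_min=5.0, m_max=8.0, beta=2.0))  # ~7.8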

  5. Migrating slow slip detected by slow and repeating earthquakes along the Nankai trough, Japan

    NASA Astrophysics Data System (ADS)

    Uchida, N.; Obara, K.; Takagi, R.; Asano, Y.

    2017-12-01

    In the western part of the Nankai trough region, successive occurrences of deep non-volcanic tremors and shallow very low frequency earthquakes (VLFEs) associated with long-term slow slip events (SSEs) were reported in 2003 and 2010. To understand the link between these two seismic slow earthquakes, we identify small repeating earthquakes in and around the region from the waveform similarity of earthquakes observed by NIED Hi-net. The results show that the repeaters are located at 15-30 km depth, between the depth ranges of the shallow VLFEs (≤15 km) and the deep SSEs (≥25 km). They are also located outside the source area of the 1946 Mw 8.3 Nankai earthquake, consistent with the hypothesis that repeaters occur due to stress concentration on a locked patch caused by aseismic slip (creep) in the surrounding area. The long-term trend of aseismic slip estimated from the repeaters shows that the slip rate was faster during the 2-3 years before the 2003 and 2010 episodes. We also found short-term (days to months) accelerations of aseismic slip during the 2010 episode that migrated northward. The migration detected from the repeaters follows the shallow migration of VLFEs and precedes the deep migration of tremors. We therefore infer that, during the 3-year long-term SSE, short-term slow slip migrated about 300 km in 1 month from the shallower, southern part to the deeper, northern part of the plate boundary near the edge of the slip area of the Nankai earthquake. Such long-distance migration is probably related to the large-scale locking of the plate boundary responsible for the Nankai earthquake and to the interseismic stress concentration on the locked area.
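
    A sketch of how cumulative aseismic slip is often inferred from a repeater sequence, using the empirical moment-slip scaling of Nadeau and Johnson (1998); the magnitudes and occurrence times below are hypothetical, and whether this study used exactly this calibration is an assumption here.

      import numpy as np

      def slip_per_event_cm(mw):
          # Nadeau & Johnson (1998): log10 d[cm] = -2.36 + 0.17 * log10 M0,
          # with seismic moment M0 in dyne-cm (log10 M0 = 1.5 * Mw + 16.1).
          log_m0 = 1.5 * mw + 16.1
          return 10.0 ** (-2.36 + 0.17 * log_m0)

      mags = np.array([2.8, 2.9, 2.7, 2.9])               # hypothetical repeaters
      times = np.array([2001.2, 2003.9, 2006.4, 2009.5])  # occurrence years
      cum_slip = np.cumsum(slip_per_event_cm(mags))
      rate = (cum_slip[-1] - cum_slip[0]) / (times[-1] - times[0])
      print(cum_slip, rate, "cm/yr")  # slip history sampled by the repeaters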

  6. Do Earthquakes Shake Stock Markets?

    PubMed Central

    2015-01-01

    This paper examines how major earthquakes affected the returns and volatility of aggregate stock market indices in thirty-five financial markets over the last twenty years. Results show that global financial markets are resilient to shocks caused by earthquakes even if these are domestic. Our analysis reveals that, in a few instances, some macroeconomic variables and earthquake characteristics (gross domestic product per capita, trade openness, bilateral trade flows, earthquake magnitude, a tsunami indicator, distance to the epicenter, and number of fatalities) mediate the impact of earthquakes on stock market returns, resulting in a zero net effect. However, the influence of these variables is market-specific, indicating no systematic pattern across global capital markets. Results also demonstrate that stock market volatility is unaffected by earthquakes, except for Japan. PMID:26197482

  7. Continuing Megathrust Earthquake Potential in northern Chile after the 2014 Iquique Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Herman, M. W.; Barnhart, W. D.; Furlong, K. P.; Riquelme, S.; Benz, H.; Bergman, E.; Barrientos, S. E.; Earle, P. S.; Samsonov, S. V.

    2014-12-01

    The seismic gap theory, which identifies regions of elevated hazard based on a lack of recent seismicity in comparison to other portions of a fault, has successfully explained past earthquakes and is useful for qualitatively describing where future large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which until recently had not ruptured in a megathrust earthquake since a M~8.8 event in 1877. On April 1, 2014, a M 8.2 earthquake occurred within this northern Chile seismic gap, offshore of the city of Iquique; the size and spatial extent of the rupture indicate it was not the earthquake that had been anticipated. Here, we present a rapid assessment of the seismotectonics of the March-April 2014 seismic sequence offshore northern Chile, including analyses of earthquake (fore- and aftershock) relocations, moment tensors, finite fault models, moment deficit calculations, and cumulative Coulomb stress transfer calculations over the duration of the sequence. This ensemble of information allows us to place the current sequence within the context of historic seismicity in the region and to assess areas of remaining and/or elevated hazard. Our results indicate that while accumulated strain has been released for a portion of the northern Chile seismic gap, significant sections have not ruptured in almost 150 years. These observations suggest that large-to-great megathrust earthquakes will occur north and south of the 2014 Iquique sequence sooner than might be expected had the 2014 events ruptured the entire seismic gap.

  8. Auto Correlation Analysis of Coda Waves from Local Earthquakes for Detecting Temporal Changes in Shallow Subsurface Structures: the 2011 Tohoku-Oki, Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Nakahara, Hisashi

    2015-02-01

    For monitoring temporal changes in subsurface structures, I propose to use autocorrelation functions of coda waves from local earthquakes recorded at surface receivers, which probably contain more body waves than surface waves. The use of coda waves requires earthquakes, which decreases the time resolution of the monitoring; nonetheless, it may be possible to monitor subsurface structures with sufficient time resolution in regions of high seismicity. I validate the method on the 2011 Tohoku-Oki, Japan, earthquake (Mw 9.0), for which velocity changes have previously been reported. KiK-net stations in northern Honshu are used in this analysis. For each moderate earthquake, normalized autocorrelation functions of surface records are stacked over time windows in the S-wave coda. Aligning the stacked, normalized autocorrelation functions in time, I search for changes in phase arrival times. Phases at lag times of <1 s are studied because the focus is on changes at shallow depths. Temporal variations in the arrival times are measured at the stations using the stretching method. Clear phase delays are found to be associated with the mainshock and to gradually recover with time. The phase delays are about 10% on average, with a maximum of about 50% at some stations. A deconvolution analysis using surface and subsurface records at the same stations is conducted for validation. The results show that the phase delays from the deconvolution analysis are slightly smaller than those from the autocorrelation analysis, which implies that the phases on the autocorrelations are caused by larger velocity changes at shallower depths. The autocorrelation analysis seems to have an accuracy of several percent, which is much coarser than methods using earthquake doublets and borehole array data, so it may be applicable only to detecting larger changes. In spite of these disadvantages, this analysis is still attractive because it can
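
    A minimal sketch of the stretching measurement mentioned above, implemented as a grid search with linear interpolation; the trace arrays, sampling interval, and search range are placeholders.

      import numpy as np

      def stretching(ref, cur, dt, eps_grid):
          # Interpolate the current trace onto stretched time axes and keep
          # the stretch factor maximizing correlation with the reference.
          # A positive best eps means phases in `cur` arrive late by the
          # fraction eps (for a homogeneous change, dv/v ~ -eps).
          t = np.arange(len(ref)) * dt
          ccs = [np.corrcoef(ref, np.interp(t * (1.0 + eps), t, cur))[0, 1]
                 for eps in eps_grid]
          best = int(np.argmax(ccs))
          return eps_grid[best], ccs[best]

      # usage on stacked autocorrelation functions (placeholder names):
      # eps, cc = stretching(acf_reference, acf_postmainshock, dt=0.01,
      #                      eps_grid=np.linspace(-0.6, 0.6, 241))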

  9. Developmental Trajectories and Predictors of Prosocial Behavior Among Adolescents Exposed to the 2008 Wenchuan Earthquake.

    PubMed

    Qin, Yanyun; Zhou, Ya; Fan, Fang; Chen, Shijian; Huang, Rong; Cai, Rouna; Peng, Ting

    2016-02-01

    This longitudinal study examined the developmental trajectories of prosocial behavior and related predictors among adolescents exposed to the 2008 Wenchuan earthquake. At 6, 18, and 30 months postearthquake, we followed a sample of 1,573 adolescents. Self-report measures were used to assess earthquake exposure, postearthquake negative life events, prosocial behavior, symptoms of posttraumatic stress disorder, depression, anxiety, social support, and coping style. Data were analyzed using growth mixture modeling and multinomial logistic regressions. Four trajectories of postearthquake prosocial behavior were identified in the sample: (a) high/enhancing (35.0%), (b) high/stable (29.4%), (c) low/declining (33.6%), and (d) low/steeply declining (2.0%). Female gender, more social support, and greater positive coping were significantly related to a higher probability of following the high/enhancing trajectory. These findings may help identify adolescents with poor prosocial behavior after exposure to earthquakes so that they can be provided with appropriate interventions. Copyright © 2016 International Society for Traumatic Stress Studies.

  10. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian (Yunnan Province) Ms 6.5 earthquake and the 2014 Kangding Ms 6.3, 2013 Lushan Ms 7.0, and 2008 Wenchuan Ms 8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies appear over a large area around the epicenters. The directions of the gravity-change gradient belts usually agree roughly with the directions of the main fault zones in the study area. Such gravity changes might reflect the increase of crustal stress, as well as significant active tectonic movement and surface deformation along fault zones, during the gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes over larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the areas where gravity changes most. (3) The spatial-temporal gravity changes are very useful in determining the epicenters of coming large earthquakes. Large-scale gravity networks are useful for determining the general areas of coming large earthquakes, whereas local networks with high spatial-temporal resolution are better suited to pinpointing epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, in particular successfully forecasting the locations of their epicenters. Based on the above discussion, we emphasize that medium- to long-term potential for large earthquakes may exist today in areas of the study region showing significant gravity anomalies. Thus, the monitoring

  11. Megacity Megaquakes: Two Near-misses, and the Clues they Leave for Earthquake Interaction

    NASA Astrophysics Data System (ADS)

    Stein, R. S.; Toda, S.

    2013-12-01

    Two recent earthquakes left their mark on cities lying well beyond the mainshock rupture zones, raising questions of their future vulnerability, and about earthquake interaction broadly. The 27 February 2010 M=8.8 Maule earthquake struck the Chilean coast, killing 550 people. Chile's capital of Santiago lies 400 km from the high-slip portion of the rupture, and 100 km beyond its edge. The 11 March 2011 M=9.0 Tohoku oki earthquake struck the coast of Japan, its massive tsunami claiming most of its 18,564 victims. Reminiscent of Santiago, Japan's capital of Tokyo lies 400 km from the high-slip portion of the rupture, and 100 km beyond its edge. Because of this distance, both cities largely escaped damage. But it may not have been a clean get-away: The rate of small shocks beneath each city jumped by a factor of about 10 immediately after its megaquake. At Santiago, the quake rate remains two times higher today than it was before the Maule shock; at Tokyo it is three times higher. What this higher rate of moderate (M<6) quakes portends for the likelihood of large ones is difficult--but imperative--to answer, as Tokyo and Santiago are probably just the most striking cases of a common phenomenon: Seismicity increases well beyond the rupture zone, as also seen in the 1999 Izmit-Düzce and 2010 Darfield-Christchurch sequences. Are the Tokyo and Santiago earthquakes, 100 km from the fault rupture, aftershocks? The seismicity beneath Santiago is occurring on the adjacent unruptured section of the Chile-Peru trench megathrust, whereas shocks beneath Tokyo illuminate a deeper, separate fault system. In both cases, the rate of shocks underwent an Omori decay, although the decay ceased beneath Tokyo about a year after the mainshock. Coulomb calculations suggest that the stress imparted by the nearby megaquakes brought the faults beneath Santiago and Tokyo closer to failure (Lorito et al, Nature Geoscience 2010; Toda and Stein, GRL 2013). So, they are aftershocks in the sense
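
    The Omori decay described above can be made concrete with the modified Omori law, n(t) = K / (t + c)^p; the parameters and background rate below are illustrative only, not fits to the Santiago or Tokyo catalogs.

      def omori_rate(t_days, K=50.0, c=1.0, p=1.0):
          # Modified Omori law: triggered-event rate vs. time since mainshock.
          return K / (t_days + c) ** p

      background = 0.5  # assumed pre-mainshock rate, events/day
      for t in (1.0, 30.0, 365.0, 3 * 365.0):
          ratio = (background + omori_rate(t)) / background
          print(f"{t:6.0f} days: rate is {ratio:5.1f}x background")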

  12. Earthquakes in Mississippi and vicinity 1811-2010

    USGS Publications Warehouse

    Dart, Richard L.; Bograd, Michael B.E.

    2011-01-01

    This map summarizes two centuries of earthquake activity in Mississippi. Work on the Mississippi map was done in collaboration with the Mississippi Department of Environmental Quality, Office of Geology. The earthquake data plotted on the map are from several sources: the Mississippi Department of Environmental Quality, the Center for Earthquake Research and Information, the National Center for Earthquake Engineering Research, and the Arkansas Geological Survey. In addition to earthquake locations, other materials include seismic hazard and isoseismal maps and related text. Earthquakes are a legitimate concern in Mississippi and parts of adjacent States. Mississippi has undergone a number of felt earthquakes since 1811. At least two of these events caused property damage: a magnitude 4.7 earthquake in 1931, and a magnitude 4.3 earthquake in 1967. The map shows all historical and instrumentally located earthquakes in Mississippi and vicinity between 1811 and 2010. The largest historic earthquake in the vicinity of the State was an intensity XI event on December 16, 1811, the first earthquake in the New Madrid sequence. This violent event and the earthquakes that followed caused considerable damage to the then sparsely settled region.

  13. Overestimation of the earthquake hazard along the Himalaya: constraints in bracketing of medieval earthquakes from paleoseismic studies

    NASA Astrophysics Data System (ADS)

    Arora, Shreya; Malik, Javed N.

    2017-12-01

    The Himalaya is one of the most seismically active regions of the world. The occurrence of several large magnitude earthquakes, viz. the 1905 Kangra earthquake (Mw 7.8), the 1934 Bihar-Nepal earthquake (Mw 8.2), the 1950 Assam earthquake (Mw 8.4), the 2005 Kashmir earthquake (Mw 7.6), and the 2015 Gorkha earthquake (Mw 7.8), is testimony to ongoing tectonic activity. In the last few decades, tremendous efforts have been made along the Himalayan arc to understand the patterns of earthquake occurrence, size, extent, and return period. Some of the large magnitude earthquakes produced surface rupture, while others remained blind. Furthermore, owing to the incompleteness of the earthquake catalogue, very few events can be correlated with medieval earthquakes. Based on the existing paleoseismic data, it is difficult to precisely determine the extent of surface rupture of these earthquakes, and likewise of events that occurred in historic times. In this paper, we compile the paleoseismological data, recalibrate the radiocarbon ages from the trenches excavated by previous workers along the entire Himalaya, and compare the resulting earthquake scenarios with earlier interpretations. Our studies suggest that there were multiple earthquake events with overlapping surface ruptures in small patches, with an average rupture length of ~300 km limiting magnitudes to Mw 7.8-8.0 for the Himalayan arc, rather than two or three giant earthquakes rupturing the whole front. The large magnitude Himalayan earthquakes of 1905 Kangra, 1934 Bihar-Nepal, and 1950 Assam all occurred within a time frame of 45 years. If such events were dated paleoseismically, with typical uncertainties of ±50 years, they could easily be mistaken for the remnants of one giant earthquake rupturing the entire Himalayan arc, leading to an overestimation of the seismic hazard in the Himalaya.

  14. Simulating Earthquake Early Warning Systems in the Classroom as a New Approach to Teaching Earthquakes

    NASA Astrophysics Data System (ADS)

    D'Alessio, M. A.

    2010-12-01

    A discussion of P- and S-waves seems a ubiquitous part of studying earthquakes in the classroom. Textbooks from middle school through university level typically define the differences between the waves and illustrate the sense of motion. While many students successfully memorize the differences between wave types (often using the first letter as a memory aid), textbooks rarely give tangible examples of how the two waves would "feel" to a person sitting on the ground. One reason for introducing the wave types is to explain how to calculate earthquake epicenters using seismograms and travel time charts -- very abstract representations of earthquakes. Even when the skill is mastered using paper-and-pencil activities or one of the excellent online interactive versions, locating an epicenter simply does not excite many of our students because it evokes little emotional impact, even in students located in earthquake-prone areas. Despite these limitations, huge numbers of students are mandated to complete the task. At the K-12 level, California requires that all students be able to locate earthquake epicenters in Grade 6; in New York, the skill is a required part of the Regents Examination. Recent innovations in earthquake early warning systems around the globe give us the opportunity to address the same content standard, but with substantially more emotional impact on students. I outline a lesson about earthquakes focused on earthquake early warning systems. The introductory activities include video clips of actual earthquakes and emphasize the differences between the way P- and S-waves feel when they arrive (P arrives first, but is weaker). I include an introduction to the principle behind earthquake early warning (including a summary of possible uses of a few seconds' warning about strong shaking) and show examples from Japan. Students go outdoors to simulate P-waves, S-waves, and occupants of two different cities who are talking to one another on cell phones
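
    The classroom point, that warning time grows with distance because S waves lag P waves, can be quantified with a toy calculation; the velocities and alert latency below are assumed, not taken from the lesson.

      VP, VS = 6.0, 3.5  # assumed crustal P and S velocities, km/s
      LATENCY = 3.0      # assumed detection-plus-alert processing time, s

      def warning_time_s(dist_km):
          # Idealize the alert as issued once P waves reach the site, plus a
          # fixed latency; S waves carry the strong shaking. Negative values
          # correspond to the "blind zone" near the epicenter.
          return dist_km / VS - (dist_km / VP + LATENCY)

      for d in (20, 50, 100, 200):
          print(f"{d:3d} km: {warning_time_s(d):5.1f} s of warning")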

  15. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  16. Intermediate-term earthquake prediction

    USGS Publications Warehouse

    Knopoff, L.

    1990-01-01

    The problems of predicting earthquakes have been attacked by phenomenological methods from prehistoric times to the present. The associations of presumed precursors with large earthquakes have often been remarked upon. The difficulty in identifying whether such correlations are due to chance coincidence or are real precursors is that one usually notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or is in fact absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes

  17. Discrimination of Man-Made Events and Tectonic Earthquakes in Utah Using Data Recorded at Local Distances

    NASA Astrophysics Data System (ADS)

    Tibi, R.; Young, C. J.; Koper, K. D.; Pankow, K. L.

    2017-12-01

    Seismic event discrimination methods exploit the differing characteristics—in terms of amplitude and/or frequency content—of the generated seismic phases among the event types to be classified. Most of the commonly used seismic discrimination methods are designed for regional data recorded at distances of about 200 to 2000 km. Relatively little attention has been focused on discriminants for local distances (<200 km), the range at which the smallest events are recorded. Short-period fundamental-mode Rayleigh waves (Rg) are commonly observed on seismograms of man-made seismic events and of shallow, naturally occurring tectonic earthquakes recorded at local distances. We leverage the well-known observation that Rg amplitude decreases dramatically with increasing event depth to propose a new depth discriminant based on Rg-to-Sg spectral amplitude ratios. The approach is successfully used to discriminate shallow events from deeper tectonic earthquakes in the Utah region recorded at local distances (<150 km) by the University of Utah Seismograph Stations (UUSS) regional seismic network. Using Mood's median test, we obtained probabilities of nearly zero that the median Rg-to-Sg spectral amplitude ratios are the same for shallow events (including both shallow tectonic earthquakes and man-made events) and for deeper earthquakes, indicating a statistically significant difference in the estimated Rg-to-Sg ratios between the two populations. We also observed consistent disparities among the different types of shallow events (e.g., explosions vs. mining-induced events), implying that it may be possible to separate the sub-populations that make up this group. This suggests that, using local-distance Rg-to-Sg spectral amplitude ratios, one can not only discriminate shallow from deeper events but may also be able to discriminate different populations of shallow events. We also experimented with Pg-to-Sg amplitude ratios in multi
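
    A sketch of the two ingredients named above, a band-limited spectral amplitude for forming Rg-to-Sg ratios and Mood's median test (available in scipy); the window bounds, frequency band, and the synthetic log-ratio populations are placeholders.

      import numpy as np
      from scipy import stats

      def band_amplitude(trace, dt, t0, t1, f0, f1):
          # Mean spectral amplitude of a phase window (t0-t1 s) in a band.
          win = trace[int(t0 / dt):int(t1 / dt)]
          win = win * np.hanning(len(win))
          freqs = np.fft.rfftfreq(len(win), dt)
          spec = np.abs(np.fft.rfft(win))
          return spec[(freqs >= f0) & (freqs <= f1)].mean()

      # log10(Rg/Sg) per event would come from two band_amplitude() calls on
      # the Rg and Sg windows; here, two synthetic populations stand in:
      rng = np.random.default_rng(1)
      shallow = rng.normal(0.5, 0.3, 60)   # hypothetical log-ratios
      deep = rng.normal(-0.4, 0.3, 60)
      stat, p, med, tbl = stats.median_test(shallow, deep)
      print(p)  # near zero: the two medians differ significantly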

  18. Have recent earthquakes exposed flaws in or misunderstandings of probabilistic seismic hazard analysis?

    USGS Publications Warehouse

    Hanks, Thomas C.; Beroza, Gregory C.; Toda, Shinji

    2012-01-01

    In a recent Opinion piece in these pages, Stein et al. (2011) offer a remarkable indictment of the methods, models, and results of probabilistic seismic hazard analysis (PSHA). The principal object of their concern is the PSHA map for Japan released by the Japan Headquarters for Earthquake Research Promotion (HERP), which is reproduced by Stein et al. (2011) as their Figure 1 and also here as our Figure 1. It shows the probability of exceedance (also referred to as the “hazard”) of the Japan Meteorological Agency (JMA) intensity 6–lower (JMA 6–) in Japan for the 30-year period beginning in January 2010. JMA 6– is an earthquake-damage intensity measure that is associated with fairly strong ground motion that can be damaging to well-built structures and is potentially destructive to poor construction (HERP, 2005, appendix 5). Reiterating Geller (2011, p. 408), Stein et al. (2011, p. 623) have this to say about Figure 1: The regions assessed as most dangerous are the zones of three hypothetical “scenario earthquakes” (Tokai, Tonankai, and Nankai; see map). However, since 1979, earthquakes that caused 10 or more fatalities in Japan actually occurred in places assigned a relatively low probability. This discrepancy—the latest in a string of negative results for the characteristic model and its cousin the seismic-gap model—strongly suggest that the hazard map and the methods used to produce it are flawed and should be discarded. Given the central role that PSHA now plays in seismic risk analysis, performance-based engineering, and design-basis ground motions, discarding PSHA would have important consequences. We are not persuaded by the arguments of Geller (2011) and Stein et al. (2011) for doing so because important misunderstandings about PSHA seem to have conditioned them. In the quotation above, for example, they have confused important differences between earthquake-occurrence observations and ground-motion hazard calculations.
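
    As background for the probability-of-exceedance language used here, hazard-map values are conventionally Poissonian; a minimal sketch follows (the numeric example is generic, not HERP's actual values).

      import math

      def p_exceed(annual_rate, years):
          # Poisson model: probability of at least one exceedance in `years`.
          return 1.0 - math.exp(-annual_rate * years)

      def rate_from_p(p, years):
          # Invert a stated window probability back to an annual rate.
          return -math.log(1.0 - p) / years

      r = rate_from_p(0.03, 30)   # a generic "3% in 30 years" map value...
      print(r, 1.0 / r)           # ~0.001/yr, i.e. a ~985-year return period
      print(p_exceed(r, 50))      # ~5% in 50 years at the same underlying rate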

  19. Triggering of the Ms = 5.4 Little Skull Mountain, Nevada, earthquake with dynamic strains

    USGS Publications Warehouse

    Gomberg, Joan; Bodin, Paul

    1994-01-01

    We have developed an approach to test the viability of dynamic strains as a triggering mechanism by quantifying the dynamic strain tensor at seismogenic depths. We focus on the dynamic strains at the hypocenter of the Ms = 5.4 Little Skull Mountain (LSM), Nevada, earthquake. This event is noteworthy because it is the largest earthquake demonstrably triggered at remote distances (∼280 km) by the Ms = 7.4 Landers, California, earthquake and because of its ambiguous association with magmatic activity. Our analysis shows that, if dynamic strains initiate remote triggering, the orientation and modes of faulting most favorable for being triggered by a given strain transient change with depth. The geometry of the most probable LSM fault plane was favorably oriented with respect to the geometry of the dynamic strain tensor. We estimate that the magnitude of the peak dynamic strains at the hypocentral depth of the LSM earthquake was ∼4 μstrain (∼0.2 MPa), which is ∼50% smaller than estimates from velocity seismograms recorded at the surface. We suggest that these strains are too small to cause Mohr-Coulomb style failure unless the fault was prestrained to near failure levels, the fault was exceptionally weak, and/or the dynamic strains trigger other processes that lead to failure.
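
    The strain and stress figures quoted can be reproduced with the standard plane-wave approximation, strain ≈ particle velocity / phase velocity; the peak-velocity value below is chosen to be consistent with the quoted numbers, and the phase velocity and modulus are assumed.

      pgv = 0.014       # hypothetical peak particle velocity, m/s
      c = 3.5e3         # assumed S-wave phase velocity, m/s
      strain = pgv / c  # plane-wave approximation: ~4e-6 (4 microstrain)

      G = 5.0e10        # assumed crustal shear modulus, Pa
      stress = G * strain
      print(f"{strain:.1e} strain, {stress / 1e6:.2f} MPa")  # ~0.20 MPa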