Sample records for earthquake forecast testing

  1. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability
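
    A minimal sketch of the kind of likelihood scoring described above, assuming a Poisson likelihood per bin; the rate vectors and bins below are hypothetical illustrations, not the RELM specification:

    ```python
    import numpy as np
    from scipy.stats import poisson

    def log_likelihood(rates, counts):
        """Poisson log-likelihood of observed bin counts given forecast rates."""
        return poisson.logpmf(np.asarray(counts), np.asarray(rates, dtype=float)).sum()

    # Hypothetical forecasts defined over the same four bins, plus observed counts.
    rates_a = np.array([0.02, 0.10, 0.50, 0.05])
    rates_b = np.array([0.05, 0.05, 0.40, 0.17])
    observed = np.array([0, 0, 1, 0])

    # A positive log-likelihood ratio favours forecast A over forecast B.
    llr = log_likelihood(rates_a, observed) - log_likelihood(rates_b, observed)
    print(f"log-likelihood ratio (A vs B): {llr:+.3f}")
    ```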

  2. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.

  3. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355

  4. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
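
    As a rough illustration of the comparison described above, the sketch below computes the empirical skewness and excess kurtosis of annual earthquake counts, the corresponding theoretical Poisson values, and a method-of-moments estimate of the NBD dispersion parameter. The counts and the parametrization are illustrative assumptions, not the paper's catalogues or fitting procedure:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical annual earthquake counts for one catalogue subdivision.
    counts = np.array([110, 95, 140, 88, 160, 102, 130, 91, 175, 99], dtype=float)

    mean, var = counts.mean(), counts.var(ddof=1)
    skew, exkurt = stats.skew(counts), stats.kurtosis(counts)  # kurtosis() returns excess kurtosis

    # Theoretical Poisson values if the rate equalled the sample mean.
    poisson_skew, poisson_exkurt = 1 / np.sqrt(mean), 1 / mean

    # Method-of-moments NBD fit: var = mean + mean**2 / r, so overdispersion implies a finite r.
    r = mean**2 / (var - mean) if var > mean else np.inf

    print(f"var/mean = {var / mean:.2f} (Poisson would give 1), NBD r = {r:.2f}")
    print(f"skewness: data {skew:.2f} vs Poisson {poisson_skew:.2f}")
    print(f"excess kurtosis: data {exkurt:.2f} vs Poisson {poisson_exkurt:.2f}")
    ```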

  5. Prospective testing of Coulomb short-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short term models daily, and allows daily updates of the models. However, lots can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be made by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of

  6. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including the sea area, the Japanese mainland and the Kanto district. We evaluate the performance of the models with the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented in total. These results provide new knowledge concerning statistical forecasting models. We have started a study for constructing a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity in the area extends from shallow depths down to about 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HIST-ETAS models (Ogata, 2011) to see whether those models perform as well as they did in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP

  7. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in the format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, use more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.

  8. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
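
    A minimal sketch of a Gaussian smoothed-seismicity rate model of the kind tested above, assuming projected coordinates in kilometres; the function name and normalisation are illustrative, not the authors' implementation:

    ```python
    import numpy as np

    def smoothed_seismicity(eq_xy, grid_xy, sigma_km=15.0):
        """Gaussian-kernel smoothed seismicity on a grid.

        eq_xy    : (n, 2) projected epicentre coordinates, km
        grid_xy  : (m, 2) projected cell-centre coordinates, km
        sigma_km : smoothing distance (e.g. within the 10-20 km range found optimal)
        """
        d2 = ((grid_xy[:, None, :] - eq_xy[None, :, :]) ** 2).sum(axis=-1)
        kernel = np.exp(-0.5 * d2 / sigma_km**2) / (2 * np.pi * sigma_km**2)
        return kernel.sum(axis=1)  # relative rate per cell; rescale to a target total if needed
    ```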

  9. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is the time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models are used: the Epidemic-Type Aftershock Sequences (ETAS) and the Short-Term Earthquake Probabilities (STEP) models. Here, we report the results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).

  10. Retrospective Evaluation of the Long-Term CSEP-Italy Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Zechar, J. D.; Marzocchi, W.; Wiemer, S.

    2010-12-01

    On 1 August 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of the CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented eighteen five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We considered the twelve time-independent earthquake forecasts among this set and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. Here, we present the results of tests that measure the consistency of the forecasts with the past observations. Besides being an evaluation of the submitted time-independent forecasts, this exercise provided insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between the robustness of results and experiment duration.

  11. A prospective earthquake forecast experiment for Japan

    NASA Astrophysics Data System (ADS)

    Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi

    2013-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the first earthquake forecast testing experiment for the Japan area. We use the unified catalogue compiled by the Japan Meteorological Agency (JMA) as the authoritative catalogue. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the official CSEP suite of tests of forecast performance. In this presentation, we show the results of 5 rounds of the 3-month testing class. The HIST-ETAS7pa, MARFS and RI10K models showed the best scores based on the total log-likelihood in the All Japan, Mainland and Kanto regions, respectively. It also became clear that time dependence of the model parameters was not an effective factor for passing the CSEP consistency tests for the 3-month testing class in any region. In particular, the spatial consistency test was difficult to pass in the All Japan region because of multiple events in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations in all rounds, which resulted in rejections by the consistency test because of overestimation. In the Kanto region, the pass ratio of the consistency tests exceeded 80% for each model, which was associated with well-balanced forecasting of event

  12. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of
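
    One simple way to estimate an error rate like the alpha described above is by simulation: generate synthetic catalogues from the first forecast and see how often the likelihood ratio would nevertheless favour the second. The sketch below is a schematic Monte Carlo version under a Poisson-per-bin assumption, not the exact RELM test procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def ll(rates, counts):
        """Poisson log-likelihood up to a term that depends only on the counts."""
        rates = np.clip(rates, 1e-12, None)  # guard against log(0)
        return np.sum(counts * np.log(rates) - rates)

    def estimate_alpha(rates_a, rates_b, n_sim=10_000):
        """Fraction of catalogues simulated from forecast A whose likelihood
        ratio favours forecast B (a schematic stand-in for alpha)."""
        sims = rng.poisson(rates_a, size=(n_sim, rates_a.size))
        diffs = np.array([ll(rates_b, c) - ll(rates_a, c) for c in sims])
        return float(np.mean(diffs > 0.0))
    ```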

  13. A prospective earthquake forecast experiment in the western Pacific

    NASA Astrophysics Data System (ADS)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
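
    The comparative measures mentioned above can be illustrated with per-event log-likelihood differences between two forecasts; the numbers below are made up for illustration, and the tests are simply the one-sample Student's t-test and the Wilcoxon signed-rank test applied to those differences:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical information gains (log-likelihood differences, model A minus model B),
    # one value per observed target earthquake.
    ll_diff = np.array([0.31, -0.12, 0.54, 0.08, 0.22, -0.05, 0.41, 0.17])

    t_stat, t_p = stats.ttest_1samp(ll_diff, 0.0)  # Student's t-test on the mean difference
    w_stat, w_p = stats.wilcoxon(ll_diff)          # non-parametric signed-rank analogue

    print(f"t-test: p = {t_p:.3f}; Wilcoxon: p = {w_p:.3f}")
    ```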

  14. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments were completed for 92 rounds for 1-day, 6 rounds for 3-month, and 3 rounds for 1-year classes. For 1-day testing class all models passed all the CSEP's evaluation tests at more than 90% rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed a good performance for magnitude forecasting. On the other hand, observation is hardly consistent in space distribution with most models when many earthquakes occurred at a spot. Now we prepare the 3-D forecasting experiment with a depth range of 0 to 100 km in Kanto region. The testing center is improving an evaluation system for 1-day class experiment to finish forecasting and testing results within one day. The special issue of 1st part titled Earthquake Forecast

  15. Interevent times in a new alarm-based earthquake forecasting model

    NASA Astrophysics Data System (ADS)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss rate and the alarm rate. This testing indicates that the MR forecasting technique performs well over long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. In the short term, our model succeeded in forecasting the
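
    A minimal sketch of computing an MR-like score from a catalogue's event times, taking MR as the inverse index of dispersion of interevent times; the exact normalisation and the ERS sampling used in the paper are not reproduced here, and the times below are hypothetical:

    ```python
    import numpy as np

    def moment_ratio(event_times):
        """Interevent-time moment ratio, here taken as mean/variance
        (the inverse of the index of dispersion)."""
        dt = np.diff(np.sort(np.asarray(event_times, dtype=float)))
        return dt.mean() / dt.var(ddof=1)

    # Hypothetical occurrence times (years) of M >= 6 events in one spatial cell.
    times = [1921.3, 1934.8, 1948.1, 1950.2, 1968.9, 1975.4, 1990.7, 2003.2, 2011.0]
    print(f"MR = {moment_ratio(times):.3f}")
    ```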

  16. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2010-12-01

    timely, and they need to convey the epistemic uncertainties in the operational forecasts. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of earthquake forecasts. (c) The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing in a CSEP-type environment against established long-term forecasts and a wide variety of alternative, time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in PSHA. (e) Alert procedures should be standardized to facilitate decisions at different levels of government and among the public, based in part on objective analysis of costs and benefits. (f) In establishing alert procedures, consideration should also be made of the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that can lead to informal predictions and misinformation.

  17. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and
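
    The ROC evaluation mentioned above can be sketched as follows for a gridded forecast: cells are alarmed in decreasing order of forecast score, and the hit rate is traced against the false-alarm rate (function and variable names here are illustrative, not the authors' code):

    ```python
    import numpy as np

    def roc_curve(cell_scores, cell_has_event):
        """Hit rate vs false-alarm rate as the alarm threshold sweeps through
        the forecast scores of the grid cells."""
        cell_has_event = np.asarray(cell_has_event, dtype=bool)
        order = np.argsort(cell_scores)[::-1]          # alarm the highest-scoring cells first
        hits = np.cumsum(cell_has_event[order])
        falses = np.cumsum(~cell_has_event[order])
        return falses / (~cell_has_event).sum(), hits / cell_has_event.sum()
    ```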

  18. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
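
    As one concrete example of a consistency test of the kind described here, the sketch below follows the spirit of the CSEP number (N) test: it asks whether the observed number of target earthquakes is plausible under the forecast's total expected rate, assuming a Poisson distribution of counts (the numbers in the example are hypothetical):

    ```python
    from scipy.stats import poisson

    def n_test(total_forecast_rate, n_observed):
        """Quantile probabilities of the observed event count under a Poisson
        forecast total (in the spirit of the CSEP N-test)."""
        delta1 = 1.0 - poisson.cdf(n_observed - 1, total_forecast_rate)  # P(N >= n_obs)
        delta2 = poisson.cdf(n_observed, total_forecast_rate)            # P(N <= n_obs)
        return delta1, delta2

    # Example: a forecast expecting 24.3 events in a window where 31 were observed.
    print(n_test(24.3, 31))
    ```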

  19. Application of a long-range forecasting model to earthquakes in the Japan mainland testing region

    NASA Astrophysics Data System (ADS)

    Rhoades, David A.

    2011-03-01

    The Every Earthquake a Precursor According to Scale (EEPAS) model is a long-range forecasting method which has been previously applied to a number of regions, including Japan. The Collaboratory for the Study of Earthquake Predictability (CSEP) forecasting experiment in Japan provides an opportunity to test the model at lower magnitudes than previously and to compare it with other competing models. The model sums contributions to the rate density from past earthquakes based on predictive scaling relations derived from the precursory scale increase phenomenon. Two features of the earthquake catalogue in the Japan mainland region create difficulties in applying the model, namely magnitude-dependence in the proportion of aftershocks and in the Gutenberg-Richter b-value. To accommodate these features, the model was fitted separately to earthquakes in three different target magnitude classes over the period 2000-2009. There are some substantial unexplained differences in parameters between classes, but the time and magnitude distributions of the individual earthquake contributions are such that the model is suitable for three-month testing at M ≥ 4 and for one-year testing at M ≥ 5. In retrospective analyses, the mean probability gain of the EEPAS model over a spatially smoothed seismicity model increases with magnitude. The same trend is expected in prospective testing. The Proximity to Past Earthquakes (PPE) model has been submitted to the same testing classes as the EEPAS model. Its role is that of a spatially-smoothed reference model, against which the performance of time-varying models can be compared.
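
    For reference, one common definition of the mean probability gain per earthquake quoted above derives it from the log-likelihood difference between a model and its reference, as in the short helper below (written here as an illustration; the numbers are hypothetical):

    ```python
    import numpy as np

    def probability_gain(ll_model, ll_reference, n_events):
        """Mean probability gain per earthquake of a model over a reference model
        (e.g. EEPAS relative to a spatially smoothed seismicity model)."""
        return float(np.exp((ll_model - ll_reference) / n_events))

    print(probability_gain(ll_model=-245.7, ll_reference=-252.3, n_events=12))  # illustrative numbers
    ```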

  20. Short- and Long-Term Earthquake Forecasts Based on Statistical Models

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner

    2017-04-01

    The epidemic-type aftershock sequences (ETAS) models have been experimentally used to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and for the 2012 Emilia earthquake sequence. These forecasts represented the first two pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy, which was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the Seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. Communication to people is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy before opening the information to the public. With regard to long-term time-dependent earthquake forecasting, the application of a newly developed simulation algorithm to the Calabria region reproduced typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.

  1. Statistical earthquake focal mechanism forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2014-04-01

    Forecasts of the focal mechanisms of future shallow (depth 0-70 km) earthquakes are important for seismic hazard estimates and for Coulomb stress and other models of earthquake occurrence. Here we report on a high-resolution global forecast of earthquake rate density as a function of location, magnitude and focal mechanism. In previous publications we reported forecasts of 0.5° spatial resolution, covering the latitude range from -75° to +75°, based on the Global Central Moment Tensor earthquake catalogue. In the new forecasts we have improved the spatial resolution to 0.1° and extended the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each grid point. Simultaneously, we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms, using the method of Kagan & Jackson proposed in 1994. This average angle reveals the level of tectonic complexity of a region and indicates the accuracy of the prediction. The procedure becomes problematic where longitude lines are not approximately parallel, and where shallow earthquakes are so sparse that an adequate sample spans very large distances. North or south of 75°, the azimuths of points 1000 km away may vary by about 35°. We solved this problem by calculating focal mechanisms on a plane tangent to the Earth's surface at each forecast point, correcting for the rotation of the longitude lines at the locations of earthquakes included in the averaging. The corrections are negligible between -30° and +30° latitude, but outside that band uncorrected rotations can be significantly off. Improved forecasts at 0.5° and 0.1° resolution are posted at http://eq.ess.ucla.edu/kagan/glob_gcmt_index.html.

  2. Measuring the effectiveness of earthquake forecasting in insurance strategies

    NASA Astrophysics Data System (ADS)

    Mignan, A.; Muir-Wood, R.

    2009-04-01

    Given the difficulty of judging whether the skill of a particular methodology of earthquake forecasts is offset by the inevitable false alarms and missed predictions, it is important to find a means to weigh the successes and failures according to a common currency. Rather than judge subjectively the relative costs and benefits of predictions, we develop a simple method to determine if the use of earthquake forecasts can increase the profitability of active financial risk management strategies employed in standard insurance procedures. Three types of risk management transactions are employed: (1) insurance underwriting, (2) reinsurance purchasing and (3) investment in CAT bonds. For each case premiums are collected based on modelled technical risk costs and losses are modelled for the portfolio in force at the time of the earthquake. A set of predetermined actions follow from the announcement of any change in earthquake hazard, so that, for each earthquake forecaster, the financial performance of an active risk management strategy can be compared with the equivalent passive strategy in which no notice is taken of earthquake forecasts. Overall performance can be tracked through time to determine which strategy gives the best long term financial performance. This will be determined by whether the skill in forecasting the location and timing of a significant earthquake (where loss is avoided) is outweighed by false predictions (when no premium is collected). This methodology is to be tested in California, where catastrophe modeling is reasonably mature and where a number of researchers issue earthquake forecasts.

  3. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  4. Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.

    2013-12-01

    Earthquake forecasts have been computed by a variety of countries world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (Rundle et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. This method computes large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space so that global forecasts can be computed in real time presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving up these datasets over the web on web-based platforms such as those at www.quakesim.org, www.e-decider.org, and www.openhazards.com.

  5. Prospective Evaluation of the Global Earthquake Activity Rate Model (GEAR1) Earthquake Forecast: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schorlemmer, Danijel; Beutin, Thomas

    2017-04-01

    The Global Earthquake Activity Rate Model (GEAR1) is a hybrid seismicity model, constructed from a loglinear combination of smoothed seismicity from the Global Centroid Moment Tensor (CMT) earthquake catalog and geodetic strain rates (Global Strain Rate Map, version 2.1). For the 2005-2012 retrospective evaluation period, GEAR1 outperformed both parent strain rate and smoothed seismicity forecasts. Since 1 October 2015, GEAR1 has been prospectively evaluated by the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center. Here, we present initial one-year test results of the GEAR1, GSRM and GSRM2.1 forecasts, as well as a localized evaluation of GEAR1 performance. The models were evaluated on the consistency in number (N-test), spatial (S-test) and magnitude (M-test) distribution of forecasted and observed earthquakes, as well as overall data consistency (CL-, L-tests). Performance at target earthquake locations was compared between models using the classical paired T-test and its non-parametric equivalent, the W-test, to determine if one model could be rejected in favor of another at the 0.05 significance level. For the evaluation period from 1 October 2015 to 1 October 2016, the GEAR1, GSRM and GSRM2.1 forecasts pass all CSEP likelihood tests. Comparative test results show statistically significant improvement of GEAR1 performance over both strain rate-based forecasts, both of which can be rejected in favor of GEAR1. Using point process residual analysis, we investigate the spatial distribution of differences in GEAR1, GSRM and GSRM2.1 model performance, to identify regions where the GEAR1 model should be adjusted, which could not be inferred from the CSEP test results. Furthermore, we investigate whether the optimal combination of smoothed seismicity and strain rates remains stable over space and time.
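
    A minimal sketch of a log-linear (multiplicative) hybrid of two parent forecasts in the spirit of GEAR1; the exponent value and the renormalisation to the seismicity parent's total are illustrative assumptions, not the published calibration:

    ```python
    import numpy as np

    def loglinear_hybrid(seismicity_rate, strain_rate, d=0.6):
        """Cell-wise hybrid proportional to seismicity**d * strain**(1 - d),
        rescaled so the total expected earthquake number matches the
        smoothed-seismicity parent (an illustrative choice)."""
        seismicity_rate = np.asarray(seismicity_rate, dtype=float)
        strain_rate = np.asarray(strain_rate, dtype=float)
        hybrid = seismicity_rate**d * strain_rate**(1.0 - d)
        return hybrid * seismicity_rate.sum() / hybrid.sum()
    ```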

  6. Earthquake forecasts for the CSEP Japan experiment based on the RI algorithm

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.

    2011-03-01

    An earthquake forecast testing experiment for Japan, the first of its kind, is underway within the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) under a controlled environment. Here we give an overview of the earthquake forecast models, based on the RI algorithm, which we have submitted to the CSEP Japan experiment. Models have been submitted to a total of 9 categories, corresponding to 3 testing classes (3 years, 1 year, and 3 months) and 3 testing regions. The RI algorithm is originally a binary forecast system based on the working assumption that large earthquakes are more likely to occur in the future at locations of higher seismicity in the past. It is based on simple counts of the number of past earthquakes, which is called the Relative Intensity (RI) of seismicity. To improve its forecast performance, we first expand the RI algorithm by introducing spatial smoothing. We then convert the RI representation from a binary system to a CSEP-testable model that produces forecasts for the number of earthquakes of predefined magnitudes. We use information on past seismicity to tune the parameters. The final submittal consists of 36 executable computer codes: 4 variants corresponding to different smoothing parameters for each of the 9 categories. They will help to elucidate which categories and which smoothing parameters are the most meaningful for the RI hypothesis. The main purpose of our participation in the experiment is to better understand the significance of the relative intensity of seismicity for earthquake forecastability in Japan.
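
    A schematic version of the steps described above (count past earthquakes per cell, smooth spatially, rescale to an expected number of target events); this is a sketch, not one of the 36 submitted codes, and the smoothing width is an assumption:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ri_rate_forecast(past_counts, expected_total, smoothing_cells=2.0):
        """Relative-Intensity-style gridded rate forecast: smooth the map of past
        earthquake counts and scale it to the expected number of target events."""
        smoothed = gaussian_filter(np.asarray(past_counts, dtype=float), sigma=smoothing_cells)
        return expected_total * smoothed / smoothed.sum()
    ```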

  7. Retrospective stress-forecasting of earthquakes

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Crampin, Stuart

    2015-04-01

    Observations of changes in azimuthally varying shear-wave splitting (SWS) above swarms of small earthquakes monitor stress-induced changes to the stress-aligned vertical microcracks pervading the upper crust, lower crust, and uppermost ~400 km of the mantle. (The microcracks are intergranular films of hydrolysed melt in the mantle.) Earthquakes release stress, and an appropriate amount of stress for the relevant magnitude must accumulate before each event. Iceland lies on an extension of the Mid-Atlantic Ridge where, uniquely, two transform zones run onshore. These onshore transform zones provide semi-continuous swarms of small earthquakes, making this the only place worldwide where SWS can be routinely monitored. Elsewhere SWS must be monitored above temporally-active occasional swarms of small earthquakes, or in infrequent SKS and other teleseismic reflections from the mantle. Observations of changes in SWS time-delays are attributed to stress-induced changes in crack aspect-ratios, allowing stress-accumulation and stress-relaxation to be identified. While monitoring SWS in SW Iceland in 1998, stress accumulation before an impending earthquake was recognised, and emails were exchanged between the University of Edinburgh (EU) and the Iceland Meteorological Office (IMO). On 10 November 1998, EU emailed IMO that an M5 earthquake could occur soon on a seismically active fault plane where seismicity was still continuing following an M5.1 earthquake six months earlier. Three days later, IMO emailed EU that an M5 earthquake had just occurred on the specified fault plane. We suggest this was a successful earthquake stress-forecast; we refer to the procedure as stress-forecasting earthquakes, as opposed to predicting or forecasting, to emphasise the different formalism. Lack of funds has prevented us from monitoring SWS on Icelandic seismograms; however, we have identified similar characteristic behaviour of SWS time-delays above swarms of small earthquakes, which has enabled us to

  8. Operational Earthquake Forecasting and Decision-Making in a Low-Probability Environment

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; the International Commission on Earthquake ForecastingCivil Protection

    2011-12-01

    for reliability and skill by retrospective testing, and the models should be under continuous prospective testing against long-term forecasts and alternative time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in probabilistic seismic hazard analysis. (e) Alert procedures should be standardized to facilitate decisions at different levels of government, based in part on objective analysis of costs and benefits. (f) In establishing alert protocols, consideration should also be given to the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that lead to informal predictions and misinformation. Formal OEF procedures based on probabilistic forecasting appropriately separate hazard estimation by scientists from the decision-making role of civil protection authorities. The prosecution of seven Italian scientists on manslaughter charges stemming from their actions before the L'Aquila earthquake makes clear why this separation should be explicit in defining OEF protocols.

  9. Purposes and methods of scoring earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2010-12-01

    Studies of earthquake prediction or forecasting serve two kinds of purposes: one is to give a systematic estimation of earthquake risk in a particular region and period in order to advise governments and enterprises on reducing disasters; the other is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first purpose, a complete score is necessary, while for the latter, a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known model, is needed. This study reviews different scoring methods for evaluating the performance of earthquake prediction and forecasts. In particular, the recently developed gambling scoring method shows its capacity for finding good points in an earthquake prediction algorithm or model that are not present in a reference model, even if its overall performance is no better than that of the reference model.

  10. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has therefore become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the present status of the process is controlled by the past events themselves (self-exciting) and by all external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
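
    A schematic conditional intensity of the combined type described above, with a background rate, an ETAS-style self-exciting sum over past earthquakes, and an exponentially decaying term driven by past non-seismic observations; all parameter names, default values and functional forms here are illustrative assumptions rather than the model being developed:

    ```python
    import numpy as np

    def conditional_intensity(t, eq_times, eq_mags, obs_times,
                              mu=0.1, K=0.05, c=0.01, p=1.1, a=1.0, m0=4.0, w=0.5):
        """lambda(t) = background + self-exciting term (past earthquakes)
                     + mutually exciting term (past non-seismic observations)."""
        eq_times, eq_mags = np.asarray(eq_times), np.asarray(eq_mags)
        obs_times = np.asarray(obs_times)

        past_eq = eq_times < t
        self_term = np.sum(K * np.exp(a * (eq_mags[past_eq] - m0))
                           / (t - eq_times[past_eq] + c) ** p)

        past_obs = obs_times < t
        external_term = np.sum(w * np.exp(-(t - obs_times[past_obs])))

        return mu + self_term + external_term
    ```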

  11. Earthquake forecasting during the complex Amatrice-Norcia seismic sequence.

    PubMed

    Marzocchi, Warner; Taroni, Matteo; Falcone, Giuseppe

    2017-09-01

    Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016-2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences.

  13. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates, and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, and the earthquake slip speed are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters, observed in the field and measured in the lab, on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.

  14. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow the study of extreme events and of the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigating earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

  15. Earthquake focal mechanism forecasting in Italy for PSHA purposes

    NASA Astrophysics Data System (ADS)

    Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola

    2018-01-01

    In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future earthquakes. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming at reducing epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity rates, the forecast focal mechanism of the next large earthquakes as input data. The data set used for this purpose consists of focal mechanisms taken from the latest stress map release for Italy, containing 392 well-constrained solutions of events from 1908 to 2015 with Mw ≥ 4 and depths from 0 down to 40 km. The data set includes polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015 it takes into account only the Centroid Moment Tensor (CMT)-like earthquake focal solutions, for data homogeneity. The forecasting model is rooted in the Total Weighted Moment Tensor concept, which weights the information of past focal mechanisms evenly distributed in space according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the model estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large earthquakes, the expected moment tensor, and the related maximum horizontal stress orientation. These results will be available for the new PSHA model for Italy under development. Finally, to evaluate the reliability of the forecasts, we test them against an independent data set consisting of some of the strongest earthquakes (Mw ≥ 3.9) that occurred during 2016 in different Italian tectonic provinces.
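    The Total Weighted Moment Tensor machinery itself is not reproduced here; the sketch below is only a simplified stand-in that classifies mechanisms by rake angle and forms distance- and magnitude-weighted style probabilities for a single grid cell. The function names, the weighting form, and the example events are assumptions.

```python
import numpy as np

def style_from_rake(rake_deg):
    """Crude classification of faulting style from rake (Aki & Richards convention)."""
    r = ((rake_deg + 180) % 360) - 180
    if 45 <= r <= 135:
        return "reverse"
    if -135 <= r <= -45:
        return "normal"
    return "strike-slip"

def cell_style_probabilities(cell_lon, cell_lat, events, d0_km=50.0):
    """Distance- and magnitude-weighted probabilities of normal/reverse/strike-slip
    mechanisms for one grid cell. events: list of (lon, lat, mag, rake)."""
    weights = {"normal": 0.0, "reverse": 0.0, "strike-slip": 0.0}
    for lon, lat, mag, rake in events:
        # flat-Earth distance approximation, adequate for a local data set
        dx = (lon - cell_lon) * 111.0 * np.cos(np.radians(cell_lat))
        dy = (lat - cell_lat) * 111.0
        dist = np.hypot(dx, dy)
        w = 10 ** (1.5 * mag) * np.exp(-dist / d0_km)   # assumed weighting form
        weights[style_from_rake(rake)] += w
    total = sum(weights.values())
    return {k: (v / total if total > 0 else np.nan) for k, v in weights.items()}

events = [(13.4, 42.3, 6.0, -85.0), (13.6, 42.5, 5.1, -95.0), (13.0, 42.0, 4.2, 10.0)]
print(cell_style_probabilities(13.5, 42.4, events))
```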

  16. Prospectively Evaluating the Collaboratory for the Study of Earthquake Predictability: An Evaluation of the UCERF2 and Updated Five-Year RELM Forecasts

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria

    2016-04-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying consistent log-likelihoods (L-test) and magnitude distributions (M-test) with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W- tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP
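    The consistency tests cited above include a number (N) test comparing the observed earthquake count with the forecast total. A minimal sketch under the usual Poisson assumption is shown below; the two-sided pass criterion and significance level are illustrative choices, not the exact CSEP implementation.

```python
from scipy.stats import poisson

def n_test(n_forecast, n_observed, alpha=0.05):
    """Poisson number test: is the observed event count consistent with the
    forecast's total expected rate? Two one-sided quantile scores in the style of
    CSEP evaluations (Poisson assumption; thresholds are illustrative)."""
    delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)  # prob. of >= n_obs events
    delta2 = poisson.cdf(n_observed, n_forecast)            # prob. of <= n_obs events
    passes = (delta1 > alpha / 2) and (delta2 > alpha / 2)
    return delta1, delta2, passes

# Example: a forecast expecting 22.5 events when 31 were observed
print(n_test(22.5, 31))
```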

  17. The Value, Protocols, and Scientific Ethics of Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Jordan, Thomas H.

    2013-04-01

    Earthquakes are different from other common natural hazards because precursory signals diagnostic of the magnitude, location, and time of impending seismic events have not yet been found. Consequently, the short-term, localized prediction of large earthquakes at high probabilities with low error rates (false alarms and failures-to-predict) is not yet feasible. An alternative is short-term probabilistic forecasting based on empirical statistical models of seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains up to 1000 relative to long-term forecasts. The value of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing operational forecasting protocols in this sort of "low-probability environment." This paper will explore the complex interrelations among the valuation of low-probability earthquake forecasting, which must account for social intangibles; the protocols of operational forecasting, which must factor in large uncertainties; and the ethics that guide scientists as participants in the forecasting process, who must honor scientific principles without doing harm. Earthquake forecasts possess no intrinsic societal value; rather, they acquire value through their ability to influence decisions made by users seeking to mitigate seismic risk and improve community resilience to earthquake disasters. According to the recommendations of the International Commission on Earthquake Forecasting (www.annalsofgeophysics.eu/index.php/annals/article/view/5350), operational forecasting systems should appropriately separate the hazard-estimation role of scientists from the decision-making role of civil protection authorities and individuals. They should

  18. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
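    A minimal sketch of the contingency-table bookkeeping behind such ROC comparisons is given below; the grid, alarm set, and event cells are invented, and the hit and false-alarm rates are the generic ROC coordinates rather than the exact statistics of the study.

```python
def contingency(alarm_cells, event_cells, all_cells):
    """Build a 2x2 contingency table for a binary forecast over spatial cells:
    a = hits, b = false alarms, c = misses, d = correct negatives."""
    alarm, events, cells = set(alarm_cells), set(event_cells), set(all_cells)
    a = len(alarm & events)
    b = len(alarm - events)
    c = len(events - alarm)
    d = len(cells - alarm - events)
    hit_rate = a / (a + c) if a + c else float("nan")          # ROC y-axis
    false_alarm_rate = b / (b + d) if b + d else float("nan")  # ROC x-axis
    return (a, b, c, d), hit_rate, false_alarm_rate

# Invented 10x10 grid with an alarm region and three observed event cells
cells = [(i, j) for i in range(10) for j in range(10)]
alarms = [(i, j) for i, j in cells if i + j < 8]
events = [(1, 2), (5, 5), (9, 9)]
print(contingency(alarms, events, cells))
```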

  19. Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre

    NASA Astrophysics Data System (ADS)

    Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip

    2013-04-01

    Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate Rock Physicist (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in

  20. Operational Earthquake Forecasting of Aftershocks for New England

    NASA Astrophysics Data System (ADS)

    Ebel, J.; Fadugba, O. I.

    2015-12-01

    Although the forecasting of mainshocks is not possible, recent research demonstrates that probabilistic forecasts of expected aftershock activity following moderate and strong earthquakes are possible. Previous work has shown that aftershock sequences in intraplate regions behave similarly to those in California, and thus the operational aftershock forecasting methods currently employed in California can be adopted for use in areas of the eastern U.S. such as New England. In our application, immediately after a felt earthquake in New England, a forecast of expected aftershock activity for the next 7 days will be generated based on a generic aftershock activity model. Approximately 24 hours after the mainshock, the parameters of the aftershock model will be updated using the aftershock activity observed up to that point in time, and a new forecast of expected aftershock activity for the next 7 days will be issued. The forecast will estimate the average number of weak, felt aftershocks and the average expected number of aftershocks based on the aftershock statistics of past New England earthquakes. The forecast will also estimate the probability that an earthquake stronger than the mainshock will take place during the next 7 days. The aftershock forecast will specify the expected aftershock locations as well as the areas over which aftershocks of different magnitudes could be felt. The system will use web pages, email and text messages to distribute the aftershock forecasts. For protracted aftershock sequences, new forecasts will be issued on a regular basis, such as weekly. Initially, the distribution system of the aftershock forecasts will be limited, but it will later be expanded as experience with and confidence in the system grows.
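    Generic aftershock forecasts of this kind are commonly built on the Reasenberg and Jones (1989) rate model, which combines Omori decay with a Gutenberg-Richter magnitude distribution. The sketch below computes the expected number of aftershocks and the probability of at least one above a chosen magnitude; the parameter values are generic placeholders, not calibrated New England values.

```python
import numpy as np

def rj_expected_number(mag_main, mag_min, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with M >= mag_min in the window [t1, t2] days
    after a mainshock, using the Reasenberg & Jones (1989) rate
    lambda(t, M) = 10**(a + b*(Mm - M)) / (t + c)**p.
    Parameter values here are generic placeholders, not a calibrated regional set."""
    productivity = 10 ** (a + b * (mag_main - mag_min))
    if abs(p - 1.0) < 1e-9:
        time_integral = np.log((t2 + c) / (t1 + c))
    else:
        time_integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return productivity * time_integral

def rj_probability(mag_main, mag_min, t1, t2, **kw):
    """Probability of at least one such aftershock, assuming a Poisson process."""
    return 1.0 - np.exp(-rj_expected_number(mag_main, mag_min, t1, t2, **kw))

# Example: chance of an M>=5 aftershock in the 7 days after an M5.8 mainshock
print(rj_probability(5.8, 5.0, 0.0, 7.0))
```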

  1. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

    ,

    2008-01-01

    California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).

  2. From integrated observation of pre-earthquake signals towards physical-based forecasting: A prospective test experiment

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Tramutoli, V.; Lee, L.; Liu, J. G.; Hattori, K.; Kafatos, M.

    2013-12-01

    We are conducting an integrated study involving multi-parameter observations over different seismo-tectonic regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters, namely: gas discharge; thermal infrared radiation; ionospheric electron concentration; and atmospheric temperature and humidity, which we suppose are associated with the earthquake preparation phase. We intend to test, in prospective mode, the set of geophysical measurements for different regions of active earthquakes and volcanoes. In 2012-13 we established a collaborative framework with the leading projects PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) for coordinated measurements and prospective validation over seven test regions: Southern California (USA), Eastern Honshu (Japan), Italy, Turkey, Greece, Taiwan (ROC), Kamchatka and Sakhalin (Russia). The current experiment provided a 'stress test' opportunity to validate the physically based approach in real time over regions of high seismicity. Our initial results are: (1) Prospective tests have shown the presence, in real time, of anomalies in the atmosphere before most of the significant (M>5.5) earthquakes in all regions; (2) The false-alarm rate differs for each region, varying between 50% (Italy, Kamchatka and California) and 25% (Taiwan and Japan), with a significant reduction of false positives when at least two parameters are used together; (3) One of the most complex problems, which is still open, is the systematic collection and real-time integration of pre-earthquake observations. Our findings suggest that physically based short-term forecasting is feasible and that more tests are needed. We discuss the physical concept we used, the future integration of data observations and related developments.

  3. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock

  4. Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie

    2014-05-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, the surface rupture, and published inversions, and we find significant variation in the relative performance of the models depending upon the input data.
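    The Coulomb component of such hybrid models rests on the change in Coulomb failure stress resolved on receiver faults. A one-line sketch of that quantity is given below; the sign convention and the effective friction value are stated in the comments and are illustrative.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault:
    dCFS = d_shear + mu_eff * d_normal,
    with d_shear resolved in the fault's slip direction and d_normal positive for
    unclamping (reduced compression). mu_eff is an effective friction coefficient.
    Positive dCFS brings the fault closer to failure."""
    return d_shear + mu_eff * d_normal

# Example: 0.2 MPa of shear loading partly offset by 0.1 MPa of clamping
print(coulomb_stress_change(0.2, -0.1))   # 0.16 MPa
```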

  5. Gambling scores for earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang

    2010-04-01

    This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
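    A minimal sketch of the 'fair rule' idea is shown below: a bet of r points on an event to which the reference model assigns probability p0 returns r(1 - p0)/p0 on success and -r on failure, so the expected gain under the reference model is zero. The exact bookkeeping of the published method may differ; the payoff rule and example numbers here are illustrative.

```python
def gambling_score(bets):
    """Total reputation change for a forecaster under a 'fair rule' against a
    reference model: betting r points that an event occurs, when the reference
    model gives it probability p0, returns r*(1 - p0)/p0 on success and -r on
    failure, so the expected gain under the reference model is zero.
    bets: iterable of (r, p0, occurred)."""
    total = 0.0
    for r, p0, occurred in bets:
        total += r * (1.0 - p0) / p0 if occurred else -r
    return total

# Example: three one-point bets against a reference that deems each event unlikely
print(gambling_score([(1.0, 0.05, True), (1.0, 0.10, False), (1.0, 0.02, False)]))
```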

  6. International Aftershock Forecasting: Lessons from the Gorkha Earthquake

    NASA Astrophysics Data System (ADS)

    Michael, A. J.; Blanpied, M. L.; Brady, S. R.; van der Elst, N.; Hardebeck, J.; Mayberry, G. C.; Page, M. T.; Smoczyk, G. M.; Wein, A. M.

    2015-12-01

    Following the M7.8 Gorkha, Nepal, earthquake of April 25, 2015, the USGS issued a series of aftershock forecasts. The initial impetus for these forecasts was a request from the USAID Office of US Foreign Disaster Assistance to support their Disaster Assistance Response Team (DART), which coordinated US Government disaster response, including search and rescue, with the Government of Nepal. Because of the possible utility of the forecasts to people in the region and to other response teams, the USGS released these forecasts publicly through the USGS Earthquake Program web site. The initial forecast used the Reasenberg and Jones (Science, 1989) model with generic parameters developed for active deep continental regions based on the Garcia et al. (BSSA, 2012) tectonic regionalization. These were then updated to reflect a lower productivity and higher decay rate based on the observed aftershocks, although reliance on teleseismic observations, with a high magnitude of completeness, limited the amount of data. After the 12 May M7.3 aftershock, the forecasts used an Epidemic Type Aftershock Sequence model to better characterize the multiple sources of earthquake clustering. This model provided better estimates of aftershock uncertainty. The forecast messages were crafted based on lessons learned from the Christchurch earthquake along with input from the U.S. Embassy staff in Kathmandu. Challenges included how to balance simple messaging with forecasts over a variety of time periods (week, month, and year), whether to characterize probabilities with words such as those suggested by the IPCC (IPCC, 2010), how to word the messages so that they would translate accurately into Nepali and not alarm the public, and how to present the probabilities of unlikely but possible large and potentially damaging aftershocks, such as the M7.3 event, which had an estimated probability of only 1-in-200 for the week in which it occurred.

  7. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions issued by the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.

  8. Lessons of L'Aquila for Operational Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2012-12-01

    The L'Aquila earthquake of 6 Apr 2009 (magnitude 6.3) killed 309 people and left tens of thousands homeless. The mainshock was preceded by a vigorous seismic sequence that prompted informal earthquake predictions and evacuations. In an attempt to calm the population, the Italian Department of Civil Protection (DPC) convened its Commission on the Forecasting and Prevention of Major Risk (MRC) in L'Aquila on 31 March 2009 and issued statements about the hazard that were widely received as an "anti-alarm"; i.e., a deterministic prediction that there would not be a major earthquake. On October 23, 2012, a court in L'Aquila convicted the vice-director of DPC and six scientists and engineers who attended the MRC meeting on charges of criminal manslaughter, and it sentenced each to six years in prison. A few weeks after the L'Aquila disaster, the Italian government convened an International Commission on Earthquake Forecasting for Civil Protection (ICEF) with the mandate to assess the status of short-term forecasting methods and to recommend how they should be used in civil protection. The ICEF, which I chaired, issued its findings and recommendations on 2 Oct 2009 and published its final report, "Operational Earthquake Forecasting: Status of Knowledge and Guidelines for Implementation," in Aug 2011 (www.annalsofgeophysics.eu/index.php/annals/article/view/5350). As defined by the Commission, operational earthquake forecasting (OEF) involves two key activities: the continual updating of authoritative information about the future occurrence of potentially damaging earthquakes, and the officially sanctioned dissemination of this information to enhance earthquake preparedness in threatened communities. Among the main lessons of L'Aquila is the need to separate the role of science advisors, whose job is to provide objective information about natural hazards, from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms

  9. Is there a basis for preferring characteristic earthquakes over a Gutenberg–Richter distribution in probabilistic earthquake forecasting?

    USGS Publications Warehouse

    Parsons, Thomas E.; Geist, Eric L.

    2009-01-01

    The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice, however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies the key data constraints used for earthquake forecasting as well as a characteristic model does. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should receive at least equal weighting in probabilistic forecasting.
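    A minimal sketch of the kind of large-earthquake-rate calculation implied here is given below: a maximum-likelihood b-value (Aki, 1965) estimated from a catalog above its completeness magnitude, then extrapolated to a target magnitude. The synthetic catalog and all numbers are invented for illustration.

```python
import numpy as np

def gr_extrapolated_rate(mags, years, m_c, m_target):
    """Estimate Gutenberg-Richter parameters from a catalog assumed complete above
    m_c and extrapolate the annual rate of M >= m_target events. b is the Aki (1965)
    maximum-likelihood estimate for continuous magnitudes (for binned catalogs,
    replace m_c with m_c minus half the bin width)."""
    mags = np.asarray(mags)
    mags = mags[mags >= m_c]
    b = np.log10(np.e) / (mags.mean() - m_c)
    annual_rate_mc = len(mags) / years           # annual rate of M >= m_c
    return annual_rate_mc * 10 ** (-b * (m_target - m_c)), b

# Invented 40-year synthetic catalog with a true b-value of about 1
rng = np.random.default_rng(0)
catalog = 4.0 + rng.exponential(scale=1.0 / np.log(10), size=300)
rate, b = gr_extrapolated_rate(catalog, years=40.0, m_c=4.0, m_target=6.5)
print(f"b = {b:.2f}, annual rate of M >= 6.5 = {rate:.4f}")
```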

  10. Chapter B. The Loma Prieta, California, Earthquake of October 17, 1989 - Forecasts

    USGS Publications Warehouse

    Harris, Ruth A.

    1998-01-01

    The magnitude (Mw) 6.9 Loma Prieta earthquake struck the San Francisco Bay region of central California at 5:04 p.m. PDT on October 17, 1989, killing 62 people and generating billions of dollars in property damage. Scientists were not surprised by the occurrence of a destructive earthquake in this region and had, in fact, been attempting to forecast the location of the next large earthquake in the San Francisco Bay region for decades. This paper summarizes more than 20 scientifically based forecasts made before the 1989 Loma Prieta earthquake for a large earthquake that might occur in the Loma Prieta area. The forecasts geographically closest to the actual earthquake primarily consisted of right-lateral strike-slip motion on the San Andreas Fault northwest of San Juan Bautista. Several of the forecasts did encompass the magnitude of the actual earthquake, and at least one approximately encompassed the along-strike rupture length. The 1989 Loma Prieta earthquake differed from most of the forecasted events in two ways: (1) it occurred with considerable dip-slip in addition to strike-slip motion, and (2) it was much deeper than expected.

  11. UCERF3: A new earthquake forecast for California's complex fault system

    USGS Publications Warehouse

    Field, Edward H.; ,

    2015-01-01

    With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF3" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  12. Testing new methodologies for short -term earthquake forecasting: Multi-parameters precursors

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Tramutoli, Valerio; Lee, Lou; Liu, Tiger; Hattori, Katsumi; Kafatos, Menas

    2014-05-01

    We are conducting real-time tests involving multi-parameter observations over different seismo-tectonic regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters, namely: gas discharge; thermal infrared radiation; ionospheric electron density; and atmospheric temperature and humidity, which we believe are all associated with the earthquake preparation phase. We are testing a methodology capable of producing alerts in advance of major earthquakes (M > 5.5) in different regions of active earthquakes and volcanoes. During 2012-2013 we established a collaborative framework with the PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) projects for coordinated measurements and prospective validation over seven testing regions: Southern California (USA), Eastern Honshu (Japan), Italy, Greece, Turkey, Taiwan (ROC), Kamchatka and Sakhalin (Russia). The current experiment provided a "stress test" opportunity to validate the physically based earthquake precursor approach over regions of high seismicity. Our initial results are: (1) Real-time tests have shown the presence of anomalies in the atmosphere and ionosphere before most of the significant (M>5.5) earthquakes; (2) False positives exist, and their rates differ for each region, varying from 50% (Southern Italy) and 35% (California) down to 25% (Taiwan, Kamchatka and Japan), with a significant reduction of false positives as soon as at least two geophysical parameters are used together; (3) The main remaining problems relate to the systematic collection and real-time integration of pre-earthquake observations. Our findings suggest that real-time testing of physically based pre-earthquake signals provides short-term predictive power (in all three important parameters, namely location, time and magnitude) for the occurrence of major earthquakes in the tested regions, and this result encourages testing to continue with a more detailed analysis of

  13. Statistical Earthquake Focal Mechanism Forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    A new whole-Earth focal mechanism forecast, based on the GCMT catalog, has been created. In the present forecast, the sum of normalized seismic moment tensors within a 1000 km radius is calculated, and the P- and T-axes of the forecast mechanism are evaluated from that sum. Simultaneously, we calculate an average rotation angle between the forecast mechanism and all the surrounding mechanisms. This average angle reflects the tectonic complexity of a region and indicates the accuracy of the prediction. The method was originally proposed by Kagan and Jackson (1994, JGR). Recent interest by CSEP and GEM has motivated some improvements, particularly extending the previous forecast to polar and near-polar regions. The major problem in extending the forecast is the focal mechanism calculation on a spherical surface. In the previous forecast, when the average focal mechanism was computed, it was assumed that longitude lines are approximately parallel within a 1000 km radius. This is largely accurate in equatorial and near-equatorial areas. However, approaching 75 degrees latitude, the longitude lines are no longer parallel: the bearing (azimuthal) difference at points separated by 1000 km reaches about 35 degrees. In most situations a forecast point where we calculate an average focal mechanism is surrounded by earthquakes, so the bias should not be strong because the differences tend to cancel. But moving into polar regions, the bearing difference can approach 180 degrees. In the modified program, focal mechanisms are projected onto a plane tangent to the sphere at the forecast point. New longitude axes, which are parallel in the tangent plane, are corrected for the bearing difference. A comparison with the old 75S-75N forecast shows that in equatorial regions the forecast focal mechanisms are almost the same, and the difference in the forecast focal mechanism rotation angle is close to zero. However, though the forecasted focal mechanisms are similar
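    The tangent-plane projection and bearing correction described above are not reproduced here; the sketch below shows only the core step of summing normalized moment tensors and reading the P and T axes from the eigenvectors of the sum. The example tensors and the optional weights are invented.

```python
import numpy as np

def average_mechanism(tensors, weights=None):
    """Sum normalized seismic moment tensors (3x3, symmetric) and return the P and
    T axes of the sum. T axis: eigenvector of the largest eigenvalue; P axis:
    eigenvector of the smallest. Weights (e.g., distance tapers) are optional."""
    tensors = [np.asarray(m, dtype=float) for m in tensors]
    if weights is None:
        weights = np.ones(len(tensors))
    total = sum(w * m / np.linalg.norm(m) for w, m in zip(weights, tensors))
    eigvals, eigvecs = np.linalg.eigh(total)        # eigenvalues in ascending order
    p_axis, t_axis = eigvecs[:, 0], eigvecs[:, -1]
    return p_axis, t_axis

# Two similar, invented strike-slip-style tensors
m1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
m2 = [[0, 0.9, 0], [0.9, 0, 0], [0, 0, 0]]
print(average_mechanism([m1, m2]))
```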

  14. Report of the International Commission on Earthquake Forecasting for Civil Protection (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    The destructive L’Aquila earthquake of 6 April 2009 (Mw 6.3) illustrates the challenges of operational earthquake forecasting. The earthquake ruptured a mapped normal fault in a region identified by long-term forecasting models as one of the most seismically dangerous in Italy; it was the strongest of a rich sequence that started several months earlier and included a M3.9 foreshock less than five hours prior to the mainshock. According to widely circulated news reports, the earthquake had been predicted by a local resident using unpublished radon-based techniques, provoking a public controversy prior to the event that intensified in its wake. Several weeks after the earthquake, the Italian Department of Civil Protection appointed an international commission with the mandate to report on the current state of knowledge of prediction and forecasting and guidelines for operational utilization. The commission included geoscientists from China, France, Germany, Greece, Italy, Japan, Russia, United Kingdom, and United States with experience in earthquake forecasting and prediction. This presentation by the chair of the commission will report on its findings and recommendations.

  15. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence, which represent the frame of reference for a variety of statistical mechanical models, ranging from spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, which account for universality and provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different

  16. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior", that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
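    A minimal sketch of tests (ii) and (iii) is given below: the joint log-likelihood of observed per-bin counts under a gridded forecast, and the log-likelihood ratio against a null forecast, both under the common assumption of independent Poisson bins. The bin values are invented.

```python
import numpy as np
from scipy.stats import poisson

def log_likelihood(expected, observed):
    """Joint log-likelihood of observed per-bin earthquake counts under a forecast
    of expected counts, assuming independent Poisson bins."""
    expected = np.asarray(expected, dtype=float)
    observed = np.asarray(observed, dtype=int)
    return poisson.logpmf(observed, expected).sum()

def log_likelihood_ratio(expected_a, expected_b, observed):
    """Test (iii): positive values favour forecast A over the null forecast B."""
    return log_likelihood(expected_a, observed) - log_likelihood(expected_b, observed)

expected_model = [0.5, 1.2, 0.1, 0.3]
expected_null = [0.525, 0.525, 0.525, 0.525]   # same total rate, spatially uniform
observed = [1, 2, 0, 0]
print(log_likelihood(expected_model, observed),
      log_likelihood_ratio(expected_model, expected_null, observed))
```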

  18. Earthquake Forecasting Methodology Catalogue - A collection and comparison of the state-of-the-art in earthquake forecasting and prediction methodologies

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2015-04-01

    Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The different methods have been categorized into time-independent, time-dependent and hybrid methods, where the last group comprises methods that use additional data beyond historical earthquake statistics. It is necessary to distinguish in this way between purely statistical approaches, for which historical earthquake data are the only direct data source, and algorithms that incorporate further information, e.g. spatial data on fault distributions, or that incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods which can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g. for the determination of the completeness magnitude, or whether the modified Omori law was used or not. Target temporal scales are identified, as well as the publication history. All these different aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state of the art.
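    One of the catalogued elements, the modified Omori (Omori-Utsu) law, is reproduced below for reference; the parameter values are illustrative, not taken from any specific study.

```python
def modified_omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (t + c)**p,
    with t in days since the mainshock; K, c, p are illustrative values."""
    return K / (t + c) ** p

# Rate at a few times after the mainshock (events per day)
print([round(modified_omori_rate(t), 1) for t in (0.1, 1.0, 10.0, 100.0)])
```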

  19. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
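    As a much-simplified, count-only stand-in for the simulation step described above, the sketch below draws a distribution of yearly event counts from a branching cascade with a Poisson background and Poisson offspring; the space, time, and magnitude structure of a full ETAS simulation is omitted, and the rates are invented.

```python
import numpy as np

def simulate_annual_counts(background_rate, branching_ratio, n_sims=10000, seed=1):
    """Distribution of total yearly event counts from a branching (Galton-Watson)
    cascade: Poisson background plus Poisson offspring with mean branching_ratio
    per event. A count-only stand-in for full ETAS catalog simulation."""
    rng = np.random.default_rng(seed)
    totals = np.empty(n_sims, dtype=int)
    for i in range(n_sims):
        generation = rng.poisson(background_rate)
        total = generation
        while generation > 0:
            generation = rng.poisson(branching_ratio * generation)
            total += generation
        totals[i] = total
    return totals

counts = simulate_annual_counts(background_rate=50.0, branching_ratio=0.6)
print(np.percentile(counts, [2.5, 50, 97.5]))   # forecast count range for the year
```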

  20. An Earthquake Rupture Forecast model for central Italy submitted to CSEP project

    NASA Astrophysics Data System (ADS)

    Pace, B.; Peruzza, L.

    2009-04-01

    We defined a seismogenic source model for central Italy and computed the corresponding forecast scenario, in order to submit the results to the CSEP (Collaboratory for the Study of Earthquake Predictability, www.cseptesting.org) project. The goal of the CSEP project is to develop a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three different layers of sources: the first collects the individual faults liable to generate major earthquakes (M > 5.5); the second layer is given by the instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ≲ 5.0); the third layer utilizes all the instrumental earthquakes and the historical events not correlated to known structures (4.5 < M < 5.5). The recurrence of the major earthquakes on the individual sources is modelled by a Brownian passage time distribution. Besides the original model, updated earthquake rupture forecasts for the individual sources only are also released, in the light of recent analyses (Peruzza et al., 2008; Zoeller et al., 2008). We computed forecasts based on the LaSS-CI model for two time windows: 5 and 10 years. Each model to be tested defines a forecast earthquake rate in magnitude bins of 0.1 unit steps in the range M5-9, for the periods 1st April 2009 to 1st April 2014, and 1st April 2009 to 1st April 2019. B. Pace, L. Peruzza, G. Lavecchia, and P. Boncio (2006) Layered Seismogenic Source
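    The Brownian passage time recurrence model mentioned above is the inverse-Gaussian distribution with mean recurrence time mu and aperiodicity alpha; the sketch below computes the conditional probability of rupture in the next interval given the time elapsed since the last event, which is the quantity such time-dependent forecasts typically need. The recurrence parameters in the example are invented.

```python
import numpy as np
from scipy.stats import norm

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian passage time (inverse Gaussian) distribution with mean
    recurrence mu and aperiodicity alpha."""
    t = np.asarray(t, dtype=float)
    u1 = (t - mu) / (alpha * np.sqrt(mu * t))
    u2 = (t + mu) / (alpha * np.sqrt(mu * t))
    return norm.cdf(u1) + np.exp(2.0 / alpha ** 2) * norm.cdf(-u2)

def conditional_rupture_probability(elapsed, window, mu, alpha):
    """P(rupture within `window` years | no rupture in the first `elapsed` years)."""
    f_t = bpt_cdf(elapsed, mu, alpha)
    f_tw = bpt_cdf(elapsed + window, mu, alpha)
    return (f_tw - f_t) / (1.0 - f_t)

# Invented example: mean recurrence 400 yr, aperiodicity 0.5, 300 yr elapsed
print(conditional_rupture_probability(300.0, 10.0, mu=400.0, alpha=0.5))
```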

  1. Time-dependent earthquake forecasting: Method and application to the Italian region

    NASA Astrophysics Data System (ADS)

    Chan, C.; Sorensen, M. B.; Grünthal, G.; Hakimhashemi, A.; Heidbach, O.; Stromeyer, D.; Bosse, C.

    2009-12-01

    We develop a new approach for time-dependent earthquake forecasting and apply it to the Italian region. In our approach, the seismicity density is represented by a smoothing kernel, with a bandwidth function defining the neighbourhood of each earthquake. To include fault-interaction-based forecasting, we calculate the Coulomb stress change imparted by each earthquake in the study area. From this, the change of the seismicity rate as a function of time can be estimated using the concept of rate-and-state stress transfer. We apply our approach to the region of Italy and to earthquakes that occurred before 2003 to generate the seismicity density. To validate our approach, we compare the estimated seismicity density with the distribution of earthquakes with M≥3.8 after 2004. A positive correlation is found, and all of the examined earthquakes are located within the highest 66th percentile of seismicity density in the study region. Furthermore, the seismicity density corresponding to the epicenter of the 2009 April 6, Mw = 6.3, L’Aquila earthquake lies within the highest 5th percentile. For the time-dependent seismicity rate change, we estimate the rate-and-state stress transfer imparted by the M≥5.0 earthquakes that occurred in the past 50 years. It suggests that the seismicity rate has increased at the locations of 65% of the examined earthquakes. Applying this approach to the L’Aquila sequence, considering seven M≥5.0 aftershocks as well as the main shock, yields significant spatial as well as temporal forecasts of the aftershock distribution.
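    The rate-and-state stress-transfer step is commonly implemented with Dieterich's (1994) expression for the seismicity-rate change following a Coulomb stress step; a minimal sketch with illustrative parameter values is given below.

```python
import numpy as np

def rate_state_rate_change(t, dcfs, background_rate=1.0, a_sigma=0.04, t_a=10.0):
    """Seismicity rate R(t) after a Coulomb stress step dcfs (MPa), following
    Dieterich (1994): R(t) = r / ((exp(-dcfs/(A*sigma)) - 1) * exp(-t/t_a) + 1),
    where r is the background rate, A*sigma the constitutive parameter (MPa) and
    t_a the aftershock relaxation time (same units as t). Values are illustrative."""
    gamma = (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return background_rate / gamma

# Rate enhancement 0.1, 1 and 5 years after a +0.1 MPa stress step
print([round(rate_state_rate_change(t, 0.1), 2) for t in (0.1, 1.0, 5.0)])
```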

  2. Earthquake Forecasting Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B. M.; Nava Pichardo, F. A.; Glowacka, E.; Gomez-Trevino, E.

    2015-12-01

    Large earthquakes show semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in a seismogenic region. Thus, large earthquakes in a region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. Nava et al. (2013) and Quinteros et al. (2013) realized that not all earthquakes in a given region need belong to the same sequence, since there can be more than one process of stress accumulation and release in it; they also proposed a method to identify semi-periodic sequences through analytic Fourier analysis. This work presents improvements on the above-mentioned method: the influence of earthquake size on the spectral analysis and its importance in the identification of semi-periodic events, which means that earthquake occurrence times are treated as a labeled point process; the estimation of appropriate upper-limit uncertainties to use in forecasts; and the use of Bayesian analysis to evaluate forecast performance. This improved method is applied to specific regions: the southwestern coast of Mexico, the northeastern Japan Arc, the San Andreas Fault zone at Parkfield, and northeastern Venezuela.
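    The sketch below is not the authors' analytic Fourier procedure, but a minimal illustration of how semi-periodicity in a labeled point process can be probed: a magnitude-weighted periodogram of occurrence times over trial periods, whose peaks suggest candidate recurrence intervals. The event times, weights, and period range are invented.

```python
import numpy as np

def weighted_periodogram(event_times, weights, trial_periods):
    """Normalized amplitude |sum_k w_k exp(2*pi*i*t_k/T)| / sum_k w_k for each trial
    period T. Peaks suggest semi-periodic components; the weights let larger events
    (the 'labels') count more."""
    t = np.asarray(event_times, dtype=float)
    w = np.asarray(weights, dtype=float)
    amps = []
    for period in trial_periods:
        phase = 2.0 * np.pi * t / period
        amps.append(abs(np.sum(w * np.exp(1j * phase))) / w.sum())
    return np.array(amps)

# Invented sequence: roughly one event every 30 yr, with jitter and magnitude weights
times = np.array([2.0, 33.0, 61.0, 89.0, 121.0, 150.0])
weights = 10 ** (np.array([7.1, 7.4, 7.0, 7.6, 7.2, 7.3]) - 7.0)
periods = np.linspace(10, 60, 101)
best = periods[np.argmax(weighted_periodogram(times, weights, periods))]
print(f"best trial period ~ {best:.1f} yr")
```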

  3. Retrospective validation of renewal-based, medium-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2013-10-01

    In this paper, some methods for scoring the performance of an earthquake forecasting probability model are applied retrospectively for different goals. The time-dependent occurrence probabilities of a renewal process are tested against earthquakes of Mw ≥ 5.3 recorded in Italy in the decades of the past century. One aim was to check the capability of the model to reproduce the data by which it was calibrated. The scoring procedures used can be distinguished on the basis of whether they require a reference model and probability thresholds. Overall, a rank-based score, information gain, gambling scores, indices used in binary predictions and their loss functions are considered. Defining various probability thresholds as percentages of the hazard functions allows us to propose the values associated with the best forecasting performance as alarm levels in procedures for seismic risk mitigation. Some improvements are then made to the input data concerning the completeness of the historical catalogue and the consistency of the composite seismogenic sources with the hypotheses of the probability model. Another purpose of this study was thus to obtain hints on which factor is most influential and on whether the consequent changes to the data sets are worth adopting. This is achieved by repeating the estimation of the occurrence probabilities and the retrospective validation of the forecasts obtained under the new assumptions. According to the rank-based score, completeness appears to be the most influential factor, while there are no clear indications of the usefulness of the decomposition of some composite sources, although in some cases it has led to improvements of the forecast.

  4. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila earthquake highlighted the importance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulates a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in case of an earthquake is the probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast seismic risk in the short term, we translate aftershock probabilities into time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data, which allow spatially resolved forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41,000 times higher than the long-term average, but the absolute value remains small, at 0.04%. The final cost-benefit analysis adds value beyond a purely statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but they do in certain districts afterwards. The ability to forecast short-term seismic risk at any time, and with sufficient data anywhere, is the first step of personal decision-making and raising risk
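
    A minimal sketch of the cost-benefit logic referred to above: evacuation is warranted when the forecast probability of death times the monetized value of the avoided loss exceeds the cost of evacuating; all numbers below are illustrative assumptions, not values from the study.

    ```python
    def evacuation_warranted(p_death_24h, cost_per_person, value_of_statistical_life):
        """Cost-benefit rule sketch: evacuate when expected avoided loss exceeds cost.

        All inputs are illustrative assumptions, not values from the study.
        """
        expected_avoided_loss = p_death_24h * value_of_statistical_life
        return expected_avoided_loss > cost_per_person

    # Illustrative: a 0.04 % 24-hour death probability versus a far lower baseline.
    print(evacuation_warranted(4e-4, cost_per_person=1_000, value_of_statistical_life=5_000_000))  # True
    print(evacuation_warranted(1e-8, cost_per_person=1_000, value_of_statistical_life=5_000_000))  # False
    ```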

  5. A spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an operational earthquake forecast

    USGS Publications Warehouse

    Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximillan J.

    2017-01-01

    We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as
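
    For readers unfamiliar with the ETAS component mentioned above, the sketch below evaluates a purely temporal ETAS conditional intensity; the parameter values and the base-10 productivity term are illustrative choices, not the UCERF3-ETAS calibration.

    ```python
    import numpy as np

    def etas_rate(t, event_times, event_mags, mu=0.1, K=0.02, alpha=1.0, c=0.01, p=1.2, m_min=2.5):
        """Temporal ETAS conditional intensity (minimal sketch, illustrative parameters).

        lambda(t) = mu + sum over past events of
                    K * 10**(alpha*(m_i - m_min)) / (t - t_i + c)**p
        """
        past = event_times < t
        dt = t - event_times[past]
        productivity = K * 10.0 ** (alpha * (event_mags[past] - m_min))
        return mu + np.sum(productivity / (dt + c) ** p)

    # Usage: rate one day after an M 6 event, given a small prior catalog (times in days).
    times = np.array([0.0, 3.0, 10.0])
    mags = np.array([3.0, 4.2, 6.0])
    print(etas_rate(11.0, times, mags))
    ```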

  6. Discussion of New Approaches to Medium-Short-Term Earthquake Forecast in Practice of The Earthquake Prediction in Yunnan

    NASA Astrophysics Data System (ADS)

    Hong, F.

    2017-12-01

    Looking back on years of earthquake prediction practice in the Yunnan area, it is widely considered that fixed-point earthquake precursory anomalies mainly reflect field information. Increases in the amplitude and number of precursory anomalies can help to determine the origin time of earthquakes; however, it is difficult to establish a spatial relation between earthquakes and precursory anomalies, so earthquake locations can hardly be predicted from precursory anomalies. Past practice has shown that seismic activity is superior to precursory anomalies in predicting earthquake locations, because increased seismicity was observed before 80% of the M=6.0 earthquakes in the Yunnan area. Mobile geomagnetic anomalies have also turned out to be helpful in predicting earthquake locations in recent years; for instance, the occurrence time and area forecast from the 1-year-scale geomagnetic anomalies before the 2014 M6.5 Ludian earthquake were shorter and smaller than those derived from the seismicity enhancement region. Based on this past work, the author believes that the medium-short-term earthquake forecast level, as well as the objective understanding of seismogenic mechanisms, could be substantially improved by densely deployed observation arrays that capture the dynamic process of physical property changes in the enhancement regions of medium to small earthquakes.

  7. Earthquake and tsunami forecasts: Relation of slow slip events to subsequent earthquake rupture

    PubMed Central

    Dixon, Timothy H.; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-01-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr–Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327

  8. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-02

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.

  9. FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.

    USGS Publications Warehouse

    Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.

    1985-01-01

    The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.

  10. The Virtual Quake earthquake simulator: a simulation-based forecast of the El Mayor-Cucapah region and evidence of predictability in simulated earthquake sequences

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea

    2015-12-01

    In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric, and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA and northern Baja California Norte, Mexico.

  11. A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximillan J.; Thatcher, Wayne R.

    2017-01-01

    Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.

  12. Operational Earthquake Forecasting and Earthquake Early Warning: The Challenges of Introducing Scientific Innovations for Public Safety

    NASA Astrophysics Data System (ADS)

    Goltz, J. D.

    2016-12-01

    Although variants of both earthquake early warning and short-term operational earthquake forecasting systems have been implemented or are now being implemented in some regions and nations, they have been slow to gain acceptance within the disciplines that produced them as well as among those they were intended to assist. Accelerating the development and implementation of these technologies will require the cooperation and collaboration of multiple disciplines, some inside and others outside of academia. Seismologists, social scientists, emergency managers, elected officials and key opinion leaders from the media and public must be participants in this process. Representatives of these groups come from both inside and outside of academia and represent very different organizational cultures, backgrounds and expectations for these systems, sometimes leading to serious disagreements and impediments to further development and implementation. This presentation will focus on examples of the emergence of earthquake early warning and operational earthquake forecasting systems in California, Japan and other regions, and will document the challenges confronted in the ongoing effort to improve seismic safety.

  13. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2011-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, have been submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, model performance has been strongly affected. In addition, because of problems with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  14. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2013-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, have been submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, model performance has been strongly affected. In addition, because of problems with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  15. Implications of the 26 December 2004 Sumatra-Andaman earthquake on tsunami forecast and assessment models for great subduction-zone earthquakes

    USGS Publications Warehouse

    Geist, Eric L.; Titov, Vasily V.; Arcas, Diego; Pollitz, Fred F.; Bilek, Susan L.

    2007-01-01

    Results from different tsunami forecasting and hazard assessment models are compared with observed tsunami wave heights from the 26 December 2004 Indian Ocean tsunami. Forecast models are based on initial earthquake information and are used to estimate tsunami wave heights during propagation. An empirical forecast relationship based only on seismic moment provides a close estimate to the observed mean regional and maximum local tsunami runup heights for the 2004 Indian Ocean tsunami but underestimates mean regional tsunami heights at azimuths in line with the tsunami beaming pattern (e.g., Sri Lanka, Thailand). Standard forecast models developed from subfault discretization of earthquake rupture, in which deep-ocean sea level observations are used to constrain slip, are also tested. Forecast models of this type use tsunami time-series measurements at points in the deep ocean. As a proxy for the 2004 Indian Ocean tsunami, a transect of deep-ocean tsunami amplitudes recorded by satellite altimetry is used to constrain slip along four subfaults of the M >9 Sumatra–Andaman earthquake. This proxy model performs well in comparison to observed tsunami wave heights, travel times, and inundation patterns at Banda Aceh. Hypothetical tsunami hazard assessment models based on end-member estimates for average slip and rupture length (Mw 9.0–9.3) are compared with tsunami observations. Using average slip (low end member) and rupture length (high end member) (Mw 9.14) consistent with many seismic, geodetic, and tsunami inversions adequately estimates tsunami runup in most regions, except the extreme runup in the western Aceh province. The high slip that occurred in the southern part of the rupture zone linked to runup in this location is a larger fluctuation than expected from standard stochastic slip models. In addition, excess moment release (∼9%) deduced from geodetic studies in comparison to seismic moment estimates may generate additional tsunami energy, if the

  16. 2017 One‐year seismic‐hazard forecast for the central and eastern United States from induced and natural earthquakes

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Shumway, Allison; McNamara, Daniel E.; Williams, Robert; Llenos, Andrea L.; Ellsworth, William L.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.

    2017-01-01

    We produce a one‐year 2017 seismic‐hazard forecast for the central and eastern United States from induced and natural earthquakes that updates the 2016 one‐year forecast; this map is intended to provide information to the public and to facilitate the development of induced seismicity forecasting models, methods, and data. The 2017 hazard model applies the same methodology and input logic tree as the 2016 forecast, but with an updated earthquake catalog. We also evaluate the 2016 seismic‐hazard forecast to improve future assessments. The 2016 forecast indicated high seismic hazard (greater than 1% probability of potentially damaging ground shaking in one year) in five focus areas: Oklahoma–Kansas, the Raton basin (Colorado/New Mexico border), north Texas, north Arkansas, and the New Madrid Seismic Zone. During 2016, several damaging induced earthquakes occurred in Oklahoma within the highest hazard region of the 2016 forecast; all of the 21 moment magnitude (M) ≥4 and 3 M≥5 earthquakes occurred within the highest hazard area in the 2016 forecast. Outside the Oklahoma–Kansas focus area, two earthquakes with M≥4 occurred near Trinidad, Colorado (in the Raton basin focus area), but no earthquakes with M≥2.7 were observed in the north Texas or north Arkansas focus areas. Several observations of damaging ground‐shaking levels were also recorded in the highest hazard region of Oklahoma. The 2017 forecasted seismic rates are lower in regions of induced activity due to lower rates of earthquakes in 2016 compared with 2015, which may be related to decreased wastewater injection caused by regulatory actions or by a decrease in unconventional oil and gas production. Nevertheless, the 2017 forecasted hazard is still significantly elevated in Oklahoma compared to the hazard calculated from seismicity before 2009.

  17. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.

    2016-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe, with 442 models under evaluation. The California testing center, started by SCEC on September 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as

  18. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula when forecasting aftershocks, which gives the probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, like a gambler, starts with a certain number of reputation points and bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes no prediction (NA). If the forecaster bets 1 reputation point on "Yes" and loses, his reputation is reduced by 1 point; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return that he gains from this bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No". In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when
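
    The betting rule described above translates directly into a small scoring routine. In the sketch below, the reward for a successful "Yes" bet is (1-p0)/p0 as stated; the symmetric return ratio p0/(1-p0) for a successful "No" bet is an assumption extending the same fairness argument.

    ```python
    def gambling_score_binary(forecast_yes, event_occurred, p0):
        """Reputation change for a 1-point bet on 'Yes' or 'No' (sketch of the rule above).

        p0 is the reference-model probability that at least one event occurs in the
        space-time-magnitude window. The 'No' payoff p0/(1-p0) is an assumed symmetric
        extension of the stated 'Yes' rule.
        """
        if forecast_yes:
            return (1.0 - p0) / p0 if event_occurred else -1.0
        return p0 / (1.0 - p0) if not event_occurred else -1.0

    def gambling_score_probabilistic(p, event_occurred, p0):
        """Split 1 reputation point: p on 'Yes', 1-p on 'No'."""
        yes_part = p * ((1.0 - p0) / p0 if event_occurred else -1.0)
        no_part = (1.0 - p) * (p0 / (1.0 - p0) if not event_occurred else -1.0)
        return yes_part + no_part

    # Under the reference model the expected score is zero, e.g. with p0 = 0.1:
    p0 = 0.1
    expected = p0 * gambling_score_binary(True, True, p0) + (1 - p0) * gambling_score_binary(True, False, p0)
    print(round(expected, 12))   # 0.0
    ```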

  19. 2018 one‐year seismic hazard forecast for the central and eastern United States from induced and natural earthquakes

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Rukstales, Kenneth S.; McNamara, Daniel E.; Williams, Robert A.; Shumway, Allison; Powers, Peter; Earle, Paul; Llenos, Andrea L.; Michael, Andrew J.; Rubinstein, Justin L.; Norbeck, Jack; Cochran, Elizabeth S.

    2018-01-01

    This article describes the U.S. Geological Survey (USGS) 2018 one‐year probabilistic seismic hazard forecast for the central and eastern United States from induced and natural earthquakes. For consistency, the updated 2018 forecast is developed using the same probabilistic seismicity‐based methodology as applied in the two previous forecasts. Rates of earthquakes of M≥3.0 across the United States grew rapidly between 2008 and 2015 but have steadily declined over the past 3 years, especially in areas of Oklahoma and southern Kansas where fluid injection has decreased. The seismicity pattern in 2017 was complex, with earthquakes more spatially dispersed than in the previous years. Some areas of west‐central Oklahoma experienced increased activity rates where industrial activity increased. Earthquake rates in Oklahoma (429 earthquakes of M≥3 and 4 of M≥4), the Raton basin (Colorado/New Mexico border, six earthquakes of M≥3), and the New Madrid seismic zone (11 earthquakes of M≥3) continue to be higher than historical levels. Almost all of these earthquakes occurred within the highest hazard regions of the 2017 forecast. Even though rates declined over the past 3 years, the short‐term hazard for damaging ground shaking across much of Oklahoma remains at high levels due to continuing high rates of smaller earthquakes that are still hundreds of times higher than at any time in the state’s history. Fine details and variability between the 2016–2018 forecasts are obscured by significant uncertainties in the input model. These short‐term hazard levels are similar to those of active regions in California. During 2017, M≥3 earthquakes also occurred in or near Ohio, West Virginia, Missouri, Kentucky, Tennessee, Arkansas, Illinois, Oklahoma, Kansas, Colorado, New Mexico, Utah, and Wyoming.

  20. The potential uses of operational earthquake forecasting

    USGS Publications Warehouse

    Field, Edward; Jordan, Thomas; Jones, Lucille M.; Michael, Andrew; Blanpied, Michael L.

    2016-01-01

    This article reports on a workshop held to explore the potential uses of operational earthquake forecasting (OEF). We discuss the current status of OEF in the United States and elsewhere, the types of products that could be generated, the various potential users and uses of OEF, and the need for carefully crafted communication protocols. Although operationalization challenges remain, there was clear consensus among the stakeholders at the workshop that OEF could be useful.

  1. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    NASA Astrophysics Data System (ADS)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents forecasts of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood values (ln L) were also estimated for all four probability distribution models; a higher value of ln L indicates a better model and a lower value a worse one. The time of the future earthquake is forecast by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), the 13 April 2016 Myanmar earthquake (M 6.9) and the 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area, showing that the proposed techniques and models yield good forecasting accuracy.
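
    A hedged sketch of the two calculations described above, using one of the four candidate families (a Weibull via scipy.stats) fitted to illustrative per-event energies: the log-likelihood for model comparison, an exceedance probability for a threshold energy, and a waiting time obtained by dividing the expected event energy by an assumed annual energy release rate.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative per-event seismic energies (units of 1e20 ergs), not the study's data.
    energies = np.array([0.8, 1.2, 2.5, 0.6, 3.1, 1.9, 0.9, 4.2])
    annual_energy_rate = 0.35   # assumed energy released per year, same units

    # Fit one of the four candidate families (here Weibull) and compute its log-likelihood.
    shape, loc, scale = stats.weibull_min.fit(energies, floc=0.0)
    log_likelihood = np.sum(stats.weibull_min.logpdf(energies, shape, loc=loc, scale=scale))

    # Exceedance probability P(E > e) for a threshold energy e (simplified stand-in for
    # the paper's conditional probabilities).
    e_threshold = 2.0
    p_exceed = stats.weibull_min.sf(e_threshold, shape, loc=loc, scale=scale)

    # Forecast waiting time: expected energy of the next event / annual release rate.
    expected_energy = stats.weibull_min.mean(shape, loc=loc, scale=scale)
    waiting_time_yr = expected_energy / annual_energy_rate

    print(log_likelihood, p_exceed, waiting_time_yr)
    ```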

  2. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) during the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong plateau earthquake (Mw = 8.7), the 1934 Bihar-Nepal earthquake (Mw = 8.3) and the 1950 Upper Assam earthquake (Mw = 8.7), signifies the possibility of great earthquakes from this region in the future. The regional seismicity map for the study region is prepared by plotting earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the Indian Meteorological Department (IMD). Based on geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone developed at the junction of the Indian and Eurasian plates; it shows dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis; it has suffered great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults such as the Dauki fault and exhibits its own style of prominent tectonic features; the seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, forecasts of major earthquakes for each source zone are estimated. According to the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of blocked energy that can be utilized for the forecasting of major earthquakes
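
    A minimal sketch, under stated assumptions, of the energy blocked accounting described above: a uniform energy supply rate minus the cumulative energy released gives the blocked energy, converted here to an equivalent magnitude through the Gutenberg-Richter energy relation log10 E = 11.8 + 1.5 M (ergs); the catalog and supply rate are illustrative, not the study's values.

    ```python
    import numpy as np

    def blocked_energy_and_magnitude(event_years, event_mags, supply_rate_ergs_per_yr, now_yr):
        """Energy blocked accounting (Tsuboi-style sketch; all inputs are illustrative).

        Blocked energy = uniform supply over the catalog span - cumulative released energy,
        with per-event energy from log10(E) = 11.8 + 1.5*M (E in ergs).
        """
        released = np.sum(10.0 ** (11.8 + 1.5 * np.asarray(event_mags)))
        supplied = supply_rate_ergs_per_yr * (now_yr - event_years[0])
        blocked = max(supplied - released, 0.0)
        # Magnitude of a single event that would release the blocked energy.
        m_potential = (np.log10(blocked) - 11.8) / 1.5 if blocked > 0 else None
        return blocked, m_potential

    # Usage with an illustrative mini-catalog and an assumed supply rate.
    years = np.array([1897.0, 1934.0, 1950.0, 1988.0])
    mags = [8.7, 8.3, 8.7, 7.2]
    print(blocked_energy_and_magnitude(years, mags, supply_rate_ergs_per_yr=2.0e23, now_yr=2007.0))
    ```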

  3. Stress-based aftershock forecasts made within 24 h post mainshock: Expected north San Francisco Bay area seismicity changes after the 2014 M=6.0 West Napa earthquake

    USGS Publications Warehouse

    Parsons, Thomas E.; Segou, Margaret; Sevilgen, Volkan; Milner, Kevin; Field, Edward; Toda, Shinji; Stein, Ross S.

    2014-01-01

    We calculate stress changes resulting from the M = 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.

  4. Stress-based aftershock forecasts made within 24 h post mainshock: Expected north San Francisco Bay area seismicity changes after the 2014 M = 6.0 West Napa earthquake

    NASA Astrophysics Data System (ADS)

    Parsons, Tom; Segou, Margaret; Sevilgen, Volkan; Milner, Kevin; Field, Edward; Toda, Shinji; Stein, Ross S.

    2014-12-01

    We calculate stress changes resulting from the M = 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.
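
    A minimal sketch of the Coulomb failure stress change that method (1) above maps, ΔCFS = Δτ + μ′Δσn with tension-positive normal stress; the effective friction value is an assumption, and the stress inputs would in practice come from an elastic dislocation calculation that is outside this sketch.

    ```python
    def coulomb_stress_change(delta_shear_mpa, delta_normal_mpa, effective_friction=0.4):
        """Coulomb failure stress change on a receiver fault (minimal sketch).

        delta_shear_mpa   : shear stress change in the receiver slip direction (MPa).
        delta_normal_mpa  : normal stress change, tension positive (MPa), so unclamping
                            (positive) promotes failure.
        effective_friction: apparent friction coefficient, here an assumed 0.4.
        """
        return delta_shear_mpa + effective_friction * delta_normal_mpa

    # Usage: a small shear increase plus unclamping gives a positive (encouraging) change.
    print(coulomb_stress_change(0.05, 0.02))   # 0.058 MPa, i.e. 0.58 bar
    ```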

  5. Pattern Informatics Approach to Earthquake Forecasting in 3D

    NASA Astrophysics Data System (ADS)

    Toya, Y.; Tiampo, K. F.; Rundle, J. B.; Chen, C.; Li, H.; Klein, W.

    2009-05-01

    Natural seismicity is correlated across multiple spatial and temporal scales, but correlations in seismicity prior to a large earthquake are locally subtle (e.g., seismic quiescence) and often prominent at broad scale (e.g., seismic activation), resulting in local and regional seismicity patterns such as a Mogi donut. Recognizing that patterns in seismicity rate reflect the regional dynamics of the directly unobservable crustal stresses, the Pattern Informatics (PI) approach was introduced by Tiampo et al. (2002) [Europhys. Lett., 60(3), 481-487] and Rundle et al. (2002) [PNAS 99, suppl. 1, 2514-2521]. In this study, we expand the PI approach to forecasting earthquakes into the third, or vertical, dimension and illustrate its further improvement in forecasting performance through case studies of both natural and synthetic data. The PI characterizes rapidly evolving spatio-temporal seismicity patterns as angular drifts of a unit state vector in a high-dimensional correlation space and systematically identifies anomalous shifts in seismic activity with respect to the regional background. 3D PI analysis is particularly advantageous over 2D analysis in resolving vertically overlapping seismicity anomalies in a highly complex tectonic environment. Case studies will help to illustrate some important properties of the PI forecasting tool. [Submitted to: Concurrency and Computation: Practice and Experience, Wiley, Special Issue: ACES2008.]

  6. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10^19 Pa s and ηM > 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.

  7. Statistical tests of simple earthquake cycle models

    USGS Publications Warehouse

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.0 × 10^19 Pa s and ηM ≳ 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
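
    A hedged sketch of the distribution-comparison step described above, using the two-sample Kolmogorov-Smirnov test from scipy; the two samples are synthetic stand-ins for observed and model-predicted quantities, not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Illustrative observed geologic/geodetic slip-rate ratios across faults (not the study's data).
    observed_ratios = rng.normal(loc=1.0, scale=0.15, size=15)

    # Ratios predicted by a candidate viscoelastic earthquake-cycle model (illustrative).
    model_ratios = rng.normal(loc=1.3, scale=0.15, size=15)

    result = stats.ks_2samp(observed_ratios, model_ratios)
    alpha = 0.05
    print(f"KS statistic = {result.statistic:.3f}, p = {result.pvalue:.4f}, "
          f"reject model: {result.pvalue < alpha}")
    ```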

  8. Validating induced seismicity forecast models—Induced Seismicity Test Bench

    NASA Astrophysics Data System (ADS)

    Király-Proag, Eszter; Zechar, J. Douglas; Gischig, Valentin; Wiemer, Stefan; Karvounis, Dimitrios; Doetsch, Joseph

    2016-08-01

    Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. In this study, we propose an Induced Seismicity Test Bench to test and rank such models; this test bench can be used for model development, model selection, and ensemble model building. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models: Shapiro and Smoothed Seismicity (SaSS) and Hydraulics and Seismics (HySei). These models incorporate a different mix of physics-based elements and stochastic representation of the induced sequences. Our results show that neither model is fully superior to the other. Generally, HySei forecasts the seismicity rate better after shut-in but is only mediocre at forecasting the spatial distribution. On the other hand, SaSS forecasts the spatial distribution better and gives better seismicity rate estimates before shut-in. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in.

  9. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).

  10. First Results of the Regional Earthquake Likelihood Models Experiment

    NASA Astrophysics Data System (ADS)

    Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.

    2010-08-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.

  11. A test to evaluate the earthquake prediction algorithm, M8

    USGS Publications Warehouse

    Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.

    1992-01-01

    A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: 1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm
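
    A minimal sketch of rule 4 above: comparing the number of target earthquakes that fell inside declared times of increased probability against a null hypothesis in which hits occur at random in proportion to the fraction of space-time covered by alarms; the counts and alarm fraction are illustrative.

    ```python
    from scipy import stats

    # Illustrative numbers (not M8 results): 10 target earthquakes, 7 inside alarms,
    # alarms occupying 30% of the monitored space-time volume under the null.
    n_targets = 10
    n_hits = 7
    alarm_fraction = 0.30

    # One-sided binomial test: probability of >= n_hits successes if hits occur at random.
    p_value = stats.binom.sf(n_hits - 1, n_targets, alarm_fraction)
    print(f"p-value under the random-alarm null: {p_value:.4f}")
    ```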

  12. Comparing Foreshock Characteristics and Foreshock Forecasting in Observed and Simulated Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Ogata, Y.

    2014-12-01

    In our previous papers (Ogata et al., 1995, 1996, 2012; GJI), we characterized foreshock activity in Japan and then presented a model that forecasts the probability that one or more earthquakes form a foreshock sequence; we then prospectively tested foreshock probabilities in the JMA catalog. In this talk, I compare the empirical results with results for synthetic catalogs in order to clarify whether or not these results are consistent with the description of seismicity by a superposition of background activity and epidemic-type aftershock sequences (ETAS models). This question is important because it is still debated whether the nucleation process of large earthquakes is driven by seismic cascading (ETAS-type) or by aseismic accelerating processes. To explore the foreshock characteristics, I first applied the same clustering algorithms to real and synthetic catalogs and analyzed the temporal, spatial and magnitude distributions of the selected foreshocks, finding significant differences, particularly in the temporal acceleration and magnitude dependence. Finally, I calculated forecast scores based on a single-link cluster algorithm that could be appropriate for real-time applications. I find that the JMA catalog yields higher scores than all synthetic catalogs and that the ETAS models having the same magnitude sequence as the original catalog perform significantly better (closer to reality) than ETAS models with randomly picked magnitudes.

  13. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
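
    A minimal sketch of the time-independent (Poisson-process) part of such a forecast: converting an annual rupture rate into the probability of one or more events in a forecast window; the rate used is illustrative, not a UCERF 2 value.

    ```python
    import math

    def poisson_probability(annual_rate, duration_yr):
        """Probability of at least one event in duration_yr for a Poisson process."""
        return 1.0 - math.exp(-annual_rate * duration_yr)

    # Illustrative: an annual rate of 0.02 events/yr over a 30-yr window.
    print(round(poisson_probability(0.02, 30.0), 3))   # ~0.451
    ```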

  14. Scientific and non-scientific challenges for Operational Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.

    2015-12-01

    Tracking the time evolution of seismic hazard in time windows shorter than the usual 50 years of long-term hazard models may offer additional opportunities to reduce seismic risk. This is the target of operational earthquake forecasting (OEF). During the development of OEF in Italy we identified several challenges that range from pure science to the more practical interface of science with society. From a scientific point of view, although earthquake clustering is the clearest empirical evidence about earthquake occurrence, and OEF clustering models are the most (successfully) tested hazard models in seismology, we note that some seismologists are still reluctant to accept their scientific reliability. After exploring the motivations for these scientific doubts, we also look into an issue that is often overlooked in this discussion: in any kind of hazard analysis, we do not use a model because it is the true one, but because it is better than anything else we can think of. The non-scientific aspects are mostly related to the fact that OEF usually provides weekly probabilities of large earthquakes that are smaller than 1%. These probabilities are considered by some seismologists too small to be of interest or use. However, in a recent collaboration with engineers we show that such earthquake probabilities may lead to intolerable individual risk of death. Interestingly, this debate calls for a better definition of the still fuzzy boundaries among the different kinds of expertise required for the whole risk mitigation process. The last and probably most pressing challenge is related to communication with the public. In fact, a wrong message could be useless or even counterproductive. Here we show some progress that we have made in this field, working with communication experts in Italy.

  15. Forecasting Induced Seismicity Using Saltwater Disposal Data and a Hydromechanical Earthquake Nucleation Model

    NASA Astrophysics Data System (ADS)

    Norbeck, J. H.; Rubinstein, J. L.

    2017-12-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.
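
    A hedged sketch of the rate-and-state idea described above (Dieterich-style): the seismicity rate relaxes toward the background rate scaled by the ratio of current to background stressing rate, over a characteristic time Aσ divided by the stressing rate; the parameters and stressing history below are illustrative, not the calibrated model.

    ```python
    import numpy as np

    def rate_and_state_seismicity(times_yr, stressing_rate, background_rate=1.0,
                                  background_stressing=0.01, a_sigma=0.05):
        """Dieterich-style seismicity-rate evolution for a prescribed stressing history (sketch).

        State variable: d(gamma)/dt = (1 - gamma * S_dot) / (A*sigma)
        Seismicity rate: R = r / (gamma * S_dot_background)
        Parameter values are illustrative, not those calibrated in the study.
        """
        gamma = 1.0 / background_stressing          # start at steady state (R = r)
        rates = np.empty_like(times_yr)
        for i in range(len(times_yr)):
            dt = times_yr[i] - times_yr[i - 1] if i > 0 else 0.0
            gamma += dt * (1.0 - gamma * stressing_rate[i]) / a_sigma   # explicit Euler step
            rates[i] = background_rate / (gamma * background_stressing)
        return rates

    # Usage: stressing rate jumps to 10x background for 5 years, then returns to background.
    t = np.linspace(0.0, 20.0, 2001)
    s_dot = np.where(t < 5.0, 0.1, 0.01)
    rate = rate_and_state_seismicity(t, s_dot)
    print(rate.max(), rate[-1])   # rises toward ~10x background, then decays back toward 1
    ```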

  16. CSEP-Japan: The Japanese node of the collaboratory for the study of earthquake predictability

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2011-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, CSEP-Japan. This testing center provides open access for researchers contributing earthquake forecast models applied to Japan. A total of 91 earthquake forecast models were submitted to the prospective experiment starting from 1 November 2009. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including the sea area, the Japanese mainland and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. The experiments of the 1-day, 3-month, 1-year and 3-year forecasting classes were implemented for 92 rounds, 4 rounds, 1 round and 0 rounds (now in progress), respectively. The results of the 3-month class gave us new knowledge concerning statistical forecasting models. All models showed a good performance for magnitude forecasting. On the other hand, the observations are hardly consistent in spatial distribution with most models in some cases where many earthquakes occurred at the same spot. Throughout the experiment, it has been clarified that some properties of the CSEP evaluation tests, such as the L-test, show strong correlation with the N-test. We are now developing our own (cyber-)infrastructure to support the forecast experiment as follows. (1) Japanese seismicity has changed since the 2011 Tohoku earthquake. The 3rd call for forecasting models was announced in order to promote model improvement for forecasting earthquakes after this earthquake. So, we provide the Japanese seismicity catalog maintained by JMA for modelers to study how seismicity

  17. Earthquake forecasting studies using radon time series data in Taiwan

    NASA Astrophysics Data System (ADS)

    Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong

    2017-04-01

    For a few decades, a growing number of studies have shown the usefulness of data in the field of seismogeochemistry, interpreted as geochemical precursory signals for impending earthquakes, and radon has been identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous time series radon data for earthquake studies have been recorded and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve the data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtering of these environmental parameters, in order to create a real-time database that supports our earthquake precursory study. In recent years, an automatic real-time database has been developed using R, an open source programming language, to carry out statistical computation on the data. To integrate the data with our working procedure, we use the popular open source web application stack AMP (Apache, MySQL, and PHP) to create a website that helps us display and manage the real-time database.
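
    As a rough illustration of the kind of statistical screening such a real-time database can support, the sketch below flags radon samples that depart from a rolling baseline after crudely removing a linear environmental trend. The column names, window length and threshold are assumptions for illustration; the operational system described above is implemented in R with a MySQL/PHP front end.

        import numpy as np
        import pandas as pd

        def flag_radon_anomalies(df, window="30D", n_mad=3.0):
            """Flag samples whose radon concentration departs from a rolling median by more
            than n_mad robust standard deviations, after removing a crude linear effect of
            environmental covariates.  Expects a DatetimeIndex and columns
            'radon', 'temperature', 'rainfall' (assumed names)."""
            covars = df[["temperature", "rainfall"]]
            X = np.column_stack([np.ones(len(df)), covars.to_numpy()])
            beta, *_ = np.linalg.lstsq(X, df["radon"].to_numpy(), rcond=None)
            residual = df["radon"] - X @ beta          # radon with environmental trend removed

            med = residual.rolling(window).median()
            mad = (residual - med).abs().rolling(window).median()
            robust_sigma = 1.4826 * mad                # MAD -> robust standard deviation
            out = df.copy()
            out["anomaly"] = (residual - med).abs() > n_mad * robust_sigma
            return out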

  18. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    PubMed

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models.

  19. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models. PMID:23152938

  20. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    NASA Astrophysics Data System (ADS)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
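
    The Gutenberg-Richter ingredient that the MRT algorithm monitors can be illustrated with the Aki (1965) maximum-likelihood b-value computed in a moving window of events. The simple alert rule below (flagging windows whose b-value drops well below the long-term value) is an assumption for illustration only; it is not the published MRT algorithm.

        import numpy as np

        def b_value(mags, m_c, dm=0.1):
            """Aki/Utsu maximum-likelihood b-value for magnitudes >= completeness m_c,
            with the usual half-bin correction for binned magnitudes."""
            mags = np.asarray(mags, dtype=float)
            mags = mags[mags >= m_c]
            return np.log10(np.e) / (mags.mean() - (m_c - dm / 2.0))

        def flag_windows(mags, m_c, win=200, step=20, drop=0.2):
            """Return start indices of moving windows whose b-value falls `drop` below
            the catalogue-wide b-value, as a crude proxy for time windows of increased
            probability of a larger VT event (illustrative rule, assumed threshold)."""
            b_ref = b_value(mags, m_c)
            flagged = []
            for i in range(0, len(mags) - win + 1, step):
                if b_value(mags[i:i + win], m_c) < b_ref - drop:
                    flagged.append(i)
            return flagged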

  1. Hydromechanical Earthquake Nucleation Model Forecasts Onset, Peak, and Falling Rates of Induced Seismicity in Oklahoma and Kansas

    NASA Astrophysics Data System (ADS)

    Norbeck, J. H.; Rubinstein, J. L.

    2018-04-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. We develop a reservoir model to calculate the hydrologic conditions associated with the activity of 902 saltwater disposal wells injecting into the Arbuckle aquifer. Estimates of basement fault stressing conditions inform a rate-and-state friction earthquake nucleation model to forecast the seismic response to injection. Our model replicates many salient features of the induced earthquake sequence, including the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. We present evidence for variable time lags between changes in injection and seismicity rates, consistent with the prediction from rate-and-state theory that seismicity rate transients occur over timescales inversely proportional to stressing rate. Given the efficacy of the hydromechanical model, as confirmed through a likelihood statistical test, the results of this study support broader integration of earthquake physics within seismic hazard analysis.

  2. Eruption Forecasting in Alaska: A Retrospective and Test of the Distal VT Model

    NASA Astrophysics Data System (ADS)

    Prejean, S. G.; Pesicek, J. D.; Wellik, J.; Cameron, C.; White, R. A.; McCausland, W. A.; Buurman, H.

    2015-12-01

    United States volcano observatories have successfully forecast most significant US eruptions in the past decade. However, eruptions of some volcanoes remain stubbornly difficult to forecast effectively using seismic data alone. The Alaska Volcano Observatory (AVO) has responded to 28 eruptions from 10 volcanoes since 2005. Eruptions that were not forecast include those of frequently active volcanoes with basaltic-andesite magmas, like Pavlof, Veniaminof, and Okmok volcanoes. In this study we quantify the success rate of eruption forecasting in Alaska and explore common characteristics of eruptions not forecast. In an effort to improve future forecasts, we re-examine seismic data from eruptions and known intrusive episodes in Alaska to test the effectiveness of the distal VT model commonly employed by the USGS-USAID Volcano Disaster Assistance Program (VDAP). In the distal VT model, anomalous brittle failure or volcano-tectonic (VT) earthquake swarms in the shallow crust surrounding the volcano occur as a secondary response to crustal strain induced by magma intrusion. Because the Aleutian volcanic arc is among the most seismically active regions on Earth, distinguishing distal VT earthquake swarms for eruption forecasting purposes from tectonic seismicity unrelated to volcanic processes poses a distinct challenge. In this study, we use a modified beta-statistic to identify pre-eruptive distal VT swarms and establish their statistical significance with respect to long-term background seismicity. This analysis allows us to explore the general applicability of the distal VT model and quantify the likelihood of encountering false positives in eruption forecasting using this model alone.
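
    The beta statistic mentioned above compares the event count in a candidate swarm window with the count expected from the long-term background rate. The sketch below implements the standard (unmodified) form of Matthews and Reasenberg (1988); the example counts are assumed for illustration.

        import math

        def beta_statistic(n_window, n_total, window_days, total_days):
            """Number of standard deviations by which the window count exceeds the count
            expected if the n_total background events were spread binomially over
            total_days."""
            p = window_days / total_days
            expected = n_total * p
            std = math.sqrt(n_total * p * (1.0 - p))
            return (n_window - expected) / std

        # Example: 15 distal VT events in a 30-day window, against 600 events in a
        # 10-year background catalogue (values assumed for illustration).
        print(beta_statistic(15, 600, 30, 3650))   # ~ +4.6 -> statistically significant swarm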

  3. Short-term earthquake forecasting based on an epidemic clustering model

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2016-04-01

    The application of rigorous statistical tools, with the aim of verifying any prediction method, requires a univocal definition of the hypothesis, or the model, characterizing the concerned anomaly or precursor, so that it can be objectively recognized in any circumstance and by any observer. This is necessary to move beyond the old-fashioned approach consisting only of the retrospective, anecdotal study of past cases. A rigorous definition of an earthquake forecasting hypothesis should lead to the objective identification of particular sub-volumes (usually named alarm volumes) of the total time-space volume within which the probability of occurrence of strong earthquakes is higher than usual. Testing such a hypothesis requires the observation of a sufficient number of past cases upon which a statistical analysis is possible. This analysis should aim to determine the rate at which the precursor has been followed (success rate) or not followed (false alarm rate) by the target seismic event, and the rate at which a target event has been preceded (alarm rate) or not preceded (failure rate) by the precursor. The binary table obtained from this kind of analysis leads to the definition of the parameters of the model that achieve the maximum number of successes and the minimum number of false alarms for a specific class of precursors. The mathematical tools suitable for this purpose may include the Probability Gain or the R-Score, as well as popular plots such as the Molchan error diagram and the ROC diagram. Another tool for evaluating the validity of a forecasting method is the likelihood ratio (also named performance factor) of occurrence and non-occurrence of seismic events under different hypotheses. Whatever method is chosen for building up a new hypothesis, usually based on retrospective data, the final assessment of its validity should be carried out by a test on a new and independent set of observations
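
    A minimal sketch of the skill measures derived from the binary (contingency) table described above is given below; the definitions follow one common convention, and the example counts are invented for illustration.

        # 2x2 precursor/event contingency table: hits a, false alarms b,
        # misses c, correct negatives d (one common convention; the authors
        # may use slightly different normalisations).

        def precursor_skill(a, b, c, d):
            n = a + b + c + d
            success_rate = a / (a + b)             # alarms followed by a target event
            false_alarm_rate = b / (a + b)         # alarms not followed by an event
            alarm_rate = a / (a + c)               # events preceded by an alarm
            failure_rate = c / (a + c)             # events missed by the precursor
            r_score = a / (a + c) - b / (b + d)    # Hanssen-Kuipers style skill score
            prob_gain = success_rate / ((a + c) / n)   # vs. unconditional event rate
            return dict(success_rate=success_rate, false_alarm_rate=false_alarm_rate,
                        alarm_rate=alarm_rate, failure_rate=failure_rate,
                        r_score=r_score, probability_gain=prob_gain)

        print(precursor_skill(a=8, b=12, c=4, d=176))   # illustrative counts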

  4. 2016 one-year seismic hazard forecast for the Central and Eastern United States from induced and natural earthquakes

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Llenos, Andrea L.; Ellsworth, William L.; Michael, Andrew J.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.

    2016-03-28

    The U.S. Geological Survey (USGS) has produced a 1-year seismic hazard forecast for 2016 for the Central and Eastern United States (CEUS) that includes contributions from both induced and natural earthquakes. The model assumes that earthquake rates calculated from several different time windows will remain relatively stationary and can be used to forecast earthquake hazard and damage intensity for the year 2016. This assessment is the first step in developing an operational earthquake forecast for the CEUS, and the analysis could be revised with updated seismicity and model parameters. Consensus input models consider alternative earthquake catalog durations, smoothing parameters, maximum magnitudes, and ground motion estimates, and represent uncertainties in earthquake occurrence and diversity of opinion in the science community. Ground shaking seismic hazard for 1-percent probability of exceedance in 1 year reaches 0.6 g (as a fraction of standard gravity [g]) in northern Oklahoma and southern Kansas, and about 0.2 g in the Raton Basin of Colorado and New Mexico, in central Arkansas, and in north-central Texas near Dallas. Near some areas of active induced earthquakes, hazard is higher than in the 2014 USGS National Seismic Hazard Model (NSHM) by more than a factor of 3; the 2014 NSHM did not consider induced earthquakes. In some areas, previously observed induced earthquakes have stopped, so the seismic hazard reverts to the 2014 NSHM. Increased seismic activity, whether defined as induced or natural, produces high hazard. Conversion of ground shaking to seismic intensity indicates that some places in Oklahoma, Kansas, Colorado, New Mexico, Texas, and Arkansas may experience damage if the induced seismicity continues unabated. The chance of having Modified Mercalli Intensity (MMI) VI or greater (damaging earthquake shaking) is 5–12 percent per year in north-central Oklahoma and southern Kansas, similar to the chance of damage caused by natural earthquakes

  5. Assessing a 3D smoothed seismicity model of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan

    2016-04-01

    As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.

  6. An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution

    NASA Astrophysics Data System (ADS)

    Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan

    2013-04-01

    The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed seismicity approaches. Smoothed seismicity represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy to forecast the occurrence of future seismicity and (2) future large-magnitude events are more likely to occur in the vicinity of known faults. Consequently
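
    A minimal sketch of the two spatial ingredients described above, a Gaussian kernel density of past epicentres and a moment-rate-weighted density of fault patches, combined with a linear weight, is given below. The kernel bandwidths, the weight and the input arrays are assumptions for illustration, not the SHARE implementation.

        import numpy as np

        def gaussian_kde2d(points, grid_xy, bandwidth_km, weights=None):
            """Isotropic Gaussian kernel density of `points` (N,2) evaluated on
            `grid_xy` (M,2); coordinates in km, density normalised to sum to 1."""
            d2 = ((grid_xy[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
            kern = np.exp(-0.5 * d2 / bandwidth_km ** 2)
            if weights is not None:
                kern = kern * weights          # e.g. moment rate of each fault patch
            dens = kern.sum(axis=1)
            return dens / dens.sum()

        def combined_density(epicenters, fault_patches, patch_moment_rates,
                             grid_xy, w=0.5, h_eq=15.0, h_fault=10.0):
            """w * smoothed-seismicity density + (1 - w) * smoothed fault-moment density."""
            p_eq = gaussian_kde2d(epicenters, grid_xy, h_eq)
            p_fault = gaussian_kde2d(fault_patches, grid_xy, h_fault,
                                     weights=patch_moment_rates)
            return w * p_eq + (1.0 - w) * p_fault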

  7. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
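
    The receiver operating characteristic comparison described above can be sketched as follows: cells are ranked by forecast rate, and each threshold yields a hit rate (fraction of cells containing observed m>6.0 events that are captured) and a false alarm rate (fraction of event-free cells that are alarmed). This is the generic construction, not the authors' code.

        import numpy as np

        def roc_curve(rate_map, observed_cells):
            """rate_map: 1-D array of forecast rates per cell.
            observed_cells: boolean array, True where at least one target event occurred."""
            order = np.argsort(rate_map)[::-1]            # highest-rate cells first
            obs_sorted = observed_cells[order]
            hits = np.cumsum(obs_sorted) / observed_cells.sum()
            falses = np.cumsum(~obs_sorted) / (~observed_cells).sum()
            return falses, hits                            # x, y of the ROC curve

        def area_under_curve(falses, hits):
            return np.trapz(hits, falses)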

  8. Volcanic Eruption Forecasts From Accelerating Rates of Drumbeat Long-Period Earthquakes

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario

    2018-02-01

    Accelerating rates of quasiperiodic "drumbeat" long-period earthquakes (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP rates increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics. Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.
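
    For the common case of the material-failure law with exponent 2, the inverse event rate decreases linearly with time and extrapolates to zero at the failure time. The least-squares sketch below illustrates that classic inverse-rate forecast; it does not reproduce the Bayesian Markov chain Monte Carlo point-process fit used in the study.

        import numpy as np

        def forecast_failure_time(window_centers, rates):
            """Fit 1/rate = a + b*t by least squares and return t_f = -a/b, the time at
            which the inverse rate extrapolates to zero (exponent-2 failure law)."""
            inv_rate = 1.0 / np.asarray(rates, dtype=float)
            b, a = np.polyfit(window_centers, inv_rate, 1)
            return -a / b

        # Synthetic accelerating LP rates (events/hr) with a true failure time of 24 hr:
        t = np.linspace(0.0, 20.0, 11)               # hours
        rates = 1.0 / (0.5 * (24.0 - t) / 24.0)      # exact inverse-linear acceleration
        print(forecast_failure_time(t, rates))       # ~24.0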

  9. A Bayesian Assessment of Seismic Semi-Periodicity Forecasts

    NASA Astrophysics Data System (ADS)

    Nava, F.; Quinteros, C.; Glowacka, E.; Frez, J.

    2016-01-01

    Among the schemes for earthquake forecasting, the search for semi-periodicity in the occurrence of large earthquakes in a given seismogenic region plays an important role. When considering earthquake forecasts based on semi-periodic sequence identification, the Bayesian formalism is a useful tool for: (1) assessing how well a given earthquake satisfies a previously made forecast; (2) re-evaluating the semi-periodic sequence probability; and (3) testing other prior estimations of the sequence probability. A comparison of Bayesian estimates with updated estimates of semi-periodic sequences that incorporate new data not used in the original estimates shows extremely good agreement, indicating that: (1) the probability that a semi-periodic sequence is not due to chance is an appropriate choice for the prior sequence probability; and (2) the Bayesian formalism does a very good job of estimating corrected semi-periodicity probabilities, using slightly less data than that used for the updated estimates. The Bayesian approach is exemplified explicitly by its application to the Parkfield semi-periodic forecast, and results are given for its application to other forecasts in Japan and Venezuela.
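
    A toy version of the Bayesian update described above is sketched below: the probability that a semi-periodic sequence is real is revised after observing whether the next large earthquake fell inside the forecast window. The hit probabilities under the two hypotheses (q and r) are assumed inputs, not values from the paper.

        def update_sequence_probability(prior, q, r, hit):
            """prior: P(sequence is real) before the new event.
            q: P(event falls in forecast window | sequence real).
            r: P(event falls in forecast window | sequence due to chance).
            hit: True if the event occurred inside the forecast window."""
            like_real = q if hit else (1.0 - q)
            like_chance = r if hit else (1.0 - r)
            evidence = prior * like_real + (1.0 - prior) * like_chance
            return prior * like_real / evidence

        # Illustrative numbers only:
        print(update_sequence_probability(prior=0.7, q=0.8, r=0.2, hit=True))   # ~0.90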

  10. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2015-12-01

    Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being

  11. Forecast experiment: do temporal and spatial b value variations along the Calaveras fault portend M ≥ 4.0 earthquakes?

    USGS Publications Warehouse

    Parsons, Tom

    2007-01-01

    The power law distribution of earthquake magnitudes and frequencies is a fundamental scaling relationship used for forecasting. However, can its slope (b value) be used on individual faults as a stress indicator? Some have concluded that b values drop just before large shocks. Others suggested that temporally stable low b value zones identify future large-earthquake locations. This study assesses the frequency of b value anomalies portending M ≥ 4.0 shocks versus how often they do not. I investigated M ≥ 4.0 Calaveras fault earthquakes because there have been 25 over the 37-year duration of the instrumental catalog on the most active southern half of the fault. With that relatively large sample, I conducted retrospective time and space earthquake forecasts. I calculated temporal b value changes in 5-km-radius cylindrical volumes of crust that were significant at 90% confidence, but these changes were poor forecasters of M ≥ 4.0 earthquakes. M ≥ 4.0 events were as likely to happen at times of high b values as they were at low ones. However, I could not rule out a hypothesis that spatial b value anomalies portend M ≥ 4.0 events; of 20 M ≥ 4 shocks that could be studied, 6 to 8 (depending on calculation method) occurred where b values were significantly less than the spatial mean, 1 to 2 happened above the mean, and 10 to 13 occurred within 90% confidence intervals of the mean and were thus inconclusive. Thus spatial b value variation might be a useful forecast tool, but resolution is poor, even on seismically active faults.

  12. Near-Field Tsunami Models with Rapid Earthquake Source Inversions from Land and Ocean-Based Observations: The Potential for Forecast and Warning

    NASA Astrophysics Data System (ADS)

    Melgar, D.; Bock, Y.; Crowell, B. W.; Haase, J. S.

    2013-12-01

    Computation of predicted tsunami wave heights and runup in the regions adjacent to large earthquakes immediately after rupture initiation remains a challenging problem. Limitations of traditional seismological instrumentation in the near field, which cannot be objectively employed for real-time inversions, and the non-uniqueness of source inversion results are major concerns for tsunami modelers. Employing near-field seismic, GPS and wave gauge data from the Mw 9.0 2011 Tohoku-oki earthquake, we test the capacity of static finite fault slip models obtained from newly developed algorithms to produce reliable tsunami forecasts. First we demonstrate the ability of seismogeodetic source models determined from combined land-based GPS and strong motion seismometers to forecast near-source tsunamis within ~3 minutes after earthquake origin time (OT). We show that these models, based on land-based sensors only, tend to underestimate the tsunami but are good enough to provide a realistic first warning. We then demonstrate that rapid ingestion of offshore shallow water (100 - 1000 m) wave gauge data significantly improves the model forecasts and possible warnings. We ingest data from 2 near-source ocean-bottom pressure sensors and 6 GPS buoys into the earthquake source inversion process. Tsunami Green functions (tGFs) are generated using the GeoClaw package, a benchmarked finite volume code with adaptive mesh refinement. These tGFs are used for a joint inversion with the land-based data and substantially improve the earthquake source and tsunami forecast. Model skill is assessed by detailed comparisons of the simulation output to 2000+ tsunami runup survey measurements collected after the event. We update the source model and tsunami forecast and warning at 10 min intervals. We show that by 20 min after OT the tsunami is well predicted, with a high variance reduction relative to the survey data, and by ~30 minutes a model that can be considered final, since little change is observed afterwards, is

  13. Earthquake Testing

    NASA Technical Reports Server (NTRS)

    1979-01-01

    During NASA's Apollo program, it was necessary to subject the mammoth Saturn V launch vehicle to extremely forceful vibrations to assure the moon booster's structural integrity in flight. Marshall Space Flight Center assigned vibration testing to a contractor, the Scientific Services and Systems Group of Wyle Laboratories, Norco, California. Wyle-3S, as the group is known, built a large facility at Huntsville, Alabama, and equipped it with an enormously forceful shock and vibration system to simulate the liftoff stresses the Saturn V would encounter. Saturn V is no longer in service, but Wyle-3S has found spinoff utility for its vibration facility. It is now being used to simulate earthquake effects on various kinds of equipment, principally equipment intended for use in nuclear power generation. Government regulations require that such equipment demonstrate its ability to survive earthquake conditions. In the upper left photo, Wyle-3S is preparing to conduct an earthquake test on a 25-ton diesel generator built by Atlas Polar Company, Ltd., Toronto, Canada, for emergency use in a Canadian nuclear power plant. Being readied for test in the lower left photo is a large circuit breaker to be used by Duke Power Company, Charlotte, North Carolina. Electro-hydraulic and electro-dynamic shakers in and around the pit simulate earthquake forces.

  14. Risk Management in Earthquakes, Financial Markets, and the Game of 21: The role of Forecasting, Nowcasting, and Timecasting

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.

    2017-12-01

    Earthquakes and financial markets share surprising similarities [1]. For example, the well-known VIX index, which by definition is the implied volatility of the Standard and Poor's 500 index, behaves in a very similar quantitative fashion to time series for earthquake rates. Both display sudden increases at the time of an earthquake or an announcement of the US Federal Reserve Open Market Committee [2], and both decay as an inverse power of time. Both can be regarded as examples of first order phase transitions [1], and display fractal and scaling behavior associated with critical transitions, such as power-law magnitude-frequency relations in the tails of the distributions. Early quantitative investors such as Edward Thorp and John Kelly invented novel methods to mitigate or manage risk in games of chance such as blackjack, and in markets using hedging techniques that are still in widespread use today. The basic idea is the concept of proportional betting, where the gambler/investor bets a fraction of the bankroll whose size is determined by the "edge" or inside knowledge of the real (and changing) odds. For earthquake systems, the "edge" over nature can only exist in the form of a forecast (probability of a future earthquake); a nowcast (knowledge of the current state of an earthquake fault system); or a timecast (statistical estimate of the waiting time until the next major earthquake). In our terminology, a forecast is a model, while the nowcast and timecast are analysis methods using observed data only (no model). We also focus on defined geographic areas rather than on faults, thereby eliminating the need to consider specific fault data or fault interactions. Data used are online earthquake catalogs, generally since 1980. Forecasts are based on the Weibull (1952) probability law, and only a handful of parameters are needed. These methods allow the development of real time hazard and risk estimation using cloud-based technologies, and permit the application of
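
    The proportional-betting idea mentioned above is captured by the classic Kelly fraction for a bet paying net odds b with win probability p; the sketch below states that textbook result only and is not the author's procedure for sizing earthquake-risk decisions.

        def kelly_fraction(p, b):
            """Fraction of bankroll to stake: f* = (b*p - (1 - p)) / b, floored at 0."""
            return max(0.0, (b * p - (1.0 - p)) / b)

        # An even-odds bet with a 5% edge over the house:
        print(kelly_fraction(p=0.55, b=1.0))   # 0.10 -> bet 10% of the bankroll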

  15. Physics-based and statistical earthquake forecasting in a continental rift zone: the case study of Corinth Gulf (Greece)

    NASA Astrophysics Data System (ADS)

    Segou, Margarita

    2016-01-01

    I perform a retrospective forecast experiment in the most rapidly extending continental rift worldwide, the western Corinth Gulf (wCG, Greece), aiming to predict shallow seismicity (depth <15 km) with magnitude M ≥ 3.0 for the time period between 1995 and 2013. I compare two short-term earthquake clustering models, based on epidemic-type aftershock sequence (ETAS) statistics, four physics-based (CRS) models, combining static stress change estimations and the rate-and-state laboratory law, and one hybrid model. For the latter models, I incorporate the stress changes imparted by 31 earthquakes with magnitude M ≥ 4.5 in the extended area of wCG. Special attention is given to the 3-D representation of active faults, acting as potential receiver planes for the estimation of static stress changes. I use reference seismicity between 1990 and 1995, corresponding to the learning phase of the physics-based models, and I evaluate the forecasts for six months following the 1995 M = 6.4 Aigio earthquake using log-likelihood performance metrics. For the ETAS realizations, I use seismic events with magnitude M ≥ 2.5 within daily update intervals to enhance their predictive power. For assessing the role of background seismicity, I implement a stochastic reconstruction (aka declustering) aiming to answer whether M > 4.5 earthquakes correspond to spontaneous events and to identify, if possible, different triggering characteristics between aftershock sequences and swarm-type seismicity periods. I find that: (1) ETAS models outperform CRS models in most time intervals, achieving a very low rejection ratio RN = 6 per cent when I test their efficiency to forecast the total number of events inside the study area, (2) the best rejection ratio for CRS models reaches RN = 17 per cent, when I use varying target depths and receiver plane geometry, (3) 75 per cent of the 1995 Aigio aftershocks that occurred within the first month can be explained by static stress changes, (4) highly variable
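
    The ETAS class of clustering model referred to above has a temporal conditional intensity of the standard form sketched below; the parameter values are illustrative assumptions, not those fitted for the Corinth Gulf.

        import numpy as np

        def etas_intensity(t, event_times, event_mags, mu=0.2, K=0.02,
                           c=0.01, p=1.1, alpha=1.5, m_c=2.5):
            """lambda(t) = mu + sum over past events of
               K * exp(alpha * (M_i - m_c)) / (t - t_i + c)**p   (times in days)."""
            past = event_times < t
            dt = t - event_times[past]
            return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m_c))
                               / (dt + c) ** p)

        # Example: expected daily rate one day after a M 6.4 mainshock at t = 0
        times = np.array([0.0])
        mags = np.array([6.4])
        print(etas_intensity(1.0, times, mags))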

  16. The EM Earthquake Precursor

    NASA Astrophysics Data System (ADS)

    Jones, K. B., II; Saxton, P. T.

    2013-12-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and to warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate with an identifiable precursor. Secondly, there is neither a networked array for finding any epicentral locations, nor have there been any attempts to find even one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two

  17. Prospective earthquake forecasts at the Himalayan Front after the 25 April 2015 M 7.8 Gorkha Mainshock

    USGS Publications Warehouse

    Segou, Margaret; Parsons, Thomas E.

    2016-01-01

    When a major earthquake strikes, the resulting devastation can be compounded or even exceeded by the subsequent cascade of triggered seismicity. As the Nepalese recover from the 25 April 2015 shock, knowledge of what comes next is essential. We calculate the redistribution of crustal stresses and implied earthquake probabilities for different periods, from daily to 30 years into the future. An initial forecast was completed before an M 7.3 earthquake struck on 12 May 2015 that enables a preliminary assessment; postforecast seismicity has so far occurred within a zone of fivefold probability gain. Evaluation of the forecast performance, using two months of seismic data, reveals that stress‐based approaches present improved skill in higher‐magnitude triggered seismicity. Our results suggest that considering the total stress field, rather than only the coseismic one, improves the spatial performance of the model based on the estimation of a wide range of potential triggered faults following a mainshock.

  18. Jumping over the hurdles to effectively communicate the Operational Earthquake Forecast

    NASA Astrophysics Data System (ADS)

    McBride, S.; Wein, A. M.; Becker, J.; Potter, S.; Tilley, E. N.; Gerstenberger, M.; Orchiston, C.; Johnston, D. M.

    2016-12-01

    Probabilities, uncertainties, statistics, science, and threats are notoriously difficult topics to communicate with members of the public. The Operational Earthquake Forecast (OEF) is designed to provide an understanding of the potential numbers and sizes of earthquakes, and its communication must address all of those challenges. Furthermore, there are other barriers to effective communication of the OEF. These barriers include the erosion of trust in scientists and experts, oversaturation of messages, fear and threat messages magnified by the sensationalisation of the media, fractured media environments and online echo chambers. Given the complexities and challenges of the OEF, how can we overcome barriers to effective communication? Crisis and risk communication research can inform the development of communication strategies to increase the public understanding and use of the OEF, when applied to the opportunities and challenges of practice. We explore ongoing research regarding how the OEF can be more effectively communicated, including the channels, tools and message composition to engage with a variety of publics. We also draw on past experience and a study of OEF communication during the Canterbury Earthquake Sequence (CES). We demonstrate how research and experience have guided OEF communications during subsequent events in New Zealand, including the M5.7 Valentine's Day earthquake in 2016 (CES), the M6.0 Wilberforce earthquake in 2015, and the Cook Strait/Lake Grassmere earthquakes in 2013. We identify the successes and lessons learned from the practical communication of the OEF. Finally, we present future projects and directions in the communication of the OEF, informed by both practice and research.

  19. Evaluation of annual, global seismicity forecasts, including ensemble models

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner

    2013-04-01

    In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011; each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature to characterize the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading because some good models may be rejected while trivial models may pass consistency tests; iv) proper ensemble modeling seems to be a valuable procedure to obtain the best-performing model for practical purposes.
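
    The likelihood-based consistency checks mentioned above can be sketched in their simplest gridded form: a joint Poisson log-likelihood over cells (the basis of the CSEP L-test) and the N-test tail probabilities for the forecast total. This is the generic CSEP construction, not the authors' implementation.

        import numpy as np
        from scipy.stats import poisson

        def joint_log_likelihood(forecast_rates, observed_counts):
            """Sum over cells of log Poisson(n_i | lambda_i)."""
            return np.sum(poisson.logpmf(observed_counts, forecast_rates))

        def n_test(forecast_rates, observed_counts):
            """Tail probabilities of the observed total under a Poisson distribution
            with mean equal to the forecast total: P(N <= n_obs) and P(N >= n_obs)."""
            n_obs = observed_counts.sum()
            n_fore = forecast_rates.sum()
            return poisson.cdf(n_obs, n_fore), poisson.sf(n_obs - 1, n_fore)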

  20. Prospective Validation of Pre-earthquake Atmospheric Signals and Their Potential for Short–term Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Lee, Lou; Liu, Tiger; Kafatos, Menas

    2015-04-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. Our challenge question is: "Are such pre-earthquake atmospheric/ionospheric signals significant, and could they be useful for early warning of large earthquakes?" To check the predictive potential of atmospheric pre-earthquake signals we have started to validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (STIR), electron concentration in the ionosphere (GPS/TEC), radon/ion activities, air temperature and seismicity patterns) that were found to be associated with earthquakes. The science rationale for multidisciplinary analysis is based on the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually named short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009; (2) prospective testing of STIR anomalies with potential for M5.5+ events. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show STIR anomalous behavior before all of these events, with false negatives close to zero. The false alarm ratio for false positives is less than 25%. The initial prospective testing for STIR shows a systematic appearance of anomalies in advance (1-30 days) of the M5.5+ events for Taiwan, Kamchatka-Sakhalin (Russia) and Japan. Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies, one to several days prior to the largest earthquakes. That feature could be

  1. Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast

    NASA Astrophysics Data System (ADS)

    Toda, Shinji; Enescu, Bogdan

    2011-03-01

    Numerous studies have retrospectively found that the seismicity rate jumps (drops) in response to a coseismic Coulomb stress increase (decrease). The Collaboratory for the Study of Earthquake Predictability (CSEP) instead provides us with an opportunity for prospective testing of the Coulomb hypothesis. Here we adapt our stress transfer model, which incorporates the rate- and state-dependent friction law, to the CSEP Japan seismicity forecast. We demonstrate how to compute the forecast rates of large shocks in 2009 using the large earthquakes of the past 120 years. The time-dependent impact of the coseismic stress perturbations explains qualitatively well the occurrence of the recent moderate-size shocks. Such ability is partly similar to that of statistical earthquake clustering models. However, our model differs from them as follows: the off-fault aftershock zones can be simulated using finite fault sources; the regional areal patterns of triggered seismicity are modified by the dominant mechanisms of the potential sources; the imparted stresses due to large earthquakes produce stress shadows that lead to a reduction of the forecasted number of earthquakes. Although the model relies on several unknown parameters, it is the first physics-based model submitted to the CSEP Japan test center and has the potential to be tuned for short-term earthquake forecasts.
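
    The Coulomb hypothesis tested here reduces, for a given receiver fault, to the sign of the Coulomb failure stress change, as in the one-line sketch below; the effective friction coefficient is a typical assumed value, not necessarily the one used in the model.

        # Sign convention: shear stress change resolved in the slip direction of the
        # receiver fault, normal stress change positive for unclamping.

        def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
            """Delta CFF = d_tau + mu' * d_sigma_n (stresses in MPa); positive values
            bring the receiver fault closer to failure."""
            return d_shear + mu_eff * d_normal

        print(coulomb_stress_change(d_shear=0.05, d_normal=-0.02))  # 0.042 MPa -> promotes failure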

  2. Evaluation of induced seismicity forecast models in the Induced Seismicity Test Bench

    NASA Astrophysics Data System (ADS)

    Király, Eszter; Gischig, Valentin; Zechar, Jeremy; Doetsch, Joseph; Karvounis, Dimitrios; Wiemer, Stefan

    2016-04-01

    Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. Here, we propose an Induced Seismicity Test Bench to test and rank such models. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models that incorporate a different mix of physical understanding and stochastic representation of the induced sequences: Shapiro in Space (SiS) and Hydraulics and Seismics (HySei). SiS is based on three pillars: the seismicity rate is computed with the help of the seismogenic index and a simple exponential decay of the seismicity; the magnitude distribution follows the Gutenberg-Richter relation; and seismicity is distributed in space based on smoothing seismicity during the learning period with 3D Gaussian kernels. The HySei model describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. Our results show that neither model is fully superior to the other. HySei forecasts the seismicity rate well but is only mediocre at forecasting the spatial distribution. On the other hand, SiS forecasts the spatial distribution well but not the seismicity rate. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in. Ensemble models that combine HySei's rate forecast with SiS's spatial forecast outperform each individual model.
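
    The "Shapiro in Space" rate ingredient described above can be sketched as follows: during injection the event rate above a magnitude threshold scales with the injection rate via the seismogenic index, and after shut-in it is damped by a simple exponential decay. The seismogenic index, b-value and decay time below are illustrative assumptions, not the fitted Basel or Soultz values.

        import numpy as np

        def sis_rate(t, flow_rate, t_shutin, sigma=-1.0, b=1.3, m_min=1.0, tau=1.0):
            """Expected rate of events with M >= m_min at time t (days).
            flow_rate(t): injection rate in m^3/day; sigma is the seismogenic index,
            tau the post-shut-in decay time (all assumed)."""
            scale = 10.0 ** (sigma - b * m_min)
            if t <= t_shutin:
                return scale * flow_rate(t)
            return scale * flow_rate(t_shutin) * np.exp(-(t - t_shutin) / tau)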

  3. Update of the USGS 2016 One-year Seismic Hazard Forecast for the Central and Eastern United States From Induced and Natural Earthquakes

    NASA Astrophysics Data System (ADS)

    Petersen, M. D.; Mueller, C. S.; Moschetti, M. P.; Hoover, S. M.; Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.; Rubinstein, J. L.; McGarr, A.; Rukstales, K. S.

    2016-12-01

    The U.S. Geological Survey released a 2016 one-year forecast for seismic hazard in the central and eastern U.S., which included the influence of both induced and natural earthquakes. This forecast was primarily based on 2015 declustered seismicity rates but also included longer-term rates, 10- and 20-km smoothing distances, earthquakes between Mw 4.7 and maximum magnitudes of 6.0 or 7.1, and 9 alternative ground motion models. Results indicate that areas in Oklahoma, Kansas, Colorado, New Mexico, Arkansas, Texas, and the New Madrid Seismic Zone have a significant chance of damaging ground shaking levels in 2016 (greater than 1% chance of exceeding 0.12 g PGA and MMI VI). We evaluate this one-year forecast by considering the earthquakes and ground shaking levels that occurred during the first half of 2016 (earthquakes not included in the forecast). During this period the full catalog records hundreds of events with M ≥ 3.0, but the declustered catalog eliminates most of these dependent earthquakes and results in much lower numbers of earthquakes. The declustered catalog based on USGS ComCat indicates that a M 5.1 earthquake occurred in the zone of highest hazard on the map. Two additional earthquakes of M ≥ 4.0 occurred in Oklahoma, and about 82 earthquakes of M ≥ 3.0 occurred, with 77 in Oklahoma and Kansas, 4 in the Raton Basin of Colorado/New Mexico, and 1 near Cogdell, Texas. In addition, 72 earthquakes occurred outside the zones of induced seismicity, with more than half in New Madrid and eastern Tennessee. The catalog rates in the first half of 2016 and the corresponding seismic hazard were generally lower than in 2015. For example, the zones for Irving, Venus, and Fashing, Texas; Sun City, Kansas; and north-central Arkansas did not experience any earthquakes with M ≥ 2.7 during this period. The full catalog rates were lower by about 30% in the Raton Basin and the Oklahoma-Kansas zones, but the declustered catalog rates did not drop as much. This decrease in earthquake

  4. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991 the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, which success ratio would have been achieved in 53% of random trials with the null hypothesis.
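
    The null-hypothesis figure quoted above is the kind of binomial tail computation sketched below: if alarms were assigned at random but covered the same fraction of the space-time volume, the chance of matching k or more of n target earthquakes follows a binomial distribution. The coverage fraction in the example is assumed for illustration and does not reproduce the paper's 2.87%.

        from math import comb

        def p_success_by_chance(n, k, p):
            """P(k or more of n target earthquakes fall inside randomly placed alarms
            covering a fraction p of the space-time volume)."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        # Assumed 40% space-time coverage, 8 of 10 earthquakes "predicted":
        print(p_success_by_chance(n=10, k=8, p=0.4))   # ~1.2%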

  5. Aftershock Forecasting: Recent Developments and Lessons from the 2016 M5.8 Pawnee, Oklahoma, Earthquake

    NASA Astrophysics Data System (ADS)

    Michael, A. J.; Field, E. H.; Hardebeck, J.; Llenos, A. L.; Milner, K. R.; Page, M. T.; Perry, S. C.; van der Elst, N.; Wein, A. M.

    2016-12-01

    After the Mw 5.8 Pawnee, Oklahoma, earthquake of September 3, 2016 the USGS issued a series of aftershock forecasts for the next month and year. These forecasts were aimed at the emergency response community, those making decisions about well operations in the affected region, and the general public. The forecasts were generated manually using methods planned for automatically released Operational Aftershock Forecasts. The underlying method is from Reasenberg and Jones (Science, 1989) with improvements recently published in Page et al. (BSSA, 2016), implemented in a JAVA Graphical User Interface and presented in a template that is under development. The methodological improvements include initial models based on the tectonic regime as defined by Garcia et al. (BSSA, 2012) and the inclusion of both uncertainty in the clustering parameters and natural random variability. We did not utilize the time-dependent magnitude of completeness model from Page et al. because it applies only to teleseismic events recorded by NEIC. The parameters for Garcia's Generic Active Continental Region underestimated the modified-Omori decay parameter and underestimated the aftershock rate by a factor of 2. And the sequence following the Mw 5.7 Prague, Oklahoma, earthquake of November 6, 2011 was about 3 to 4 times more productive than the Pawnee sequence. The high productivity for these potentially induced sequences is consistent with an increase in productivity in Oklahoma since 2009 (Llenos and Michael, BSSA, 2013) and makes a general tectonic model inapplicable to sequences in this region. Soon after the mainshock occurred, the forecasts relied on the sequence specific parameters. After one month, the Omori decay parameter p is less than one, implying a very long-lived sequence. However, the decay parameter is known to be biased low at early times due to secondary aftershock triggering, and the p-value determined early in the sequence may be inaccurate for long-term forecasting.
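
    The underlying Reasenberg and Jones (1989) model combines a Gutenberg-Richter magnitude distribution with modified-Omori decay, giving an aftershock rate and, under a Poisson assumption, the probability of one or more aftershocks above a given magnitude in a time window. The sketch below uses generic illustrative parameters, not the sequence-specific Pawnee values.

        import numpy as np

        def rj_rate(t, mag, main_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
            """Rate of aftershocks with M >= mag at time t (days) after the mainshock:
            lambda(t, M) = 10**(a + b*(M_m - M)) / (t + c)**p."""
            return 10.0 ** (a + b * (main_mag - mag)) / (t + c) ** p

        def prob_one_or_more(mag, main_mag, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
            """P(>= 1 aftershock with M >= mag in [t1, t2] days), Poisson assumption."""
            k = 10.0 ** (a + b * (main_mag - mag))
            if abs(p - 1.0) < 1e-9:
                n = k * (np.log(t2 + c) - np.log(t1 + c))
            else:
                n = k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
            return 1.0 - np.exp(-n)

        # Chance of an M >= 5 aftershock of an M 5.8 mainshock during the first week:
        print(prob_one_or_more(mag=5.0, main_mag=5.8, t1=0.0, t2=7.0))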

  6. Spatial Distribution of the Coefficient of Variation and Bayesian Forecast for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, Shunichi; Ogata, Yosihiko

    2016-04-01

    We propose a Bayesian method of probability forecasting for recurrent earthquakes of inland active faults in Japan. Renewal processes with the Brownian Passage Time (BPT) distribution are applied to over half of the active faults in Japan by the Headquarters for Earthquake Research Promotion (HERP) of Japan. Long-term forecasting with the BPT distribution needs two parameters: the mean and the coefficient of variation (COV) of recurrence intervals. HERP applies a common COV parameter to all of these faults because most of them have very few specified paleoseismic events, which is not enough to estimate reliable COV values for the respective faults. However, different COV estimates have been proposed for the same paleoseismic catalog by some related works. Applying different COV estimates can make a critical difference in the forecast, and so the COV should be carefully selected for individual faults. Recurrence intervals on a fault are, on average, determined by the long-term slip rate caused by tectonic motion but are perturbed by nearby seismicity, which influences the surrounding stress field. The COVs of recurrence intervals depend on such stress perturbations and so have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus we introduce a spatial structure on the COV parameter by Bayesian modeling with a Gaussian process prior. The COVs on active faults are correlated and take similar values for closely located faults. It is found that the spatial trends in the estimated COV values coincide with the density of active faults in Japan. We also show Bayesian forecasts by the proposed model using a Markov chain Monte Carlo method. Our forecasts differ from HERP's forecasts, especially on the active faults where HERP's forecasts are very high or low.
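
    The BPT renewal forecast discussed above amounts to a conditional probability computed from an inverse-Gaussian distribution parameterized by the mean recurrence interval and the COV. The sketch below shows how strongly a 30-year conditional probability depends on the COV choice; the fault parameters in the example are assumptions, not values for any real fault.

        from scipy.stats import invgauss

        def bpt_conditional_probability(elapsed, horizon, mean_ri, cov):
            """P(event within `horizon` years | no event in the `elapsed` years so far),
            for a BPT distribution with mean `mean_ri` and coefficient of variation `cov`."""
            dist = invgauss(mu=cov**2, scale=mean_ri / cov**2)   # mean = mean_ri, COV = cov
            p_survive = dist.sf(elapsed)
            return (dist.cdf(elapsed + horizon) - dist.cdf(elapsed)) / p_survive

        # 30-year probability on a fault with a 1000-year mean interval and 800 years
        # elapsed, under two different COV choices (illustrative numbers only):
        for cov in (0.24, 0.5):
            print(cov, bpt_conditional_probability(800.0, 30.0, 1000.0, cov))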

  7. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for
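
    A minimal sketch of the core idea, a likelihood-optimized blend of two kernel-smoothed spatial densities, is given below; it is not the SHARE model code, the synthetic epicenters and fault points are hypothetical, and the real model additionally weights fault points by moment rate and optimizes the kernel bandwidths.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(42)

        # Hypothetical inputs: past epicenters, points sampled along mapped faults,
        # and a later "target" catalog used for the retrospective likelihood score
        # (each array holds longitude/latitude pairs, transposed for gaussian_kde).
        past_quakes = rng.normal(loc=[25.0, 38.0], scale=0.5, size=(200, 2)).T
        fault_points = rng.normal(loc=[26.0, 38.5], scale=0.3, size=(300, 2)).T
        target_quakes = rng.normal(loc=[25.5, 38.2], scale=0.4, size=(50, 2)).T

        d_seis = gaussian_kde(past_quakes)    # density from past seismicity
        d_fault = gaussian_kde(fault_points)  # density from fault locations

        def log_likelihood(weight):
            """Retrospective log-likelihood of the target events under the blend."""
            blended = weight * d_seis(target_quakes) + (1.0 - weight) * d_fault(target_quakes)
            return np.sum(np.log(blended))

        weights = np.linspace(0.0, 1.0, 21)
        best = max(weights, key=log_likelihood)
        print(f"optimal seismicity weight ~ {best:.2f}")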

  8. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) calibration of the seismic intensity attenuation model using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers using past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to vulnerability indexes 10% to 40% larger for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated using "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
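
    As a small illustration of how the rebuilding cost factors quoted above enter a loss calculation, the sketch below converts a damage-grade probability distribution into a mean damage ratio; the damage-state probabilities are hypothetical, and the actual Impact Forecasting vulnerability functions are not reproduced here.

        # Rebuilding-cost factors per EMS-98 damage grade, as quoted in the abstract.
        COST_FACTORS = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

        def mean_damage_ratio(grade_probs):
            """Expected loss as a fraction of replacement value, given probabilities
            of reaching each damage grade (grade 0 / no damage is the remainder)."""
            return sum(COST_FACTORS[g] * p for g, p in grade_probs.items())

        # Hypothetical damage-state distribution for one building class and intensity.
        probs = {1: 0.25, 2: 0.15, 3: 0.08, 4: 0.03, 5: 0.01}
        loss_ratio = mean_damage_ratio(probs)
        print(f"mean damage ratio: {loss_ratio:.3f}")  # multiply by exposed value for loss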

  9. ViscoSim Earthquake Simulator

    USGS Publications Warehouse

    Pollitz, Fred

    2012-01-01

    Synthetic seismicity simulations have been explored by the Southern California Earthquake Center (SCEC) Earthquake Simulators Group in order to guide long‐term forecasting efforts related to the Uniform California Earthquake Rupture Forecast (Tullis et al., 2012a). In this study I describe the viscoelastic earthquake simulator (ViscoSim) of Pollitz (2009). Recapitulating to a large extent material previously presented by Pollitz (2009, 2011), I describe its implementation of synthetic ruptures and how it differs from other simulators being used by the group.

  10. The Virtual Quake Earthquake Simulator: Earthquake Probability Statistics for the El Mayor-Cucapah Region and Evidence of Predictability in Simulated Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.

    2015-12-01

    We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric similar to those presented in Keilis-Borok (2002) and Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent-type earthquake triggering. We show that VQ exhibits both behaviors separately for independent fault sections; some fault sections exhibit activation-type triggering, while others are better characterized by quiescent-type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

  11. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast from the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than by a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
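
    The pseudoprospective likelihood testing mentioned above scores a gridded rate forecast with a joint Poisson log-likelihood over cells. The sketch below shows that score in isolation (CSEP L-test style); the cell rates and observed counts are hypothetical, and the adaptive smoothing itself is not implemented here.

        import numpy as np
        from scipy.special import gammaln

        def poisson_log_likelihood(forecast_rates, observed_counts):
            """Joint log-likelihood of observed earthquake counts per cell under a
            gridded forecast of expected counts per cell."""
            rates = np.asarray(forecast_rates, dtype=float)
            counts = np.asarray(observed_counts, dtype=float)
            return np.sum(-rates + counts * np.log(rates) - gammaln(counts + 1.0))

        # Hypothetical 5-cell forecast (expected counts) and observed counts.
        rates = [0.2, 1.5, 0.05, 0.8, 0.3]
        counts = [0, 2, 0, 1, 0]
        print(f"log-likelihood: {poisson_log_likelihood(rates, counts):.3f}")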

  12. Lessons Learned about Best Practices for Communicating Earthquake Forecasting and Early Warning to Non-Scientific Publics

    NASA Astrophysics Data System (ADS)

    Sellnow, D. D.; Sellnow, T. L.

    2017-12-01

    Earthquake scientists are without doubt experts in understanding earthquake probabilities, magnitudes, and intensities, as well as their potential consequences for community infrastructure and inhabitants. One critical challenge these scientific experts face, however, rests with communicating what they know to the people they want to help. Helping scientists translate scientific information for non-scientists is something Drs. Tim and Deanna Sellnow have been committed to for decades. As such, they have compiled a host of data-driven best practices for communicating effectively with non-scientific publics about earthquake forecasting, probabilities, and warnings. In this session, they will summarize what they have learned as it may help earthquake scientists, emergency managers, and other key spokespersons share these important messages with disparate publics in ways that result in positive outcomes, the most important of which is saving lives.

  13. Forecasting the evolution of seismicity in southern California: Animations built on earthquake stress transfer

    USGS Publications Warehouse

    Toda, S.; Stein, R.S.; Richards-Dinger, K.; Bozkurt, S.B.

    2005-01-01

    We develop a forecast model to reproduce the distribution of main shocks, aftershocks and surrounding seismicity observed during 1986-2003 in a 300 × 310 km area centered on the 1992 M = 7.3 Landers earthquake. To parse the catalog into frames with equal numbers of aftershocks, we animate seismicity in log time increments that lengthen after each main shock; this reveals aftershock zone migration, expansion, and densification. We implement a rate/state algorithm that incorporates the static stress transferred by each M ≥ 6 shock and then evolves. Coulomb stress changes amplify the background seismicity, so small stress changes produce large changes in seismicity rate in areas of high background seismicity. Similarly, seismicity rate declines in the stress shadows are evident only in areas with previously high seismicity rates. Thus a key constituent of the model is the background seismicity rate, which we smooth from 1981 to 1986 seismicity. The mean correlation coefficient between observed and predicted M ≥ 1.4 shocks (the minimum magnitude of completeness) is 0.52 for 1986-2003 and 0.63 for 1992-2003; a control standard aftershock model yields 0.54 and 0.52 for the same periods. Four M ≥ 6.0 shocks struck during the test period; three are located at sites where the expected seismicity rate falls above the 92nd percentile, and one is located above the 75th percentile. The model thus reproduces much, but certainly not all, of the observed spatial and temporal seismicity, from which we infer that the decaying effect of stress transferred by successive main shocks influences seismicity for decades. Finally, we offer an M ≥ 5 earthquake forecast for 2005-2015, assigning probabilities to 324 10 × 10 km cells.
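
    The rate/state ingredient of such a model can be illustrated with the Dieterich (1994) response of seismicity rate to a static Coulomb stress step, sketched below; the stress step, A-sigma, aftershock duration, and background rate are illustrative values, not those calibrated in the study.

        import numpy as np

        def rate_state_response(t, background_rate, dcfs, a_sigma, t_a):
            """Seismicity rate R(t) after a static Coulomb stress step `dcfs`,
            following Dieterich (1994):
            R = r / ((exp(-dCFS / A*sigma) - 1) * exp(-t / t_a) + 1).
            Positive stress changes amplify the background rate; negative changes
            (stress shadows) suppress it, most visibly where the background rate is high."""
            return background_rate / ((np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

        # Illustrative values: 0.05 MPa (0.5 bar) stress increase, A*sigma = 0.04 MPa,
        # 10-year aftershock duration, background rate of 2 events/yr in the cell.
        t = np.array([0.01, 0.1, 1.0, 5.0, 20.0])  # years after the stress step
        print(rate_state_response(t, background_rate=2.0, dcfs=0.05, a_sigma=0.04, t_a=10.0))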

  14. Real-time forecasting and predictability of catastrophic failure events: from rock failure to volcanoes and earthquakes

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Bell, A. F.; Naylor, M.; Atkinson, M.; Filguera, R.; Meredith, P. G.; Brantut, N.

    2012-12-01

    Accurate prediction of catastrophic brittle failure in rocks and in the Earth presents a significant challenge on theoretical and practical grounds. The governing equations are not known precisely, but are known to produce highly non-linear behavior similar to that of near-critical dynamical systems, with a large and irreducible stochastic component due to material heterogeneity. In a laboratory setting, mechanical, hydraulic and rock physical properties are known to change in systematic ways prior to catastrophic failure, often with significant non-Gaussian fluctuations about the mean signal at a given time, for example in the rate of remotely sensed acoustic emissions. The effectiveness of such signals in real-time forecasting has never before been tested in a controlled laboratory setting, and previous work has often been qualitative in nature and subject to retrospective selection bias, though it has often been invoked as a basis for forecasting natural hazard events such as volcanoes and earthquakes. Here we describe a collaborative experiment in real-time data assimilation to explore the limits of predictability of rock failure in a best-case scenario. Data are streamed from a remote rock deformation laboratory to a user-friendly portal, where several proposed physical/stochastic models can be analysed in parallel in real time, using a variety of statistical fitting techniques, including least squares regression, maximum likelihood fitting, Markov-chain Monte Carlo and Bayesian analysis. The results are posted and regularly updated on the web site prior to catastrophic failure, to ensure a true and verifiable prospective test of forecasting power. Preliminary tests on synthetic data with known non-Gaussian statistics show how forecasting power is likely to evolve in the live experiments. In general the predicted failure time does converge on the real failure time, illustrating the bias associated with the 'benefit of hindsight' in retrospective analyses
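
    Of the fitting techniques listed above, the simplest is a least-squares version of the classic failure forecast method: for an inverse-power-law acceleration, the inverse event rate decays linearly to zero at the failure time. The sketch below demonstrates this on synthetic data; it is not the project's data-assimilation portal, and the rate law and noise model are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic accelerating precursor: event rate following an inverse power law
        # that diverges at a "true" failure time t_f = 100 (arbitrary units).
        t_f_true = 100.0
        t = np.linspace(10.0, 90.0, 40)
        rate = 50.0 / (t_f_true - t) * rng.lognormal(0.0, 0.2, size=t.size)  # noisy rates

        # Failure-forecast method: for this rate law, 1/rate decays linearly to zero
        # at t_f, so a straight-line fit to the inverse rate predicts the failure time.
        slope, intercept = np.polyfit(t, 1.0 / rate, 1)
        t_f_predicted = -intercept / slope
        print(f"predicted failure time: {t_f_predicted:.1f} (true: {t_f_true})")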

  15. Development, testing, and applications of site-specific tsunami inundation models for real-time forecasting

    NASA Astrophysics Data System (ADS)

    Tang, L.; Titov, V. V.; Chamberlin, C. D.

    2009-12-01

    The study describes the development, testing and applications of site-specific tsunami inundation models (forecast models) for use in NOAA's tsunami forecast and warning system. The model development process includes sensitivity studies of tsunami wave characteristics in the nearshore and inundation, for a range of model grid setups, resolutions and parameters. To demonstrate the process, four forecast models in Hawaii, at Hilo, Kahului, Honolulu, and Nawiliwili, are described. The models were validated against fourteen historical tsunamis and compared with numerical results from reference inundation models of higher resolution. The accuracy of the modeled maximum wave height is greater than 80% when the observation is greater than 0.5 m; when the observation is below 0.5 m, the error is less than 0.3 m. The error of the modeled arrival time of the first peak is within 3% of the travel time. The developed forecast models were further applied to hazard assessment from simulated magnitude 7.5, 8.2, 8.7 and 9.3 tsunamis based on subduction zone earthquakes in the Pacific. The tsunami hazard assessment study indicates that use of a seismic magnitude alone for tsunami source assessment is inadequate to achieve such accuracy for tsunami amplitude forecasts. The forecast models apply local bathymetric and topographic information, and utilize dynamic boundary conditions from the tsunami source function database, to provide site- and event-specific coastal predictions. Only by combining a Deep-ocean Assessment and Reporting of Tsunamis (DART)-constrained tsunami magnitude with site-specific high-resolution models can the forecasts completely cover the evolution of earthquake-generated tsunami waves: generation, deep ocean propagation, and coastal inundation. Wavelet analysis of the tsunami waves suggests the coastal tsunami frequency responses at different sites are dominated by the local bathymetry, yet they can be partially related to the locations of the tsunami sources. The study

  16. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is given of what the rational constitutive law for earthquake ruptures ought to be, from the standpoint of the physics of rock friction and fracture and on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step in establishing the methodology for forecasting large earthquakes.

  17. Identified EM Earthquake Precursors

    NASA Astrophysics Data System (ADS)

    Jones, Kenneth, II; Saxton, Patrick

    2014-05-01

    Many attempts have been made to determine a sound forecasting method for earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013. The antennae have mobility and observations were noted for

  18. Data sensitivity in a hybrid STEP/Coulomb model for aftershock forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, S.; Jimenez Lloret, A.; Gerstenberger, M.

    2014-12-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip, and we also examine how the choice of receiver plane geometry affects the results. We find that the results are strongly sensitive to the slip models and moderately sensitive to the choice of receiver orientation. We further find that comparison of the stress fields (resulting from the slip models) with the location of events in the learning period provides advance information on whether or not a particular hybrid model will perform better than STEP.
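
    The Coulomb ingredient of such hybrid models reduces, for a given receiver plane, to the standard combination of shear and normal stress changes. The trivial sketch below shows only that final step with hypothetical resolved stresses; resolving the stress tensor from a slip model onto receiver geometries, which is where the sensitivity discussed above arises, is not shown.

        def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
            """Coulomb failure stress change on a receiver plane:
            dCFS = d_shear + mu_eff * d_normal, with the shear stress change taken
            positive in the slip direction and the normal stress change positive
            for unclamping. Positive dCFS promotes failure."""
            return d_shear + mu_eff * d_normal

        # Hypothetical resolved stresses (MPa) from two alternative slip models on the
        # same receiver plane, illustrating how slip-model uncertainty can flip the sign.
        print(coulomb_stress_change(0.12, -0.05))   # positive: failure promoted
        print(coulomb_stress_change(-0.03, -0.10))  # negative: stress shadow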

  19. On some methods for assessing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.; Peresan, A.

    2017-09-01

    A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted for the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

  20. Multicomponent ensemble models to forecast induced seismicity

    NASA Astrophysics Data System (ADS)

    Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.

    2018-01-01

    In recent years, human-induced seismicity has become an increasingly relevant topic due to its economic and social implications. Several models and approaches have been developed to explain underlying physical processes or to forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether or not models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels
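
    One simple way to form likelihood-based ensemble weights of the kind described above is sketched below; it is not the Induced Seismicity Test Bench code, equal prior model weights are assumed, and the calibration log-likelihoods and gridded rates are hypothetical.

        import numpy as np

        def ensemble_weights(log_likelihoods):
            """Normalized weights proportional to exp(log-likelihood), computed in a
            numerically stable way (equal prior model weights assumed)."""
            ll = np.asarray(log_likelihoods, dtype=float)
            w = np.exp(ll - ll.max())
            return w / w.sum()

        def ensemble_forecast(model_rates, weights):
            """Weighted sum of the individual models' gridded rate forecasts."""
            return np.tensordot(weights, np.asarray(model_rates, dtype=float), axes=1)

        # Hypothetical: three models, their calibration-period log-likelihoods,
        # and their forecast rates for four space-magnitude bins.
        ll = [-120.3, -118.7, -125.9]
        rates = [[0.8, 0.2, 0.1, 0.05],
                 [1.1, 0.3, 0.05, 0.02],
                 [0.5, 0.1, 0.2, 0.10]]
        w = ensemble_weights(ll)
        print("weights:", np.round(w, 3))
        print("ensemble rates:", np.round(ensemble_forecast(rates, w), 3))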

  1. GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.

    2015-12-01

    The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena have started to be considered by the GPS community in order to mitigate GPS signal degradation over the territories of earthquake preparation. The question is still open whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies has shown that ionospheric anomalies registered before earthquakes are initiated by processes in the boundary layer of the atmosphere over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecasting is possible only if we consider the final stage of earthquake preparation in a multidimensional space where every dimension is one of many precursors in an ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland, within the framework of the ISSI and ESA projects) to identify the ionospheric precursors. They are also useful for determining all three parameters necessary for an earthquake forecast: the impending earthquake's epicenter position, expectation time and magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. The results obtained by different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.

  2. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    USGS Publications Warehouse

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  3. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that a potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  4. Tests of remote aftershock triggering by small mainshocks using Taiwan's earthquake catalog

    NASA Astrophysics Data System (ADS)

    Peng, W.; Toda, S.

    2014-12-01

    To understand earthquake interaction and forecast time-dependent seismic hazard, it is essential to evaluate which stress transfer, static or dynamic, plays the major role in triggering aftershocks and subsequent mainshocks. Felzer and Brodsky focused on small mainshocks (2≤M<3) and their aftershocks, and argued that only dynamic stress change brings earthquake-to-earthquake triggering, whereas Richards-Dinger et al. (2010) claimed that those selected small mainshock-aftershock pairs reflected not earthquake-to-earthquake triggering but the simultaneous occurrence of independent aftershocks following a larger earthquake or during a significant swarm sequence. We test those hypotheses using Taiwan's earthquake catalog, taking advantage of its lack of any larger event and the absence of the significant seismic swarms typically seen with active volcanoes. Using Felzer and Brodsky's method and their standard parameters, we found only 14 mainshock-aftershock pairs within a 20 km distance in Taiwan's catalog from 1994 to 2010. Although Taiwan's catalog has a similar number of earthquakes to California's, the number of pairs is about 10% of that in the California catalog. This may reflect the absence of large earthquakes and significant seismic swarms in the catalog. To fully understand the properties of Taiwan's catalog, we loosened the screening parameters to obtain more pairs and then found a linear aftershock density with a power-law decay of -1.12±0.38, very similar to that of Felzer and Brodsky. However, none of those mainshock-aftershock pairs were associated with an M7 rupture event or M6 events. To find what mechanism controlled the aftershock density triggered by small mainshocks in Taiwan, we randomized earthquake magnitudes and locations. We then found that the density decay over short time periods is more consistent with randomized behavior than with mainshock-aftershock triggering. Moreover, 5 out of 6 pairs were found in a swarm-like temporal seismicity rate increase

  5. Forecasting in Complex Systems

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2014-12-01

    Complex nonlinear systems are typically characterized by many degrees of freedom, as well as interactions between the elements. Interesting examples can be found in the areas of earthquakes and finance. In these two systems, fat tails play an important role in the statistical dynamics. For earthquake systems, the Gutenberg-Richter magnitude-frequency relation is applicable, whereas daily returns for securities in the financial markets are known to be characterized by leptokurtic statistics in which the tails are power law. Very large fluctuations are present in both systems. In earthquake systems, one has the example of great earthquakes such as the M9.1, March 11, 2011 Tohoku event. In financial systems, one has the example of the market crash of October 19, 1987. Both were largely unexpected events that severely impacted the earth and financial systems systemically. Other examples include the M9.3 Andaman earthquake of December 26, 2004, and the Great Recession, which began with the fall of the Lehman Brothers investment bank in September 2008. Forecasting the occurrence of these damaging events has great societal importance. In recent years, national funding agencies in a variety of countries have emphasized the importance of societal relevance in research and, in particular, the goal of improved forecasting technology. Previous work has shown that both earthquakes and financial crashes can be described by a common Landau-Ginzburg-type free energy model. These metastable systems are characterized by fat-tail statistics near the classical spinodal. Correlations in these systems can grow and recede, but do not imply causation, a common source of misunderstanding. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In this talk, we describe the basic phenomenology of these systems and emphasize their similarities and differences. We also consider the problem of forecast validation and verification

  6. Operational foreshock forecasting: Fifteen years after

    NASA Astrophysics Data System (ADS)

    Ogata, Y.

    2010-12-01

    We are concerned with operational forecasting of the probability that events are foreshocks of a forthcoming earthquake that is significantly larger (the mainshock). Specifically, we define foreshocks as preshocks smaller than the mainshock by a magnitude gap of 0.5 or larger. The probability gain of a foreshock forecast is extremely high compared to long-term forecasts by renewal processes or various alarm-based intermediate-term forecasts, because of a large event's low occurrence rate in a short period and a narrow target region. It is therefore desirable to establish operational foreshock probability forecasting, as seismologists have done for aftershocks. When a series of earthquakes occurs in a region, we attempt to discriminate foreshocks from a swarm or a mainshock-aftershock sequence. Namely, after real-time identification of an earthquake cluster using methods such as the single-link algorithm, the probability is calculated by applying statistical features that discriminate foreshocks from other types of clusters, considering the events' stronger proximity in time and space and their tendency towards chronologically increasing magnitudes. These features were modeled for probability forecasting, and the coefficients of the model were estimated in Ogata et al. (1996) for the JMA hypocenter data (M≧4, 1926-1993). Fifteen years have now passed since the publication of that work, so we are able to present the performance and validation of the forecasts (1994-2009) using the same model. Taking isolated events into consideration, the probability of the first event in a potential cluster being a foreshock varies in a range between 0+% and 10+% depending on its location. This conditional forecasting performs significantly better than the unconditional (average) foreshock probability of 3.7% throughout the Japan region. Furthermore, when we have additional events in a cluster, the forecast probabilities range more widely from nearly 0% to

  7. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring

  8. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
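
    The sketch below illustrates the kind of significance test described, scoring an alarm-based prediction method against simulated catalogs; only a Poisson (unclustered) null is simulated here, whereas the study's point is that the simulated catalogs must also reproduce the observed clustering. The alarm windows, observed success fraction, and event counts are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        def fraction_in_alarms(event_times, alarm_windows):
            """Fraction of events falling inside any alarm window [(start, end), ...]."""
            hits = sum(any(s <= t < e for s, e in alarm_windows) for t in event_times)
            return hits / len(event_times)

        # Hypothetical alarms covering 20% of a 1000-day test period, and an observed
        # success fraction for the prediction method being tested.
        alarms = [(100, 150), (300, 400), (700, 750)]
        observed_success = 0.35
        n_events, n_sims = 40, 10000

        # Poisson (unclustered) null: event times uniform over the test period.
        sim_success = np.array([
            fraction_in_alarms(rng.uniform(0, 1000, n_events), alarms)
            for _ in range(n_sims)
        ])
        p_value = np.mean(sim_success >= observed_success)
        print(f"significance level under the Poisson null: {p_value:.4f}")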

  9. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    USGS Publications Warehouse

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yielded a substantial change in b-values.
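
    A minimal sketch of the standard calculation behind such a forecast, an Aki/Utsu maximum-likelihood b-value estimate extrapolated along the Gutenberg-Richter relation to an M≥6 rate, is given below on a synthetic catalog; it does not reproduce the study's spatial b-value mapping, and the catalog size and completeness magnitude are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def b_value_mle(mags, m_c, dm=0.1):
            """Aki/Utsu maximum-likelihood b-value for magnitudes >= completeness m_c,
            with a correction for the magnitude binning width dm."""
            m = np.asarray(mags)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

        def annual_rate_above(mags, years, m_c, m_target):
            """Extrapolate the annual rate of M >= m_target from the catalog's rate of
            M >= m_c events, assuming a Gutenberg-Richter distribution."""
            b = b_value_mle(mags, m_c)
            rate_mc = np.sum(np.asarray(mags) >= m_c) / years
            return rate_mc * 10.0 ** (-b * (m_target - m_c)), b

        # Synthetic 20-year catalog: true b = 1.0, complete above the M1.5 bin (0.1 binning).
        mags = np.round(1.45 + rng.exponential(scale=np.log10(np.e), size=4000), 1)
        rate_m6, b = annual_rate_above(mags, years=20.0, m_c=1.5, m_target=6.0)
        print(f"b = {b:.2f}, forecast M>=6 rate = {rate_m6:.4f} per year "
              f"(recurrence ~ {1.0 / rate_m6:.0f} yr)")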

  10. The nature of earthquake prediction

    USGS Publications Warehouse

    Lindh, A.G.

    1991-01-01

    Earthquake prediction is inherently statistical. Although some people continue to think of earthquake prediction as the specification of the time, place, and magnitude of a future earthquake, it has been clear for at least a decade that this is an unrealistic and unreasonable definition. The reality is that earthquake prediction starts from the long-term forecasts of place and magnitude, with very approximate time constraints, and progresses, at least in principle, to a gradual narrowing of the time window as data and understanding permit. Primitive long-term forecasts are clearly possible at this time on a few well-characterized fault systems. Tightly focused monitoring experiments aimed at short-term prediction are already underway in Parkfield, California, and in the Tokai region in Japan; only time will tell how much progress will be possible.

  11. Computing and Visualizing the Complex Dynamics of Earthquake Fault Systems: Towards Ensemble Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.

    2003-12-01

    We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike-slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike-slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess the scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.

  12. Ergodicity and Phase Transitions and Their Implications for Earthquake Forecasting.

    NASA Astrophysics Data System (ADS)

    Klein, W.

    2017-12-01

    Forecasting earthquakes or even predicting the statistical distribution of events on a given fault is extremely difficult. One reason for this difficulty is the large number of fault characteristics that can affect the distribution and timing of events. The range of stress transfer, the level of noise, and the nature of the friction force all influence the type of events, and the values of these parameters can vary from fault to fault and also vary with time. In addition, the geometrical structure of the faults and the correlation of events on different faults play an important role in determining event sizes and their distribution. Another reason for the difficulty is that the important fault characteristics are not easily measured. The noise level, fault structure, stress transfer range, and the nature of the friction force are extremely difficult, if not impossible, to ascertain. Given this lack of information, one of the most useful approaches to understanding the effect of fault characteristics and the way they interact is to develop and investigate models of faults and fault systems. In this talk I will present results obtained from a series of models of varying abstraction and compare them with data from actual faults. We are able to provide a physical basis for several observed phenomena, such as the earthquake cycle, the fact that some faults display Gutenberg-Richter scaling and others do not, and that some faults exhibit quasi-periodic characteristic events and others do not. I will also discuss some surprising results, such as the fact that some faults are in thermodynamic equilibrium depending on the stress transfer range and the noise level. An example of an important conclusion that can be drawn from this work is that the statistical distribution of earthquake events can vary from fault to fault and that an indication of an impending large event, such as accelerating moment release, may be relevant on some faults but not on others.

  13. Selecting single model in combination forecasting based on cointegration test and encompassing test.

    PubMed

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

    Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the roles of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives guidance for single-model selection: no more than five suitable single models should be selected from the many alternative single models for a given forecasting target, which increases accuracy and stability.

  14. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spreadsheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
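
    A minimal sketch of simple exponential smoothing with a mean absolute percentage error evaluation, of the kind described above, is given below; it omits the deseasonalization and preference-statistic steps, and the customer counts and smoothing constant are hypothetical.

        import numpy as np

        def simple_exponential_smoothing(series, alpha):
            """One-step-ahead forecasts: F[t+1] = alpha * y[t] + (1 - alpha) * F[t]."""
            forecasts = [series[0]]  # initialize with the first observation
            for y in series[:-1]:
                forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
            return np.array(forecasts)

        def mape(actual, forecast):
            """Mean absolute percentage error."""
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))

        # Hypothetical daily customer counts for one dining hall.
        counts = [420, 435, 410, 450, 465, 440, 455, 470, 460, 480]
        fc = simple_exponential_smoothing(counts, alpha=0.3)
        print(f"MAPE: {mape(counts, fc):.1f}%")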

  15. Fractals and Forecasting in Earthquakes and Finance

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article titled "A Richter Scale for the Markets" [1] summarized the emerging viewpoint that stock market crashes can be described with similar ideas as large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that earthquakes and financial crashes can both be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets, it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)

  16. Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test

    PubMed Central

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

    Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the roles of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives guidance for single-model selection: no more than five suitable single models should be selected from the many alternative single models for a given forecasting target, which increases accuracy and stability. PMID:24892061

  17. Long‐term time‐dependent probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua

    2015-01-01

    The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the

  18. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  19. EM Earthquake Precursor Detection Associated with Fluid Injection for Hydraulic Fracturing and Tectonic Sources

    NASA Astrophysics Data System (ADS)

    Jones, Kenneth B., II

    2015-04-01

    Many attempts have been made to devise an earthquake forecasting method and, in turn, warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic wave model, various hypotheses were formed, but only two seemed to take shape, with the most interesting one requiring a magnetometer of a unique design. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, results have shown wide variability, and problems remain with what exactly is forecastable and with the investigative direction of a true precursor. After a number of custom rock experiments, the two hypotheses were thoroughly tested against the EM wave model. The first hypothesis involved sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio wave generation. The second hypothesis performed best, with highly reproducible data for radio wave generation and detection, and worked numerous times in each laboratory test administered. In addition, internally introduced force on a small scale stressed a number of select rock types to emit radio waves well before catastrophic failure, and failure always went to completion. Comparatively, at a larger scale, highly detailed studies were procured to establish legitimate wave guides from potential hypocenters to epicenters and to map the results accordingly. Field testing in Southern California from 2006 to 2011 and outside the NE Texas town of Timpson in February 2013 was conducted to detect similar, laboratory-generated radio wave sources. At the Southern California field sites, signals were detected in numerous directions with varying amplitudes; therefore, a reactive approach was investigated in hopes of detecting possible aftershocks from large, tectonically related M5.0+ earthquakes. At the Timpson

  20. Large earthquake rates from geologic, geodetic, and seismological perspectives

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2017-12-01

    Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include the temporal behavior of seismic and tectonic moment rates; the shape of the earthquake magnitude distribution; the upper magnitude limit; scaling between rupture length, width, and displacement; the depth dependence of stress coupling; the value of crustal rigidity; and the relation between faults at depth and their surface fault traces, to name just a few. In this report I'll estimate the quantitative implications for estimating large earthquake rates. Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes
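
    A hedged sketch of the moment-rate bookkeeping referred to above: the tectonic moment rate implied by a fault's geometry and slip rate is compared with the seismic moment rate summed from a catalog via the Hanks-Kanamori relation. The rigidity, fault dimensions, slip rate, and catalog are illustrative assumptions.

    ```python
    # Illustrative comparison of tectonic vs. seismic moment rates (all numbers
    # are placeholder assumptions, not estimates from the abstract).
    import numpy as np

    MU = 3.0e10                      # crustal rigidity, Pa (assumed)
    LENGTH, WIDTH = 300e3, 15e3      # fault dimensions, m (assumed)
    SLIP_RATE = 0.02                 # 20 mm/yr expressed in m/yr (assumed)

    def moment_from_magnitude(mw):
        """Seismic moment in N*m from moment magnitude (Hanks & Kanamori)."""
        return 10 ** (1.5 * mw + 9.1)

    tectonic_rate = MU * LENGTH * WIDTH * SLIP_RATE          # N*m per year

    catalog_mw = np.array([6.1, 5.4, 7.2, 6.8, 5.9])          # synthetic catalog
    catalog_years = 100.0
    seismic_rate = moment_from_magnitude(catalog_mw).sum() / catalog_years

    print(f"tectonic moment rate: {tectonic_rate:.2e} N*m/yr")
    print(f"seismic  moment rate: {seismic_rate:.2e} N*m/yr")
    print(f"seismic/tectonic ratio (coupling proxy): {seismic_rate / tectonic_rate:.2f}")
    ```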

  1. Analysis of the Seismicity Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2016-12-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations of the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether spatial-temporal variations in seismicity encode some information on the magnitude of future earthquakes. For this purpose, and to verify the universality of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Zaliapin (2013) to distinguish triggered and background earthquakes, using nearest-neighbor clustering analysis in a two-dimensional plane defined by rescaled time and space. In particular, we generalize the metric based on the nearest neighbor to a metric based on k-nearest-neighbors clustering analysis, which allows us to consider the overall space-time-magnitude distribution of the k earthquakes (k-foreshocks) that anticipate one target event (the mainshock); we then analyze the statistical properties of the clusters identified in this rescaled space. In essence, the main goal of this study is to verify whether different classes of mainshock magnitudes are characterized by distinctive k-foreshock distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
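
    The rescaled space-time metric underlying this kind of analysis can be sketched generically as a Zaliapin-style nearest-neighbor distance, eta = dt * dr^d * 10^(-b*m), with assumed b-value and fractal dimension; the toy catalog below is synthetic and the code is not the authors' implementation.

    ```python
    # Hedged sketch of a Zaliapin-style nearest-neighbor distance between an event i
    # and an earlier event j: eta = dt * dr**D_F * 10**(-B * m_j). Assumed parameters.
    import numpy as np

    B, D_F = 1.0, 1.6   # assumed b-value and fractal dimension of epicenters

    def nearest_neighbor_parents(t, x, y, m):
        """Return, for each event, the index and eta of its nearest earlier neighbor."""
        n = len(t)
        parent = np.full(n, -1)
        eta = np.full(n, np.inf)
        for i in range(1, n):
            dt = t[i] - t[:i]                                   # years
            dr = np.hypot(x[i] - x[:i], y[i] - y[:i]) + 1e-6    # km
            d = dt * dr ** D_F * 10 ** (-B * m[:i])
            parent[i] = np.argmin(d)
            eta[i] = d[parent[i]]
        return parent, eta

    rng = np.random.default_rng(1)
    n_ev = 200
    t = np.sort(rng.uniform(0, 10, n_ev))
    x, y = rng.uniform(0, 100, n_ev), rng.uniform(0, 100, n_ev)
    m = rng.exponential(1 / np.log(10), n_ev) + 2.0             # Gutenberg-Richter-like magnitudes
    parent, eta = nearest_neighbor_parents(t, x, y, m)
    print("median log10(eta):", round(float(np.median(np.log10(eta[1:]))), 2))
    ```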

  2. Real-time Mainshock Forecast by Statistical Discrimination of Foreshock Clusters

    NASA Astrophysics Data System (ADS)

    Nomura, S.; Ogata, Y.

    2016-12-01

    Foreshock discrimination is one of the most effective ways of short-term forecasting of large main shocks. Though many large earthquakes are accompanied by foreshocks, discriminating them from the enormous number of small earthquakes is difficult, and only a probabilistic evaluation from their spatio-temporal features and magnitude evolution may be available. Logistic regression is the statistical learning method best suited to such binary pattern recognition problems, where estimates of the a-posteriori probability of class membership are required. Statistical learning methods can keep learning discriminating features from the updating catalog and give probabilistic recognition for forecasts in real time. We estimated a non-linear function of the foreshock proportion using smooth spline bases and evaluated the possibility of foreshocks via the logit function. In this study, we classified foreshocks in the earthquake catalog of the Japan Meteorological Agency by single-link clustering methods and learned spatial and temporal features of foreshocks by probability density ratio estimation. We use the epicentral locations, time spans, and differences in magnitudes for learning and forecasting. Magnitudes of main shocks are also predicted by incorporating b-values into our method. We discuss the spatial pattern of foreshocks from the classifier constructed from our model. We also implement a back test to validate the predictive performance of the model on this catalog.
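
    A minimal sketch of the classification step, assuming a plain logistic regression on a few cluster features (spatial extent, duration, magnitude difference); the features, labels, and data are synthetic placeholders rather than the authors' spline-based model.

    ```python
    # Hedged sketch: logistic regression on simple cluster features to estimate the
    # probability that a small-event cluster is a foreshock sequence. Synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 1000
    # Assumed features per cluster: epicentral extent (km), time span (days),
    # and magnitude difference between the two largest events.
    X = np.column_stack([
        rng.lognormal(1.0, 0.8, n),      # spatial extent
        rng.lognormal(0.5, 1.0, n),      # duration
        rng.uniform(0.0, 2.0, n),        # magnitude difference
    ])
    # Synthetic labels: 1 = cluster followed by a larger main shock.
    logit = -2.0 - 0.3 * X[:, 0] - 0.2 * X[:, 1] + 1.5 * X[:, 2]
    y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    p_foreshock = clf.predict_proba(X_te)[:, 1]
    print("mean predicted foreshock probability:", round(float(p_foreshock.mean()), 3))
    ```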

  3. Pre-earthquake signatures in atmosphere/ionosphere and their potential for short-term earthquake forecasting. Case studies for 2015

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Davidenko, Dmitry; Hernández-Pajares, Manuel; García-Rigo, Alberto; Petrrov, Leonid; Hatzopoulos, Nikolaos; Kafatos, Menas

    2016-04-01

    We are conducting validation studies on the temporal-spatial patterns of pre-earthquake signatures in the atmosphere and ionosphere associated with M>7 earthquakes in 2015. Our approach is based on the Lithosphere Atmosphere Ionosphere Coupling (LAIC) physical concept integrated with Multi-sensor-networking analysis (MSNA) of several non-correlated observations that can potentially yield predictive information. In this study we present two types of results: (1) prospective testing of MSNA-LAIC for M7+ events in 2015, and (2) retrospective analysis of temporal-spatial variations in the atmosphere and ionosphere several days before the two M7.8 and M7.3 Nepal earthquakes and the M8.3 Chile earthquake. During the prospective test, 18 earthquakes of M>7 occurred worldwide, of which 15 were alerted in advance, with time lags between 2 and 30 days and with different levels of accuracy. The retrospective analysis included different physical parameters from space: outgoing long-wavelength radiation (OLR, obtained from NPOES and NASA/AQUA) at the top of the atmosphere, atmospheric potential (ACP, obtained from NASA assimilation models), and electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC). Concerning the M7.8 Nepal earthquake of April 24, a rapid increase of OLR reached its maximum on April 21-22. GPS/TEC data indicate maximum values during the April 22-24 period. A strong negative TEC anomaly was detected in the crest of the EIA (Equatorial Ionospheric Anomaly) on April 21st and a strong positive one on April 24th, 2015. For the May 12 M7.3 aftershock, similar pre-earthquake patterns in OLR and GPS/TEC were observed. Concerning the M8.3 Chile earthquake of Sept 16, the strongest OLR transient feature was observed on Sept 12. GPS/TEC analysis confirms abnormal values on Sept 14. Also on the same day, the degradation of the EIA and the disappearance of its crests, characteristic of pre-dawn and early morning hours (11 LT), were observed. On Sept 16 co-seismic ionospheric signatures consistent with defined circular

  4. Testing an Earthquake Prediction Algorithm: The 2016 New Zealand and Chile Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2017-05-01

    The 13 November 2016, M7.8, 54 km NNE of Amberley, New Zealand and the 25 December 2016, M7.6, 42 km SW of Puerto Quellon, Chile earthquakes happened outside the area of the on-going real-time global testing of the intermediate-term middle-range earthquake prediction algorithm M8, accepted in 1992 for the M7.5+ range. Naturally, over the past two decades, the level of registration of earthquakes worldwide has grown significantly and by now is sufficient for the diagnosis of times of increased probability (TIPs) by the M8 algorithm over the entire territory of New Zealand and Southern Chile down to below 40°S. The mid-2016 update of the M8 predictions determines TIPs in the additional circles of investigation (CIs) where the two earthquakes happened. Thus, after 50 semiannual updates in the real-time prediction mode, we (1) confirm the statistically established high confidence of the M8-MSc predictions and (2) conclude that the territory of the Global Test of the algorithms M8 and MSc could be expanded, which apparently requires a revision of the 1992 settings.

  5. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    planners and the media, a forecast product which is based on wrong assumptions that violate the best-documented earthquake statistics in California, whose accuracy was not investigated, and whose forecasts were not tested in a rigorous way.

  6. Thermal Infrared Anomalies of Several Strong Earthquakes

    PubMed Central

    Wei, Congxin; Guo, Xiao; Qin, Manzhong

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes up to Ms7.0 by using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after earthquakes in all cases. The overall behavior of the anomalies includes two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of “time-frequency relative power spectrum.” (2) There exist evident and distinct characteristic periods and magnitudes of thermal abnormal radiation for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has obvious characteristics in abnormal duration, range, and morphology. In summary, earthquake thermal infrared anomalies are a useful precursor that can be used in earthquake prediction and forecasting. PMID:24222728

  7. Thermal infrared anomalies of several strong earthquakes.

    PubMed

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes up to Ms7.0 by using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after earthquakes in all cases. The overall behavior of the anomalies includes two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of "time-frequency relative power spectrum." (2) There exist evident and distinct characteristic periods and magnitudes of thermal abnormal radiation for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has obvious characteristics in abnormal duration, range, and morphology. In summary, earthquake thermal infrared anomalies are a useful precursor that can be used in earthquake prediction and forecasting.

  8. Statistical validation of earthquake related observations

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2011-12-01

    The confirmed fractal nature of earthquakes and their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term ones) are usually based on erroneous assumptions of easily tractable or, conversely, delicately designed models. The widespread practice of deceptive modeling, considered a "reasonable proxy" of the natural seismic process, leads to seismic hazard assessment of unknown quality, whose errors propagate non-linearly into the resulting estimates of risk and, eventually, into unexpected societal losses of unacceptable level. Studies aimed at the forecast/prediction of earthquakes must include validation in retrospective (at least) and, eventually, prospective tests. In the absence of such control a suggested "precursor/signal" remains a "candidate", whose link to the target seismic event is a model assumption. Predicting in advance is the only decisive test of forecasts/predictions and, therefore, the score-card of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must prove statistical significance by rejecting the null hypothesis of random coincidental occurrence in advance of target earthquakes. We reiterate the suggestion of the so-called "Seismic Roulette" null hypothesis as the most adequate undisturbed random alternative accounting for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as the number of earthquake locations from a sample catalog representing the seismic locus, one sector per location; (ii) make your bet according to the prediction (i.e., determine which locations are inside the area of alarm, and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate statistics of wins and losses along with the number of chips spent. If a precursor in charge of prediction exposes an imperfection of Seismic Roulette then, having in mind
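
    The Seismic Roulette recipe above lends itself to a simple Monte Carlo sketch: count how many target events fall inside the alarm "sectors" and compare with the distribution obtained when targets land at random on the catalog locations. The catalog, alarm region, and targets below are synthetic placeholders, so the example only demonstrates the machinery, not a skillful prediction.

    ```python
    # Hedged sketch of a "Seismic Roulette"-style significance test: compare the hit
    # count of an alarm-based prediction with its null distribution under random
    # placement of targets on the empirical catalog locations. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(3)
    catalog = rng.normal(0.0, 1.0, size=(5000, 2))          # past epicenters (proxy for the seismic locus)
    alarm = np.linalg.norm(catalog, axis=1) < 0.8           # sectors covered by the alarm ("chips")
    alarm_fraction = alarm.mean()

    targets = rng.integers(0, len(catalog), size=20)        # indices of "future" target events (random here,
    hits = alarm[targets].sum()                             # so the prediction has no skill by construction)

    # Null distribution: targets land on random catalog locations.
    null_hits = np.array([alarm[rng.integers(0, len(catalog), size=20)].sum() for _ in range(10000)])
    p_value = (null_hits >= hits).mean()
    print(f"alarm covers {alarm_fraction:.0%} of the locus; hits = {hits}/20; one-sided p = {p_value:.3f}")
    ```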

  9. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations of the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations in seismicity in the space-time-magnitude domain encode some information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest-neighbors, which allows us to consider the overall space-time-magnitude distribution of k earthquakes, which are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k-nearest-neighbors. In essence, the main goal of this study is to verify whether different classes of target event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.

  10. Does an inter-flaw length control the accuracy of rupture forecasting in geological materials?

    NASA Astrophysics Data System (ADS)

    Vasseur, Jérémie; Wadsworth, Fabian B.; Heap, Michael J.; Main, Ian G.; Lavallée, Yan; Dingwell, Donald B.

    2017-10-01

    Multi-scale failure of porous materials is an important phenomenon in nature and in material physics - from controlled laboratory tests to rockbursts, landslides, volcanic eruptions and earthquakes. A key unsolved research question is how to accurately forecast the time of system-sized catastrophic failure, based on observations of precursory events such as acoustic emissions (AE) in laboratory samples, or, on a larger scale, small earthquakes. Until now, the length scale associated with precursory events has not been well quantified, resulting in forecasting tools that are often unreliable. Here we test the hypothesis that the accuracy of the forecast failure time depends on the inter-flaw distance in the starting material. We use new experimental datasets for the deformation of porous materials to infer the critical crack length at failure from a static damage mechanics model. The style of acceleration of AE rate prior to failure, and the accuracy of forecast failure time, both depend on whether the cracks can span the inter-flaw length or not. A smooth inverse power-law acceleration of AE rate to failure, and an accurate forecast, occurs when the cracks are sufficiently long to bridge pore spaces. When this is not the case, the predicted failure time is much less accurate and failure is preceded by an exponential AE rate trend. Finally, we provide a quantitative and pragmatic correction for the systematic error in the forecast failure time, valid for structurally isotropic porous materials, which could be tested against larger-scale natural failure events, with suitable scaling for the relevant inter-flaw distances.
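
    The "smooth inverse power-law acceleration" case corresponds to the classical failure-forecast idea: if the event rate grows as (t_f - t)^(-1), the inverse rate decays linearly to zero at the failure time, so a straight-line fit to 1/rate forecasts t_f. The sketch below applies this to synthetic AE rates, not the paper's data.

    ```python
    # Hedged sketch of the inverse-rate failure forecast: fit a line to 1/rate(t)
    # and extrapolate to zero to estimate the failure time t_f. Synthetic data.
    import numpy as np

    t_f_true = 100.0
    t = np.linspace(10.0, 95.0, 60)
    rate = 50.0 / (t_f_true - t)                       # idealized power-law acceleration (exponent 1)
    rate *= np.exp(np.random.default_rng(4).normal(0.0, 0.05, t.size))   # multiplicative noise

    inv_rate = 1.0 / rate
    slope, intercept = np.polyfit(t, inv_rate, 1)      # straight line in 1/rate vs t
    t_f_est = -intercept / slope                       # where the fitted line crosses zero
    print(f"forecast failure time: {t_f_est:.1f}  (true value used to generate the data: {t_f_true})")
    ```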

  11. Real-time forecasts of tomorrow's earthquakes in California: a new mapping tool

    USGS Publications Warehouse

    Gerstenberger, Matt; Wiemer, Stefan; Jones, Lucy

    2004-01-01

    We have derived a multi-model approach to calculate time-dependent earthquake hazard resulting from earthquake clustering. This report explains the theoretical background behind the approach, the specific details used in applying the method to California, and the statistical testing used to validate the technique. We have implemented our algorithm as a real-time tool that has been automatically generating short-term hazard maps for California since May of 2002, at http://step.wr.usgs.gov

  12. Self-organization in leaky threshold systems: The influence of near-mean field dynamics and its implications for earthquakes, neurobiology, and forecasting

    PubMed Central

    Rundle, J. B.; Tiampo, K. F.; Klein, W.; Sá Martins, J. S.

    2002-01-01

    Threshold systems are known to be some of the most important nonlinear self-organizing systems in nature, including networks of earthquake faults, neural networks, superconductors and semiconductors, and the World Wide Web, as well as political, social, and ecological systems. All of these systems have dynamics that are strongly correlated in space and time, and all typically display a multiplicity of spatial and temporal scales. Here we discuss the physics of self-organization in earthquake threshold systems at two distinct scales: (i) The “microscopic” laboratory scale, in which consideration of results from simulations leads to dynamical equations that can be used to derive the results obtained from sliding friction experiments, and (ii) the “macroscopic” earthquake fault-system scale, in which the physics of strongly correlated earthquake fault systems can be understood by using time-dependent state vectors defined in a Hilbert space of eigenstates, similar in many respects to the mathematics of quantum mechanics. In all of these systems, long-range interactions induce the existence of locally ergodic dynamics. The existence of dissipative effects leads to the appearance of a “leaky threshold” dynamics, equivalent to a new scaling field that controls the size of nucleation events relative to the size of background fluctuations. At the macroscopic earthquake fault-system scale, these ideas show considerable promise as a means of forecasting future earthquake activity. PMID:11875204

  13. Stationarity test with a direct test for heteroskedasticity in exchange rate forecasting models

    NASA Astrophysics Data System (ADS)

    Khin, Aye Aye; Chau, Wong Hong; Seong, Lim Chee; Bin, Raymond Ling Leh; Teng, Kevin Low Lock

    2017-05-01

    The global economy has been weakening in recent years, manifested by greater exchange rate volatility on the international commodity market. This study analyzes some prominent exchange rate forecasting models for Malaysian commodity trading: univariate ARIMA, ARCH, and GARCH models, in conjunction with a stationarity test and a direct test for heteroskedasticity in the residual diagnostics. All forecasting models utilized monthly data from 1990 to 2015, a total of 312 observations, used to forecast both short-term and long-term exchange rates. The forecasting power statistics suggest that the forecasting performance of the ARIMA(1,1,1) model is better than that of the ARCH(1) and GARCH(1,1) models. For the ex-post forecast, the exchange rate increased from RM 3.50 per USD in January 2015 to RM 4.47 per USD in December 2015 based on the baseline data. For the short-term ex-ante forecast, the results indicate a decrease in the exchange rate by June 2016 (RM 4.27 per USD) compared with December 2015. A more appropriate exchange rate forecasting method is vital to aid decision-making and planning for sustainable commodity production in the world economy.
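
    A minimal sketch of the ARIMA(1,1,1) step (not the study's code), fitted to a synthetic monthly exchange-rate series with statsmodels and used for a short ex-ante forecast.

    ```python
    # Hedged sketch: fit an ARIMA(1,1,1) model to a synthetic monthly series and
    # produce a 6-step-ahead forecast. Data are placeholders, not the study's series.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(5)
    n = 312                                             # monthly observations, 1990-2015
    rate = 3.0 + np.cumsum(rng.normal(0.0, 0.02, n))    # synthetic random-walk-like series
    series = pd.Series(rate, index=pd.date_range("1990-01-01", periods=n, freq="MS"))

    model = ARIMA(series, order=(1, 1, 1)).fit()
    forecast = model.forecast(steps=6)                  # ex-ante forecast for the next 6 months
    print("AIC:", round(model.aic, 2))
    print(forecast.round(3))
    ```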

  14. Rapid Tsunami Inundation Forecast from Near-field or Far-field Earthquakes using Pre-computed Tsunami Database: Pelabuhan Ratu, Indonesia

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Setiyono, U.; Satake, K.; Fujii, Y.

    2017-12-01

    We built a pre-computed tsunami inundation database for Pelabuhan Ratu, one of the tsunami-prone areas on the southern coast of Java, Indonesia. The tsunami database can be employed for rapid estimation of tsunami inundation during an event. The pre-computed tsunami waveforms and inundations are from a total of 340 scenarios ranging from 7.5 to 9.2 in moment magnitude (Mw), including simple fault models of 208 thrust faults and 44 tsunami earthquakes on the plate interface, as well as 44 normal faults and 44 reverse faults in the outer-rise region. Using our tsunami inundation forecasting algorithm (NearTIF), we could rapidly estimate the tsunami inundation in Pelabuhan Ratu for three different hypothetical earthquakes. The first hypothetical earthquake is a megathrust earthquake type (Mw 9.0) offshore Sumatra, about 600 km from Pelabuhan Ratu, representing a worst-case event in the far field. The second hypothetical earthquake (Mw 8.5) is based on a slip deficit rate estimated from geodetic measurements and represents a most likely large event near Pelabuhan Ratu. The third hypothetical earthquake is a tsunami earthquake type (Mw 8.1) of the kind that often occurs south of Java. We compared the tsunami inundation maps produced by the NearTIF algorithm with results of direct forward inundation modeling for the hypothetical earthquakes. The tsunami inundation maps produced by both methods are similar for the three cases. However, the tsunami inundation map from the inundation database can be obtained in a much shorter time (1 min) than one from forward inundation modeling (40 min). This indicates that the NearTIF algorithm based on a pre-computed inundation database is reliable and useful for tsunami warning purposes. This study also demonstrates that the NearTIF algorithm can work well even when the earthquake source is located outside the area of the fault model database because it uses a time-shifting procedure to search for the best-fit scenario.
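
    The time-shifted best-fit scenario search can be sketched generically: compare an incoming waveform with each pre-computed scenario over a range of time shifts and keep the scenario and shift with the smallest RMS misfit. The waveforms below are synthetic placeholders, not the NearTIF database.

    ```python
    # Hedged sketch of a best-fit scenario search with time shifting: minimize the
    # RMS misfit between an observed waveform and shifted pre-computed scenario
    # waveforms. Synthetic waveforms stand in for the pre-computed database.
    import numpy as np

    rng = np.random.default_rng(10)
    t = np.arange(0, 600, 1.0)                               # seconds
    database = [np.sin(2 * np.pi * t / p) * np.exp(-t / 300) for p in (80, 120, 160)]
    observed = np.roll(database[1], 45) + rng.normal(0, 0.05, t.size)   # scenario 1, shifted by 45 s

    best = None
    for idx, scen in enumerate(database):
        for shift in range(0, 120):
            misfit = np.sqrt(np.mean((observed - np.roll(scen, shift)) ** 2))
            if best is None or misfit < best[0]:
                best = (misfit, idx, shift)

    print(f"best scenario: {best[1]}, time shift: {best[2]} s, RMS misfit: {best[0]:.3f}")
    ```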

  15. Real Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public by locating an earthquake and estimating its magnitude as quickly as possible. Later, a system for prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that the governmental organizations concerned can decide whether they need to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs: 1) a prompt report of the occurrence of a large earthquake and the major seismic intensities caused by the earthquake, about two minutes after the earthquake occurrence; 2) a tsunami forecast in around three minutes; 3) information on expected arrival times and maximum heights of tsunami waves in around five minutes; 4) information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: an advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; data telemetry networks via landlines and partly via a satellite communication link; real-time data processing techniques, for example the automatic calculation of earthquake location and magnitude and the database-driven method for quantitative tsunami estimation; and dissemination networks via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  16. Future WGCEP Models and the Need for Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2008-12-01

    The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).

  17. Numerical Modeling and Forecasting of Strong Sumatra Earthquakes

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Yin, C.

    2007-12-01

    ESyS-Crustal, a finite-element-based computational model and software package, has been developed and applied to simulate complex nonlinear interacting fault systems with the goal of accurately predicting earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, the simulation results using the developed software have clearly indicated that the shallow part of the subduction zone in the Sumatra region between latitudes 6S and 2N had been locked for a long time, and remained locked even after the northern part of the zone underwent a major slip event resulting in the infamous Boxing Day tsunami. Two strong earthquakes that occurred in the distant past in this region (between 6S and 1S), in 1797 (M8.2) and 1833 (M9.0) respectively, are indicative of the high potential for very large destructive earthquakes to occur in this region, with relatively long periods of quiescence in between. The results were presented at the 5th ACES International Workshop in 2006, before the 2007 Sumatra earthquakes occurred, which fell exactly within the predicted zone (see the ACES2006 web site and the detailed presentation file through the workshop agenda). The preliminary simulation results obtained so far have shown that there seem to be a few obvious events around the previously locked zone before it is totally ruptured, but apparently no indication of a giant earthquake similar to the 2004 M9 event in the near future, which several earthquake scientists believe will happen. Further detailed simulations will be carried out and presented at the meeting.

  18. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

    The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
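
    A hedged sketch of the Bayesian combination idea (not the ShakeAlert CDM): each algorithm's source report is weighted by the likelihood of the observed shaking it implies under a toy attenuation relation, and the weighted reports are merged into one shaking forecast; all models and numbers are placeholder assumptions.

    ```python
    # Hedged sketch: weight several earthquake source reports by the likelihood of
    # the observed shaking they imply, then combine them. All inputs are assumptions.
    import numpy as np

    def predicted_intensity(mag, epi, sites):
        """Toy attenuation: intensity decays with log distance (assumed form)."""
        r = np.linalg.norm(sites - epi, axis=1) + 1.0
        return 1.5 * mag - 3.0 * np.log10(r)

    sites = np.array([[10.0, 0.0], [30.0, 5.0], [50.0, -10.0]])
    observed = np.array([6.2, 4.9, 4.1])                 # observed intensities (synthetic)
    reports = [                                          # (magnitude, epicenter) from three algorithms
        (6.4, np.array([0.0, 0.0])),
        (6.0, np.array([5.0, 2.0])),
        (7.0, np.array([-20.0, 0.0])),
    ]

    sigma = 0.5                                          # assumed intensity uncertainty
    log_like = np.array([
        -0.5 * np.sum((observed - predicted_intensity(m, e, sites)) ** 2) / sigma ** 2
        for m, e in reports
    ])
    weights = np.exp(log_like - log_like.max())
    weights /= weights.sum()                             # posterior weights (flat prior assumed)

    combined = sum(w * predicted_intensity(m, e, sites) for w, (m, e) in zip(weights, reports))
    print("posterior weights:", np.round(weights, 3))
    print("combined shaking forecast at sites:", np.round(combined, 2))
    ```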

  19. Earthquake hazards: a national threat

    USGS Publications Warehouse

    ,

    2006-01-01

    Earthquakes are one of the most costly natural hazards faced by the Nation, posing a significant risk to 75 million Americans in 39 States. The risks that earthquakes pose to society, including death, injury, and economic loss, can be greatly reduced by (1) better planning, construction, and mitigation practices before earthquakes happen, and (2) providing critical and timely information to improve response after they occur. As part of the multi-agency National Earthquake Hazards Reduction Program, the U.S. Geological Survey (USGS) has the lead Federal responsibility to provide notification of earthquakes in order to enhance public safety and to reduce losses through effective forecasts based on the best possible scientific information.

  20. Development of regional earthquake early warning and structural health monitoring system and real-time ground motion forecasting using front-site waveform data (Invited)

    NASA Astrophysics Data System (ADS)

    Motosaka, M.

    2009-12-01

    This paper presents, firstly, the development of an integrated regional earthquake early warning (EEW) system with an on-line structural health monitoring (SHM) function in Miyagi prefecture, Japan. The system makes it possible to provide more accurate, reliable, and immediate earthquake information for society by combining it with the national (JMA/NIED) EEW system, based on advanced real-time communication technology. The author has planned to install the EEW/SHM system in public buildings around Sendai, a city of a million people in north-eastern Japan. The system has so far been implemented in two buildings: one in Sendai, and the other in Oshika, a front site on the Pacific Ocean coast for the approaching Miyagi-ken Oki earthquake. The data from the front site and the on-site sensors are processed by the analysis system installed at the analysis center of the Disaster Control Research Center, Tohoku University. The real-time earthquake information from JMA is also received at the analysis center. The utilization of the integrated EEW/SHM system is addressed together with future perspectives. Examples of the obtained data are also described, including the amplitude-dependent dynamic characteristics of the building in Sendai before, during, and after the 2008/6/14 Iwate-Miyagi Nairiku Earthquake, together with the historical change of dynamic characteristics over 40 years. Secondly, this paper presents an advanced methodology based on Artificial Neural Networks (ANN) for forward forecasting of ground motion parameters, not only PGA and PGV but also spectral information, before S-wave arrival, using the initial part of the P-waveform at a front site. The estimated ground motion information can be used as a warning alarm for earthquake damage reduction. The Fourier Amplitude Spectra (FAS) estimated with high accuracy before strong shaking can be used for advanced engineering applications, e.g. feed-forward structural control of a building of interest. The validity and applicability of the method
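
    A minimal sketch of the ANN forecasting step, assuming a small scikit-learn multilayer perceptron that maps a few front-site P-wave features to log PGA at a target site; the features, training data, and target relation are synthetic placeholders, not the paper's network.

    ```python
    # Hedged sketch: a small neural network mapping initial P-wave features at a
    # front site to log PGA at a target site. Synthetic data and assumed features.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n = 2000
    # Assumed P-wave features: peak amplitude, predominant period, early cumulative energy (log units).
    X = rng.normal(size=(n, 3))
    log_pga = 0.9 * X[:, 0] + 0.4 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0.0, 0.2, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out events:", round(net.score(X_te, y_te), 3))
    ```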

  1. Mitigating earthquakes; the federal role

    USGS Publications Warehouse

    Press, F.

    1977-01-01

    With the rapid approach of a capability to make reliable earthquake forecasts, it is essential that the Federal Government play a strong, positive role in formulating and implementing plans to reduce earthquake hazards. Many steps are being taken in this direction, with the President looking to the Office of Science and Technology Policy (OSTP) in his Executive Office to provide leadership in establishing and coordinating Federal activities.

  2. Historical and recent large megathrust earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Madariaga, R.

    2018-05-01

    Recent earthquakes in Chile (2014 Mw 8.2 Iquique, 2015 Mw 8.3 Illapel, and 2016 Mw 7.6 Chiloé) have put in evidence some problems with the straightforward application of ideas about seismic gaps, earthquake periodicity, and the general forecasting of large megathrust earthquakes. In northern Chile, before the 2014 Iquique earthquake, 4 large earthquakes were reported in written chronicles, in 1877, 1786, 1615, and 1543; in North-Central Chile, before the 2015 Illapel event, 3 large earthquakes were reported, in 1943, 1880, and 1730; and the 2016 Chiloé earthquake occurred in the southern zone of the 1960 Valdivia megathrust rupture, where other large earthquakes occurred in 1575, 1737, and 1837. The periodicity of these events has been proposed as a basis for good long-term forecasting. However, the seismological aspects of historical Chilean earthquakes were inferred mainly from old chronicles written before subduction in Chile was discovered. Here we use the original descriptions of earthquakes to re-analyze the historical archives. Our interpretation shows that a-priori ideas, like seismic gaps and characteristic earthquakes, influenced the estimation of the magnitude, location, and rupture area of the older Chilean events. On the other hand, advances in the characterization of the rheological aspects that control the contact between the Nazca and South American plates, and the study of tsunami effects, provide better estimates of the location of historical earthquakes along the seismogenic plate interface. Our re-interpretation of historical earthquakes shows a large diversity of earthquake types; there is a major difference between giant earthquakes that break the entire plate interface and those of Mw 8.0 that break only a portion of it.

  3. Expanding the Delivery of Rapid Earthquake Information and Warnings for Response and Recovery

    NASA Astrophysics Data System (ADS)

    Blanpied, M. L.; McBride, S.; Hardebeck, J.; Michael, A. J.; van der Elst, N.

    2017-12-01

    Scientific organizations like the United States Geological Survey (USGS) release information to support effective responses during an earthquake crisis. Information is delivered to the White House, the National Command Center, and the Departments of Defense, Homeland Security (including FEMA), Transportation, Energy, and the Interior. Other crucial stakeholders include state officials and decision makers, emergency responders, numerous public and private infrastructure management centers (e.g., highways, railroads, and pipelines), the media, and the public. To meet the diverse information requirements of these users, rapid earthquake notifications have been developed for delivery by e-mail and text message, and a suite of earthquake information resources, such as ShakeMaps, Did You Feel It?, PAGER impact estimates, and data, are delivered via the web. The ShakeAlert earthquake early warning system being developed for the U.S. West Coast will identify and characterize an earthquake a few seconds after it begins, estimate the likely intensity of ground shaking, and deliver brief but critically important warnings to people and infrastructure in harm's way. Currently the USGS is also developing a capability to deliver Operational Earthquake Forecasts (OEF). These provide estimates of potential seismic behavior after large earthquakes and during evolving aftershock sequences. Similar work is underway in New Zealand, Japan, and Italy. In the development of OEF forecasts, social science research conducted during these sequences indicates that aftershock forecasts are valued for a variety of reasons, from informing critical response and recovery decisions to psychologically preparing for more earthquakes. New tools will allow users to customize map-based, spatiotemporal forecasts to their specific needs. Hazard curves and other advanced information will also be available. For such authoritative information to be understood and used during the pressures of an earthquake

  4. Earthquake forecast for the Wasatch Front region of the Intermountain West

    USGS Publications Warehouse

    DuRoss, Christopher B.

    2016-04-18

    The Working Group on Utah Earthquake Probabilities has assessed the probability of large earthquakes in the Wasatch Front region. There is a 43 percent probability of one or more magnitude 6.75 or greater earthquakes and a 57 percent probability of one or more magnitude 6.0 or greater earthquakes in the region in the next 50 years. These results highlight the threat of large earthquakes in the region.

  5. Earthquake precursors: activation or quiescence?

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Holliday, James R.; Yoder, Mark; Sachs, Michael K.; Donnellan, Andrea; Turcotte, Donald L.; Tiampo, Kristy F.; Klein, William; Kellogg, Louise H.

    2011-10-01

    We discuss the long-standing question of whether the probability for large earthquake occurrence (magnitudes m > 6.0) is highest during time periods of smaller event activation, or highest during time periods of smaller event quiescence. The physics of the activation model are based on an idea from the theory of nucleation, that a small magnitude earthquake has a finite probability of growing into a large earthquake. The physics of the quiescence model is based on the idea that the occurrence of smaller earthquakes (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast models illustrating, respectively, activation and quiescence. The activation model assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no choice as to which mechanism is preferred. Both of these models are in fact a means of filtering the seismicity time-series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of earthquakes, we show that of the two models, activation and quiescence, the latter appears to be the better model, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic earthquake model for California seismicity, Virtual California. This model includes not only earthquakes produced from increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast models to the simulated data, we come to the opposite conclusion. Here, the activation forecast model is preferred to the quiescence model, presumably due to the fact that the BASS component of the model is essentially a model for activated seismicity. These

  6. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
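
    One common, simpler benchmark consistent with this idea is a binomial comparison against a random predictor that raises alarms independently of the events; the sketch below is a generic illustration with assumed alarm fraction and hit count, not the analytic framework proposed in the paper.

    ```python
    # Hedged sketch: under the null of a random predictor, the number of events
    # caught by alarms is binomial with p = fraction of time covered by alarms.
    # The counts and alarm fraction below are assumed placeholder values.
    from scipy.stats import binom

    n_events = 25            # extreme events observed during the test period
    alarm_fraction = 0.15    # fraction of the test period covered by alarms (assumed)
    hits = 9                 # events that fell inside an alarm (assumed outcome)

    # One-sided p-value: probability a random predictor does at least as well.
    p_value = binom.sf(hits - 1, n_events, alarm_fraction)
    sensitivity = hits / n_events
    print(f"sensitivity = {sensitivity:.2f}, alarm fraction = {alarm_fraction:.2f}, p = {p_value:.4f}")
    ```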

  7. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    PubMed

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.

  8. A post-Tohoku earthquake review of earthquake probabilities in the Southern Kanto District, Japan

    NASA Astrophysics Data System (ADS)

    Somerville, Paul G.

    2014-12-01

    The 2011 Mw 9.0 Tohoku earthquake generated an aftershock sequence that affected a large part of northern Honshu and has given rise to widely divergent forecasts of changes in earthquake occurrence probabilities in northern Honshu. The objective of this review is to assess these forecasts as they relate to potential changes in the occurrence probabilities of damaging earthquakes in the Kanto Region. It is generally agreed that the 2011 Mw 9.0 Tohoku earthquake increased the stress on faults in the southern Kanto district. Toda and Stein (Geophys Res Lett 686, 40: doi:10.1002, 2013) further conclude that the probability of earthquakes in the Kanto Corridor has increased by a factor of 2.5 for the time period 11 March 2013 to 10 March 2018. Estimates of earthquake probabilities in a wider region of the Southern Kanto District by Nanjo et al. (Geophys J Int, doi:10.1093, 2013) indicate that any increase in the probability of earthquakes is insignificant in this larger region. Uchida et al. (Earth Planet Sci Lett 374: 81-91, 2013) conclude that the Philippine Sea plate extends well north of the northern margin of Tokyo Bay, inconsistent with the Kanto Fragment hypothesis of Toda et al. (Nat Geosci, 1:1-6, 2008), which attributes deep earthquakes in this region, which they term the Kanto Corridor, to a broken fragment of the Pacific plate. The results of Uchida and Matsuzawa (J Geophys Res 115:B07309, 2013) support the conclusion that fault creep in southern Kanto may be slowly relaxing the stress increase caused by the Tohoku earthquake without causing more large earthquakes. Stress transfer calculations indicate a large stress transfer to the Off Boso Segment as a result of the 2011 Tohoku earthquake. However, Ozawa et al. (J Geophys Res 117:B07404, 2012) used onshore GPS measurements to infer large post-Tohoku creep on the plate interface in the Off-Boso region, and Uchida and Matsuzawa (ibid.) measured similar large creep off the Boso

  9. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aimed at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on the rapid estimate of P-wave magnitude, which generally contains large uncertainties and suffers from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following updates of the magnitude even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would have been theoretically possible to image the complex rupture process of the 2011 Tohoku earthquake automatically, soon after or even during the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique is helpful in reducing false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  10. Strain rates, stress markers and earthquake clustering (Invited)

    NASA Astrophysics Data System (ADS)

    Fry, B.; Gerstenberger, M.; Abercrombie, R. E.; Reyners, M.; Eberhart-Phillips, D. M.

    2013-12-01

    The 2010-present Canterbury earthquakes comprise a well-recorded sequence in a relatively low strain-rate shallow crustal region. We present new scientific results to test the hypothesis that earthquake sequences in low strain-rate areas experience high stress-drop events, low post-seismic relaxation, and accentuated seismic clustering. This hypothesis is based on a physical description of the aftershock process in which the spatial distribution of stress accumulation and stress transfer is controlled by fault strength and orientation. Following large crustal earthquakes, time-dependent forecasts are often developed by fitting parameters defined by Omori's aftershock decay law. In high strain-rate areas, simple forecast models utilizing a single p-value fit observed aftershock sequences well. In low strain-rate areas such as Canterbury, assumptions of simple Omori decay may not be sufficient to capture the clustering (sub-sequence) nature exhibited by the punctuated rise in activity following significant child events. In Canterbury, the moment release is more clustered than in more typical Omori sequences. The individual earthquakes in these clusters also exhibit somewhat higher stress drops than in the average crustal sequence in high strain-rate regions, suggesting that the earthquakes occur on strong, Andersonian-oriented faults, possibly juvenile or well-healed. We use the spectral ratio procedure outlined in Viegas et al. (2010) to determine corner frequencies and Madariaga stress-drop values for over 800 events in the sequence. Furthermore, we will discuss the relevance of the tomographic results of Reyners and Eberhart-Phillips (2013) documenting post-seismic stress-driven fluid processes following the three largest events in the sequence, as well as anisotropic patterns in surface wave tomography (Fry et al., 2013). These tomographic studies are both compatible with the hypothesis, providing strong evidence for the presence of widespread and hydrated regional
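
    The corner-frequency-to-stress-drop step can be sketched with the standard Madariaga circular-source relations, r = k*beta/fc and delta_sigma = 7*M0/(16*r^3); the moment, corner frequencies, shear-wave speed, and k value below are illustrative assumptions.

    ```python
    # Hedged sketch: Madariaga-style stress drop from seismic moment and corner
    # frequency, r = k * beta / fc and dsigma = 7 * M0 / (16 * r**3). Assumed inputs.
    def madariaga_stress_drop(m0, fc, beta=3500.0, k=0.21):
        """m0 in N*m, fc in Hz, beta in m/s; k ~ 0.21 is the usual S-wave value (Madariaga, 1976)."""
        r = k * beta / fc                  # source radius, m
        return 7.0 * m0 / (16.0 * r ** 3)  # stress drop, Pa

    m0 = 10 ** (1.5 * 4.5 + 9.1)           # moment of an assumed Mw 4.5 event
    for fc in (1.0, 2.0, 4.0):
        print(f"fc = {fc:.1f} Hz -> stress drop = {madariaga_stress_drop(m0, fc) / 1e6:.1f} MPa")
    ```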

  11. Shaking table test and dynamic response prediction on an earthquake-damaged RC building

    NASA Astrophysics Data System (ADS)

    Xianguo, Ye; Jiaru, Qian; Kangning, Li

    2004-12-01

    This paper presents the results from shaking table tests of a one-tenth-scale reinforced concrete (RC) building model. The test model represents a prototype building that was seriously damaged during the 1985 Mexico earthquake. The input ground excitation used during the test was from records obtained near the site of the prototype building during the 1985 and 1995 Mexico earthquakes. The tests showed that the damage pattern of the test model agreed well with that of the prototype building. Analytical prediction of the earthquake response was conducted for the prototype building using a sophisticated 3-D frame model. The input motion used for the dynamic analysis was the shaking table test measurements with similarity transformation. The comparison of the analytical results and the shaking table test results indicates that the response of the RC building to minor and moderate earthquakes can be predicted well. However, there is a difference between the prediction and the actual response to the major earthquake.

  12. The Eruption Forecasting Information System: Volcanic Eruption Forecasting Using Databases

    NASA Astrophysics Data System (ADS)

    Ogburn, S. E.; Harpel, C. J.; Pesicek, J. D.; Wellik, J.

    2016-12-01

    Forecasting eruptions, including the onset size, duration, location, and impacts, is vital for hazard assessment and risk mitigation. The Eruption Forecasting Information System (EFIS) project is a new initiative of the US Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) and will advance VDAP's ability to forecast the outcome of volcanic unrest. The project supports probability estimation for eruption forecasting by creating databases useful for pattern recognition, identifying monitoring data thresholds beyond which eruptive probabilities increase, and for answering common forecasting questions. A major component of the project is a global relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest. This module allows us to query eruption chronologies, monitoring data, descriptive information, operational data, and eruptive phases alongside other global databases, such as WOVOdat and the Global Volcanism Program. The EFIS database is in the early stages of development and population; thus, this contribution also is a request for feedback from the community. Preliminary data are already benefitting several research areas. For example, VDAP provided a forecast of the likely remaining eruption duration for Sinabung volcano, Indonesia, using global data taken from similar volcanoes in the DomeHaz database module, in combination with local monitoring time-series data. In addition, EFIS seismologists used a beta-statistic test and empirically-derived thresholds to identify distal volcano-tectonic earthquake anomalies preceding Alaska volcanic eruptions during 1990-2015 to retrospectively evaluate Alaska Volcano Observatory eruption precursors. This has identified important considerations for selecting analog volcanoes for global data analysis, such as differences between

  13. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude, and time). The three methods employ smoothed seismicity, geologic slip-rate, and geodetic strain-rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic, and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
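
    The modified Gutenberg-Richter distribution with an a-value, b-value, and corner magnitude can be illustrated with a tapered Gutenberg-Richter form applied to seismic moment; the exact parameterization used by the authors may differ, and the coefficient values below are placeholders, so this is only a sketch of the idea.

```python
"""Illustrative sketch of a tapered (modified) Gutenberg-Richter distribution
with an a-value, b-value and corner magnitude; the authors' exact
parameterization may differ, and all numbers below are placeholders."""
import numpy as np


def moment(mag):
    """Scalar seismic moment (N*m) from moment magnitude."""
    return 10.0 ** (1.5 * mag + 9.05)


def tapered_gr_rate(mag, a_rate, b_value, mag_min, mag_corner):
    """Annual rate of earthquakes with magnitude >= mag.

    a_rate     -- annual rate of events with magnitude >= mag_min
    b_value    -- Gutenberg-Richter slope
    mag_corner -- corner magnitude controlling the exponential taper
    """
    beta = 2.0 / 3.0 * b_value                      # moment-domain exponent
    m, m0, mc = moment(mag), moment(mag_min), moment(mag_corner)
    return a_rate * (m0 / m) ** beta * np.exp((m0 - m) / mc)


if __name__ == "__main__":
    # Hypothetical cell: 10 events/yr above M5.4, b = 0.9, corner magnitude 8.0.
    for mag in (5.4, 6.0, 7.0, 8.0, 8.5):
        rate = tapered_gr_rate(mag, a_rate=10.0, b_value=0.9,
                               mag_min=5.4, mag_corner=8.0)
        print(f"M >= {mag:0.1f}: {rate:.5f} events/yr")
```

    Near the corner magnitude the exponential taper makes the rate fall off much faster than the pure power law, which is the "strong decrease of earthquake rate with magnitude" referred to above.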

  14. Afterslip behavior following the M6.0, 2014 South Napa earthquake with implications for afterslip forecasting on other seismogenic faults

    USGS Publications Warehouse

    Lienkaemper, James J.; DeLong, Stephen B.; Domrose, Carolyn J; Rosa, Carla M.

    2016-01-01

    The M6.0, 24 Aug. 2014 South Napa, California, earthquake exhibited unusually large slip for a California strike-slip event of its size with a maximum coseismic surface slip of 40-50 cm in the north section of the 15 km-long rupture. Although only minor (<10 cm) surface slip occurred coseismically in the southern 9-km section of the rupture, there was considerable postseismic slip, so that the maximum total slip one year after the event approached 40-50 cm, about equal to the coseismic maximum in the north. We measured the accumulation of postseismic surface slip on four, ~100-m-long alignment arrays for one year following the event. Because prolonged afterslip can delay reconstruction of fault-damaged buildings and infrastructure, we analyzed its gradual decay to estimate when significant afterslip would likely end. This forecasting of Napa afterslip suggests how we might approach the scientific and engineering challenges of afterslip from a much larger M~7 earthquake anticipated on the nearby, urban Hayward Fault. However, we expect its afterslip to last much longer than one year.

  15. Earthquake Early Warning ShakeAlert System: Testing and certification platform

    USGS Publications Warehouse

    Cochran, Elizabeth S.; Kohler, Monica D.; Given, Douglas; Guiwits, Stephen; Andrews, Jennifer; Meier, Men-Andrin; Ahmad, Mohammad; Henson, Ivan; Hartog, Renate; Smith, Deborah

    2017-01-01

    Earthquake early warning systems provide warnings to end users of incoming moderate to strong ground shaking from earthquakes. An earthquake early warning system, ShakeAlert, is providing alerts to beta end users in the western United States, specifically California, Oregon, and Washington. An essential aspect of the earthquake early warning system is the development of a framework to test modifications to code to ensure functionality and assess performance. In 2016, a Testing and Certification Platform (TCP) was included in the development of the Production Prototype version of ShakeAlert. The purpose of the TCP is to evaluate the robustness of candidate code that is proposed for deployment on ShakeAlert Production Prototype servers. TCP consists of two main components: a real‐time in situ test that replicates the real‐time production system and an offline playback system to replay test suites. The real‐time tests of system performance assess code optimization and stability. The offline tests comprise a stress test of candidate code to assess if the code is production ready. The test suite includes over 120 events including local, regional, and teleseismic historic earthquakes, recentering and calibration events, and other anomalous and potentially problematic signals. Two assessments of alert performance are conducted. First, point‐source assessments are undertaken to compare magnitude, epicentral location, and origin time with the Advanced National Seismic System Comprehensive Catalog, as well as to evaluate alert latency. Second, we describe assessment of the quality of ground‐motion predictions at end‐user sites by comparing predicted shaking intensities to ShakeMaps for historic events and implement a threshold‐based approach that assesses how often end users initiate the appropriate action, based on their ground‐shaking threshold. TCP has been developed to be a convenient streamlined procedure for objectively testing algorithms, and it has

  16. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which intervals between consecutive events are independently and identically distributed, are frequently used to describe the repeating earthquake mechanism and to forecast the next earthquakes. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have only a few, or even a single, observed earthquake, often with poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield a high probability for the next event in a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend, on average, on the long-term slip rate caused by tectonic motion. In addition, recurrence times also fluctuate because of nearby earthquakes or fault activities which encourage or discourage surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, this paper introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it using spatial statistics. Spatial variations of the mean and variance parameters of recurrence times are estimated in a Bayesian framework, and the next earthquakes are forecast by Bayesian predictive distributions. The proposed model is applied to a catalog of recurrent earthquakes in Japan, and its results are compared with the current forecast adopted by the Earthquake Research Committee of Japan.
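
    A minimal sketch of the Bayesian predictive step for a single fault segment, assuming a lognormal renewal distribution and a vague prior (the paper's spatially structured model is far more elaborate; the interval data below are hypothetical):

```python
"""Minimal sketch (lognormal renewal model, vague prior, hypothetical data) of
a Bayesian predictive forecast for one recurrent-earthquake fault segment."""
import numpy as np
from scipy import stats


def predictive_probability(intervals, elapsed, horizon, n_draws=20000, seed=1):
    """P(next event within `horizon` yr | `elapsed` yr since the last one),
    averaging the lognormal conditional probability over posterior draws of
    (mu, sigma) for the log recurrence intervals (standard vague-prior result)."""
    x = np.log(np.asarray(intervals, dtype=float))
    n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
    rng = np.random.default_rng(seed)
    # sigma^2 | data ~ scaled inverse chi-square; mu | sigma^2, data ~ normal
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=n_draws)
    mu = rng.normal(xbar, np.sqrt(sigma2 / n))
    sigma = np.sqrt(sigma2)
    surv_now = stats.lognorm.sf(elapsed, s=sigma, scale=np.exp(mu))
    surv_later = stats.lognorm.sf(elapsed + horizon, s=sigma, scale=np.exp(mu))
    cond = 1.0 - surv_later / np.clip(surv_now, 1e-12, None)
    return cond.mean()


if __name__ == "__main__":
    # Hypothetical paleoseismic recurrence intervals (yr) and open interval.
    intervals = [95, 140, 180, 110, 160]
    p = predictive_probability(intervals, elapsed=120.0, horizon=30.0)
    print(f"30-yr conditional probability of the next event: {p:.2f}")
```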

  17. Optimal Scaling of Aftershock Zones using Ground Motion Forecasts

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.

    2018-02-01

    The spatial distribution of aftershocks following major earthquakes has received significant attention due to the shaking hazard these events pose for structures and populations in the affected region. Forecasting the spatial distribution of aftershock events is an important part of the estimation of future seismic hazard. A simple spatial shape for the zone of activity has often been assumed in the form of an ellipse having semimajor axis to semiminor axis ratio of 2.0. However, since an important application of these calculations is the estimation of ground shaking hazard, an effective criterion for forecasting future aftershock impacts is to use ground motion prediction equations (GMPEs) in addition to the more usual approach of using epicentral or hypocentral locations. Based on these ideas, we present an aftershock model that uses self-similarity and scaling relations to constrain parameters as an option for such hazard assessment. We fit the spatial aspect ratio to previous earthquake sequences in the studied regions, and demonstrate the effect of the fitting on the likelihood of post-disaster ground motion forecasts for eighteen recent large earthquakes. We find that the forecasts in most geographic regions studied benefit from this optimization technique, while some are better suited to the use of the a priori aspect ratio.

  18. Physics-based forecasting of induced seismicity at Groningen gas field, the Netherlands

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Suckale, Jenny

    2017-08-01

    Earthquakes induced by natural gas extraction from the Groningen reservoir, the Netherlands, put local communities at risk. Responsible operation of a reservoir whose gas reserves are of strategic importance to the country requires understanding of the link between extraction and earthquakes. We synthesize observations and a model for Groningen seismicity to produce forecasts for felt seismicity (M > 2.5) in the period February 2017 to 2024. Our model accounts for poroelastic earthquake triggering and rupture on the 325 largest reservoir faults, using an ensemble approach to model unknown heterogeneity and replicate earthquake statistics. We calculate probability distributions for key model parameters using a Bayesian method that incorporates the earthquake observations with a nonhomogeneous Poisson process. Our analysis indicates that the Groningen reservoir was not critically stressed prior to the start of production. Epistemic uncertainty and aleatoric uncertainty are incorporated into forecasts for three different future extraction scenarios. The largest expected earthquake was similar for all scenarios, with a 5% likelihood of exceeding M 4.0.

  19. Implications for earthquake risk reduction in the United States from the Kocaeli, Turkey, earthquake of August 17, 1999

    USGS Publications Warehouse

    ,

    2000-01-01

    This report documents implications for earthquake risk reduction in the U.S. The magnitude 7.4 earthquake caused 17,127 deaths, 43,953 injuries, and displaced more than 250,000 people from their homes. The report warns that similar disasters are possible in the United States where earthquakes of comparable size strike the heart of American urban areas. Another concern described in the report is the delayed emergency response that was caused by the inadequate seismic monitoring system in Turkey, a problem that contrasts sharply with rapid assessment and response to the September Chi-Chi earthquake in Taiwan. Additionally, the experience in Turkey suggests that techniques for forecasting earthquakes may be improving.

  20. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  1. Applying Binary Forecasting Approaches to Induced Seismicity in the Western Canada Sedimentary Basin

    NASA Astrophysics Data System (ADS)

    Kahue, R.; Shcherbakov, R.

    2016-12-01

    The Western Canada Sedimentary Basin has been chosen as a focus due to an increase in recently observed seismicity there, which is most likely linked to anthropogenic activities related to unconventional oil and gas exploration. Seismicity caused by these types of activities is called induced seismicity. The occurrence of moderate to larger induced earthquakes in areas where critical infrastructure is present can be potentially problematic. Here we use a binary forecast method to analyze past seismicity and well production data in order to identify areas of increased future seismicity. This method splits the given region into spatial cells. The binary forecast method used here has been suggested in the past to retroactively forecast large earthquakes occurring globally in areas called alarm cells. An alarm cell, or alert zone, is a bin in which there is a higher likelihood for earthquakes to occur based on previous data. The first method utilizes the cumulative Benioff strain, based on earthquakes that occurred in each bin above a given magnitude over a time interval called the training period. The second method utilizes the cumulative well production data within each bin. Earthquakes that occurred within an alert zone in the retrospective forecast period contribute to the hit rate, while alert zones in which no earthquake occurred during the forecast period contribute to the false alarm rate. In the resulting analysis, the hit rate and false alarm rate are determined after optimizing and modifying the initial parameters using the receiver operating characteristic diagram. We find that, when the cell size and threshold magnitude parameters are varied within various training periods, hit and false alarm rates are obtained for specific regions in Western Canada using both recent seismicity and cumulative well production data. Certain areas are thus shown to be more prone to potential larger earthquakes based on both datasets. This has implications
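
    A toy version of the alarm-cell bookkeeping described above, with cumulative Benioff strain in a training period used to declare alert cells and the hit and false alarm rates scored against a later forecast period; the grid, thresholds, and synthetic catalogue are all illustrative assumptions:

```python
"""Toy alarm-cell (binary) forecast: alert cells are the cells with the largest
cumulative Benioff strain in a training period; hit and false alarm rates are
scored against a later forecast period.  Grid, thresholds and the synthetic
catalogue are illustrative assumptions only."""
import numpy as np


def benioff_strain(mags):
    """Square root of radiated energy (J) per event (Benioff strain)."""
    return np.sqrt(10.0 ** (1.5 * np.asarray(mags) + 4.8))


def binary_forecast(train_xy, train_m, test_xy, cell=0.5, extent=(0, 10, 0, 10),
                    alarm_fraction=0.05):
    """Return (hit_rate, false_alarm_rate) for alert cells chosen as the top
    `alarm_fraction` of cells ranked by training-period Benioff strain."""
    x0, x1, y0, y1 = extent
    xedges = np.arange(x0, x1 + cell, cell)
    yedges = np.arange(y0, y1 + cell, cell)
    strain, _, _ = np.histogram2d(train_xy[:, 0], train_xy[:, 1],
                                  bins=[xedges, yedges],
                                  weights=benioff_strain(train_m))
    counts, _, _ = np.histogram2d(test_xy[:, 0], test_xy[:, 1],
                                  bins=[xedges, yedges])
    n_alarm = max(1, int(alarm_fraction * strain.size))
    threshold = np.sort(strain.ravel())[-n_alarm]
    alarm = (strain >= threshold) & (strain > 0)
    hit_rate = counts[alarm].sum() / max(counts.sum(), 1.0)
    false_alarm_rate = np.mean(counts[alarm] == 0)
    return hit_rate, false_alarm_rate


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic clustered catalogue: training and test events share two clusters.
    centers = np.array([[3.0, 7.0], [6.5, 2.5]])
    train_xy = np.vstack([c + 0.4 * rng.standard_normal((150, 2)) for c in centers])
    test_xy = np.vstack([c + 0.4 * rng.standard_normal((30, 2)) for c in centers])
    train_m = rng.uniform(2.0, 4.5, size=len(train_xy))
    hr, far = binary_forecast(train_xy, train_m, test_xy)
    print(f"hit rate = {hr:.2f}, false alarm rate = {far:.2f}")
```

    Sweeping the alarm fraction (or the strain threshold) and plotting hit rate against false alarm rate traces out the receiver operating characteristic curve used for the parameter optimization mentioned above.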

  2. Earthquake Prediction in a Big Data World

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2016-12-01

    The digital revolution that started just about 15 years ago has already pushed global information storage capacity above 5,000 exabytes (in optimally compressed bytes) per year. Open data in a Big Data World provide unprecedented opportunities for enhancing studies of the Earth System. However, they also open wide avenues for deceptive associations in inter- and transdisciplinary data and for misleading predictions based on so-called "precursors". Earthquake prediction is not an easy task; it implies a delicate application of statistics. So far, none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Regretfully, in many cases of seismic hazard assessment (SHA), from term-less to time-dependent (probabilistic PSHA or deterministic DSHA), and of short-term earthquake forecasting (StEF), the claims of a high potential of the method are based on a flawed application of statistics and, therefore, are hardly suitable for communication to decision makers. Self-testing must be done before claiming prediction of hazardous areas and/or times. The necessity and possibility of applying simple tools of Earthquake Prediction Strategies, in particular the Error Diagram introduced by G.M. Molchan in the early 1990s and the Seismic Roulette null hypothesis as a metric of the alerted space, is evident. The set of errors, i.e. the rates of failure and of the alerted space-time volume, can be easily compared to random guessing; this comparison permits evaluating the SHA method's effectiveness and determining the optimal choice of parameters with regard to a given cost-benefit function. These and other results obtained in such simple testing may supply us with realistic estimates of the confidence and accuracy of SHA predictions and, if reliable but not necessarily perfect, with related recommendations on the level of risks for decision making in regard to engineering design, insurance
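
    The Error Diagram (Molchan diagram) bookkeeping mentioned above reduces to ranking cells by an alert function and recording the miss rate against the fraction of alerted space; random guessing falls on the diagonal. A hedged sketch with a synthetic alert function and synthetic target events:

```python
"""Sketch of Molchan error-diagram bookkeeping: cells are ranked by an alert
function; lowering the alert threshold cell by cell gives the miss rate as a
function of the alerted-space fraction, to be compared with the random-guess
diagonal.  The alert scores and target events below are synthetic."""
import numpy as np


def molchan_curve(alert_score, target_counts):
    """Return (tau, nu): alerted-space fraction and miss rate for each threshold."""
    order = np.argsort(alert_score)[::-1]          # most alarming cells first
    hits = np.cumsum(target_counts[order])
    tau = np.arange(1, len(order) + 1) / len(order)
    nu = 1.0 - hits / max(target_counts.sum(), 1)
    return np.concatenate(([0.0], tau)), np.concatenate(([1.0], nu))


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_cells = 200
    score = rng.random(n_cells)                    # synthetic alert function
    # Target events fall preferentially (but not perfectly) in high-score cells.
    targets = rng.poisson(0.02 + 0.2 * score)
    tau, nu = molchan_curve(score, targets)
    for frac in (0.1, 0.2, 0.3, 0.5):
        i = np.searchsorted(tau, frac)
        print(f"tau = {tau[i]:.2f}  miss rate = {nu[i]:.2f}  "
              f"(random guessing: {1.0 - tau[i]:.2f})")
```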

  3. A parimutuel gambling perspective to compare probabilistic seismicity forecasts

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2014-10-01

    Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.

  4. Understanding Earthquake Hazard & Disaster in Himalaya - A Perspective on Earthquake Forecast in Himalayan Region of South Central Tibet

    NASA Astrophysics Data System (ADS)

    Shanker, D.; Paudyal; Singh, H.

    2010-12-01

    Not only the basic understanding of the earthquake phenomenon and the resistance offered by designed structures, but also socio-economic factors, the engineering properties of indigenous materials, local skill, and technology-transfer models are of vital importance. It is important that the engineering aspects of mitigation be made a part of public policy documents. Earthquakes, therefore, are and have been regarded as one of the worst enemies of mankind. Due to the very nature of the energy release, damage is evident; it will not, however, culminate in a disaster unless it strikes a populated area. The word mitigation may be defined as the reduction in severity of something. Earthquake disaster mitigation, therefore, implies measures that help reduce the severity of damage caused by earthquakes to life, property, and the environment. “Earthquake disaster mitigation” usually refers primarily to interventions to strengthen the built environment, whereas “earthquake protection” is now considered to include the human, social, and administrative aspects of reducing earthquake effects. It should, however, be noted that reducing earthquake hazards through prediction is considered one of the effective measures, and much effort is spent on prediction strategies. Earthquake prediction does not guarantee safety, and even if an earthquake is predicted correctly, damage to life and property on such a large scale warrants the use of other aspects of mitigation. While earthquake prediction may be of some help, mitigation remains the main focus of attention of civil society. The present study suggests that anomalous seismic activity/earthquake swarms existed prior to medium-size earthquakes in the Nepal Himalaya. The mainshocks were preceded by a quiescence period, which is an indication of the occurrence of future seismic activity. In all the cases, the identified episodes of anomalous seismic activity were

  5. Detection of change points in underlying earthquake rates, with application to global mega-earthquakes

    NASA Astrophysics Data System (ADS)

    Touati, Sarah; Naylor, Mark; Main, Ian

    2016-02-01

    The recent spate of mega-earthquakes since 2004 has led to speculation of an underlying change in the global `background' rate of large events. At a regional scale, detecting changes in background rate is also an important practical problem for operational forecasting and risk calculation, for example due to volcanic processes, seismicity induced by fluid injection or withdrawal, or due to redistribution of Coulomb stress after natural large events. Here we examine the general problem of detecting changes in background rate in earthquake catalogues with and without correlated events, for the first time using the Bayes factor as a discriminant for models of varying complexity. First we use synthetic Poisson (purely random) and Epidemic-Type Aftershock Sequence (ETAS) models (which also allow for earthquake triggering) to test the effectiveness of many standard methods of addressing this question. These fall into two classes: those that evaluate the relative likelihood of different models, for example using Information Criteria or the Bayes Factor; and those that evaluate the probability of the observations (including extreme events or clusters of events) under a single null hypothesis, for example by applying the Kolmogorov-Smirnov and `runs' tests, and a variety of Z-score tests. The results demonstrate that the effectiveness of these tests varies widely. Information Criteria worked at least as well as the more computationally expensive Bayes factor method, and the Kolmogorov-Smirnov and runs tests proved relatively ineffective in reliably detecting a change point. We then apply the methods tested to events at different thresholds above magnitude M ≥ 7 in the global earthquake catalogue since 1918, after first declustering the catalogue. This is most effectively done by removing likely correlated events using a much lower magnitude threshold (M ≥ 5), where triggering is much more obvious. We find no strong evidence that the background rate of large
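
    A stripped-down version of the model-comparison idea: a homogeneous Poisson model versus a single-change-point Poisson model for a declustered catalogue, scored by maximum likelihood and the Bayesian Information Criterion (used here as a rough large-sample stand-in for the Bayes factor). The synthetic rates and change time are assumptions for illustration only:

```python
"""Toy rate-change-point comparison: a homogeneous Poisson model versus a
single-change-point Poisson model, scored by maximum likelihood and BIC.
Synthetic, already-declustered event times; rates and change time are assumed."""
import numpy as np


def poisson_loglik(n, duration, rate):
    return -np.inf if rate <= 0 else n * np.log(rate) - rate * duration


def fit_no_change(times, T):
    n = len(times)
    return poisson_loglik(n, T, n / T)


def fit_one_change(times, T, grid=200):
    """Profile the change time over a grid; return (best log-likelihood, time)."""
    best = (-np.inf, None)
    for tau in np.linspace(T / grid, T * (1 - 1 / grid), grid):
        n1 = np.searchsorted(times, tau)
        n2 = len(times) - n1
        ll = (poisson_loglik(n1, tau, n1 / tau) +
              poisson_loglik(n2, T - tau, n2 / (T - tau)))
        if ll > best[0]:
            best = (ll, tau)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    T = 100.0
    # Background rate doubles at t = 60 in this synthetic catalogue.
    t1 = rng.uniform(0.0, 60.0, rng.poisson(0.5 * 60))
    t2 = rng.uniform(60.0, T, rng.poisson(1.0 * 40))
    times = np.sort(np.concatenate([t1, t2]))
    ll0 = fit_no_change(times, T)
    ll1, tau = fit_one_change(times, T)
    n = len(times)
    bic0 = 1 * np.log(n) - 2 * ll0        # one free parameter (rate)
    bic1 = 3 * np.log(n) - 2 * ll1        # two rates plus the change time
    print(f"best change point t = {tau:.1f}, "
          f"delta BIC (change - no change) = {bic1 - bic0:.1f}")
```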

  6. Tsunami Forecast Progress Five Years After Indonesian Disaster

    NASA Astrophysics Data System (ADS)

    Titov, Vasily V.; Bernard, Eddie N.; Weinstein, Stuart A.; Kanoglu, Utku; Synolakis, Costas E.

    2010-05-01

    Almost five years after the 26 December 2004 Indian Ocean tragedy, tsunami warnings are finally benefiting from decades of research toward effective model-based forecasts. Since the 2004 tsunami, two seminal advances have been (i) deep-ocean tsunami measurements with tsunameters and (ii) their use in accurately forecasting tsunamis after the tsunami has been generated. Using direct measurements of deep-ocean tsunami heights, assimilated into numerical models for specific locations, greatly improves the real-time forecast accuracy over earthquake-derived magnitude estimates of tsunami impact. Since 2003, this method has been used to forecast tsunamis at specific harbors for different events in the Pacific and Indian Oceans. Recent tsunamis illustrated how this technology is being adopted in global tsunami warning operations. The U.S. forecasting system was used by both research and operations to evaluate the tsunami hazard. Tests demonstrated the effectiveness of operational tsunami forecasting using real-time deep-ocean data assimilated into forecast models. Several examples also showed the potential of distributed forecast tools. With IOC and USAID funding, NOAA researchers at PMEL developed the Community Model Interface for Tsunami (ComMIT) tool and distributed it through extensive capacity-building sessions in the Indian Ocean. Over one hundred scientists have been trained in tsunami inundation mapping, leading to the first generation of inundation models for many Indian Ocean shorelines. These same inundation models can also be used for real-time tsunami forecasts, as was demonstrated during several events.

  7. Detecting and Characterizing Repeating Earthquake Sequences During Volcanic Eruptions

    NASA Astrophysics Data System (ADS)

    Tepp, G.; Haney, M. M.; Wech, A.

    2017-12-01

    A major challenge in volcano seismology is forecasting eruptions. Repeating earthquake sequences often precede volcanic eruptions or lava dome activity, providing an opportunity for short-term eruption forecasting. Automatic detection of these sequences can lead to timely eruption notification and aid in continuous monitoring of volcanic systems. However, repeating earthquake sequences may also occur after eruptions or along with magma intrusions that do not immediately lead to an eruption. This additional challenge requires a better understanding of the processes involved in producing these sequences in order to distinguish those that are precursory. Calculation of the inverse moment rate and concepts from the material failure forecast method can lead to such insights. The temporal evolution of the inverse moment rate is observed to differ for precursory and non-precursory sequences, and multiple earthquake sequences may occur concurrently. These observations suggest that sequences may occur in different locations or through different processes. We developed an automated repeating earthquake sequence detector and a near real-time alarm to send alerts when an in-progress sequence is identified. Near real-time inverse moment rate measurements can further improve our ability to forecast eruptions by allowing for characterization of sequences. We apply the detector to eruptions of two Alaskan volcanoes: Bogoslof in 2016-2017 and Redoubt Volcano in 2009. The Bogoslof eruption produced almost 40 repeating earthquake sequences between its start in mid-December 2016 and early June 2017, 21 of which preceded an explosive eruption and 2 of which occurred in the months before eruptive activity began. Three of the sequences occurred after the implementation of the alarm in late March 2017 and successfully triggered alerts. The nearest seismometers to Bogoslof are over 45 km away, requiring a detector that can work with few stations and a relatively low signal-to-noise ratio. During the Redoubt
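
    A toy single-channel illustration of the detection step: waveforms that correlate above a threshold with an earlier event are grouped into the same repeating sequence. Real detectors, including the one described above, must handle multiple stations, filtering, and low signal-to-noise ratios; the waveforms and threshold here are synthetic assumptions.

```python
"""Toy single-channel repeating-earthquake detector: waveforms whose maximum
normalized cross-correlation with an earlier event exceeds a threshold join
that event's sequence.  Waveforms, noise level and threshold are synthetic
assumptions; real detectors use multiple stations and careful preprocessing."""
import numpy as np


def max_norm_xcorr(a, b):
    """Maximum normalized cross-correlation over all lags of two traces."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))


def group_repeaters(waveforms, threshold=0.8):
    """Greedy grouping: each waveform joins the first group whose template
    (first member) it matches above `threshold`, otherwise it starts a group."""
    groups = []
    for i, w in enumerate(waveforms):
        for g in groups:
            if max_norm_xcorr(waveforms[g[0]], w) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 1.0, 200)
    family_a = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)    # one synthetic source
    family_b = np.sin(2 * np.pi * 13 * t) * np.exp(-6 * t)   # a different source
    waveforms = [family_a + 0.05 * rng.standard_normal(t.size) for _ in range(4)]
    waveforms += [family_b + 0.05 * rng.standard_normal(t.size) for _ in range(2)]
    print(group_repeaters(waveforms))   # expect two groups: [0, 1, 2, 3] and [4, 5]
```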

  8. On the adaptive daily forecasting of seismic aftershock hazard

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hossein; Jalayer, Fatemeh; Asprone, Domenico; Lombardi, Anna Maria; Marzocchi, Warner; Prota, Andrea; Manfredi, Gaetano

    2013-04-01

    Post-earthquake ground motion hazard assessment is a fundamental initial step towards time-dependent seismic risk assessment for buildings in a post-mainshock environment. Therefore, operative forecasting of seismic aftershock hazard forms a viable support basis for decision-making regarding search and rescue, inspection, repair, and re-occupation in a post-mainshock environment. Arguably, an adaptive procedure for integrating the aftershock occurrence rate together with suitable ground motion prediction relations is key to Probabilistic Seismic Aftershock Hazard Assessment (PSAHA). In the short term, the seismic hazard may vary significantly (Jordan et al., 2011), particularly after the occurrence of a high-magnitude earthquake. Hence, PSAHA requires a reliable model that is able to track the time evolution of the earthquake occurrence rates together with suitable ground motion prediction relations. This work focuses on providing adaptive daily forecasts of the mean daily rate of exceeding various spectral acceleration values (the aftershock hazard). Two well-established earthquake occurrence models suitable for daily seismicity forecasts associated with the evolution of an aftershock sequence, namely the modified Omori aftershock model and the Epidemic Type Aftershock Sequence (ETAS) model, are adopted. The parameters of the modified Omori model are updated on a daily basis through Bayesian updating, using the data provided by the ongoing aftershock sequence, following the methodology originally proposed by Jalayer et al. (2011). Bayesian updating is also used to provide sequence-based parameter estimates for a given ground motion prediction model, i.e. the aftershock events in an ongoing sequence are exploited in order to update, in an adaptive manner, the parameters of an existing ground motion prediction model. As a numerical example, the mean daily rates of exceeding specific spectral acceleration values are estimated adaptively for the L'Aquila 2009
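
    The quantity being forecast, the mean daily rate of exceeding a spectral-acceleration level, is a hazard integral: the daily aftershock rate in each magnitude bin multiplied by the exceedance probability from a ground motion prediction equation. A sketch with placeholder Omori parameters and a toy GMPE (not the relations used in the paper):

```python
"""Sketch of the aftershock-hazard integral: mean daily rate of exceeding a
spectral-acceleration level = (daily aftershock rate per magnitude bin) x
(exceedance probability from a GMPE).  The Omori parameters and the toy GMPE
below are placeholders, not the relations used in the paper."""
import numpy as np
from scipy.stats import norm


def daily_rate(day, K=150.0, c=0.05, p=1.1):
    """Modified-Omori aftershock rate (events/day above the minimum magnitude)."""
    return K / (day + c) ** p


def magnitude_pmf(mags, b=1.0):
    """Discrete Gutenberg-Richter weights over magnitude bin centres."""
    w = 10.0 ** (-b * mags)
    return w / w.sum()


def toy_gmpe(mag, dist_km):
    """Hypothetical GMPE: median Sa (g) and log-standard deviation."""
    ln_sa = -3.0 + 1.0 * mag - 1.2 * np.log(dist_km + 10.0)
    return np.exp(ln_sa), 0.6


def rate_of_exceedance(sa_level, day, dist_km=10.0, mags=None):
    """Mean daily rate of Sa > sa_level at a site dist_km from the sequence."""
    if mags is None:
        mags = np.arange(3.05, 6.55, 0.1)          # magnitude bin centres
    nu = daily_rate(day) * magnitude_pmf(mags)     # events/day in each bin
    median, sigma = toy_gmpe(mags, dist_km)
    p_exceed = norm.sf((np.log(sa_level) - np.log(median)) / sigma)
    return float(np.sum(nu * p_exceed))


if __name__ == "__main__":
    for day in (1, 7, 30):
        lam = rate_of_exceedance(sa_level=0.2, day=day)
        print(f"day {day:2d}: mean daily rate of exceeding Sa = 0.2 g is {lam:.3f}")
```

    In the adaptive scheme described above, the Omori and GMPE parameters would be replaced each day by their Bayesian-updated estimates rather than held fixed as in this sketch.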

  9. Near real-time aftershock hazard maps for earthquakes

    NASA Astrophysics Data System (ADS)

    McCloskey, J.; Nalbant, S. S.

    2009-04-01

    Stress interaction modelling is routinely used to explain the spatial relationships between earthquakes and their aftershocks. On 28 October 2008 a M6.4 earthquake occurred near the Pakistan-Afghanistan border, killing several hundred people and causing widespread devastation. A second M6.4 event occurred 12 hours later, 20 km to the southeast. By making some well-supported assumptions concerning the source event and the geometry of any likely triggered event, it was possible to map the areas most likely to experience further activity. Using Google Earth, it would further have been possible to identify particular settlements in the source area which were particularly at risk and to publish their locations globally within about 3 hours of the first earthquake. Such actions could have significantly focused the initial emergency response management. We argue for routine prospective testing of such forecasts and for dialogue between social and physical scientists and emergency response professionals around the practical application of these techniques.

  10. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  11. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  12. Real-time forecasting of the April 11, 2012 Sumatra tsunami

    USGS Publications Warehouse

    Wang, Dailin; Becker, Nathan C.; Walsh, David; Fryer, Gerard J.; Weinstein, Stuart A.; McCreery, Charles S.; ,

    2012-01-01

    The April 11, 2012, magnitude 8.6 earthquake off the northern coast of Sumatra generated a tsunami that was recorded at sea-level stations as far as 4800 km from the epicenter and at four ocean bottom pressure sensors (DARTs) in the Indian Ocean. The governments of India, Indonesia, Sri Lanka, Thailand, and Maldives issued tsunami warnings for their coastlines. The United States' Pacific Tsunami Warning Center (PTWC) issued an Indian Ocean-wide Tsunami Watch Bulletin in its role as an Interim Service Provider for the region. Using an experimental real-time tsunami forecast model (RIFT), PTWC produced a series of tsunami forecasts during the event that were based on rapidly derived earthquake parameters, including initial location and Mwp magnitude estimates and the W-phase centroid moment tensor solutions (W-phase CMTs) obtained at PTWC and at the U. S. Geological Survey (USGS). We discuss the real-time forecast methodology and how successive, real-time tsunami forecasts using the latest W-phase CMT solutions improved the accuracy of the forecast.

  13. A radon-thoron isotope pair as a reliable earthquake precursor

    PubMed Central

    Hwa Oh, Yong; Kim, Guebuem

    2015-01-01

    Abnormal increases in radon (222Rn, half-life = 3.82 days) activity have occasionally been observed in underground environments before major earthquakes. However, 222Rn alone could not be used to forecast earthquakes, since it can also increase because of diffusive inputs over its lifetime. Here, we show that a very short-lived isotope, thoron (220Rn, half-life = 55.6 s; mean life = 80 s), in a cave can record earthquake signals without interference from other environmental effects. We monitored 220Rn together with 222Rn in the air of a limestone cave in Korea for one year. Unusually large 220Rn peaks were observed only in February 2011, preceding the 2011 M9.0 Tohoku-Oki Earthquake, Japan, while large 222Rn peaks were observed both in February 2011 and in the summer. Based on our analyses, we suggest that the anomalous peaks of 222Rn and 220Rn activities observed in February were precursory signals related to the Tohoku-Oki Earthquake. Thus, the 220Rn-222Rn combined isotope pair method can present new opportunities for earthquake forecasting if the technique is extensively employed in earthquake monitoring networks around the world. PMID:26269105

  14. Earthquake outlook for the San Francisco Bay region 2014–2043

    USGS Publications Warehouse

    Aagaard, Brad T.; Blair, James Luke; Boatwright, John; Garcia, Susan H.; Harris, Ruth A.; Michael, Andrew J.; Schwartz, David P.; DiLeo, Jeanne S.; Jacques, Kate; Donlin, Carolyn

    2016-06-13

    Using information from recent earthquakes, improved mapping of active faults, and a new model for estimating earthquake probabilities, the 2014 Working Group on California Earthquake Probabilities updated the 30-year earthquake forecast for California. They concluded that there is a 72 percent probability (or likelihood) of at least one earthquake of magnitude 6.7 or greater striking somewhere in the San Francisco Bay region before 2043. Earthquakes this large are capable of causing widespread damage; therefore, communities in the region should take simple steps to help reduce injuries, damage, and disruption, as well as accelerate recovery from these earthquakes.

  15. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
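
    The regularity argument can be illustrated by comparing the coefficient of variation (COV) of the recurrence intervals against Poisson (exponential-interval) records of the same length and mean; the interval list below is a hypothetical placeholder, not the Wrightwood data:

```python
"""Sketch of a regularity test: the coefficient of variation (COV) of the
recurrence intervals is compared with COVs of Poisson (exponential-interval)
records of equal length and mean.  The interval list is a hypothetical
placeholder, not the Wrightwood data."""
import numpy as np


def cov(intervals):
    x = np.asarray(intervals, dtype=float)
    return x.std(ddof=1) / x.mean()


def poisson_cov_pvalue(intervals, n_sim=20000, seed=6):
    """Fraction of Poisson records whose COV is as small as the observed one;
    a small value argues for quasi-periodic rather than random recurrence."""
    rng = np.random.default_rng(seed)
    obs = cov(intervals)
    sims = rng.exponential(np.mean(intervals), size=(n_sim, len(intervals)))
    sim_cov = sims.std(axis=1, ddof=1) / sims.mean(axis=1)
    return obs, float(np.mean(sim_cov <= obs))


if __name__ == "__main__":
    intervals = [60, 110, 95, 130, 85, 120, 100, 140, 75, 105]   # placeholder, yr
    obs, p = poisson_cov_pvalue(intervals)
    print(f"observed COV = {obs:.2f}, P(COV this small | Poisson) = {p:.4f}")
```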

  16. Time-dependent neo-deterministic seismic hazard scenarios for the 2016 Central Italy earthquakes sequence

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Kossobokov, Vladimir; Romashkova, Leontina; Panza, Giuliano F.

    2017-04-01

    Predicting earthquakes and the related ground shaking is widely recognized as among the most challenging scientific problems, both for its societal relevance and for the intrinsic complexity of the problem. The development of reliable forecasting tools requires their rigorous formalization and testing, first in retrospect and then in an experimental real-time mode, which implies a careful application of statistics to data sets of limited size and different accuracy. Accordingly, the operational issues of prospective validation and use of time-dependent neo-deterministic seismic hazard scenarios are discussed, reviewing the results of their application in Italy and surroundings. Long-term practice and the results obtained for the Italian territory in about two decades of rigorous prospective testing support the feasibility of earthquake forecasting based on the analysis of seismicity patterns at the intermediate-term middle-range scale. Italy is the only country worldwide where two independent, globally tested algorithms are simultaneously applied, namely CN and M8S, which permit dealing with multiple sets of seismic precursors so as to diagnose the intervals of time when a strong event is likely to occur inside a given region. Based on the routinely updated space-time information provided by CN and M8S forecasts, an integrated procedure has been developed that allows for the definition of time-dependent seismic hazard scenarios through the realistic modeling of ground motion by the neo-deterministic approach (NDSHA). This scenario-based methodology permits the construction, at both regional and local scales, of ground motion scenarios for the time interval when a strong event is likely to occur within the alerted areas. CN and M8S predictions, as well as the related time-dependent ground motion scenarios associated with the alarmed areas, have been routinely updated since 2006. The issues and results from real-time testing of the integrated NDSHA scenarios are illustrated, with special

  17. Stigma in science: the case of earthquake prediction.

    PubMed

    Joffe, Helene; Rossetto, Tiziana; Bradley, Caroline; O'Connor, Cliodhna

    2018-01-01

    This paper explores how earthquake scientists conceptualise earthquake prediction, particularly given the conviction of six earthquake scientists for manslaughter (subsequently overturned) on 22 October 2012 for having given inappropriate advice to the public prior to the L'Aquila earthquake of 6 April 2009. In the first study of its kind, semi-structured interviews were conducted with 17 earthquake scientists and the transcribed interviews were analysed thematically. The scientists primarily denigrated earthquake prediction, showing strong emotive responses and distancing themselves from earthquake 'prediction' in favour of 'forecasting'. Earthquake prediction was regarded as impossible and harmful. The stigmatisation of the subject is discussed in the light of research on boundary work and stigma in science. The evaluation reveals how mitigation becomes the more favoured endeavour, creating a normative environment that disadvantages those who continue to pursue earthquake prediction research. Recommendations are made for communication with the public on earthquake risk, with a focus on how scientists portray uncertainty. © 2018 The Author(s). Disasters © Overseas Development Institute, 2018.

  18. Earthquake Hazard Assessment: an Independent Review

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2016-04-01

    Seismic hazard assessment (SHA), from term-less (probabilistic PSHA or deterministic DSHA) to time-dependent (t-DASH), including short-term earthquake forecast/prediction (StEF), is not an easy task; it implies a delicate application of statistics to data of limited size and different accuracy. Regretfully, in many cases of SHA, t-DASH, and StEF, the claims of a high potential and efficiency of the methodology are based on a flawed application of statistics and are hardly suitable for communication to decision makers. The necessity and possibility of applying the modified tools of Earthquake Prediction Strategies, in particular the Error Diagram, introduced by G.M. Molchan in the early 1990s for the evaluation of SHA, and the Seismic Roulette null hypothesis as a measure of the alerted space, is evident, and such testing must be done before claiming hazardous areas and/or times. The set of errors, i.e. the rates of failure and of the alerted space-time volume, compared to those obtained in the same number of random-guess trials, permits evaluating the SHA method's effectiveness and determining the optimal choice of the parameters with regard to specified cost-benefit functions. This and other information obtained in such testing may supply us with a realistic estimate of confidence in SHA results and related recommendations on the level of risks for decision making in regard to engineering design, insurance, and emergency management. These basics of SHA evaluation are exemplified with a few cases of misleading "seismic hazard maps", "precursors", and "forecast/prediction methods".

  19. Associating an ionospheric parameter with major earthquake occurrence throughout the world

    NASA Astrophysics Data System (ADS)

    Ghosh, D.; Midya, S. K.

    2014-02-01

    With time, ionospheric variation analysis is gaining ground over lithospheric monitoring as a source of precursors for earthquake forecasting. The current paper highlights the association of major (Ms ≥ 6.0) and medium (4.0 ≤ Ms < 6.0) earthquake occurrences throughout the world with different ranges of the Ionospheric Earthquake Parameter (IEP), where Ms is the earthquake magnitude on the Richter scale. From statistical and graphical analyses, it is concluded that the probability of earthquake occurrence is maximum when the defined parameter lies within the range 0-75 (lower range). In the higher ranges, the earthquake occurrence probability gradually decreases. A probable explanation is also suggested.

  20. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  1. Forecasting volcanic eruptions and other material failure phenomena: An evaluation of the failure forecast method

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.

    2011-08-01

    Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
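
    For reference, the linearized FFM being evaluated amounts to an ordinary least-squares line fitted to the inverse event rate and extrapolated to zero. The sketch below shows only these mechanics on synthetic data with assumed parameter values; the paper's point is precisely that this regression mishandles the error structure of count data, which the Generalized Linear Model approach corrects.

```python
"""Mechanics of the linearized Failure Forecast Method (FFM) on synthetic data:
with exponent alpha = 2 the inverse event rate decays linearly in time, so an
ordinary least-squares line fitted to 1/rate extrapolates to zero at the
failure time.  Parameter values are illustrative assumptions."""
import numpy as np


def ffm_failure_time(t, rate):
    """Least-squares fit of inverse rate versus time; returns estimated t_f."""
    slope, intercept = np.polyfit(t, 1.0 / rate, 1)
    return -intercept / slope        # time where the fitted 1/rate reaches zero


if __name__ == "__main__":
    rng = np.random.default_rng(7)
    t_f, k, window = 100.0, 50.0, 5.0
    t = np.arange(10.0, 95.0, window)
    true_rate = k / (t_f - t)                       # power-law acceleration (alpha = 2)
    counts = rng.poisson(true_rate * window)        # events per observation window
    rate = np.clip(counts / window, 1e-3, None)     # avoid division by zero
    print(f"estimated failure time: {ffm_failure_time(t, rate):.1f} (true {t_f})")
```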

  2. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
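
    A sketch of the unknown-last-event calculation: the conditional renewal probability is averaged over the steady-state distribution of elapsed time, whose density is proportional to the survival function. A lognormal renewal distribution and the parameter values are simplifying assumptions here (the paper works with other typically applied renewal models), and the historic open interval enters as a lower bound on the elapsed time.

```python
"""Sketch of the unknown-last-event calculation: the conditional renewal
probability is averaged over the steady-state elapsed-time distribution, whose
density is proportional to the survival function S(t).  A lognormal renewal
model and the parameter values below are simplifying assumptions."""
import numpy as np
from scipy import stats


def prob_unknown_last_event(mean_ri, aperiodicity, horizon,
                            historic_open=0.0, t_max_factor=20, n=20000):
    sigma = np.sqrt(np.log(1.0 + aperiodicity ** 2))
    mu_log = np.log(mean_ri) - 0.5 * sigma ** 2     # lognormal with given mean, COV
    dist = stats.lognorm(s=sigma, scale=np.exp(mu_log))
    t = np.linspace(historic_open, t_max_factor * mean_ri, n)
    weight = dist.sf(t)                             # elapsed-time density (up to a constant)
    cond = (dist.cdf(t + horizon) - dist.cdf(t)) / np.clip(dist.sf(t), 1e-15, None)
    return float(np.sum(weight * cond) / np.sum(weight))


if __name__ == "__main__":
    mean_ri, horizon = 200.0, 30.0
    p_renewal = prob_unknown_last_event(mean_ri, aperiodicity=0.5, horizon=horizon)
    p_poisson = 1.0 - np.exp(-horizon / mean_ri)
    print(f"renewal (unknown last event): {p_renewal:.3f}   Poisson: {p_poisson:.3f}")
```

    Setting historic_open to the length of the historic record restricts the average to elapsed times longer than that record, which can only raise the resulting probability, consistent with the statement above.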

  3. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  4. Modeling fast and slow earthquakes at various scales.

    PubMed

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  5. New Measurements and Modeling Capability to Improve Real-time Forecast of Cascadia Tsunamis along U.S. West Coast

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Titov, V. V.; Bernard, E. N.; Spillane, M. C.

    2014-12-01

    The tragedies of the 2004 Sumatra and 2011 Tohoku tsunamis exposed the limits of our knowledge in preparing for devastating tsunamis, especially in the near field. The 1,100-km Pacific coastline of North America has tectonic and geological settings similar to those of Sumatra and Japan. The geological records unambiguously show that the Cascadia fault has caused devastating tsunamis in the past, and this geological process will cause tsunamis in the future. Existing observational instruments along the Cascadia Subduction Zone are capable of providing tsunami data within minutes of tsunami generation. However, this strategy requires separation of the tsunami signals from the overwhelming high-frequency seismic waves produced during a strong earthquake - a real technical challenge for the existing operational tsunami observational network. A new generation of nano-resolution pressure sensors can provide high temporal resolution of the earthquake and tsunami signals without losing precision. The nano-resolution pressure sensor offers a state-of-the-science ability to separate earthquake vibrations and other oceanic noise from tsunami waveforms, paving the way for accurate, early warnings of local tsunamis. This breakthrough underwater technology has been tested and verified for a couple of micro-tsunami events (Paros et al., 2011). Real-time forecasting of Cascadia tsunamis is becoming a possibility with the development of nano-tsunameter technology. The present study investigates how to optimize the placement of these new sensors so that the forecast time can be shortened. The presentation will cover the optimization of an observational array to quickly detect and forecast a tsunami generated by a strong Cascadia earthquake, including short and long rupture scenarios. Lessons learned from the 2011 Tohoku tsunami will be examined to demonstrate how we can improve the local forecast using the new technology. We expect this study to provide useful guidelines for

  6. Assessment of GNSS-based height data of multiple ships for measuring and forecasting great tsunamis

    NASA Astrophysics Data System (ADS)

    Inazu, Daisuke; Waseda, Takuji; Hibiya, Toshiyuki; Ohta, Yusaku

    2016-12-01

    Ship height positioning by the Global Navigation Satellite System (GNSS) was investigated for measuring and forecasting great tsunamis. We first examined GNSS height-positioning data of a navigating vessel. If we use the kinematic precise point positioning (PPP) method, tsunamis greater than 10^-1 m can be detected by ship height positioning. Based on Automatic Identification System (AIS) data, we found that tens of cargo ships and tankers typically navigate over the Nankai Trough, southwest Japan. We assumed that a future Nankai Trough great earthquake tsunami will be observed by the kinematic PPP height positioning of an AIS-derived ship distribution, and examined the tsunami forecast capability of offshore tsunami measurements based on the PPP-based ship heights. A method to estimate the initial tsunami height distribution using offshore tsunami observations was used for forecasting. Tsunami forecast tests were carried out using tsunami data simulated for the PPP-based heights of 92 cargo ships/tankers and for the currently operating deep-sea pressure and Global Positioning System (GPS) buoy observations at 71 stations over the Nankai Trough. The forecast capability using the PPP-based heights of the 92 ships was shown to be comparable to or better than that using the operating offshore observatories at the 71 stations. We suppose that, immediately after the occurrence of a great earthquake, stations receiving successive ship information (AIS data) along certain areas of the coast would fail to acquire ship data due to strong ground shaking, especially near the epicenter. Such a situation would significantly deteriorate the tsunami-forecast capability using ship data. On the other hand, operational real-time analysis of seismic/geodetic data would be carried out for estimating a tsunamigenic fault model. Incorporating the seismic/geodetic fault model estimation into the tsunami forecast above possibly compensates for the deteriorated forecast

  7. Test operation of a real-time tsunami inundation forecast system using actual data observed by S-net

    NASA Astrophysics Data System (ADS)

    Suzuki, W.; Yamamoto, N.; Miyoshi, T.; Aoi, S.

    2017-12-01

    If tsunami inundation information can be forecast rapidly and stably before a large tsunami strikes, the information would effectively help people realize the impending danger and the necessity of evacuation. Toward that goal, we have developed a prototype system to perform real-time tsunami inundation forecasts for Chiba prefecture, eastern Japan, using offshore ocean bottom pressure data observed by the seafloor observation network for earthquakes and tsunamis along the Japan Trench (S-net) (Aoi et al., 2015, AGU). Because tsunami inundation simulation requires a large computational cost, we employ a database approach, searching for pre-calculated tsunami scenarios that reasonably explain the observed S-net pressure data based on the multi-index method (Yamamoto et al., 2016, EPS). The scenario search is repeated regularly, not triggered by the occurrence of a tsunami event, and the forecast information is generated from the selected scenarios that meet the criterion. Test operation of the prototype system using actual observation data started in April 2017, and the performance and behavior of the system during non-tsunami periods have been examined. It is found that the treatment of the noise affecting the observed data is the main issue to be solved toward improving the system. Even if the observed pressure data are filtered to extract tsunami signals, ordinary noise or unusually large noise such as high ocean waves due to storms affects the comparison between the observed and scenario data. Because of this noise, tsunami scenarios are sometimes selected and a tsunami is forecast even though no tsunami event has actually occurred. In most cases, the scenarios selected because of noise have fault models in the region along the Kurile or Izu-Bonin Trenches, far from the S-net region, or fault models below land. Based on the parallel operation of the forecast system with a different scenario search condition and

  8. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
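
    For reference, the likelihood scores used in such comparisons are commonly computed, CSEP-style, as a sum of Poisson log likelihoods over space-magnitude bins, with the difference between two forecasts' scores playing the role of the comparative statistic. The sketch below shows that generic calculation on synthetic rates and counts; it is not the modified tests used in the study.

```python
import numpy as np
from scipy.stats import poisson

# Minimal sketch of Poisson-per-bin likelihood scoring of gridded forecasts:
# each forecast gives an expected number of earthquakes per bin, and the
# joint log likelihood of the observed counts is summed over bins.  The
# difference of log likelihoods between two forecasts gives their relative
# performance.

def poisson_log_likelihood(forecast_rates, observed_counts):
    return np.sum(poisson.logpmf(observed_counts, forecast_rates))

# Toy comparison of two forecasts on 1000 bins.
rng = np.random.default_rng(2)
true_rates = rng.gamma(0.5, 0.02, size=1000)
obs = rng.poisson(true_rates)
forecast_a = true_rates                                    # well-calibrated model
forecast_b = np.full_like(true_rates, true_rates.mean())   # spatially uniform model
print(poisson_log_likelihood(forecast_a, obs) - poisson_log_likelihood(forecast_b, obs))
```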

  9. Dynamic evaluation of seismic hazard and risks based on the Unified Scaling Law for Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.; Nekrasova, A.

    2016-12-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A + B•(6 - M) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L, A characterizes the average annual rate of strong (M = 6) earthquakes, B determines the balance between magnitude ranges, and C estimates the fractal dimension of the seismic locus in projection onto the Earth's surface. The parameters A, B, and C of the USLE are used to assess, first, the expected maximum magnitude in a time interval at each seismically prone cell of a uniform grid covering the region of interest, and then the corresponding expected ground shaking parameters. After rigorous testing against the available seismic evidence from the past (e.g., historically reported macro-seismic intensities or paleo data), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructure. The hazard maps for a given territory change dramatically when the methodology is applied to a moving time window of a certain size, e.g. about a decade long for an intermediate-term regional assessment, or exponentially increasing intervals for daily local strong-aftershock forecasting. This dynamical assessment of seismic hazard and risks is illustrated by applications to the territory of the Greater Caucasus and Crimea and to the two-year series of aftershocks of the 11 October 2008 Kurchaloy, Chechnya earthquake, whose case history appears encouraging for further systematic testing as a potential short-term forecasting tool.
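
    The USLE relation quoted above can be applied directly once A, B, and C are estimated for a cell. A minimal transcription, with placeholder parameter values and M interpreted (as is usual) as a magnitude threshold, is:

```python
import math

# Direct transcription of the USLE relation quoted above,
#     log10 N(M, L) = A + B*(6 - M) + C*log10 L,
# giving the expected annual number of earthquakes of magnitude M (taken
# here as a threshold) within a cell of linear dimension L km.  Parameter
# values below are placeholders, not results for any particular region.

def usle_annual_rate(M, L_km, A=-1.0, B=0.9, C=1.2):
    return 10.0 ** (A + B * (6.0 - M) + C * math.log10(L_km))

# Expected annual number of M >= 6 events in a 100-km cell, and the
# corresponding mean recurrence time.
rate = usle_annual_rate(6.0, 100.0)
print(rate, 1.0 / rate)
```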

  10. Testing efficacy of monthly forecast application in agrometeorology: Winter wheat phenology dynamic

    NASA Astrophysics Data System (ADS)

    Lalic, B.; Jankovic, D.; Dekic, Lj; Eitzinger, J.; Firanj Sremac, A.

    2017-02-01

    The use of monthly weather forecasts as input meteorological data for agrometeorological forecasting, crop modelling and plant protection can foster promising applications in agricultural production. Operational use of monthly or seasonal weather forecasts can help farmers to optimize field operations (fertilizing, irrigation) and protection measures against plant diseases and pests by taking full advantage of monthly forecast information in predicting plant development, pest and disease risks and yield potentials a few weeks in advance. It can help producers to obtain stable or higher yields with the same inputs and to minimise losses caused by weather. In Central and South-Eastern Europe, ongoing climate change leads to shifts in crop phenology dynamics (i.e. in Serbia 4-8 weeks earlier in 2016 than in previous years) and brings this subject to the forefront of agronomic science and practice. The objective of this study is to test the efficacy of monthly forecasts in predicting the phenology dynamics of different winter wheat varieties, using the phenological model developed by the Forecasting and Warning Service of Serbia in plant protection. For that purpose, historical monthly forecasts for four months (March 1, 2005 - June 30, 2005) were assimilated from the ECMWF MARS archive for 50 ensemble members and the control run. The impact of different agroecological conditions is tested by using observed and forecast data for two locations - Rimski Sancevi (Serbia) and Groß-Enzersdorf (Austria).

  11. The SCEC/USGS dynamic earthquake rupture code verification exercise

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous

  12. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely (with a 31 percent probability) fault in the Bay Area to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services. Sixty other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U. S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  13. Real-Time Earthquake Analysis for Disaster Mitigation (READI) Network

    NASA Astrophysics Data System (ADS)

    Bock, Y.

    2014-12-01

    Real-time GNSS networks are making a significant impact on our ability to forecast, assess, and mitigate the effects of geological hazards. I describe the activities of the Real-time Earthquake Analysis for Disaster Mitigation (READI) working group. The group leverages 600+ real-time GPS stations in western North America operated by UNAVCO (PBO network), Central Washington University (PANGA), US Geological Survey & Scripps Institution of Oceanography (SCIGN project), UC Berkeley & US Geological Survey (BARD network), and the Pacific Geosciences Centre (WCDA project). Our goal is to demonstrate an earthquake and tsunami early warning system for western North America. Rapid response is particularly important for those coastal communities that are in the near-source region of large earthquakes and may have only minutes of warning time, and who today are not adequately covered by existing seismic and basin-wide ocean-buoy monitoring systems. The READI working group is performing comparisons of independent real time analyses of 1 Hz GPS data for station displacements and is participating in government-sponsored earthquake and tsunami exercises in the Western U.S. I describe a prototype seismogeodetic system using a cluster of southern California stations that includes GNSS tracking and collocation with MEMS accelerometers for real-time estimation of seismic velocity and displacement waveforms, which has advantages for improved earthquake early warning and tsunami forecasts compared to seismic-only or GPS-only methods. The READI working group's ultimate goal is to participate in an Indo-Pacific Tsunami early warning system that utilizes GNSS real-time displacements and ionospheric measurements along with seismic, near-shore buoys and ocean-bottom pressure sensors, where available, to rapidly estimate magnitude and finite fault slip models for large earthquakes, and then forecast tsunami source, energy scale, geographic extent, inundation and runup. This will require

  14. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
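
    As a generic illustration of the kind of signal discussed above (not the authors' model), one can bin small-earthquake occurrence in time and correlate the counts with the periodic stress evaluated in the same bins; a persistent, significant correlation in a moving window would mark the interval of increased susceptibility. The period, amplitude, and synthetic catalogue below are assumptions.

```python
import numpy as np

# Illustrative sketch: bin small-earthquake counts in time and correlate
# them with a low-amplitude periodic (tidal) stress evaluated in the same
# bins.  All numbers are placeholders for illustration.

def tidal_correlation(event_times, t_edges, period_hours=12.42, amp=1.0):
    counts, _ = np.histogram(event_times, bins=t_edges)
    t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])
    tidal_stress = amp * np.cos(2 * np.pi * t_mid / period_hours)
    return np.corrcoef(counts, tidal_stress)[0, 1]

rng = np.random.default_rng(7)
edges = np.arange(0, 24 * 30, 3.0)              # 3-hour bins over 30 days
phase = 2 * np.pi * edges[:-1] / 12.42
rate = 0.5 * (1 + 0.6 * np.cos(phase))          # tidally modulated event rate
events = np.repeat(edges[:-1] + 1.5, rng.poisson(rate))
print(tidal_correlation(events, edges))
```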

  15. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant change in earthquake probability? To answer that question, the effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
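
    A common way to fold a static stress step into a renewal-model probability, and the reason the stress change to stressing rate ratio matters, is the "clock advance": the stress change divided by the tectonic stressing rate shifts the effective elapsed time before the conditional probability is recomputed. The sketch below illustrates this with a lognormal recurrence model and placeholder numbers; it is not the paper's Monte Carlo treatment of parameter uncertainty.

```python
from scipy.stats import lognorm

# Hedged sketch: convert a static stress step dCFF into a clock advance
# dCFF / (tectonic stressing rate), then recompute the conditional
# probability from a renewal model.  A lognormal recurrence model and all
# parameter values are illustrative only.

def conditional_probability(dist, elapsed, window):
    """P(next event within `window` yr | quiet for `elapsed` yr)."""
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

median_recurrence = 200.0   # yr (scipy's scale = lognormal median; placeholder)
aperiodicity = 0.5
dist = lognorm(s=aperiodicity, scale=median_recurrence)

elapsed, window = 150.0, 30.0
stress_step = 0.1           # MPa (placeholder)
stressing_rate = 0.01       # MPa/yr, so the ratio is 10:1
clock_advance = stress_step / stressing_rate

p0 = conditional_probability(dist, elapsed, window)
p1 = conditional_probability(dist, elapsed + clock_advance, window)
print(p0, p1, p1 / p0)
```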

  16. A hypothesis for delayed dynamic earthquake triggering

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    It's uncertain whether more near-field earthquakes are triggered by static or dynamic stress changes. This ratio matters because static earthquake interactions are increasingly incorporated into probabilistic forecasts. Recent studies were unable to demonstrate all predictions from the static-stress-change hypothesis, particularly seismicity rate reductions. However, current dynamic stress change hypotheses do not explain delayed earthquake triggering and Omori's law. Here I show numerically that if seismic waves can alter some frictional contacts in neighboring fault zones, then dynamic triggering might cause delayed triggering and an Omori-law response. The hypothesis depends on faults following a rate/state friction law, and on seismic waves changing the mean critical slip distance (Dc) at nucleation zones.

  17. Probing magma reservoirs to improve volcano forecasts

    USGS Publications Warehouse

    Lowenstern, Jacob B.; Sisson, Thomas W.; Hurwitz, Shaul

    2017-01-01

    When it comes to forecasting eruptions, volcano observatories rely mostly on real-time signals from earthquakes, ground deformation, and gas discharge, combined with probabilistic assessments based on past behavior [Sparks and Cashman, 2017]. There is comparatively less reliance on geophysical and petrological understanding of subsurface magma reservoirs.

  18. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  19. Forecast model for great earthquakes at the Nankai Trough subduction zone

    USGS Publications Warehouse

    Stuart, W.D.

    1988-01-01

    An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.

  20. Collapse and Earthquake Swarm after North Korea's 3 September 2017 Nuclear Test

    NASA Astrophysics Data System (ADS)

    Tian, D.; Yao, J.; Wen, L.

    2017-12-01

    North Korea's 3 September 2017 nuclear test was followed by a series of small seismic events, with the first one occurring about eight-and-a-half minutes after the nuclear test, two on 23 September 2017, and one on 12 October 2017. While the characteristics of these seismic events would carry crucial information about the current geological state and environmental condition of the nuclear test site and help evaluate the geological and environmental safety of the test site should any future tests be performed there, the precise locations and nature of these seismic events are unknown. In this study, we collect all available seismic waveforms of these five seismic events from the China Earthquake Networks Center, F-net, Hi-net, the Global Seismographic Network, the Japan Meteorological Agency Seismic Network, and the Korea National Seismograph Network. We are able to find high-quality seismic data that constitute good azimuth coverage for high-precision determination of their relative locations and detailed analysis of their source characteristics. Our study reveals that the seismic event eight-and-a-half minutes after the nuclear test is an onsite collapse toward the nuclear test center, while the later events are an earthquake swarm occurring in similar locations. The onsite collapse calls for continued close monitoring of any leaks of radioactive materials from the nuclear test site. The occurrence of the collapse indicates that the underground infrastructure beneath Mount Mantap should not be used for any future nuclear tests. Given the history of the nuclear tests North Korea performed beneath this mountain, a nuclear test of a similar yield would produce collapses on an even larger scale, creating an environmental catastrophe. The triggered earthquake swarm indicates that North Korea's past tests have altered the tectonic stress in the region to the extent that previously inactive tectonic faults in the region have reached their state of critical failure. Any further disturbance from a

  1. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
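
    As an illustration of the combination step only (not the article's full Bayesian framework), if each EEW algorithm supplies a roughly Gaussian estimate of log ground motion at a site, a flat-prior Bayesian update reduces to precision-weighted averaging:

```python
import numpy as np

# Illustrative sketch of combining several independent ground-motion
# estimates for one site under Gaussian assumptions: with a flat prior, the
# posterior mean is the precision-weighted average of the individual
# predictions and the posterior variance is the inverse of the summed
# precisions.  The numbers below are hypothetical.

def combine_predictions(means, sigmas):
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * np.sum(precisions * means)
    return post_mean, np.sqrt(post_var)

# Point-source, finite-fault, and direct ground-motion estimates of log PGA
# with their uncertainties.
print(combine_predictions([0.8, 1.1, 0.9], [0.4, 0.3, 0.5]))
```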

  2. Testing for ontological errors in probabilistic forecasting models of natural systems

    PubMed Central

    Marzocchi, Warner; Jordan, Thomas H.

    2014-01-01

    Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265

  3. Performance test of an automated moment tensor determination system for the future "Tokai" earthquake

    NASA Astrophysics Data System (ADS)

    Fukuyama, E.; Dreger, D. S.

    2000-06-01

    We have investigated how the automated moment tensor determination (AMTD) system using the FREESIA/KIBAN broadband network is likely to behave during a future large earthquake. Because we do not have enough experience with a large (M >8) nearby earthquake, we computed synthetic waveforms for such an event by assuming the geometrical configuration of the anticipated Tokai earthquake and several fault rupture scenarios. Using this synthetic data set, we examined the behavior of the AMTD system to learn how to prepare for such an event. For our synthetic Tokai event data we assume its focal mechanism, fault dimension, and scalar seismic moment. We also assume a circular rupture propagation with constant rupture velocity and dislocation rise time. Both uniform and heterogeneous slip models are tested. The results show that performance depends on both the hypocentral location (i.e. unilateral vs. bilateral) and the degree of heterogeneity of slip. In the tests that we have performed, rupture directivity appears to be more important than slip heterogeneity. We find that for such large earthquakes it is necessary to use stations at distances greater than 600 km and frequencies between 0.005 and 0.02 Hz to maintain a point-source assumption and to recover the full scalar seismic moment and radiation pattern. In order to confirm the result of the synthetic test, we have analyzed the 1993 Hokkaido Nansei-oki (MJ7.8) and the 1995 Kobe (MJ7.2) earthquakes by using observed broadband waveforms. For the Kobe earthquake we successfully recovered the moment tensor by using the routinely used frequency band (0.01-0.05 Hz displacements). However, we failed to estimate a correct solution for the Hokkaido Nansei-oki earthquake by using the same routine frequency band. In this case, we had to use frequencies between 0.005 and 0.02 Hz to recover the moment tensor, confirming the validity of the synthetic test result for the Tokai earthquake.

  4. E-DECIDER: Using Earth Science Data and Modeling Tools to Develop Decision Support for Earthquake Disaster Response

    NASA Astrophysics Data System (ADS)

    Glasscoe, Margaret T.; Wang, Jun; Pierce, Marlon E.; Yoder, Mark R.; Parker, Jay W.; Burl, Michael C.; Stough, Timothy M.; Granat, Robert A.; Donnellan, Andrea; Rundle, John B.; Ma, Yu; Bawden, Gerald W.; Yuen, Karen

    2015-08-01

    Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new capabilities for decision making utilizing remote sensing data and modeling software to provide decision support for earthquake disaster management and response. E-DECIDER incorporates the earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools allows us to provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). This in turn is delivered through standards-compliant web services for desktop and hand-held devices.

  5. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
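
    The clustering model referred to here is the ETAS conditional intensity, which the sketch below writes out with placeholder parameters; the study's actual contribution is the joint fitting of these parameters across zones and the change-point analysis, neither of which is shown.

```python
import numpy as np

# Minimal sketch of the ETAS conditional intensity (Ogata, 1988): a
# background rate mu plus aftershock productivity from every past event,
# with Omori-Utsu decay.  Parameter values are placeholders, not fitted
# values for any induced-seismicity zone.

def etas_intensity(t, event_times, event_mags, mu=0.5, K=0.02,
                   alpha=1.0, c=0.01, p=1.1, m_c=3.0):
    """Events per day at time t (days), given past events (times < t)."""
    past = event_times < t
    dt = t - event_times[past]
    trig = K * np.exp(alpha * (event_mags[past] - m_c)) / (dt + c) ** p
    return mu + trig.sum()

times = np.array([0.0, 1.0, 1.2, 5.0])   # days
mags = np.array([4.5, 3.2, 3.0, 4.0])
print(etas_intensity(6.0, times, mags))
```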

  6. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467

  7. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes the use of best-estimate values of the probability distribution of different magnitude earthquakes recurring from a fault, taken from literature sources. Our study aims to apply this model to the Taipei metropolitan area with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring is estimated to be 3.61% within the next 20 years, 13.54% within 79 years, and 42.45% within 300 years. The 79 year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as that of suffering from a heart attack or other health ailments.
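
    For orientation only: under a plain Poisson model the occurrence probability over a window t for a return period T is P = 1 - exp(-t/T). The sketch below evaluates that baseline form for the quoted windows; the study's revised model modifies this calculation, so the printed values are not expected to match the percentages reported above.

```python
import math

# Plain Poisson occurrence probability for a given return period,
#     P(at least one event in t years) = 1 - exp(-t / T).
# Baseline form only; the revised model described above modifies it.

def poisson_occurrence_probability(t_years, return_period_years):
    return 1.0 - math.exp(-t_years / return_period_years)

for t in (20, 79, 300):
    print(t, poisson_occurrence_probability(t, 543.0))
```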

  8. Long-range dependence in earthquake-moment release and implications for earthquake occurrence probability.

    PubMed

    Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco

    2018-03-28

    Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.
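
    The Hurst exponent quoted above comes from rescaled-range (R/S) analysis. A textbook version of that procedure, applied here to synthetic white noise (for which H should come out near 0.5), is sketched below; the window sizes and fitting recipe are generic choices, not the study's exact implementation.

```python
import numpy as np

# Textbook rescaled-range (R/S) analysis: for several window lengths n,
# average R/S over non-overlapping windows and estimate H as the slope of
# log(R/S) versus log(n).

def rescaled_range(x):
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst_exponent(series, window_sizes=(16, 32, 64, 128, 256)):
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = [rescaled_range(series[i:i + n])
                     for i in range(0, len(series) - n + 1, n)]
        rs_values = [v for v in rs_values if np.isfinite(v)]
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(3)
print(hurst_exponent(rng.normal(size=4096)))  # roughly 0.5 expected for white noise
```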

  9. Evaluating spatial and temporal relationships between an earthquake cluster near Entiat, central Washington, and the large December 1872 Entiat earthquake

    USGS Publications Warehouse

    Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian

    2017-01-01

    We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long‐lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long‐lived aftershock sequence can account for these observations.

  10. The establishment of an achievement test for determination of primary teachers’ knowledge level of earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aydin, Süleyman, E-mail: yupul@hotmail.com; Haşiloğlu, M. Akif, E-mail: mehmet.hasiloglu@hotmail.com; Kunduraci, Ayşe, E-mail: ayse-kndrc@hotmail.com

    In this study, it was aimed to develop an academic achievement test to establish the students' knowledge about earthquakes and the ways of protection from earthquakes. In the method of this study, the steps that Webb (1994) described for developing an academic achievement test for a unit were followed. In the development process, a multiple-choice test with 25 questions was prepared to measure pre-service teachers' knowledge levels about earthquakes and the ways of protection from earthquakes. The multiple-choice test was presented for review to six academics (one from the geography field and five science educators) and two expert science teachers. The prepared test was applied to 93 pre-service teachers studying in the elementary education department in the 2014-2015 academic year. As a result of the validity and reliability analyses, the test was reduced to 20 items. From these applications, the Pearson product-moment split-half reliability coefficient was found to be 0.94; when this value is adjusted with the Spearman-Brown formula, the reliability coefficient is 0.97.
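
    The Spearman-Brown adjustment quoted above steps a split-half reliability up to the full test length; the reported 0.94 → 0.97 is reproduced by the standard formula:

```python
# Spearman-Brown step-up: a split-half reliability r is adjusted to full
# test length by r_full = 2r / (1 + r).

def spearman_brown(split_half_r):
    return 2.0 * split_half_r / (1.0 + split_half_r)

print(round(spearman_brown(0.94), 2))  # 0.97, matching the reported value
```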

  11. M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey

    USGS Publications Warehouse

    Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.

    2016-01-01

    We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
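
    For reference, the BPT conditional probability used in such forecasts can be written with the closed-form inverse-Gaussian CDF, as sketched below; the mean recurrence, aperiodicity, and elapsed time are placeholders rather than values for any Marmara segment, and the ΔCFF interaction term and Monte Carlo uncertainty treatment are not included.

```python
import math
from scipy.stats import norm

# Brownian Passage Time (BPT) CDF for mean recurrence mu and aperiodicity
# alpha, written with the standard inverse-Gaussian closed form, and the
# conditional probability of rupture in a coming window.

def bpt_cdf(t, mu, alpha):
    u = (math.sqrt(t / mu) - math.sqrt(mu / t)) / alpha
    v = (math.sqrt(t / mu) + math.sqrt(mu / t)) / alpha
    return norm.cdf(u) + math.exp(2.0 / alpha ** 2) * norm.cdf(-v)

def conditional_probability(elapsed, window, mu, alpha):
    """P(event within `window` yr | no event in the first `elapsed` yr)."""
    f_e = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f_e) / (1.0 - f_e)

# Example: mean recurrence 250 yr, aperiodicity 0.5, 200 yr already elapsed.
print(conditional_probability(elapsed=200.0, window=30.0, mu=250.0, alpha=0.5))
```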

  12. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis.

    PubMed

    Mohammed, Emad A; Naugler, Christopher

    2017-01-01

    Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. This tool will allow anyone with historic test volume data to model future demand.

  13. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis

    PubMed Central

    Mohammed, Emad A.; Naugler, Christopher

    2017-01-01

    Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996
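
    A minimal sketch of the Holt-Winters additive model named above, written with statsmodels on simulated monthly volumes (the web tool itself is not reproduced here, and the trend, seasonality, and noise in the simulated series are assumptions):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Holt-Winters additive trend and seasonality fitted to simulated monthly
# laboratory test volumes; in practice the series would come from the
# laboratory information system.

rng = np.random.default_rng(4)
months = pd.date_range("2018-01-01", periods=48, freq="MS")
volumes = (10_000 + 50 * np.arange(48)                      # slow growth
           + 800 * np.sin(2 * np.pi * np.arange(48) / 12)   # yearly seasonality
           + rng.normal(0, 200, 48))
series = pd.Series(volumes, index=months)

model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(12))   # next year's expected monthly test volumes
```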

  14. Earthquake Magnitude Prediction Using Artificial Neural Network in Northern Red Sea Area

    NASA Astrophysics Data System (ADS)

    Alarifi, A. S.; Alarifi, N. S.

    2009-12-01

    Earthquakes are natural hazards that do not happen very often; however, they may cause huge losses of life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16,000 events cover a time span from 1970 to 2009; magnitudes range from greater than 0 to less than 7.2, while depths range from greater than 0 to less than 100 km. We propose a new artificial-intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a new feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for different areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as moving averages over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods, achieving an average absolute error of 2.6%, compared with 3.8%, 7.3%, and 6.17% for the moving average, linear regression, and cubic regression, respectively. In this work, we show an analysis
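
    A hedged sketch of a feed-forward, multi-hidden-layer regression network of the sort described above is given below using scikit-learn; the input features and the synthetic catalogue relationship are invented for illustration and are not the study's actual inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Feed-forward network with two hidden layers trained to regress the next
# event's magnitude from simple catalogue-derived features.  Features and
# the synthetic target relationship below are hypothetical.

rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([rng.poisson(20, n),        # events in preceding window
                     rng.normal(3.0, 0.5, n),   # mean magnitude of window
                     rng.uniform(0, 100, n)])   # mean depth (km)
y = 2.0 + 0.03 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.2, n)  # next magnitude

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print(mean_absolute_error(y_test, net.predict(X_test)))
```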

  15. Tracking signal test to monitor an intelligent time series forecasting model

    NASA Astrophysics Data System (ADS)

    Deng, Yan; Jaraiedi, Majid; Iskander, Wafik H.

    2004-03-01

    Extensive research has been conducted on the subject of intelligent time series forecasting, including many variations on the use of neural networks. However, investigation of model adequacy over time, after the training process is completed, remains to be fully explored. In this paper we demonstrate how a smoothed-error tracking signal test can be incorporated into a neuro-fuzzy model to monitor the forecasting process and to serve as a statistical measure for keeping the forecasting model up to date. The proposed monitoring procedure is effective in detecting nonrandom changes due to model inadequacy, lack of unbiasedness in the estimation of model parameters, or deviations from the existing patterns. This powerful detection device will result in improved forecast accuracy in the long run. An example data set has been used to demonstrate the application of the proposed method.
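
    A common form of such a monitor is the Trigg-type tracking signal, the ratio of exponentially smoothed error to exponentially smoothed absolute error; the sketch below shows this generic version (the smoothing constant and synthetic errors are assumptions, and the paper's neuro-fuzzy coupling is not reproduced).

```python
import numpy as np

# Smoothed-error tracking signal: the ratio of exponentially smoothed
# forecast error to exponentially smoothed absolute error.  Values
# persistently near +1 or -1 indicate biased forecasts, signalling that the
# model should be retrained or updated.

def tracking_signal(errors, beta=0.1):
    e_s, mad = 0.0, 1e-9
    signals = []
    for e in errors:
        e_s = beta * e + (1 - beta) * e_s          # smoothed error
        mad = beta * abs(e) + (1 - beta) * mad     # smoothed absolute deviation
        signals.append(e_s / mad)
    return np.array(signals)

rng = np.random.default_rng(6)
unbiased = rng.normal(0, 1, 100)
biased = rng.normal(0.8, 1, 100)   # forecasts drifting out of date
print(np.abs(tracking_signal(unbiased)).max(), np.abs(tracking_signal(biased))[-1])
```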

  16. The 1999 Mw 7.1 Hector Mine, California, earthquake: A test of the stress shadow hypothesis?

    USGS Publications Warehouse

    Harris, R.A.; Simpson, R.W.

    2002-01-01

    We test the stress shadow hypothesis for large earthquake interactions by examining the relationship between two large earthquakes that occurred in the Mojave Desert of southern California, the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes. We want to determine if the 1999 Hector Mine earthquake occurred at a location where the Coulomb stress was increased (earthquake advance, stress trigger) or decreased (earthquake delay, stress shadow) by the previous large earthquake. Using four models of the Landers rupture and a range of possible hypocentral planes for the Hector Mine earthquake, we discover that most scenarios yield a Landers-induced relaxation (stress shadow) on the Hector Mine hypocentral plane. Although this result would seem to weigh against the stress shadow hypothesis, the results become considerably more uncertain when the effects of a nearby Landers aftershock, the 1992 ML 5.4 Pisgah earthquake, are taken into account. We calculate the combined static Coulomb stress changes due to the Landers and Pisgah earthquakes to range from -0.3 to +0.3 MPa (- 3 to +3 bars) at the possible Hector Mine hypocenters, depending on choice of rupture model and hypocenter. These varied results imply that the Hector Mine earthquake does not provide a good test of the stress shadow hypothesis for large earthquake interactions. We use a simple approach, that of static dislocations in an elastic half-space, yet we still obtain a wide range of both negative and positive Coulomb stress changes. Our findings serve as a caution that more complex models purporting to explain the triggering or shadowing relationship between the 1992 Landers and 1999 Hector Mine earthquakes need to also consider the parametric and geometric uncertainties raised here.
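
    The quantity mapped in this kind of test is the static Coulomb failure stress change; a one-line sketch using the usual effective-friction form (the numbers are arbitrary placeholders):

```python
# Standard static Coulomb failure stress change resolved on a receiver
# fault, using the usual effective-friction form
#     dCFS = d_tau + mu_eff * d_sigma_n,
# with shear stress change positive in the slip direction and normal stress
# change positive for unclamping.  Values below are placeholders.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    return d_shear_mpa + mu_eff * d_normal_mpa

print(coulomb_stress_change(0.1, -0.5))  # negative result -> stress shadow
```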

  17. Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.

    2018-02-01

    Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in future.

  18. Strong motions observed by K-NET and KiK-net during the 2016 Kumamoto earthquake sequence

    NASA Astrophysics Data System (ADS)

    Suzuki, Wataru; Aoi, Shin; Kunugi, Takashi; Kubo, Hisahiko; Morikawa, Nobuyuki; Nakamura, Hiromitsu; Kimura, Takeshi; Fujiwara, Hiroyuki

    2017-01-01

    The nationwide strong-motion seismograph networks K-NET and KiK-net in Japan successfully recorded the strong ground motions of the 2016 Kumamoto earthquake sequence, which show several notable characteristics. For the first large earthquake, with a JMA magnitude of 6.5 (21:26, April 14, 2016, JST), the large strong motions are concentrated near the epicenter and the strong-motion attenuation is well predicted by the empirical relation for crustal earthquakes with a moment magnitude of 6.1. For the largest earthquake of the sequence, with a JMA magnitude of 7.3 (01:25, April 16, 2016, JST), the large peak ground accelerations and velocities extend from the epicentral area toward the northeast. The attenuation of peak ground accelerations generally follows the empirical relation, whereas that of velocities deviates from the empirical relation for stations with epicentral distances greater than 200 km, which can be attributed to a large Love wave with a dominant period around 10 s. Large accelerations were observed at stations even in the Oita region, more than 70 km northeast of the epicenter. They are attributed to a locally induced earthquake in the Oita region, whose moment magnitude is estimated to be 5.5 by matching the amplitudes of the corresponding phases with the empirical attenuation relation. Real-time strong-motion observation has the potential to contribute to the mitigation of ongoing earthquake disasters. We test a methodology for forecasting, in real time, the regions to be exposed to large shaking, developed on the basis that neighboring stations have already been shaken, for the largest event of the Kumamoto sequence, and demonstrate that it is a simple but effective way to issue warnings quickly. We also show that real-time interpolation of the strong motions is feasible, which will be utilized for the real-time forecast of ground motions based on the observed shaking.

  19. USGS Imagery Applications During Disaster Response After Recent Earthquakes

    NASA Astrophysics Data System (ADS)

    Hudnut, K. W.; Brooks, B. A.; Glennie, C. L.; Finnegan, D. C.

    2015-12-01

    It is not only important to rapidly characterize surface fault rupture and related ground deformation after an earthquake, but also to repeatedly make observations following an event to forecast fault afterslip. These data may also be used by other agencies to monitor progress on damage repairs and restoration efforts by emergency responders and the public. Related requirements include repeatedly obtaining reference or baseline imagery before a major disaster occurs, as well as maintaining careful geodetic control on all imagery in a time series so that absolute georeferencing may be applied to the image stack through time. In addition, repeated post-event imagery acquisition is required, generally at a higher repetition rate soon after the event, then scaled back to less frequent acquisitions with time, to capture phenomena (such as fault afterslip) that are known to have rates that decrease rapidly with time. For example, lidar observations acquired before and after the South Napa earthquake of 2014, used in our extensive post-processing work that was funded primarily by FEMA, aided in the accurate forecasting of fault afterslip. Lidar was used to independently validate and verify the official USGS afterslip forecast. In order to keep pace with rapidly evolving technology, a development pipeline must be established and maintained to continually test and incorporate new sensors, while adapting these new components to the existing platform and linking them to the existing base software system, and then sequentially testing the system as it evolves. Improvements in system performance by incremental upgrades of system components and software are essential. Improving calibration parameters and thereby progressively eliminating artifacts requires ongoing testing, research and development. To improve the system, we have formed an interdisciplinary team with common interests and diverse sources of support. We share expertise and leverage funding while effectively and

  20. Earthquake watch to be discussed

    NASA Astrophysics Data System (ADS)

    Katzoff, Judith A.

    The most intensive earthquake monitoring program ever mounted in this country is going on near Parkfield, Calif., about midway between Los Angeles and San Francisco on the San Andreas fault. Although no particularly large or destructive quake is feared in Parkfield, the regularity with which earthquakes have occurred there in the past makes the site unique. Since the next quake has been forecast for 1988 (±5 years), seismologists have decided to blanket the area with data-gathering equipment in hopes of having front-row seats for the expected seismic show. The studies in Parkfield will be the topic of an all-day session sponsored by the Seismology Section on Friday, December 13, at the AGU Fall Meeting in San Francisco, Calif.

  1. Forecasting California's earthquakes: What can we expect in the next 30 years?

    USGS Publications Warehouse

    Field, Edward H.; Milner, Kevin R.; ,

    2008-01-01

    In a new comprehensive study, scientists have determined that the chance of having one or more magnitude 6.7 or larger earthquakes in the California area over the next 30 years is greater than 99%. Such quakes can be deadly, as shown by the 1989 magnitude 6.9 Loma Prieta and the 1994 magnitude 6.7 Northridge earthquakes. The likelihood of at least one even more powerful quake of magnitude 7.5 or greater in the next 30 years is 46%; such a quake is most likely to occur in the southern half of the State. Building codes, earthquake insurance, and emergency planning will be affected by these new results, which highlight the urgency to prepare now for the powerful quakes that are inevitable in California's future.

  2. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    NASA Astrophysics Data System (ADS)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a 'really big one' will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write and display spreadsheets, like those shown below. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record, or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of
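    The spreadsheet calculations referred to above are not reproduced in this record; the Python sketch below illustrates the same two modelling choices (time-independent versus time-dependent occurrence, full record versus recent cluster) with purely illustrative numbers. The recurrence intervals, elapsed time, and aperiodicity used here are assumptions, not values taken from the study.

        import numpy as np
        from scipy import stats

        T = 50.0          # forecast window (years)
        elapsed = 325.0   # assumed years elapsed since the most recent great earthquake

        # Illustrative recurrence intervals (years), treated as the lognormal median below;
        # these are placeholders, not the paleoseismic estimates used in the study.
        for label, mu in [("full 10,000-yr record", 500.0), ("recent cluster only", 330.0)]:
            # Time-independent (Poisson) model: P(at least one event in the next T years).
            p_poisson = 1.0 - np.exp(-T / mu)

            # Time-dependent renewal model (lognormal, assumed aperiodicity sigma = 0.5):
            # P(failure within T years | elapsed years of quiescence).
            dist = stats.lognorm(s=0.5, scale=mu)
            p_renewal = (dist.cdf(elapsed + T) - dist.cdf(elapsed)) / dist.sf(elapsed)

            print(f"{label:22s}  Poisson: {p_poisson:.2f}   renewal: {p_renewal:.2f}")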

  3. E-DECIDER Decision Support Gateway For Earthquake Disaster Response

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Stough, T. M.; Parker, J. W.; Burl, M. C.; Donnellan, A.; Blom, R. G.; Pierce, M. E.; Wang, J.; Ma, Y.; Rundle, J. B.; Yoder, M. R.

    2013-12-01

    Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing capabilities for decision-making utilizing remote sensing data and modeling software in order to provide decision support for earthquake disaster management and response. E-DECIDER incorporates earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project in order to produce standards-compliant map data products to aid in decision-making following an earthquake. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, help provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). E-DECIDER utilizes a service-based GIS model for its cyber-infrastructure in order to produce standards-compliant products for different user types with multiple service protocols (such as KML, WMS, WFS, and WCS). The goal is to make complex GIS processing and domain-specific analysis tools more accessible to general users through software services as well as provide system sustainability through infrastructure services. The system comprises several components, which include: a GeoServer for thematic mapping and data distribution, a geospatial database for storage and spatial analysis, web service APIs, including simple-to-use REST APIs for complex GIS functionalities, and geoprocessing tools including python scripts to produce standards-compliant data products. These are then served to the E-DECIDER decision support gateway (http://e-decider.org), the E-DECIDER mobile interface, and to the Department of Homeland Security decision support middleware UICDS (Unified Incident Command and Decision Support). The E-DECIDER decision support gateway features a web interface that

  4. Sources of information for tsunami forecasting in New Zealand

    NASA Astrophysics Data System (ADS)

    Barberopoulou, A.; Ristau, J. P.; D'Anastasio, E.; Wang, X.

    2013-12-01

    Tsunami science has evolved considerably in the last two decades due to technological advancements, which have also helped push for better numerical modelling of the tsunami phases (generation to inundation). The deployment of DART buoys has also been a considerable milestone in tsunami forecasting. Tsunami forecasting is one of the parts that tsunami modelling feeds into and is related to response, preparedness and planning. Usually tsunami forecasting refers to short-term forecasting that takes place in real-time after a tsunami has or appears to have been generated. In this report we refer to all types of forecasting (short-term or long-term) related to work in advance of a tsunami impacting a coastline that would help in response, planning or preparedness. We look at the standard types of data (seismic, GPS, water level) that are available in New Zealand for tsunami forecasting, how they are currently being used, other ways to use these data and provide recommendations for better utilisation. The main findings are: - Current investigations of the use of seismic parameters quickly obtained after an earthquake have the potential to provide critical information about the tsunamigenic potential of earthquakes. Further analysis of the most promising methods should be undertaken to determine a path to full implementation. - Network communication of the largest part of the GPS network is not currently at a stage that can provide sufficient data early enough for tsunami warning. It is believed that it has potential, but changes including data transmission improvements may have to happen before real-time processing oriented to tsunami early warning is implemented on the data that is currently provided. - Tide gauge data is currently under-utilised for tsunami forecasting. Spectral analysis, modal analysis based on identified modes and arrival times extracted from the records can be useful in forecasting. - The current study is by no means exhaustive of the ways the different types

  5. Earthquake Rupture Forecast of M>= 6 for the Corinth Rift System

    NASA Astrophysics Data System (ADS)

    Scotti, O.; Boiselet, A.; Lyon-Caen, H.; Albini, P.; Bernard, P.; Briole, P.; Ford, M.; Lambotte, S.; Matrullo, E.; Rovida, A.; Satriano, C.

    2014-12-01

    Fourteen years of multidisciplinary observations and data collection in the Western Corinth Rift (WCR) near-fault observatory have recently been synthesized (Boiselet, Ph.D. 2014) for the purpose of providing earthquake rupture forecasts (ERF) of M>=6 in the WCR. The main contribution of this work consisted in paving the road towards the development of a "community-based" fault model reflecting the level of knowledge gathered thus far by the WCR working group. The most relevant available data used for this exercise are: onshore/offshore fault traces, based on geological and high-resolution seismics, revealing a complex network of E-W striking, ~10 km long fault segments; microseismicity recorded by a dense network (> 60,000 events with magnitudes down to about 1.5); and historical seismicity, including M>=5 19th-century events and a few paleoseismological investigations, allowing time-dependent ERF to be considered. B-value estimates are found to be catalogue-dependent (WCR, homogenized NOA+Thessaloniki, SHARE), which may point to a potential break in the scaling relationship. Furthermore, observed discrepancies between the seismicity rates assumed for the modeled faults and those expected from GPS deformation rates call for the presence of aseismic deformation. Uncertainty in the ERF resulting from the lack of precise knowledge of both fault geometries and seismic slip rates is quantified through a logic tree exploration. Median and percentile predictions are then compared to an ERF assuming a uniform seismicity rate in the WCR region. The issues raised by this work will be discussed in the light of seismic hazard assessment.

  6. A review on remotely sensed land surface temperature anomaly as an earthquake precursor

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anshuman; Singh, Shaktiman; Sam, Lydia; Joshi, P. K.; Bhardwaj, Akanksha; Martín-Torres, F. Javier; Kumar, Rajesh

    2017-12-01

    The low predictability of earthquakes and the high uncertainty associated with their forecasts make earthquakes one of the worst natural calamities, capable of causing instant loss of life and property. Here, we discuss the studies reporting the observed anomalies in the satellite-derived Land Surface Temperature (LST) before an earthquake. We compile the conclusions of these studies and evaluate the use of remotely sensed LST anomalies as precursors of earthquakes. The arrival times and the amplitudes of the anomalies vary widely, thus making it difficult to consider them as universal markers to issue earthquake warnings. Based on the randomness in the observations of these precursors, we support employing a global-scale monitoring system to detect statistically robust anomalous geophysical signals prior to earthquakes before considering them as definite precursors.

  7. Earthquake chemical precursors in groundwater: a review

    NASA Astrophysics Data System (ADS)

    Paudel, Shukra Raj; Banjara, Sushant Prasad; Wagle, Amrita; Freund, Friedemann T.

    2018-03-01

    We review changes in groundwater chemistry as precursory signs for earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity. However, the same parameters also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic activity has impeded progress in earthquake science. Short-term earthquake prediction is unlikely to be achieved, however, by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra. It addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, and on electrochemical processes at the rock-water interface.

  8. Tidal controls on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, S.; Yabe, S.; Tanaka, Y.

    2016-12-01

    The possibility that tidal stresses can trigger earthquakes is a long-standing issue in seismology. Except in some special cases, a causal relationship between seismicity and the phase of tidal stress has been rejected on the basis of studies using many small events. However, recently discovered deep tectonic tremors are highly sensitive to tidal stress levels, with the relationship being governed by a nonlinear law according to which the tremor rate increases exponentially with increasing stress; thus, slow deformation (and the probability of earthquakes) may be enhanced during periods of large tidal stress. Here, we show the influence of tidal stress on seismicity by calculating histories of tidal shear stress during the 2-week period before earthquakes. Very large earthquakes tend to occur near the time of maximum tidal stress, but this tendency is not obvious for small earthquakes. Rather, we found that tidal stress controls the earthquake size-frequency statistics; i.e., the fraction of large events increases (i.e. the b-value of the Gutenberg-Richter relation decreases) as the tidal shear stress increases. This correlation is apparent in data from the global catalog and in relatively homogeneous regional catalogues of earthquakes in Japan. The relationship is also reasonable, considering the well-known relationship between stress and the b-value. Our findings indicate that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. This finding has clear implications for probabilistic earthquake forecasting.
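    As a hedged illustration of the size-frequency statistic involved, the sketch below estimates the Gutenberg-Richter b-value with the standard Aki maximum-likelihood formula in two tidal-stress bins; the catalog, the stress values, and the injected stress dependence are synthetic placeholders, not the data analysed in the study.

        import numpy as np

        def b_value(mags, m_c):
            """Aki maximum-likelihood b-value for magnitudes >= m_c (continuous magnitudes;
            for binned catalogs, subtract half the bin width from m_c)."""
            m = mags[mags >= m_c]
            return np.log10(np.e) / (m.mean() - m_c)

        # Synthetic catalog: magnitudes plus the tidal shear stress (kPa) at each origin
        # time, with a weak b-value dependence injected for illustration.
        rng = np.random.default_rng(0)
        stress = rng.uniform(-2.0, 2.0, 50000)
        b_true = 1.0 - 0.05 * stress
        mags = 4.0 + rng.exponential(scale=np.log10(np.e) / b_true)

        for lo, hi in [(-2.0, 0.0), (0.0, 2.0)]:
            sel = (stress >= lo) & (stress < hi)
            print(f"tidal stress {lo:+.0f} to {hi:+.0f} kPa: b = {b_value(mags[sel], 4.0):.3f}")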

  9. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of the free parameters during the background, learning, and test phases of a seismic sequence. The idea arose after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five submitted 1-day models. Of all the models, only one successfully predicted the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was underestimation of the forecast number of events, because the model parameters were held fixed during the test. Moreover, the learning catalog contained no event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, so the learning parameters were not suitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model with parameters held fixed during the test period.
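    For orientation, the sketch below evaluates the standard temporal ETAS conditional intensity; the parameter values and the tiny catalog are placeholders, and the daily re-estimation of the parameters described above is not implemented here.

        import numpy as np

        def etas_rate(t, event_times, event_mags, mu, K, alpha, c, p, m_c):
            """Temporal ETAS conditional intensity: background rate mu plus Omori-Utsu
            contributions from all prior events, each scaled by exp(alpha * (M_i - m_c))."""
            prior = event_times < t
            dt = t - event_times[prior]
            productivity = K * np.exp(alpha * (event_mags[prior] - m_c))
            return mu + np.sum(productivity * (dt + c) ** (-p))

        # Placeholder parameters and a tiny catalog (times in days, magnitudes); in the
        # scheme described above these parameters would be re-estimated every day.
        params = dict(mu=0.2, K=0.05, alpha=1.5, c=0.01, p=1.1, m_c=2.5)
        times = np.array([0.0, 0.3, 1.2, 2.5])
        mags = np.array([5.8, 3.1, 4.0, 3.4])
        print(f"lambda(t = 3.0 days) = {etas_rate(3.0, times, mags, **params):.3f} events/day")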

  10. Non-seismic tsunamis: filling the forecast gap

    NASA Astrophysics Data System (ADS)

    Moore, C. W.; Titov, V. V.; Spillane, M. C.

    2015-12-01

    Earthquakes are the generation mechanism in over 85% of tsunamis. However, non-seismic tsunamis, including those generated by meteorological events, landslides, volcanoes, and asteroid impacts, can inundate significant areas and have a large far-field effect. The current National Oceanic and Atmospheric Administration (NOAA) tsunami forecast system falls short in detecting these phenomena. This study attempts to classify the range of effects possible from these non-seismic threats, and to investigate detection methods appropriate for use in a forecast system. Typical observation platforms are assessed, including DART bottom pressure recorders and tide gauges. Other detection paths include atmospheric pressure anomaly algorithms for detecting meteotsunamis and the early identification of asteroids large enough to produce a regional hazard. Real-time assessment of observations for forecast use can provide guidance to mitigate the effects of a non-seismic tsunami.

  11. Seismic‐hazard forecast for 2016 including induced and natural earthquakes in the central and eastern United States

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Llenos, Andrea L.; Ellsworth, William L.; Michael, Andrew J.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.

    2016-01-01

    The U.S. Geological Survey (USGS) has produced a one‐year (2016) probabilistic seismic‐hazard assessment for the central and eastern United States (CEUS) that includes contributions from both induced and natural earthquakes that are constructed with probabilistic methods using alternative data and inputs. This hazard assessment builds on our 2016 final model (Petersen et al., 2016) by adding sensitivity studies, illustrating hazard in new ways, incorporating new population data, and discussing potential improvements. The model considers short‐term seismic activity rates (primarily 2014–2015) and assumes that the activity rates will remain stationary over short time intervals. The final model considers different ways of categorizing induced and natural earthquakes by incorporating two equally weighted earthquake rate submodels that are composed of alternative earthquake inputs for catalog duration, smoothing parameters, maximum magnitudes, and ground‐motion models. These alternatives represent uncertainties on how we calculate earthquake occurrence and the diversity of opinion within the science community. In this article, we also test sensitivity to the minimum moment magnitude between M 4 and M 4.7 and the choice of applying a declustered catalog with b=1.0 rather than the full catalog with b=1.3. We incorporate two earthquake rate submodels: in the informed submodel we classify earthquakes as induced or natural, and in the adaptive submodel we do not differentiate. The alternative submodel hazard maps both depict high hazard and these are combined in the final model. Results depict several ground‐shaking measures as well as intensity and include maps showing a high‐hazard level (1% probability of exceedance in 1 year or greater). Ground motions reach 0.6g horizontal peak ground acceleration (PGA) in north‐central Oklahoma and southern Kansas, and about 0.2g PGA in the Raton basin of Colorado and New Mexico, in central Arkansas, and in

  12. Chaos and Forecasting - Proceedings of the Royal Society Discussion Meeting

    NASA Astrophysics Data System (ADS)

    Tong, Howell

    1995-04-01

    The Table of Contents for the full book PDF is as follows: * Preface * Orthogonal Projection, Embedding Dimension and Sample Size in Chaotic Time Series from a Statistical Perspective * A Theory of Correlation Dimension for Stationary Time Series * On Prediction and Chaos in Stochastic Systems * Locally Optimized Prediction of Nonlinear Systems: Stochastic and Deterministic * A Poisson Distribution for the BDS Test Statistic for Independence in a Time Series * Chaos and Nonlinear Forecastability in Economics and Finance * Paradigm Change in Prediction * Predicting Nonuniform Chaotic Attractors in an Enzyme Reaction * Chaos in Geophysical Fluids * Chaotic Modulation of the Solar Cycle * Fractal Nature in Earthquake Phenomena and its Simple Models * Singular Vectors and the Predictability of Weather and Climate * Prediction as a Criterion for Classifying Natural Time Series * Measuring and Characterising Spatial Patterns, Dynamics and Chaos in Spatially-Extended Dynamical Systems and Ecologies * Non-Linear Forecasting and Chaos in Ecology and Epidemiology: Measles as a Case Study

  13. Distribution and Characteristics of Repeating Earthquakes in Northern California

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.; Zechar, J. D.; Shaw, B. E.

    2012-12-01

    Repeating earthquakes are playing an increasingly important role in the study of fault processes and behavior, and have the potential to improve hazard assessment, earthquake forecasting, and seismic monitoring capabilities. These events rupture the same fault patch repeatedly, generating virtually identical seismograms. In California, repeating earthquakes have been found predominantly along the creeping section of the central San Andreas Fault, where they are believed to represent failing asperities on an otherwise creeping fault. Here, we use the northern California double-difference catalog of 450,000 precisely located events (1984-2009) and associated database of 2 billion waveform cross-correlation measurements to systematically search for repeating earthquakes across various tectonic regions. An initial search for pairs of earthquakes with high correlation coefficients and similar magnitudes resulted in 4,610 clusters including a total of over 26,000 earthquakes. A subsequent double-difference re-analysis of these clusters resulted in 1,879 sequences (8,640 events) where a common rupture area can be resolved to the precision of a few tens of meters or less. These repeating earthquake sequences (RES) include between 3 and 24 events with magnitudes up to ML=4. We compute precise relative magnitudes between events in each sequence from differential amplitude measurements. Differences between these and standard coda-duration magnitudes have a standard deviation of 0.09. The RES occur throughout northern California, but RES with 10 or more events (6%) only occur along the central San Andreas and Calaveras faults. We are establishing baseline characteristics for each sequence, such as recurrence intervals and their coefficient of variation (CV), in order to compare them across tectonic regions. CVs for these clusters range from 0.002 to 2.6, indicating a range of behavior between periodic occurrence (CV~0), random occurrence, and temporal clustering. 10% of the RES
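    A minimal sketch of the recurrence-interval statistic mentioned above; the occurrence times are hypothetical, and a real analysis would also involve the waveform-correlation and relative-magnitude checks described in the abstract.

        import numpy as np

        def recurrence_cv(event_times):
            """Coefficient of variation of recurrence intervals for one repeating
            earthquake sequence: CV ~ 0 is periodic, ~ 1 is Poisson-like, > 1 is clustered."""
            intervals = np.diff(np.sort(event_times))
            return intervals.std(ddof=1) / intervals.mean()

        # Hypothetical occurrence times (decimal years) for one nearly periodic RES.
        times = np.array([1985.2, 1988.9, 1992.4, 1996.1, 1999.8, 2003.6])
        print(f"CV = {recurrence_cv(times):.2f}")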

  14. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  15. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolating a trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims. It suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  16. Data mining of atmospheric parameters associated with coastal earthquakes

    NASA Astrophysics Data System (ADS)

    Cervone, Guido

    Earthquakes are natural hazards that pose a serious threat to society and the environment. A single earthquake can claim thousands of lives, cause billions of dollars in damage, destroy natural landmarks and render large territories uninhabitable. Studying earthquakes and the processes that govern their occurrence is of fundamental importance to protect lives, properties and the environment. Recent studies have shown that anomalous changes in land, ocean and atmospheric parameters occur prior to earthquakes. The present dissertation introduces an innovative methodology and its implementation to identify anomalous changes in atmospheric parameters associated with large coastal earthquakes. Possible geophysical mechanisms are discussed in view of the close interaction between the lithosphere, the hydrosphere and the atmosphere. The proposed methodology is a multi-strategy data mining approach which combines wavelet transformations, evolutionary algorithms, and statistical analysis of atmospheric data to analyze possible precursory signals. One-dimensional wavelet transformations and statistical tests are employed to identify significant singularities in the data, which may correspond to anomalous peaks due to the earthquake preparatory processes. Evolutionary algorithms and other localized search strategies are used to analyze the spatial and temporal continuity of the anomalies detected over a large area (about 2,000 km²), to discriminate signals that are most likely associated with earthquakes from those due to other, mostly atmospheric, phenomena. Only statistically significant singularities occurring within a very short time of each other, and which track a rigorous geometrical path related to the geological properties of the epicentral area, are considered to be associated with a seismic event. A program called CQuake was developed to implement and validate the proposed methodology. CQuake is a fully automated, real time semi-operational system, developed to
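    The following is a much-simplified, single-scale stand-in for the wavelet-plus-statistics screening described above: it uses first differences rather than a full wavelet transform, entirely synthetic data, and none of the evolutionary spatial-continuity analysis, and is intended only to make the anomaly-flagging step concrete.

        import numpy as np

        def flag_singularities(series, z_thresh=3.0):
            """Single-scale stand-in for the wavelet screening: Haar-like detail
            coefficients (first differences) standardized by a robust MAD scale;
            samples whose |z| exceeds z_thresh are flagged as candidate singularities."""
            detail = np.diff(series)
            mad = np.median(np.abs(detail - np.median(detail)))
            z = (detail - np.median(detail)) / (1.4826 * mad)
            return np.flatnonzero(np.abs(z) > z_thresh) + 1  # indices into the original series

        # Hypothetical daily atmospheric-parameter anomaly with one injected spike.
        rng = np.random.default_rng(1)
        anomaly = rng.normal(0.0, 1.0, 200)
        anomaly[150] += 8.0
        print("flagged days:", flag_singularities(anomaly))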

  17. Short-term forecasting of aftershock sequences, microseismicity and swarms inside the Corinth Gulf continental rift

    NASA Astrophysics Data System (ADS)

    Segou, Margarita

    2014-05-01

    Corinth Gulf (Central Greece) is the fastest continental rift in the world, with extension rates of 11-15 mm/yr and diverse seismic deformation, including earthquakes with M greater than 6.0, several periods of increased microseismic activity, usually lasting a few months and possibly related to fluid diffusion, and swarm episodes lasting a few days. In this study I perform a retrospective forecast experiment between 1995-2012, focusing on the comparison between physics-based and statistical models for short-term time classes. Even though the Corinth Gulf has been studied extensively in the past, there is still debate today about whether earthquake activity is related to the existence of either a shallow-dipping structure or steeply dipping normal faults. In light of the above, two CRS realizations are based on resolving Coulomb stress changes on specified receiver faults expressing the aforementioned structural models, whereas the third CRS model uses planes optimally oriented for failure. The CRS implementation accounts for stress changes following all major ruptures with M greater than 4.5 within the testing phase. I also estimate fault constitutive parameters from modeling the response to major earthquakes in the vicinity of the gulf (Aσ=0.2, stressing rate approximately 0.02 bar/yr). The generic ETAS parameters are taken as the maximum likelihood estimates derived from the stochastic declustering of the modern seismicity catalog (1995-2012) with minimum triggering magnitude M2.5. I test whether the generic ETAS model can efficiently describe not only the aftershock spatio-temporal clustering but also the evolution of swarm episodes and microseismicity. To this end, I implement likelihood tests to evaluate the forecasts for their spatial consistency and for the total number of predicted versus observed events with M greater than 3.0 in 10-day time windows during three distinct evaluation phases; the first evaluation phase focuses on the Aigio 1995 aftershock sequence (15

  18. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

    We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994) that combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) and time-decaying occurrence rate (Utsu et al., 1995) is conventionally employed for assessing the earthquake hazard after a large shock. However, it is found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock, and then gradually increased. The observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. The Receiver Operating Characteristic (ROC) curves (Swets, 1988) finally demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
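    To make the RJ parameterization concrete, the sketch below evaluates the standard Reasenberg-Jones rate and a Poisson exceedance probability; the parameter values are commonly quoted generic defaults used purely for illustration, not the Chi-Chi estimates obtained in the study, and the modified (time-dependent b-value) version is not implemented.

        import numpy as np

        def rj_rate(t, m, main_mag, a, b, c, p):
            """Reasenberg-Jones aftershock rate (events/day) of magnitude >= m at time t
            (days) after a mainshock of magnitude main_mag:
            lambda(t, M) = 10**(a + b*(main_mag - M)) * (t + c)**(-p)."""
            return 10.0 ** (a + b * (main_mag - m)) * (t + c) ** (-p)

        def expected_count(t1, t2, m, main_mag, a, b, c, p, n=20000):
            """Expected number of M >= m aftershocks between t1 and t2 days (trapezoidal
            integration of the rate); a Poisson exceedance probability follows from it."""
            t = np.linspace(t1, t2, n)
            r = rj_rate(t, m, main_mag, a, b, c, p)
            return np.sum(0.5 * (r[:-1] + r[1:]) * np.diff(t))

        # Generic illustrative parameter values (not the Chi-Chi estimates from the study).
        params = dict(a=-1.67, b=0.91, c=0.05, p=1.08)
        n_exp = expected_count(0.0, 7.0, 5.0, 7.7, **params)
        print(f"expected M>=5 aftershocks in the first week: {n_exp:.1f}")
        print(f"P(at least one) = {1.0 - np.exp(-n_exp):.2f}")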

  19. Assessing Lay Understanding of Common Presentations of Earthquake Hazard Information

    NASA Astrophysics Data System (ADS)

    Thompson, K. J.; Krantz, D. H.

    2010-12-01

    The Working Group on California Earthquake Probabilities (WGCEP) includes, in its introduction to earthquake rupture forecast maps, the assertion that "In daily living, people are used to making decisions based on probabilities -- from the flip of a coin (50% probability of heads) to weather forecasts (such as a 30% chance of rain) to the annual chance of being killed by lightning (about 0.0003%)." [3] However, psychology research identifies a large gap between lay and expert perception of risk for various hazards [2], and cognitive psychologists have shown in numerous studies [1,4-6] that people neglect, distort, misjudge, or misuse probabilities, even when given strong guidelines about the meaning of numerical or verbally stated probabilities [7]. The gap between lay and expert use of probability needs to be recognized more clearly by scientific organizations such as WGCEP. This study undertakes to determine how the lay public interprets earthquake hazard information, as presented in graphical map form by the Uniform California Earthquake Rupture Forecast (UCERF), compiled by the WGCEP and other bodies including the USGS and CGS. It also explores alternate ways of presenting hazard data, to determine which presentation format most effectively translates information from scientists to public. Participants both from California and from elsewhere in the United States are included, to determine whether familiarity -- either with the experience of an earthquake, or with the geography of the forecast area -- affects people's ability to interpret an earthquake hazards map. We hope that the comparisons between the interpretations by scientific experts and by different groups of laypeople will both enhance theoretical understanding of factors that affect information transmission and assist bodies such as the WGCEP in their laudable attempts to help people prepare themselves and their communities for possible natural hazards. [1] Kahneman, D & Tversky, A (1979). Prospect

  20. Journal of the Chinese Institute of Engineers. Special Issue: Commemoration of Chi-Chi Earthquake (II)

    NASA Astrophysics Data System (ADS)

    2002-09-01

    Contents include the following: Deep Electromagnetic Images of Seismogenic Zone of the Chi-Chi (Taiwan) Earthquake; New Techniques for Stress-Forecasting Earthquakes; Aspects of Characteristics of Near-Fault Ground Motions of the 1999 Chi-Chi (Taiwan) Earthquake; Liquefaction Damage and Related Remediation in Wufeng after the Chi-Chi Earthquake; Fines Content Effects on Liquefaction Potential Evaluation for Sites Liquefied during Chi-Chi Earthquake 1999; Damage Investigation and Liquefaction Potential Analysis of Gravelly Soil; Dynamic Characteristics of Soils in Yuan-Lin Liquefaction Area; A Preliminary Study of Earthquake Building Damage and Life Loss Due to the Chi-Chi Earthquake; Statistical Analyses of Relation between Mortality and Building Type in the 1999 Chi-Chi Earthquake; Development of an After Earthquake Disaster Shelter Evaluation Model; Posttraumatic Stress Reactions in Children and Adolescents One Year after the 1999 Taiwan Chi-Chi Earthquake; Changes or Not is the Question: the Meaning of Posttraumatic Stress Reactions One Year after the Taiwan Chi-Chi Earthquake.

  1. Evaluation of W Phase CMT Based PTWC Real-Time Tsunami Forecast Model Using DART Observations: Events of the Last Decade

    NASA Astrophysics Data System (ADS)

    Wang, D.; Becker, N. C.; Weinstein, S.; Duputel, Z.; Rivera, L. A.; Hayes, G. P.; Hirshorn, B. F.; Bouchard, R. H.; Mungov, G.

    2017-12-01

    The Pacific Tsunami Warning Center (PTWC) began forecasting tsunamis in real time using source parameters derived from real-time Centroid Moment Tensor (CMT) solutions in 2009. Both the USGS and PTWC typically obtain W-Phase CMT solutions for large earthquakes less than 30 minutes after the earthquake origin time. Within seconds, and often before waves reach the nearest deep-ocean bottom pressure sensors (DARTs), PTWC then generates a regional tsunami propagation forecast using its linear shallow-water model. The model is initialized by a sea-surface deformation that mimics the seafloor deformation based on Okada's (1985) dislocation model of a rectangular fault with uniform slip. The fault length and width are empirical functions of the seismic moment. How well did this simple model perform? The DART records provide a very valuable dataset for model validation. We examine tsunami events of the last decade with earthquake magnitudes ranging from 6.5 to 9.0, including some deep events for which tsunamis were not expected. Most of the forecast results were obtained during the events. We also include events from before the implementation of the WCMT method at USGS and PTWC (2006-2009). For these events, WCMTs were computed retrospectively (Duputel et al. 2012). We also re-ran the model with a larger domain for some events, using the same source parameters as during the events, to include far-field DARTs that recorded a tsunami. We conclude that our model results, in terms of maximum wave amplitude, are mostly within a factor of two of the observations at DART stations, with an average error of less than 40% for most events, including the 2010 Maule and the 2011 Tohoku tsunamis. However, the simple fault model with uniform slip is too simplistic for the Tohoku tsunami. We note that model results are sensitive to centroid location and depth, especially if the earthquake is close to land or inland. For the 2016 M7.8 New Zealand earthquake the initial forecast underestimated the
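    As a rough illustration of how a uniform-slip rectangular source can be sized from a CMT-derived magnitude, the sketch below converts Mw to seismic moment and then to fault dimensions and slip; the length/width scaling coefficients and the rigidity are assumptions standing in for PTWC's empirical relations, which the abstract does not give.

        import numpy as np

        def moment_from_mw(mw):
            """Seismic moment (N*m) from moment magnitude: M0 = 10**(1.5*Mw + 9.1)."""
            return 10.0 ** (1.5 * mw + 9.1)

        def fault_dimensions_km(mw):
            """Assumed empirical length/width scaling for the uniform-slip rectangular
            source; the coefficients are placeholders, not the PTWC relations."""
            length = 10.0 ** (0.5 * mw - 1.9)
            width = 0.5 * length
            return length, width

        def uniform_slip_m(mw, rigidity=4.0e10):
            """Uniform slip implied by M0 = rigidity * area * slip (rigidity assumed)."""
            length, width = fault_dimensions_km(mw)
            return moment_from_mw(mw) / (rigidity * (length * 1e3) * (width * 1e3))

        for mw in (7.0, 8.0, 9.0):
            l_km, w_km = fault_dimensions_km(mw)
            print(f"Mw {mw}: L = {l_km:.0f} km, W = {w_km:.0f} km, slip = {uniform_slip_m(mw):.1f} m")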

  2. Probabilistic In Situ Stress Estimation and Forecasting using Sequential Data Assimilation

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Dinther, Y.; Kuensch, H. R.

    2017-12-01

    Our physical understanding and forecasting ability of earthquakes, and other solid Earth dynamic processes, is significantly hampered by limited indications on the evolving state of stress and strength on faults. Integrating observations and physics-based numerical modeling to quantitatively estimate this evolution of a fault's state is crucial. However, systematic attempts are limited and tenuous, especially in light of the scarcity and uncertainty of natural data and the difficulty of modelling the physics governing earthquakes. We adopt the statistical framework of sequential data assimilation - extensively developed for weather forecasting - to efficiently integrate observations and prior knowledge in a forward model, while acknowledging errors in both. To prove this concept we perform a perfect model test in a simplified subduction zone setup, where we assimilate synthetic noised data on velocities and stresses from a single location. Using an Ensemble Kalman Filter, these data and their errors are assimilated to update 150 ensemble members from a Partial Differential Equation-driven seismic cycle model. Probabilistic estimates of fault stress and dynamic strength evolution capture the truth exceptionally well. This is possible, because the sampled error covariance matrix contains prior information from the physics that relates velocities, stresses and pressure at the surface to those at the fault. During the analysis step, stress and strength distributions are thus reconstructed such that fault coupling can be updated to either inhibit or trigger events. In the subsequent forecast step the physical equations are solved to propagate the updated states forward in time and thus provide probabilistic information on the occurrence of the next event. At subsequent assimilation steps, the system's forecasting ability turns out to be significantly better than that of a periodic recurrence model (requiring an alarm 17% vs. 68% of the time). This thus provides distinct
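    The following is a generic stochastic Ensemble Kalman Filter analysis step in the spirit of the assimilation described above; the state dimension, observation operator, and error levels are placeholders, and the PDE-driven seismic cycle model that would supply the forecast ensemble is not included.

        import numpy as np

        def enkf_update(ensemble, obs, h, obs_var, rng):
            """Stochastic Ensemble Kalman Filter analysis step.
            ensemble: (n_state, n_members) forecast states
            obs:      (n_obs,) observed values (e.g., surface velocities and stresses)
            h:        (n_obs, n_state) linear observation operator
            obs_var:  observation error variance (assumed uncorrelated)."""
            n_members = ensemble.shape[1]
            anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
            cov = anomalies @ anomalies.T / (n_members - 1)
            gain = cov @ h.T @ np.linalg.inv(h @ cov @ h.T + obs_var * np.eye(len(obs)))
            # Perturb the observations so the analysis ensemble keeps the correct spread.
            obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_members))
            return ensemble + gain @ (obs_pert - h @ ensemble)

        # Toy setup: 3 hidden fault-state variables, 150 members, one surface observation
        # assumed to be a fixed linear combination of the state.
        rng = np.random.default_rng(2)
        ensemble = rng.normal(0.0, 1.0, (3, 150))
        h = np.array([[0.6, 0.3, 0.1]])
        analysis = enkf_update(ensemble, np.array([0.8]), h, obs_var=0.05, rng=rng)
        print("analysis ensemble mean:", analysis.mean(axis=1).round(2))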

  3. Leveraging geodetic data to reduce losses from earthquakes

    USGS Publications Warehouse

    Murray, Jessica R.; Roeloffs, Evelyn A.; Brooks, Benjamin A.; Langbein, John O.; Leith, William S.; Minson, Sarah E.; Svarc, Jerry L.; Thatcher, Wayne R.

    2018-04-23

    Seismic hazard assessments that are based on a variety of data and the best available science, coupled with rapid synthesis of real-time information from continuous monitoring networks to guide post-earthquake response, form a solid foundation for effective earthquake loss reduction. With this in mind, the Earthquake Hazards Program (EHP) of the U.S. Geological Survey (USGS) Natural Hazards Mission Area (NHMA) engages in a variety of undertakings, both established and emergent, in order to provide high quality products that enable stakeholders to take action in advance of and in response to earthquakes. Examples include the National Seismic Hazard Model (NSHM), development of tools for improved situational awareness such as earthquake early warning (EEW) and operational earthquake forecasting (OEF), research about induced seismicity, and new efforts to advance comprehensive subduction zone science and monitoring. Geodetic observations provide unique and complementary information directly relevant to advancing many aspects of these efforts (fig. 1). EHP scientists have long leveraged geodetic data for a range of influential studies, and they continue to develop innovative observation and analysis methods that push the boundaries of the field of geodesy as applied to natural hazards research. Given the ongoing, rapid improvement in availability, variety, and precision of geodetic measurements, considering ways to fully utilize this observational resource for earthquake loss reduction is timely and essential. This report presents strategies, and the underlying scientific rationale, by which the EHP could achieve the following outcomes: The EHP is an authoritative source for the interpretation of geodetic data and its use for earthquake loss reduction throughout the United States and its territories.The USGS consistently provides timely, high quality geodetic data to stakeholders.Significant earthquakes are better characterized by incorporating geodetic data into USGS

  4. Impact of Near-Field, Deep-Ocean Tsunami Observations on Forecasting the 7 December 2012 Japanese Tsunami

    NASA Astrophysics Data System (ADS)

    Bernard, Eddie; Wei, Yong; Tang, Liujuan; Titov, Vasily

    2014-12-01

    Following the devastating 11 March 2011 tsunami, two Deep-ocean Assessment and Reporting of Tsunamis (DART®) stations (DART® and the DART® logo are registered trademarks of the National Oceanic and Atmospheric Administration, used with permission) were deployed in Japanese waters by the Japan Meteorological Agency. Two weeks after deployment, on 7 December 2012, an Mw 7.3 earthquake off Japan's Pacific coastline generated a tsunami. The tsunami was recorded at the two Japanese DARTs as early as 11 min after the earthquake origin time, which set a record as the fastest tsunami detection time at a DART station. These data, along with those recorded at other DARTs, were used to derive a tsunami source using the National Oceanic and Atmospheric Administration tsunami forecast system. The results of our analysis show that data provided by the two near-field Japanese DARTs can improve not only the forecast speed but also the forecast accuracy at the Japanese tide gauge stations. This study provides important guidelines for early detection and forecasting of local tsunamis.

  5. Simulation of Earthquake-Generated Sea-Surface Deformation

    NASA Astrophysics Data System (ADS)

    Vogl, Chris; Leveque, Randy

    2016-11-01

    Earthquake-generated tsunamis can carry with them a powerful, destructive force. One of the most well-known, recent examples is the tsunami generated by the Tohoku earthquake, which was responsible for the nuclear disaster in Fukushima. Tsunami simulation and forecasting, a necessary element of emergency procedure planning and execution, is typically done using the shallow-water equations. A typical initial condition is that using the Okada solution for a homogeneous, elastic half-space. This work focuses on simulating earthquake-generated sea-surface deformations that are more true to the physics of the materials involved. In particular, a water layer is added on top of the half-space that models the seabed. Sea-surface deformations are then simulated using the Clawpack hyperbolic PDE package. Results from considering the water layer both as linearly elastic and as "nearly incompressible" are compared to that of the Okada solution.

  6. The Parkfield earthquake prediction of October 1992; the emergency services response

    USGS Publications Warehouse

    Andrews, R.

    1992-01-01

    The science of earthquake prediction is interesting and worthy of support. In many respects the ultimate payoff of earthquake prediction or earthquake forecasting is how the information can be used to enhance public safety and public preparedness. This is a particularly important issue here in California where we have such a high level of seismic risk historically, and currently, as a consequence of activity in 1989 in the San Francisco Bay Area, in Humboldt County in April of this year (1992), and in southern California in the Landers-Big Bear area in late June of this year (1992). We are currently very concerned about the possibility of a major earthquake, one or more, happening close to one of our metropolitan areas. Within that context, the Parkfield experiment becomes very important. 

  7. Generalized statistical mechanics approaches to earthquakes and tectonics.

    PubMed

    Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios

    2016-12-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.

  8. Forecasting database for the tsunami warning regional center for the western Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Gailler, A.; Hebert, H.; Loevenbruck, A.; Hernandez, B.

    2010-12-01

    pre-computed unit scenarios. The whole notion of a pre-computed forecasting database also requires a historical earthquake and tsunami database, as well as an up-to-date seismotectonic database including faults geometry and a zonation based on seismotectonic synthesis of source zones and tsunamigenic faults. Our forecast strategy is thus based on a unit source function methodology, whereby the model runs are combined and scaled linearly to produce any composite tsunamis propagation solution. Each unit source function is equivalent to a tsunami generated by a Mo 1.75E+19 N.m earthquake (Mw ~6.8) with a rectangular fault 25 km by 20 km in size and 1 m in slip. The faults of the unit functions are placed adjacent to each other, following the discretization of the main seismogenic faults bounding the western Mediterranean basin. The number of unit functions involved varies with the magnitude of the wanted composite solution and the combined waveheights are multiplied by a given scaling factor to produce the new arbitrary scenario. Some test-cases examples are presented (e.g., Boumerdès 2003 [Algeria, Mw 6.8], Djijel 1856 [Algeria, Mw 7.2], Ligure 1887 [Italia, Mw 6.5-6.7]).
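    A minimal sketch of the unit-source combination and scaling described above; the waveform arrays, the number of unit sources, and the equal-weighting assumption are placeholders, and a real implementation would select the unit faults along the discretized seismogenic zones bounding the basin rather than use random arrays.

        import numpy as np

        def composite_scenario(unit_waveforms, unit_moment, target_moment):
            """Combine pre-computed unit-source tsunami waveforms linearly and scale them
            so the composite corresponds to the target seismic moment (simplified: the
            selected unit sources are weighted equally)."""
            scale = target_moment / (unit_moment * len(unit_waveforms))
            return scale * np.sum(unit_waveforms, axis=0)

        # Hypothetical example: three adjacent 25 km x 20 km unit sources (Mo = 1.75e19 N*m,
        # 1 m slip each) combined to approximate an Mw ~7.2 scenario; the waveform arrays
        # here are random placeholders for the pre-computed database entries.
        unit_moment = 1.75e19
        target_moment = 10.0 ** (1.5 * 7.2 + 9.1)
        unit_waveforms = np.random.default_rng(3).normal(0.0, 0.01, (3, 512))
        composite = composite_scenario(unit_waveforms, unit_moment, target_moment)
        print(f"scaling factor applied to the summed waveforms: {target_moment / (3 * unit_moment):.2f}")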

  9. QuakeCaster, an earthquake physics demonstration and exploration tool

    USGS Publications Warehouse

    Linton, K.; Stein, R.S.

    2012-01-01

    A fundamental riddle of earthquake occurrence is that tectonic motions at plate interiors are steady, changing only subtly over millions of years, but at plate boundary faults, the plates are stuck for hundreds of years and then suddenly jerk forward in earthquakes. Why does this happen? The answer, as formulated by Harry F. Reid (Reid 1910, 192) is that the Earth’s crust is elastic, behaving like a very stiff slab of rubber sliding over a substrate of “honey”-like asthenosphere, and that faults are restrained by friction. The crust near the faults—zones of weakness that separate the plates—slowly deforms, building up stress until frictional resistance on the fault is overcome and the fault suddenly slips. For the past century, scientists have sought ways to use this knowledge to forecast earthquakes.

  10. Development of Physics and Control of Multiple Forcing Mechanisms for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Whitmore, P.; Macpherson, K. A.; Knight, W. R.

    2016-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes or other mechanisms in either the Pacific Ocean, Atlantic Ocean or Gulf of Mexico. At the U.S. National Tsunami Warning Center (NTWC), the use of the model has been mainly for tsunami pre-computation due to earthquakes. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. The model has also been used for tsunami hindcasting due to submarine landslides and due to atmospheric pressure jumps, but in a very case-specific and somewhat limited manner. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communication between the domains of each parent-child pair as waves approach coastal waters. The shallow-water wave physics is readily applicable to all of the above tsunamis as well as to tides. Recently, the model has been expanded to include multiple forcing mechanisms in a systematic fashion, and to enhance the model physics for non-earthquake events. ATFM is now able to handle multiple source mechanisms, either individually or jointly, which include earthquake, submarine landslide, meteo-tsunami and tidal forcing. As for earthquakes, the source can be a single unit source or multiple, interacting source blocks. Horizontal slip contribution can be added to the sea-floor displacement. The model now includes submarine landslide physics, modeling the source either as a rigid slump, or as a viscous fluid. Additional shallow-water physics have been implemented for the viscous submarine landslides. With rigid slumping, any trajectory can be followed. As for meteo-tsunami, the forcing mechanism is capable of following any trajectory shape. Wind stress physics has also been implemented for the meteo-tsunami case, if required. As an example of multiple

  11. Measuring and forecasting great tsunamis by GNSS-based vertical positioning of multiple ships

    NASA Astrophysics Data System (ADS)

    Inazu, D.; Waseda, T.; Hibiya, T.; Ohta, Y.

    2016-12-01

    Vertical ship positioning by the Global Navigation Satellite System (GNSS) was investigated for measuring and forecasting great tsunamis. We first examined existing GNSS vertical position data of a navigating vessel. The result indicated that by using the kinematic Precise Point Positioning (PPP) method, tsunamis greater than 10^-1 m can be detected from the vertical position of the ship. Based on Automatic Identification System (AIS) data, tens of cargo ships and tankers are regularly identified navigating over the Nankai Trough, southwest of Japan. We then assumed that a future Nankai Trough great earthquake tsunami will be observed by ships at locations based on AIS data. The tsunami forecast capability by these virtual offshore tsunami measurements was examined. A conventional Green's function based inversion was used to determine the initial tsunami height distribution. Tsunami forecast tests over the Nankai Trough were carried out using simulated tsunami data of the vertical positions of multiple cargo ships/tankers on a certain day, and of the currently operating observations by deep-sea pressure gauges and Global Positioning System (GPS) buoys. The forecast capability of ship-based tsunami height measurements alone was shown to be comparable to or better than that using the existing offshore observations.
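    To illustrate the kind of Green's-function inversion referred to above, here is a damped least-squares sketch with synthetic Green's functions and noisy observations; everything in it (matrix sizes, damping, noise level) is a placeholder rather than the configuration used in the study.

        import numpy as np

        def invert_initial_height(green_functions, observed, damping=0.01):
            """Damped least-squares inversion for the initial tsunami heights of unit
            sources, given Green's functions (n_samples x n_sources) mapping unit initial
            heights to predicted time series at the ship and gauge positions."""
            g = green_functions
            lhs = g.T @ g + damping * np.eye(g.shape[1])
            return np.linalg.solve(lhs, g.T @ observed)

        # Synthetic test: 4 unit sources, 300 stacked waveform samples from several
        # GNSS-positioned ships; recover known source amplitudes from noisy data.
        rng = np.random.default_rng(4)
        g = rng.normal(0.0, 1.0, (300, 4))
        true_heights = np.array([0.5, 1.2, 0.0, 0.8])
        observed = g @ true_heights + rng.normal(0.0, 0.1, 300)
        print("recovered heights:", invert_initial_height(g, observed).round(2))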

  12. Amplitude of foreshocks as a possible seismic precursor to earthquakes

    USGS Publications Warehouse

    Lindh, A.G.

    1978-01-01

    In recent years, we have made significant progress in being able to recognize the long-range pattern of events that precede large earthquakes. For example, in a recent issue of the Earthquake Information Bulletin, we saw how the pioneering work of S.A. Fedotov of the U.S.S.R in the Kamchatka-Kurile Islands region has been applied worldwide to forecast where large, shallow earthquakes might occur in the next decades. Indeed, such a "seismic gap" off the coast of Alaska was filled by the 1972 Sitka earthquake. Promising results are slowly accumulating from other techniques that suggest that intermediate-term precursors might also be seen: among these are tilt and geomagnetic anomalies and anomalous land uplift. But the crucial point remains that short-term precursors (days to hours) will be needed in many cases if there is to be a significant saving of lives. 

  13. Tsunami simulation method initiated from waveforms observed by ocean bottom pressure sensors for real-time tsunami forecast; Applied for 2011 Tohoku Tsunami

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro

    2017-04-01

    After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system for earthquake and tsunami observation (S-NET) on the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) which are separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operated the cable network system of seismometers and pressure sensors (DONET and DONET2). Those systems are the densest observation networks located on top of the source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has depended on the estimation of earthquake parameters, such as epicenter, depth, and magnitude. Recently, a tsunami forecast method has been developed that estimates the tsunami source from tsunami waveforms observed at ocean bottom pressure sensors. However, when many pressure sensors separated by 30 km are located on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation directly from those dense tsunami observations. Observed tsunami height differences over a time interval at ocean bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation was initiated from that estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated by Gusman et al. (2012) using observed tsunami waveforms and coseismic deformation observed by GPS and ocean bottom sensors is used in this study. The ocean surface deformation is computed from the source model and used as an initial condition of tsunami

  14. Geological and historical evidence of irregular recurrent earthquakes in Japan.

    PubMed

    Satake, Kenji

    2015-10-28

    Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres releasing strains accumulated from decades to centuries of plate motions. Assuming a simple 'characteristic earthquake' model that similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes including geological traces from giant (M∼9) earthquakes indicate a variety of size and recurrence interval of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that average recurrence interval of great earthquakes is approximately 100 years, but the tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence interval of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts such as use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone are being conducted by government committees. © 2015 The Author(s).

  15. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
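
    The following Python sketch shows one generic way to extract a conditional probability of a subsequent large event from a long synthetic catalog; the toy catalog of times and magnitudes below is random and merely stands in for RSQSim output.

      # Sketch: empirical conditional probability of a second large event
      # following a large event, from a synthetic (time, magnitude) catalog.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000
      times = np.sort(rng.uniform(0, 1.0e6, n))           # event times in days
      mags = 5.0 + rng.exponential(1 / np.log(10), n)      # GR-like magnitudes, b ~ 1

      def conditional_prob(times, mags, m_trigger, m_target, window_days):
          """P(at least one M>=m_target within window_days after an M>=m_trigger event)."""
          trig = times[mags >= m_trigger]
          targ = times[mags >= m_target]
          hits = 0
          for t in trig:
              lo = np.searchsorted(targ, t, side="right")          # exclude the trigger itself
              hi = np.searchsorted(targ, t + window_days, side="right")
              if hi > lo:
                  hits += 1
          return hits / len(trig) if len(trig) else np.nan

      print(conditional_prob(times, mags, m_trigger=7.0, m_target=7.0, window_days=7.0))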

  16. The Earthquake Information Test: Validating an Instrument for Determining Student Misconceptions.

    ERIC Educational Resources Information Center

    Ross, Katharyn E. K.; Shuell, Thomas J.

    Some pre-instructional misconceptions held by children can persist through science instruction and resist change. Identifying these misconceptions would be beneficial for science instruction. In this preliminary study, scores on a 60-item true-false test of knowledge and misconceptions about earthquakes were compared with previous interview…

  17. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of

  18. An interdisciplinary approach to study Pre-Earthquake processes

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.

    2017-12-01

    We will summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data, we present new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these studies in the search for earthquake precursors, either for forecasting or prediction. New results were obtained from modeling of the atmosphere-ionosphere connection and from analyses of seismic records (foreshocks/aftershocks) and of geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could advance our understanding of the physics of earthquakes and of the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will subsequently be published in a new AGU/Wiley volume. The book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical and historical) involved in this important field of research, bringing this knowledge and awareness to a broader geosciences community.

  19. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

    Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion and internal friction angle, which are affected by a high degree of uncertainty, especially at regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more useful if presented in the form of landslide probability values. To develop such models, a method is needed to link the uncertainty of soil parameter values to landslide probability. This paper proposes the use of Monte Carlo methods to quantitatively express this uncertainty by assigning random values to the physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, and the results are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which forecasting of the landslide disasters associated with heavy rainfall on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was simulated. The proposed model successfully forecasted landslides at 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These test results indicate that the new model can be operated in a highly efficient way and gives more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
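
    A minimal Python sketch of the Monte Carlo idea follows, using a standard infinite-slope factor-of-safety expression (not necessarily the exact formulation of the authors' model) and illustrative parameter intervals for a single pixel.

      # Sample uncertain soil parameters inside defined intervals, evaluate an
      # infinite-slope factor of safety Fs for one pixel, and report the
      # fraction of samples with Fs < 1 as a landslide probability.
      # All numerical values are illustrative placeholders.
      import numpy as np

      rng = np.random.default_rng(42)
      n_sim = 10_000

      # Uncertain parameters drawn uniformly from assumed intervals.
      cohesion = rng.uniform(2e3, 8e3, n_sim)             # effective cohesion c' (Pa)
      phi = np.radians(rng.uniform(25.0, 35.0, n_sim))    # internal friction angle

      # Fixed (assumed known) pixel properties.
      gamma, gamma_w = 18e3, 9.81e3        # unit weights of soil and water (N/m^3)
      z, m, beta = 2.0, 0.8, np.radians(30.0)  # depth (m), saturation ratio, slope angle

      fs = (cohesion + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi)) \
           / (gamma * z * np.sin(beta) * np.cos(beta))

      p_landslide = np.mean(fs < 1.0)
      print(f"estimated landslide probability for this pixel: {p_landslide:.2f}")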

  20. Generalized statistical mechanics approaches to earthquakes and tectonics

    PubMed Central

    Papadakis, Giorgos; Michas, Georgios

    2016-01-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract wide scientific interest and are incorporated into the probabilistic forecasting of seismicity at local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale for the macroscopic properties frequently observed. The scale-invariant properties, the (multi)fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes. PMID:28119548

  1. Forecast Verification: Identification of small changes in weather forecasting skill

    NASA Astrophysics Data System (ADS)

    Weatherhead, E. C.; Jensen, T. L.

    2017-12-01

    Global and regional weather forecasts have improved over the past seven decades, most often because of small, incremental improvements. The identification and verification of forecast improvement due to proposed small changes in forecasting can be expensive and, if not carried out efficiently, can slow progress in forecasting development. This presentation will look at the skill of commonly used verification techniques and show how the ability to detect improvements can depend on the magnitude of the improvement, the number of runs used to test the improvement, the location on the Earth and the statistical techniques used. For continuous variables, such as temperature, wind and humidity, the skill of a forecast can be directly compared using a pair-wise statistical test that accommodates the natural autocorrelation and magnitude of variability. For discrete variables, such as tornado outbreaks or icing events, the challenge is to reduce the false alarm rate while improving the rate of correctly identifying the discrete event. For both continuous and discrete verification results, proper statistical approaches can reduce the number of runs needed to identify a small improvement in forecasting skill. Verification within the Next Generation Global Prediction System is an important component of the many small decisions needed to make state-of-the-art improvements to weather forecasting capabilities. The comparison of multiple skill scores with often conflicting results requires not only appropriate testing, but also scientific judgment to ensure that the choices are appropriate not only for improvements in today's forecasting capabilities, but also allow improvements that will come in the future.
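
    For the continuous-variable case, a pair-wise comparison might look like the Python sketch below, in which the effective sample size of the paired error differences is reduced for lag-1 autocorrelation before a t-test; the error series are synthetic, and the adjustment shown is one common choice rather than the presenters' exact procedure.

      # Paired comparison of two forecast systems for a continuous variable:
      # test whether the mean difference in absolute error differs from zero,
      # with the sample size deflated for lag-1 autocorrelation.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n = 365
      err_old = np.abs(rng.normal(0, 1.0, n))
      err_new = np.abs(rng.normal(0, 0.95, n))   # slightly better system (synthetic)

      d = err_old - err_new                      # paired skill differences
      r1 = np.corrcoef(d[:-1], d[1:])[0, 1]      # lag-1 autocorrelation
      n_eff = n * (1 - r1) / (1 + r1)            # effective sample size

      t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n_eff))
      p_val = 2 * stats.t.sf(abs(t_stat), df=n_eff - 1)
      print(f"mean improvement {d.mean():.3f}, t={t_stat:.2f}, p={p_val:.3f}")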

  2. Three ingredients for Improved global aftershock forecasts: Tectonic region, time-dependent catalog incompleteness, and inter-sequence variability

    USGS Publications Warehouse

    Page, Morgan T.; Van Der Elst, Nicholas; Hardebeck, Jeanne L.; Felzer, Karen; Michael, Andrew J.

    2016-01-01

    Following a large earthquake, seismic hazard can be orders of magnitude higher than the long‐term average as a result of aftershock triggering. Because of this heightened hazard, emergency managers and the public demand rapid, authoritative, and reliable aftershock forecasts. In the past, U.S. Geological Survey (USGS) aftershock forecasts following large global earthquakes have been released on an ad hoc basis with inconsistent methods, and in some cases aftershock parameters adapted from California. To remedy this, the USGS is currently developing an automated aftershock product based on the Reasenberg and Jones (1989) method that will generate more accurate forecasts. To better capture spatial variations in aftershock productivity and decay, we estimate regional aftershock parameters for sequences within the García et al. (2012) tectonic regions. We find that regional variations for mean aftershock productivity reach almost a factor of 10. We also develop a method to account for the time‐dependent magnitude of completeness following large events in the catalog. In addition to estimating average sequence parameters within regions, we develop an inverse method to estimate the intersequence parameter variability. This allows for a more complete quantification of the forecast uncertainties and Bayesian updating of the forecast as sequence‐specific information becomes available.
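
    A minimal Python sketch of a Reasenberg and Jones (1989)-style forecast is given below; the generic parameter values are placeholders rather than the regional values estimated in the paper.

      # Rate of aftershocks with magnitude >= M at time t after a mainshock of
      # magnitude Mm: lambda(t, M) = 10**(a + b*(Mm - M)) / (t + c)**p.
      # Probability of one or more such aftershocks in a window is
      # 1 - exp(-expected number).  Parameter values here are illustrative.
      import numpy as np
      from scipy.integrate import quad

      a, b, c, p = -1.67, 0.91, 0.05, 1.08   # assumed generic sequence parameters
      Mm = 7.0                                # mainshock magnitude

      def rate(t, M):
          """Aftershocks per day with magnitude >= M at t days after the mainshock."""
          return 10 ** (a + b * (Mm - M)) / (t + c) ** p

      def prob_one_or_more(M, t_start, t_end):
          n_expected, _ = quad(rate, t_start, t_end, args=(M,))
          return 1.0 - np.exp(-n_expected)

      # Probability of at least one M>=5 aftershock between day 1 and day 8.
      print(f"P(M>=5, days 1-8) = {prob_one_or_more(5.0, 1.0, 8.0):.2f}")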

  3. The Geological Susceptibility of Induced Earthquakes in the Duvernay Play

    NASA Astrophysics Data System (ADS)

    Pawley, Steven; Schultz, Ryan; Playter, Tiffany; Corlett, Hilary; Shipman, Todd; Lyster, Steven; Hauck, Tyler

    2018-02-01

    Presently, consensus on the incorporation of induced earthquakes into seismic hazard has yet to be established. For example, the nonstationary, spatiotemporal nature of induced earthquakes is not well understood. Specific to the Western Canada Sedimentary Basin, geological bias in seismogenic activation potential has been suggested to control the spatial distribution of induced earthquakes regionally. In this paper, we train a machine learning algorithm to systematically evaluate tectonic, geomechanical, and hydrological proxies suspected to control induced seismicity. Feature importance suggests that proximity to basement, in situ stress, proximity to fossil reef margins, lithium concentration, and rate of natural seismicity are among the strongest model predictors. Our derived seismogenic potential map faithfully reproduces the current distribution of induced seismicity and is suggestive of other regions which may be prone to induced earthquakes. The refinement of induced seismicity geological susceptibility may become an important technique to identify significant underlying geological features and to address induced seismic hazard forecasting issues.
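
    The general workflow can be sketched in Python as below, with a tree-ensemble classifier and its feature importances; the data are random placeholders, and the specific algorithm shown is an assumption rather than the authors' implementation.

      # Train a tree-ensemble classifier on geological proxies and inspect
      # feature importances.  Feature names mirror those in the abstract, but
      # the data and labels are synthetic, for demonstration only.
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(7)
      n = 2000
      X = pd.DataFrame({
          "distance_to_basement": rng.normal(0, 1, n),
          "in_situ_stress": rng.normal(0, 1, n),
          "distance_to_reef_margin": rng.normal(0, 1, n),
          "lithium_concentration": rng.normal(0, 1, n),
          "natural_seismicity_rate": rng.normal(0, 1, n),
      })
      # Synthetic label loosely tied to two of the proxies.
      y = (X["in_situ_stress"] + X["natural_seismicity_rate"] + rng.normal(0, 1, n)) > 1

      clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
      for name, imp in sorted(zip(X.columns, clf.feature_importances_),
                              key=lambda t: -t[1]):
          print(f"{name:28s} {imp:.3f}")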

  4. Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty were introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, in which rainfall observations were used instead of forecasts. The 'model error' can, especially in upstream catchments where forecast uncertainty is strongly dependent on the current predictability of the atmosphere, be superimposed on the spread of a hydrometeorological ensemble forecast. In Bavaria, two meteorological ensemble prediction systems are currently being tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics: 1. Upstream catchment with high influence of the weather forecast: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on the hydrological case and lead time, is added to each forecast timestep of each ensemble member. d) For each forecast timestep, the overall error distribution (i.e. over the 'model error' distributions of all ensemble members) is calculated. e) From this distribution, the uncertainty range at a desired level (here: the 10% and 90% percentiles) is extracted and drawn as a forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed the 'lead forecast' is chosen and shown in addition to the uncertainty bounds. This can be
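
    Steps (a)-(e) above can be sketched in Python as follows, using synthetic ensemble hydrographs and an assumed lead-time-dependent 'model error' spread.

      # Run one hydrological forecast per meteorological ensemble member, add a
      # sampled 'model error' to every member at every lead time, and extract
      # the 10% / 90% envelope of the pooled distribution.  Hydrographs and the
      # error model are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(3)
      n_members, n_steps = 16, 48                  # e.g. COSMO-LEPS members, hourly steps
      members = 100 + 20 * rng.random((n_members, 1)) * np.sin(
          np.linspace(0, np.pi, n_steps))          # synthetic discharge hydrographs (m^3/s)

      # Assumed 'model error': zero mean, spread growing with lead time.
      sigma = 5.0 + 0.5 * np.arange(n_steps)
      n_samples = 500
      samples = (members[:, None, :] +
                 rng.normal(0.0, sigma, (n_members, n_samples, n_steps)))

      pooled = samples.reshape(-1, n_steps)        # pool all members and error samples
      lower = np.percentile(pooled, 10, axis=0)    # forecast envelope
      upper = np.percentile(pooled, 90, axis=0)
      print(lower[:3].round(1), upper[:3].round(1))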

  5. A Cooperative Test of the Load/Unload Response Ratio Proposed Method of Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Trotta, J. E.; Tullis, T. E.

    2004-12-01

    The Load/Unload Response Ratio (LURR) method is a proposed technique for predicting earthquakes that was first put forward by Yin in 1984 (Yin, 1987). LURR is based on the idea that when a region is near failure, there is an increase in the rate of seismic activity during the loading part of the tidal cycle relative to the rate during unloading. Typically the numerator of the LURR ratio is the number, or the sum of some measure of the size (e.g. Benioff strain), of small earthquakes that occur during loading of the tidal cycle, whereas the denominator is the same quantity calculated during unloading. The LURR method suggests this ratio should increase in the months to a year preceding a large earthquake. Regions near failure have tectonic stresses nearly high enough for a large earthquake to occur; thus it seems more likely that smaller earthquakes in the region would be triggered when the tidal stresses add to the tectonic ones. However, until recently even the most careful studies suggested that the effect of tidal stresses on earthquake occurrence is very small and difficult to detect. New studies have shown that there is a tidal triggering effect on shallow thrust faults in areas with strong tides from ocean loading (Tanaka et al., 2002; Cochran et al., 2004). We have been conducting an independent test of the LURR method, since there would be important scientific and social implications if the LURR method were proven to be a robust method of earthquake prediction. Smith and Sammis (2003) also undertook a similar study. Following both the parameters of Yin et al. (2000) and the somewhat different ones of Smith and Sammis (2003), we have repeated calculations of LURR for the Northridge and Loma Prieta earthquakes in California. Though we have followed both sets of parameters closely, we have been unable to reproduce either set of results. A general agreement was made at the recent ACES Workshop in China between research
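
    A simplified Python sketch of the LURR calculation is shown below; a single sinusoid stands in for the computed tidal stress, and the catalog is random, so the resulting ratio should be near 1. Real implementations use Earth-tide models to classify the loading and unloading phases.

      # Classify small events as occurring during loading (tidal stress
      # increasing) or unloading (decreasing) and take the ratio of summed
      # Benioff strain in the two phases.
      import numpy as np

      rng = np.random.default_rng(11)
      period_hr = 12.42                               # dominant M2 tidal period
      t = np.sort(rng.uniform(0, 24 * 365, 3000))     # event times (hours over 1 yr)
      mags = 1.0 + rng.exponential(1 / np.log(10), 3000)

      # Loading when the time derivative of the (sinusoidal) tidal stress is positive.
      phase = 2 * np.pi * t / period_hr
      loading = np.cos(phase) > 0

      benioff = 10 ** (0.75 * mags)                   # Benioff strain ~ sqrt(seismic energy)
      lurr = benioff[loading].sum() / benioff[~loading].sum()
      print(f"LURR = {lurr:.2f}  (values well above 1 would be the proposed precursor)")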

  6. Monitoring the ionosphere during the earthquake on GPS data

    NASA Astrophysics Data System (ADS)

    Smirnov, V. M.; Smirnova, E. V.

    Estimating the stability of the physical state of the atmosphere attracts close attention from the world community, but the problem is still far from being solved. Many global atmospheric processes that directly influence all forms of life on Earth have been detected. Understanding the cause-effect relations governing their origin and development is possible only on the basis of long-term records of the space-time variations of atmospheric characteristics, which should be acquired on a global scale and over as broad an altitude range as possible. Such data can be obtained only with satellite systems. Recent research has shown that satellite systems can be successfully used for global and continuous monitoring of the Earth's ionosphere. In turn, the ionosphere can serve as a reliable indicator of different kinds of effects on the environment, of both natural and anthropogenic origin. Nowadays the problem of short-term earthquake forecasting has reached a new level of understanding. Indisputable evidence has been found that ionospheric anomalies observed during the preparation of seismic events contain information allowing them to be detected and interpreted as earthquake precursors. A partial solution of the earthquake forecast problem based on ionospheric variations requires processing data received simultaneously from extensive territories. Such requirements can be met only with a ground-space system of ionosphere monitoring. The navigation systems

  7. Development and Testing of Operational Dual-Polarimetric Radar Based Lightning Initiation Forecast Techniques

    NASA Technical Reports Server (NTRS)

    Woodard, Crystal; Carey, Lawrence D.; Petersen, Walter A.; Felix, Mariana; Roeder, William P.

    2011-01-01

    Lightning is one of Earth's natural dangers, destructive not only to life but also to physical property. According to the National Weather Service, there are on average 58 lightning fatalities each year, with over 300 related injuries (NWS 2010). The ability to forecast lightning is critical to a host of activities ranging from space vehicle launch operations to recreational sporting events. For example, a single lightning strike to a Space Shuttle could cause billions of dollars of damage and possible loss of life, while forecasting with longer lead times could give sporting officials more time to respond to threatening weather events, thus saving the lives of players and bystanders. Many researchers have developed and tested different methods and tools for first-flash forecasting; however, few have done so using dual-polarimetric radar variables and products on an operational basis. The purpose of this study is to improve algorithms for the short-term prediction of lightning initiation through development and testing of operational techniques that rely on parameters observed and diagnosed using C-band dual-polarimetric radar.

  8. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of its application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and for their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict the future earthquake number distribution in regions where very large earthquakes have not yet occurred.
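
    As a simple illustration, the Python sketch below fits the NBD to a series of window counts by the method of moments; the counts are synthetic, and a method-of-moments fit is only one of several estimation options.

      # Method-of-moments fit of the negative binomial distribution (NBD) to
      # earthquake counts in fixed time windows.  With real data the counts
      # would come from a catalogue above a chosen magnitude threshold.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      counts = rng.negative_binomial(n=2.0, p=0.2, size=200)   # synthetic yearly counts

      m, v = counts.mean(), counts.var(ddof=1)
      if v > m:
          p_hat = m / v                      # NBD 'success' probability
          r_hat = m * m / (v - m)            # NBD size parameter; small r = strong clustering
          print(f"mean={m:.1f}, var={v:.1f}, r={r_hat:.2f}, p={p_hat:.2f}")
          # Quick sanity check: fitted probabilities for the smallest counts.
          k = np.arange(0, counts.max() + 1)
          print(stats.nbinom.pmf(k, r_hat, p_hat)[:5].round(3))
      else:
          print("variance <= mean: a Poisson model is adequate, no overdispersion")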

  9. The Gumbel hypothesis test for left censored observations using regional earthquake records as an example

    NASA Astrophysics Data System (ADS)

    Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.

    2011-01-01

    Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., the magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
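
    One way to set up such a test is sketched in Python below: the PPCC is computed from the uncensored upper order statistics, and its critical value is obtained by Monte Carlo simulation under the Gumbel null. The plotting position and censoring treatment here are reasonable choices for illustration and may differ from the authors' exact scheme.

      import numpy as np

      rng = np.random.default_rng(2)

      def gumbel_ppcc(sample, n_total):
          """PPCC of the k observed (largest) values against Gumbel quantiles."""
          x = np.sort(sample)
          k = len(x)
          i = np.arange(n_total - k + 1, n_total + 1)
          p = (i - 0.44) / (n_total + 0.12)       # Gringorten plotting positions
          q = -np.log(-np.log(p))                 # standard Gumbel quantiles
          return np.corrcoef(x, q)[0, 1]

      n, censor_frac = 60, 0.3                    # series length, fraction censored
      # Monte Carlo critical value at the 5% significance level under the null.
      stats_mc = []
      for _ in range(2000):
          y = np.sort(rng.gumbel(size=n))[int(censor_frac * n):]   # drop smallest values
          stats_mc.append(gumbel_ppcc(y, n))
      crit = np.percentile(stats_mc, 5)

      # Apply to one "observed" annual-maximum series (here simply Gumbel noise).
      obs = np.sort(rng.gumbel(loc=5.0, scale=0.5, size=n))[int(censor_frac * n):]
      print(f"PPCC={gumbel_ppcc(obs, n):.3f}, 5% critical value={crit:.3f}")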

  10. End-User Applications of Real-Time Earthquake Information in Europe

    NASA Astrophysics Data System (ADS)

    Cua, G. B.; Gasparini, P.; Giardini, D.; Zschau, J.; Filangieri, A. R.; Reakt Wp7 Team

    2011-12-01

    The primary objective of European FP7 project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction) is to improve the efficiency of real-time earthquake risk mitigation methods and their capability of protecting structures, infrastructures, and populations. REAKT aims to address the issues of real-time earthquake hazard and response from end-to-end, with efforts directed along the full spectrum of methodology development in earthquake forecasting, earthquake early warning, and real-time vulnerability systems, through optimal decision-making, and engagement and cooperation of scientists and end users for the establishment of best practices for use of real-time information. Twelve strategic test cases/end users throughout Europe have been selected. This diverse group of applications/end users includes civil protection authorities, railway systems, hospitals, schools, industrial complexes, nuclear plants, lifeline systems, national seismic networks, and critical structures. The scale of target applications covers a wide range, from two school complexes in Naples, to individual critical structures, such as the Rion Antirion bridge in Patras, and the Fatih Sultan Mehmet bridge in Istanbul, to large complexes, such as the SINES industrial complex in Portugal and the Thessaloniki port area, to distributed lifeline and transportation networks and nuclear plants. Some end-users are interested in in-depth feasibility studies for use of real-time information and development of rapid response plans, while others intend to install real-time instrumentation and develop customized automated control systems. From the onset, REAKT scientists and end-users will work together on concept development and initial implementation efforts using the data products and decision-making methodologies developed with the goal of improving end-user risk mitigation. The aim of this scientific/end-user partnership is to ensure that scientific efforts are applicable to operational

  11. Adapting ElarmS Earthquake Early Warnings for Cascadia: Development and Testing of ShakeAlerts in the Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Hartog, J. R.; Kress, V. C.; Thomas, T.; Malone, S. D.; Henson, I. H.; Neuhauser, D. S.

    2013-12-01

    As a first step in establishing an earthquake early warning system in Cascadia, we have installed the ElarmS component of the ShakeAlert system at the Pacific Northwest Seismic Network. In Cascadia our initial focus is primarily on the development of a seismo-geodetic real-time finite fault rupture algorithm to detect and characterize a large plate-boundary rupture in progress (see Crowell et al., this session). In this regard, the goal of the purely seismic-data-based ElarmS implementation is to 'trigger' the finite fault rupture algorithm. At the same time, however, the Cascadian ElarmS will also produce warnings for smaller onshore crustal earthquakes. Although these smaller and closer earthquakes will provide shorter warning times for communities, and less dramatic shaking, we intend to use the warnings for educational purposes and to coordinate with our regional and collaborating partners. They will also help guide us in shortening data latencies and in learning where additional instrumentation is most needed to increase performance. The accuracy of ElarmS in Cascadia is another major concern, because the current ElarmS model presumes an initial focal depth of 8 km based on California experience, while in Cascadia earthquakes of major concern may be as deep as 50 km and/or occur beyond the western fringe of the seismic network. To this end, our testing protocol is aimed at determining what changes are required to ensure top performance of an ElarmS-based warning system in Cascadia. Because of Cascadia's relatively low seismicity rate, and the paucity of data from plate-boundary earthquakes there of any size, we have prioritized the development of a test system. The test system permits us to: 1) replay segments of actual seismic waveform data recorded by the PNSN and contributing seismic network stations to represent both earthquakes and noise conditions, and 2) broadcast synthetic data into the system to simulate signals we

  12. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the

  13. Role of the Internet in Anticipating and Mitigating Earthquake Catastrophes, and the Emergence of Personal Risk Management (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Donnellan, A.; Graves, W.; Tiampo, K. F.; Klein, W.

    2009-12-01

    Risks from natural and financial catastrophes are currently managed by a combination of large public and private institutions. Public institutions usually comprise government agencies that conduct studies, formulate policies and guidelines, enforce regulations, and make “official” forecasts. Private institutions include insurance and reinsurance companies, and financial service companies that underwrite catastrophe (“cat”) bonds and make private forecasts. Although decisions about allocating resources and developing solutions are made by large institutions, the costs of dealing with catastrophes fall for the most part on businesses and the general public. Information on potential risks is generally available to the public for some hazards but not others. For example, in the case of weather, private forecast services are provided by www.weather.com and www.wunderground.com. For earthquakes in California (only), the official forecast is the WGCEP-USGS forecast, but it is provided in a format that is difficult for the public to use. Other privately made forecasts are currently available, for example by the JPL QuakeSim and Russian groups, but these efforts are limited. As more of the world’s population moves into major seismic zones, new strategies are needed to allow individuals to manage their personal risk from large and damaging earthquakes. Examples include individual mitigation measures such as retrofitting, microinsurance in both developing and developed countries, and other financial strategies. We argue that the “long tail” of the internet offers an ideal, and greatly underutilized, mechanism to reach out to consumers and to provide them with the information and tools they need to confront and manage seismic hazard and risk on an individual, personalized basis. Information of this type includes not only global hazard forecasts, which are now possible, but also global risk estimation. Additionally

  14. The Hayward-Rodgers Creek Fault System: Learning from the Past to Forecast the Future

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Lienkaemper, J. J.; Hecker, S.

    2007-12-01

    Creek-northern Hayward fault earthquake, or a rupture of all three fault sections. Each of these rupture combinations would produce a magnitude larger than 1868 (M~6.9). In 2003, the Working Group on California Earthquake Probabilities released a new earthquake forecast for the Bay Area. Using the earthquake timing data and alternative fault rupture models, the Working Group estimated a 27 percent likelihood of a M≥6.7 earthquake along the Hayward-Rodgers Creek fault zone by the year 2031. This is the highest probability of any individual fault system in the Bay Area. New paleoseismic data will allow updating of this forecast.

  15. Crowdsourced earthquake early warning.

    PubMed

    Minson, Sarah E; Brooks, Benjamin A; Glennie, Craig L; Murray, Jessica R; Langbein, John O; Owen, Susan E; Heaton, Thomas H; Iannucci, Robert A; Hauser, Darren L

    2015-04-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California's Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  16. Crowdsourced earthquake early warning

    PubMed Central

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing. PMID:26601167

  17. Crowdsourced earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  18. Physics-Based and Statistical Forecasting in Slowly Stressed Environments

    NASA Astrophysics Data System (ADS)

    Segou, M.; Deschamps, A.

    2013-12-01

    We perform a retrospective forecasting experiment for 1995-2012, comparing the predictive power of physics-based and statistical models in the Corinth Gulf (Central Greece), the fastest-extending continental rift in the world with extension rates of 11-15 mm/yr, rates that are nevertheless at least three times lower than the motion accommodated by the San Andreas Fault System (~40 mm/yr). The seismicity of the western Corinth gulf has been characterized by significant historical events (1817 M6.6, 1861 M6.7, 1889 M7.0), whereas the modern instrumental catalog (post-1964) reveals one major event, the 1995 Aigio M6.4 (15/06/1995), together with several periods of increased microseismic activity, usually lasting a few months and possibly related to fluid diffusion. We examine six predictive models: three based on the combination of Coulomb stress changes and rate-and-state theory (CRS), two epidemic-type aftershock sequence (ETAS) models and one hybrid CRS-ETAS (h-ETAS) model. We investigate whether the above forecast models can adequately describe the episodic swarm activity within the gulf. Even though the Corinth gulf has been studied extensively in the past, there is still a debate today over whether earthquake activity is related to the existence of either a shallow-dipping structure or steeply dipping normal faults. In light of this, two CRS realizations are based on resolving Coulomb stress changes on specified receiver faults expressing the aforementioned structural models, whereas the third CRS model uses planes optimally oriented for failure. In our CRS implementation we account for stress changes following all major ruptures with M greater than 4.5 within our testing phase. We also estimate fault constitutive parameters from modeling the response to major earthquakes in the vicinity of the gulf (Aσ = 0.2, stressing rate 0.02 bar/yr). The ETAS parameters are taken as the maximum likelihood estimates derived from stochastic declustering of the modern seismicity catalog.
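
    For reference, the CRS-type rate response to a single Coulomb stress step can be sketched with the Dieterich (1994) expression, as in the Python snippet below; Aσ and the stressing rate are taken from the values quoted above (units assumed to be bars), while the stress step and background rate are illustrative.

      # Seismicity rate R(t) after a Coulomb stress step dCFF relaxes back to
      # the background rate r over the aftershock duration ta = A*sigma / (stressing rate).
      import numpy as np

      A_sigma = 0.2          # bar (value from the abstract; unit assumed)
      stressing_rate = 0.02  # bar/yr (from the abstract)
      t_a = A_sigma / stressing_rate     # aftershock relaxation time, years (= 10 yr)

      r_background = 5.0     # assumed background rate, events/yr above the target magnitude
      dCFF = 0.5             # assumed Coulomb stress step on the receiver fault, bar

      t = np.linspace(0.01, 20.0, 200)   # years after the stress step
      R = r_background / ((np.exp(-dCFF / A_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

      print(f"rate just after the step: {R[0]:.1f} events/yr")
      print(f"rate after {t_a:.0f} yr: {np.interp(t_a, t, R):.1f} events/yr")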

  19. Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.

    2013-05-01

    Seismicity in southern Kanto activated with the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at present or in the future? To answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probability calculations, often used for aftershock hazard assessment, which rest on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ˜ 1. Then, there is good agreement with the OU law with p ˜ 0.5, which indicates that the slow decay was notably significant. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
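
    The probability calculation described above can be sketched in Python as follows, combining a GR magnitude distribution (b ≈ 1) with OU decay (p ≈ 0.5) under a Poisson assumption; the productivity and c-value are placeholders, not the values estimated from the JMA catalogue.

      # Expected number of M6-7 events in a forecast window from a GR-scaled
      # Omori-Utsu rate, converted to a probability of at least one event.
      import numpy as np
      from scipy.integrate import quad

      b, p, c = 1.0, 0.5, 0.1        # b from GR; p and c for Omori-Utsu decay
      K = 5.0                         # assumed events/day (M>=4) at t = 1 day (placeholder)
      M_ref = 4.0

      def rate_ge(M, t):
          """Rate of events with magnitude >= M at t days after the mainshock."""
          return K * 10 ** (-b * (M - M_ref)) / (t + c) ** p

      t_start, t_end = 446, 446 + 3 * 365     # a 3-yr window starting ~2012 May 30

      n67 = (quad(lambda t: rate_ge(6.0, t), t_start, t_end)[0]
             - quad(lambda t: rate_ge(7.0, t), t_start, t_end)[0])
      prob = 1.0 - np.exp(-n67)
      print(f"expected M6-7 events: {n67:.2f}, probability of at least one: {prob:.2f}")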

  20. Testing an innovative framework for flood forecasting, monitoring and mapping in Europe

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Kalas, Milan; Lorini, Valerio; Wania, Annett; Pappenberger, Florian; Salamon, Peter; Ramos, Maria Helena; Cloke, Hannah; Castillo, Carlos

    2017-04-01

    Between May and June 2016, France was hit by severe floods, particularly in the Loire and Seine river basins. In this work, we use this case study to test an innovative framework for flood forecasting, mapping and monitoring. In more detail, the system integrates in real time two components of the Copernicus emergency services, namely the European Flood Awareness System and satellite-based Rapid Mapping, with new procedures for rapid risk assessment and for social media and news monitoring. We explore in detail the performance of each component of the system, demonstrating the improvements with respect to stand-alone flood forecasting and monitoring systems. We show how the performance of the forecasting component can be refined using real-time feedback from social media monitoring to identify which areas were flooded, to evaluate the flood intensity, and thus to correct impact estimates. Moreover, we show how the integration with impact forecasts and social media monitoring can improve the timeliness and efficiency of satellite-based emergency mapping, and reduce the chance of missing areas where flooding is already happening. These results illustrate how the new integrated approach leads to better and earlier decision making and a timely evaluation of impacts.

  1. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
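
    The three hybrid combinations can be sketched in Python as below, applied to toy gridded rate forecasts and renormalized to a common global rate; the grids and the weight on S are placeholders.

      # Combine a smoothed-seismicity forecast S and a tectonic forecast T,
      # each giving expected earthquakes per cell, by the three methods above.
      import numpy as np

      rng = np.random.default_rng(9)
      S = rng.gamma(shape=0.5, scale=1e-3, size=(90, 180))   # toy rate grids
      T = rng.gamma(shape=0.5, scale=1e-3, size=(90, 180))
      w = 0.6                                                # fractional weight on S (free parameter)

      def normalize(model, target_total):
          return model * (target_total / model.sum())

      total = S.sum()                    # keep every hybrid at the same global rate
      H_max = normalize(np.maximum(S, T), total)                             # method (a)
      H_lin = normalize(w * S + (1 - w) * T, total)                          # method (b)
      H_log = normalize(np.exp(w * np.log(S) + (1 - w) * np.log(T)), total)  # method (c)

      for name, H in [("max", H_max), ("linear", H_lin), ("log-linear", H_log)]:
          print(f"{name:10s} total={H.sum():.3f}, max cell rate={H.max():.2e}")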

  2. Investigation of Pre-Earthquake Ionospheric Disturbances by 3D Tomographic Analysis

    NASA Astrophysics Data System (ADS)

    Yagmur, M.

    2016-12-01

    Ionospheric variations before earthquakes are widely discussed phenomena in ionospheric studies. Clarifying the source and mechanism of these phenomena is highly important for earthquake forecasting. To better understand the mechanical and physical processes behind pre-seismic ionospheric anomalies, which might even be related to Lithosphere-Atmosphere-Ionosphere-Magnetosphere coupling, both statistical and 3D modelling analyses are needed. For this purpose, we first investigated the relation between ionospheric TEC anomalies and potential source mechanisms such as space weather activity and lithospheric phenomena like positive surface electric charges. To distinguish their effects on ionospheric TEC, we focused on pre-seismically active days. We then analyzed statistical data for 54 earthquakes with M ≥ 6 between 2000 and 2013, as well as the 2011 Tohoku and the 2016 Kumamoto earthquakes in Japan. By comparing TEC anomalies and solar activity via the Dst index, we found 28 events that might be related to earthquake activity. Following the statistical analysis, we also investigated the lithospheric effect on TEC changes on selected days. Among those days, we chose two case studies, the 2011 Tohoku and the 2016 Kumamoto earthquakes, to produce 3D reconstructed images utilizing a 3D tomography technique with neural networks. The results will be shown in our presentation. Keywords: Earthquake, 3D Ionospheric Tomography, Positive and Negative Anomaly, Geomagnetic Storm, Lithosphere

  3. Earthquake Predictability: Results From Aggregating Seismicity Data And Assessment Of Theoretical Individual Cases Via Synthetic Data

    NASA Astrophysics Data System (ADS)

    Adamaki, A.; Roberts, R.

    2016-12-01

    For many years an important aim in seismological studies has been forecasting the occurrence of large earthquakes. Despite some well-established statistical behavior of earthquake sequences, expressed by e.g. the Omori law for aftershock sequences and the Gutenberg-Richter distribution of event magnitudes, purely statistical approaches to short-term earthquake prediction have in general not been successful. It seems that better understanding of the processes leading to critical stress build-up prior to larger events is necessary to identify useful precursory activity, if this exists, and statistical analyses are an important tool in this context. There has been considerable debate on the usefulness or otherwise of foreshock studies for short-term earthquake prediction. We investigate generic patterns of foreshock activity using aggregated data, studying not only strong but also moderate-magnitude events. Aggregating empirical local seismicity time series prior to larger events observed in and around Greece reveals a statistically significant increase in the rate of seismicity over the 20 days prior to M>3.5 earthquakes. This increase cannot be explained by spatio-temporal clustering models such as ETAS, implying genuine changes in the mechanical situation just prior to larger events and thus the possible existence of useful precursory information. Because of spatio-temporal clustering, spanning aftershocks as well as foreshocks, even if such generic behavior exists it does not necessarily follow that foreshocks have the potential to provide useful precursory information for individual larger events. Using synthetic catalogs produced with different clustering models and different presumed system sensitivities, we are now investigating to what extent the apparently established generic foreshock rate acceleration may or may not imply that foreshocks have potential in the context of routine forecasting of larger events. Preliminary results suggest that this is the case, but
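
    The aggregation step can be sketched in Python as below: counts of smaller events are stacked in daily bins before each target event, and the rate in the final 20 days is compared with an earlier reference window. The catalogue here is random, so no acceleration is expected; with real data the times and magnitudes would come from the regional catalogue.

      import numpy as np

      rng = np.random.default_rng(13)
      times = np.sort(rng.uniform(0, 3650, 20000))   # event times (days over 10 years)
      mags = 1.5 + rng.exponential(1 / np.log(10), 20000)

      targets = times[mags > 3.5]
      small = times[mags <= 3.5]

      n_days = 60
      stacked = np.zeros(n_days)                     # daily counts, day 1 ... day 60 before target
      for t0 in targets:
          rel = t0 - small[(small < t0) & (small >= t0 - n_days)]
          idx = np.minimum(rel.astype(int), n_days - 1)   # 0 = last day before the target
          np.add.at(stacked, idx, 1)

      pre = stacked[:20].mean()      # mean stacked count per day, final 20 days
      ref = stacked[20:].mean()      # mean stacked count per day, days 21-60 before
      print(f"stacked rate: last 20 days = {pre:.1f}/day, reference = {ref:.1f}/day")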

  4. Insights into earthquake hazard map performance from shaking history simulations

    NASA Astrophysics Data System (ADS)

    Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.

    2017-12-01

    Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher

  5. The Great Tumaco, Colombia earthquake of 12 December 1979

    USGS Publications Warehouse

    Herd, D.G.; Youd, T.L.; Meyer, H.; Arango, C.J.L.; Person, W.J.; Mendoza, C.

    1981-01-01

    Southwestern Colombia and northern Ecuador were shaken by a shallow-focus earthquake on 12 December 1979. The magnitude 8 shock, located near Tumaco, Colombia, was the largest in northwestern South America since 1942 and had been forecast to fill a seismic gap. Thrust faulting occurred on a 280- by 130-kilometer rectangular patch of a subduction zone that dips east beneath the Pacific coast of Colombia. A 200-kilometer stretch of the coast tectonically subsided as much as 1.6 meters; uplift occurred offshore on the continental slope. A tsunami swept inland immediately after the earthquake. Ground shaking (intensity VI to IX) caused many buildings to collapse and generated liquefaction in sand fills and in Holocene beach, lagoonal, and fluvial deposits.

  6. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    some understanding of their sources and the physical properties of the crust, which also vary from place to place and time to time. Anomalies are not necessarily due to stress or earthquake preparation, and separating the extraneous ones is a problem as daunting as understanding earthquake behavior itself. Fourth, the associations presented between anomalies and earthquakes are generally based on selected data. Validating a proposed association requires complete data on the earthquake record and the geophysical measurements over a large area and time, followed by prospective testing which allows no adjustment of parameters, criteria, etc. The Collaboratory for Study of Earthquake Predictability (CSEP) is dedicated to providing such prospective testing. Any serious proposal for prediction research should deal with the problems above, and anticipate the huge investment in time required to test hypotheses.

  7. Using strain rates to forecast seismic hazards

    USGS Publications Warehouse

    Evans, Eileen

    2017-01-01

    One essential component in forecasting seismic hazards is observing the gradual accumulation of tectonic strain along faults before this strain is suddenly released as earthquakes. Typically, seismic hazard models are based on geologic estimates of slip rates along faults and historical records of seismic activity, neither of which records actively accumulating strain. But this strain can be estimated by geodesy: the precise measurement of tiny position changes of Earth’s surface, obtained from GPS, interferometric synthetic aperture radar (InSAR), or a variety of other instruments.

  8. Real-time determination of the worst tsunami scenario based on Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya

    2016-04-01

    In recent years, real-time tsunami inundation forecasting has been developed alongside advances in dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and the resulting damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimating the tsunami, but acquiring those observations takes time, and this limitation makes it difficult to complete real-time tsunami inundation forecasting quickly enough. Rather than waiting for precise tsunami observations, and from a disaster-management point of view, we aim to determine the worst tsunami source scenario, for use in real-time tsunami inundation forecasting and mapping, using the seismic information from Earthquake Early Warning (EEW) that is available immediately after an event is triggered. After an earthquake occurs, JMA's EEW estimates its magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and a scaling law, we determine multiple possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e. time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g. Tsushima et al., 2014). The scenario analysis of our method consists of the following two steps: (1) searching for the worst scenario range by calculating 90 scenarios with various strikes and fault positions; from the maximum tsunami height of these 90 scenarios, we determine a narrower strike range that causes high tsunami heights in the area of concern; (2) calculating 900 scenarios that have different strike, dip, length
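
    A minimal sketch of the scenario search described above, assuming pre-computed Green's functions for Gaussian unit sources are available as an array; the array shapes, scenario weights, and the `max_coastal_height` helper are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

# Hypothetical shapes: each candidate source is expressed as weights on n_unit
# Gaussian unit sources; greens[u, s, t] is the pre-computed tsunami height
# time series at offshore station s for unit source u (placeholder values).
rng = np.random.default_rng(0)
n_unit, n_sta, n_t = 40, 12, 600
greens = rng.random((n_unit, n_sta, n_t)) * 0.05
scenario_weights = rng.random((90, n_unit))   # 90 coarse strike/position variants

def max_coastal_height(weights):
    """Superpose unit-source Green's functions for one scenario and return the
    maximum tsunami height over all offshore stations and times."""
    waveforms = np.tensordot(weights, greens, axes=(0, 0))  # shape (n_sta, n_t)
    return waveforms.max()

heights = np.array([max_coastal_height(w) for w in scenario_weights])
worst = int(np.argmax(heights))
print(f"worst of 90 coarse scenarios: #{worst}, max height {heights[worst]:.2f} m")
# Step (2) of the procedure would then refine the search (900 scenarios with
# varying strike, dip and length) around the strike range selected here.
```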

  9. Thermal IR satellite data application for earthquake research in Pakistan

    NASA Astrophysics Data System (ADS)

    Barkat, Adnan; Ali, Aamir; Rehman, Khaista; Awais, Muhammad; Riaz, Muhammad Shahid; Iqbal, Talat

    2018-05-01

    The scientific progress in space research indicates earthquake-related processes of surface temperature growth, gas/aerosol exhalation and electromagnetic disturbances in the ionosphere prior to seismic activity. Among these, surface temperature growth calculated from satellite thermal infrared images carries valuable earthquake precursory information for near and distant earthquakes. Previous studies have concluded that such information can appear a few days before the occurrence of an earthquake. The objective of this study is to use MODIS thermal imagery data for precursory analysis of the Kashmir (Oct 8, 2005; Mw 7.6; 26 km), Ziarat (Oct 28, 2008; Mw 6.4; 13 km) and Dalbandin (Jan 18, 2011; Mw 7.2; 69 km) earthquakes. Our results suggest an evident correlation between Land Surface Temperature (LST) anomalies and seismic activity. In particular, a rise of 3-10 °C in LST is observed 6, 4 and 14 days prior to the Kashmir, Ziarat and Dalbandin earthquakes, respectively. To further elaborate our findings, we present a comparative and percentile analysis of daily and five-year-averaged LST for a selected time window with respect to the month of earthquake occurrence. Our comparative analyses of daily and five-year-averaged LST show a significant change of 6.5-7.9 °C for the Kashmir, 8.0-8.1 °C for the Ziarat and 2.7-5.4 °C for the Dalbandin earthquakes. This significant change has high percentile values for the selected events, i.e. 70-100% for Kashmir, 87-100% for Ziarat and 84-100% for Dalbandin. We expect that such consistent results may help in devising an optimal earthquake forecasting strategy and in mitigating the effects of associated seismic hazards.

  10. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had not been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  11. Impact of earthquakes on sex ratio at birth: Eastern Marmara earthquakes

    PubMed Central

    Doğer, Emek; Çakıroğlu, Yiğit; Köpük, Şule Yıldırım; Ceylan, Yasin; Şimşek, Hayal Uzelli; Çalışkan, Eray

    2013-01-01

    Objective: Previous reports suggest that maternal exposure to acute stress related to earthquakes affects the sex ratio at birth. Our aim was to examine the change in sex ratio at birth after the Eastern Marmara earthquake disasters. Material and Methods: This study was performed using the official birth statistics from January 1997 to December 2002 – before and after 17 August 1999, the date of the Golcuk Earthquake – supplied by the Turkey Statistics Institute. The secondary sex ratio was expressed as the male proportion at birth, and the ratios for the affected and unaffected areas were calculated and compared on a monthly basis using the Chi-square test. Results: We observed significant decreases in the secondary sex ratio in the 4th and 8th months following the earthquake in the affected region compared to the unaffected region (p= 0.001 and p= 0.024). In the earthquake region, the decrease observed in the secondary sex ratio during the 8th month after the earthquake was specific to the period after the earthquake. Conclusion: Our study indicated a significant reduction in the secondary sex ratio after an earthquake. These findings indicate that events causing sudden intense stress, such as earthquakes, can affect the sex ratio at birth. PMID:24592082
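
    For readers who want to reproduce the style of comparison described above, a minimal sketch using a chi-square test on a 2x2 table of births by region and sex is shown below; the counts are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical monthly birth counts (illustrative only):
# rows = region (affected, unaffected), columns = sex (male, female)
table = [[5120, 5050],     # affected region
         [20480, 19650]]   # unaffected region

chi2, p, dof, expected = chi2_contingency(table)
sr_affected = table[0][0] / sum(table[0])
sr_unaffected = table[1][0] / sum(table[1])
print(f"secondary sex ratio: affected={sr_affected:.3f}, "
      f"unaffected={sr_unaffected:.3f}, chi2 p-value={p:.3f}")
```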

  12. Complexity in Size, Recurrence and Source of Historical Earthquakes and Tsunamis in Central Chile

    NASA Astrophysics Data System (ADS)

    Cisternas, M.

    2013-05-01

    Central Chile has a 470-year-long written earthquake history, the longest of any part of the country. Thanks to the early and continuous Spanish settlement of this part of Chile (32°- 35° S), records document destructive earthquakes and tsunamis in 1575, 1647, 1730, 1822, 1906 and 1985. This sequence has promoted the idea that central Chile's large subduction inter-plate earthquakes recur at regular intervals of about 80 years. The last of these earthquakes, in 1985, was even forecast as filling a seismic gap on the thrust boundary between the subducting Nazca Plate and the overriding South America Plate. Following this logic, the next large earthquake in metropolitan Chile will not occur until late in the 21st century. However, here I challenge this conclusion by reporting recently discovered historical evidence in Spain, Japan, Peru, and Chile. This new evidence augments the historical catalog in central Chile, strongly suggests that one of these earthquakes previously assumed to occur on the inter-plate interface in fact occurred elsewhere, and forces the conclusion that another of these earthquakes (and its accompanying tsunami) dwarfed the others. These findings complicate the task of assessing the hazard of future earthquakes in Chile's most populated region.

  13. Improvement of Earthquake Epicentral Locations Using T-Phases: Testing by Comparison With Surface Wave Relative Event Locations

    DTIC Science & Technology

    2001-10-01

    deployment of 51 ocean-bottom seismometers (OBS) on the seafloor spanning 800 km across the East Pacific Rise provides a unique opportunity to test the ... aftershock sequence of earthquakes at the northern end of the Easter microplate. In addition, for the larger earthquakes, we can compare relative ... ocean-bottom seismometers. OBJECTIVES: The objectives of this research are to explore the synergy between hydroacoustic and seismic techniques

  14. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a form of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model
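
    A toy sketch of the smoothed-seismicity idea described above (each past event contributes in proportion to its magnitude, decaying as a negative power of epicentral distance out to a cutoff); the kernel form, exponent, and cutoff are illustrative assumptions, not the dissertation's calibrated values.

```python
import numpy as np

def smoothed_rate_density(eq_lon, eq_lat, eq_mag, grid_lon, grid_lat,
                          power=1.5, r_max_km=300.0):
    """Toy smoothed-seismicity kernel: each past earthquake contributes a term
    proportional to its magnitude that decays as distance**(-power), truncated
    at r_max_km. Flat-earth distances; all parameter values are illustrative."""
    km_per_deg = 111.0
    dx = (grid_lon - eq_lon[:, None]) * km_per_deg * np.cos(np.radians(grid_lat))
    dy = (grid_lat - eq_lat[:, None]) * km_per_deg
    r = np.hypot(dx, dy)
    kernel = np.where((r > 1.0) & (r < r_max_km),
                      eq_mag[:, None] * r ** (-power), 0.0)
    return kernel.sum(axis=0)

# Example: three past events contributing to two grid points (made-up numbers)
eq_lon = np.array([100.0, 101.0, 102.5])
eq_lat = np.array([30.0, 30.5, 31.0])
eq_mag = np.array([5.6, 6.1, 5.4])
print(smoothed_rate_density(eq_lon, eq_lat, eq_mag,
                            np.array([100.5, 103.0]), np.array([30.2, 31.2])))
```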

  15. Reducing False Alarms of Annual Forecast in the Central China North-South Seismic Belt by Reverse Tracing of Precursors (RTP) Using the Pattern Informatics (PI) `Hotspots'

    NASA Astrophysics Data System (ADS)

    Zhang, Shengfeng; Wu, Zhongliang; Jiang, Changsheng

    2017-06-01

    The annual consultation on the likelihood of earthquakes in the next year, the `Annual Consultation Meeting', has been one of the most important forward forecast experiments organized by the China Earthquake Administration (CEA) since the 1970s, in which annual alarm regions are identified by an expert panel considering multi-disciplinary `anomalies'. In such annual forecasts, one of the problems in need of a further technical solution is false alarms. To tackle this problem, the concept of `reverse tracing of precursors (RTP)' is applied to the annual consultation, as a temporal continuation and spatial extension of the work of Zhao et al. (Pure Appl Geophys 167:783-800, 2010). The central China north-south seismic belt (in connection with the CSEP testing region) is selected as the testing region for this approach. Applying the concept of RTP, for an annual alarm region delineated by the Annual Consultation Meeting, the distribution of `hotspots' of the pattern informatics (PI) method, which targets the 5-year-scale seismic hazard, is considered. The `hit', or successful forecast, of the annual seismic hazard is shown to be related to sufficient coverage of the `hotspots' within the annual alarm region. The ratio of the area of the `hotspots' to the whole area of the annual alarm region is thus used to identify the false alarms, which have few `hotspots'. The results for the years 2004-2012 show that using a threshold of 17% can remove 34% (13 of 38) of the false alarms without losing any successful hits (6 in that period).
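
    A minimal sketch of the coverage criterion described above: compute the fraction of the annual alarm region covered by PI "hotspots" and flag alarms that fall below the 17% threshold; the boolean grids and the helper name are illustrative, not the authors' code.

```python
import numpy as np

def hotspot_coverage(alarm_mask, hotspot_mask):
    """Fraction of an annual alarm region covered by PI 'hotspots'.
    Both inputs are boolean grids on the same mesh (assumed)."""
    alarm_cells = alarm_mask.sum()
    if alarm_cells == 0:
        return 0.0
    return np.logical_and(alarm_mask, hotspot_mask).sum() / alarm_cells

# Toy example: flag the alarm as a likely false alarm if coverage < 17%
rng = np.random.default_rng(1)
alarm = rng.random((50, 50)) > 0.8
hotspots = rng.random((50, 50)) > 0.9
ratio = hotspot_coverage(alarm, hotspots)
verdict = "retain alarm" if ratio >= 0.17 else "flag as possible false alarm"
print(f"coverage = {ratio:.2%}; {verdict}")
```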

  16. Forecasting, Forecasting

    Treesearch

    Michael A. Fosberg

    1987-01-01

    Future improvements in the meteorological forecasts used in fire management will come from improvements in three areas: observational systems, forecast techniques, and postprocessing of forecasts and better integration of this information into the fire management process.

  17. Frog Swarms: Earthquake Precursors or False Alarms?

    PubMed Central

    Grant, Rachel A.; Conlan, Hilary

    2013-01-01

    Simple Summary Media reports linking unusual animal behaviour with earthquakes can potentially create false alarms and unnecessary anxiety among people that live in earthquake risk zones. Recently large frog swarms in China and elsewhere have been reported as earthquake precursors in the media. By examining international media reports of frog swarms since 1850 in comparison to earthquake data, it was concluded that frog swarms are naturally occurring dispersal behaviour of juveniles and are not associated with earthquakes. However, the media in seismic risk areas may be more likely to report frog swarms, and more likely to disseminate reports on frog swarms after earthquakes have occurred, leading to an apparent link between frog swarms and earthquakes. Abstract In short-term earthquake risk forecasting, the avoidance of false alarms is of utmost importance to preclude the possibility of unnecessary panic among populations in seismic hazard areas. Unusual animal behaviour prior to earthquakes has been reported for millennia but has rarely been scientifically documented. Recently large migrations or unusual behaviour of amphibians have been linked to large earthquakes, and media reports of large frog and toad migrations in areas of high seismic risk such as Greece and China have led to fears of a subsequent large earthquake. However, at certain times of year large migrations are part of the normal behavioural repertoire of amphibians. News reports of “frog swarms” from 1850 to the present day were examined for evidence that this behaviour is a precursor to large earthquakes. It was found that only two of 28 reported frog swarms preceded large earthquakes (Sichuan province, China in 2008 and 2010). All of the reported mass migrations of amphibians occurred in late spring, summer and autumn and appeared to relate to small juvenile anurans (frogs and toads). It was concluded that most reported “frog swarms” are actually normal behaviour, probably caused by

  18. Hydraulic fracturing volume is associated with induced earthquake productivity in the Duvernay play

    NASA Astrophysics Data System (ADS)

    Schultz, R.; Atkinson, G.; Eaton, D. W.; Gu, Y. J.; Kao, H.

    2018-01-01

    A sharp increase in the frequency of earthquakes near Fox Creek, Alberta, began in December 2013 in response to hydraulic fracturing. Using a hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We show that induced earthquakes are associated with completions that used larger injection volumes (10⁴ to 10⁵ cubic meters) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have an insignificant association with seismic response. Further findings suggest that geological factors play a prominent role in seismic productivity, as evidenced by spatial correlations. Together, volume and geological factors account for ~96% of the variability in the induced earthquake rate near Fox Creek. This result is quantified by a seismogenic index–modified frequency-magnitude distribution, providing a framework to forecast induced seismicity.
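
    The seismogenic-index framework mentioned above relates expected induced event counts to injected volume; a minimal sketch under the commonly used form N(≥M) = V·10^(Σ − bM) is shown below, with purely illustrative parameter values rather than the Duvernay calibration.

```python
def expected_event_count(volume_m3, sigma, b, mag):
    """Expected number of induced events with magnitude >= mag for a given
    injected volume, under a seismogenic-index-style relation
    N(>=M) = V * 10**(Sigma - b*M). Parameter values are hypothetical."""
    return volume_m3 * 10 ** (sigma - b * mag)

# Illustrative numbers only (not the calibrated Fox Creek parameters):
print(expected_event_count(volume_m3=5e4, sigma=-2.0, b=1.0, mag=3.0))
```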

  19. Why local people did not present a problem in the 2016 Kumamoto earthquake, Japan though people accused in the 2009 L'Aquila earthquake?

    NASA Astrophysics Data System (ADS)

    Sugimoto, M.

    2016-12-01

    Risk communication has been a major issue among seismologists worldwide since the 2009 L'Aquila earthquake. Many people remember the seven researchers, the "L'Aquila 7", who were prosecuted in Italy. Seismologists have stressed that it is impossible to predict an earthquake with today's science and technology, and have taken part in more outreach activities. "In a subsequent inquiry of the handling of the disaster, seven members of the Italian National Commission for the Forecast and Prevention of Major Risks were accused of giving 'inexact, incomplete and contradictory' information about the danger of the tremors prior to the main quake. On 22 October 2012, six scientists and one ex-government official were convicted of multiple manslaughter for downplaying the likelihood of a major earthquake six days before it took place. They were each sentenced to six years' imprisonment (Wikipedia)." The six scientists were ultimately found not guilty. The 2016 Kumamoto earthquake hit Kyushu, Japan, in April. The seismological situations of the 2016 Kumamoto earthquake and the 2009 L'Aquila earthquake are very similar. The foreshock of 14 April 2016 was Mj 6.5 and Mw 6.2; the mainshock was Mj 7.3 and Mw 7.0. The Japan Meteorological Agency (JMA) initially identified the foreshock as the mainshock before the mainshock occurred. Forty-one people died in the mainshock, yet local people in Japan did not accuse the scientists. Kumamoto had experienced few large earthquakes for around 100 years, so people in Kyushu were not well prepared to interpret earthquake information. What are the differences between Japan and Italy? We draw lessons about outreach activities for scientists from this case.

  20. Noise Reduction of Ocean-Bottom Pressure Data Toward Real-Time Tsunami Forecasting

    NASA Astrophysics Data System (ADS)

    Tsushima, H.; Hino, R.

    2008-12-01

    We discuss a method of noise reduction for ocean-bottom pressure data to be fed into the near-field tsunami forecasting scheme proposed by Tsushima et al. [2008a]. In their scheme, the pressure data are processed in real time as follows: (1) removing ocean tide components by subtracting the sea-level variation computed from a theoretical tide model, (2) applying a low-pass digital filter to remove high-frequency fluctuations due to seismic waves, and (3) removing the DC offset and linear-trend component to determine a baseline of relative sea level. However, this simple method is not always successful in extracting tsunami waveforms from the data when the observed amplitude is ~1 cm. For disaster mitigation, accurate forecasting of small tsunamis is as important as that of large tsunamis. Since small tsunami events occur frequently, successful forecasting of those events is critical for maintaining public confidence in tsunami warnings. As a test case, we applied the data processing described above to bottom-pressure records containing a tsunami with amplitude less than 1 cm generated by the 2003 off-Fukushima earthquake in the Japan Trench subduction zone. The observed pressure variation due to the ocean tide is well explained by the tide signals calculated from the NAO99Jb model [Matsumoto et al., 2000]. However, the tide components estimated from the pressure data by BAYTAP-G [Tamura et al., 1991] are more appropriate for predicting and removing the ocean tide signals. In the pressure data after removing the tide variations, there remain pressure fluctuations with frequencies ranging from about 0.1 to 1 mHz and amplitudes of ~10 cm. These fluctuations distort the estimation of the zero level and linear trend used to define the relative sea-level variation, which is treated as the tsunami waveform in the subsequent analysis. Since the linear trend is estimated from the data prior to the origin time of the earthquake, an artificial linear trend is
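
    A minimal sketch of the three-step processing described above (tide removal, low-pass filtering, offset/trend removal); the filter order, cutoff, and synthetic record are illustrative assumptions, not the operational settings of Tsushima et al.

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

def extract_tsunami(pressure, tide_model, fs, cutoff_hz=0.01):
    """Sketch of the three-step processing (illustrative settings):
    (1) remove modeled tide, (2) low-pass to suppress seismic-wave fluctuations,
    (3) remove offset and linear trend to define a relative sea-level baseline."""
    residual = pressure - tide_model                        # step 1
    b, a = butter(4, cutoff_hz / (0.5 * fs), btype="low")   # step 2
    smoothed = filtfilt(b, a, residual)
    return detrend(smoothed, type="linear")                 # step 3

# Synthetic demonstration: 1 Hz sampling, small tsunami on top of tide + noise
fs = 1.0
t = np.arange(0, 6 * 3600, 1 / fs)
tide = 0.5 * np.sin(2 * np.pi * t / (12.42 * 3600))
tsunami = 0.01 * np.exp(-((t - 10000) / 600) ** 2)
record = tide + tsunami + 0.002 * np.random.randn(t.size)
waveform = extract_tsunami(record, tide, fs)
print(f"recovered peak amplitude ~ {waveform.max():.3f} m")
```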

  1. The wireless networking system of Earthquake precursor mobile field observation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Teng, Y.; Wang, X.; Fan, X.; Wang, X.

    2012-12-01

    A mobile field observation network can record and transmit large amounts of data reliably and in real time, strengthen physical-signal observations in specific regions and during specific periods, and thereby improve monitoring capacity and anomaly-tracking capability. Because current earthquake precursor observation points are numerous and widely scattered, the networking technology is based on the McWILL broadband wireless access system. The communication system for earthquake precursor mobile field observation transmits large amounts of data reliably and in real time from the measuring points to the monitoring center, through the connections between the instruments, the wireless access system, the broadband wireless access network and the precursor mobile observation management center system, thereby implementing remote instrument monitoring and data transmission. At present, this mobile field observation network technology has been applied to fluxgate-magnetometer array geomagnetic observations at Tianzhu, Xichang and Xinjiang, where it has provided real-time monitoring of the working status of instruments deployed over a large area during the last two to three years of large-scale field operation. It can therefore provide geomagnetic field data for locally refined regions and high-quality observational data for impending-earthquake tracking and forecasting. Although wireless networking technology is well suited to mobile field observation, being simple and flexible to deploy, packet loss can occur when transmitting large amounts of observational data because of relatively weak wireless signals and narrow bandwidth. For high-sampling-rate instruments, this project uses data compression, which effectively solves the problem of packet loss in data transmission; control commands, status data and observational data are transmitted with different priorities and by different means, which keeps the packet loss rate within

  2. Are seismic hazard assessment errors and earthquake surprises unavoidable?

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2013-04-01

    Why do earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships that are commonly used to assess seismic hazard. The time span of physically reliable seismic history is still only a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes any reliable probabilistic statement about narrowly localized seismic hazard premature. Moreover, the seismic evidence accumulated to date clearly demonstrates that many of the empirical relations commonly accepted in the early history of instrumental seismology prove erroneous when tests of statistical significance are applied. Seismic events, including mega-earthquakes, cluster and display behaviors that are far from independent or periodic. Their distribution in space is possibly fractal and certainly far from uniform, even in a single segment of a fault zone. Such a situation contradicts the generally accepted assumptions used for analytically tractable or computer simulations and complicates the design of reliable methodologies for realistic earthquake hazard assessment, as well as the search for and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can mislead toward scientifically groundless applications, which is unwise and extremely dangerous in assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes unfortunately discloses the gross inadequacy of this "probabilistic" product, which appears unacceptable for any kind of responsible seismic risk evaluation and knowledgeable disaster prevention. The self-evident shortcomings and failures of GSHAP call on all earthquake scientists and engineers to urgently revise the global seismic hazard maps from first principles, including the background methodologies involved, such that there becomes: (a) a

  3. Preliminary Report Summarizes Tsunami Impacts and Lessons Learned from the September 7, 2017, M8.1 Tehuantepec Earthquake

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Ramirez-Herrera, M. T.; Dengler, L. A.; Miller, K.; LaDuke, Y.

    2017-12-01

    The preliminary tsunami impacts from the September 7, 2017, M8.1 Tehuantepec Earthquake have been summarized in the following report: https://www.eeri.org/wp-content/uploads/EERI-Recon-Rpt-090717-Mexico-tsunami_fn.pdf. Although the tsunami impacts were not as significant as those from the earthquake itself (98 fatalities and 41,000 homes damaged), the following are highlights and lessons learned: The Tehuantepec earthquake was one of the largest down-slab normal faulting events ever recorded. This situation complicated the tsunami forecast since forecast methods and pre-event modeling are primarily associated with megathrust earthquakes where the most significant tsunamis are generated. Adding non-megathrust source modeling to the tsunami forecast databases of conventional warning systems should be considered. Offshore seismic and tsunami hazard analyses using past events should incorporate the potential for large earthquakes occurring along sources other than the megathrust boundary. From an engineering perspective, initial reports indicate there was only minor tsunami damage along the Mexico coast. There was damage to Marina Chiapas where floating docks overtopped their piles. Increasing pile heights could reduce the potential for damage to floating docks. Tsunami warning notifications did not get to the public in time to assist with evacuation. Streamlining the messaging in Mexico from the warning system directly to the public should be considered. And, for local events, preparedness efforts should place emphasis on responding to feeling the earthquake and not waiting to be notified. Although the U.S. tsunami warning centers were timely with their international and domestic messaging, there were some issues with how those messages were presented and interpreted. The use of a "Tsunami Threat" banner on the new main warning center website created confusion with emergency managers in the U.S. where no tsunami threat was expected to exist. Also, some U.S. states and

  4. Development of Real-time Tsunami Inundation Forecast Using Ocean Bottom Tsunami Networks along the Japan Trench

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Yamamoto, N.; Suzuki, W.; Hirata, K.; Nakamura, H.; Kunugi, T.; Kubo, T.; Maeda, T.

    2015-12-01

    In the 2011 Tohoku earthquake, whose huge tsunami claimed a great many lives, the initial tsunami forecast based on hypocenter information estimated using seismic data on land greatly underestimated the tsunami. From this lesson, NIED is now constructing S-net (Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench), which consists of 150 ocean-bottom observatories with seismometers and pressure gauges (tsunamimeters) linked by fiber optic cables. To take full advantage of S-net, we develop a new methodology for real-time tsunami inundation forecasting using ocean-bottom observation data and construct a prototype system that implements the developed forecasting method for the Pacific coast of Chiba prefecture (Sotobo area). We employ a database-based approach because inundation is a strongly non-linear phenomenon and its calculation cost is high. We prepare a tsunami scenario bank in advance by constructing possible tsunami sources and calculating the tsunami waveforms at S-net stations, coastal tsunami heights and tsunami inundation on land. To calculate the inundation for the target Sotobo area, we construct a 10-m-mesh precise elevation model with coastal structures. Based on sensitivity analyses, we construct a tsunami scenario bank that efficiently covers possible tsunami scenarios affecting the Sotobo area. A real-time forecast is carried out by selecting from the scenario bank several possible scenarios that can well explain the real-time tsunami data observed at S-net. An advantage of our method is that tsunami inundations are estimated directly from the actual tsunami data without any source information, which may have large estimation errors. In addition to the forecast system, we develop Web services, APIs, and smartphone applications and refine them through social experiments to provide real-time tsunami observation and forecast information in an easy-to-understand way, urging people to evacuate.

  5. Forecasting the (un)productivity of the 2014 M 6.0 South Napa aftershock sequence

    USGS Publications Warehouse

    Llenos, Andrea L.; Michael, Andrew J.

    2017-01-01

    The 24 August 2014 Mw 6.0 South Napa mainshock produced fewer aftershocks than expected for a California earthquake of its magnitude. In the first 4.5 days, only 59 M≥1.8 aftershocks occurred, the largest of which was an M 3.9 that happened a little over two days after the mainshock. We investigate the aftershock productivity of the South Napa sequence and compare it with other M≥5.5 California strike‐slip mainshock–aftershock sequences. While the productivity of the South Napa sequence is among the lowest, northern California mainshocks generally have fewer aftershocks than mainshocks further south, although the productivities vary widely in both regions. An epidemic‐type aftershock sequence (ETAS) model (Ogata, 1988) fit to Napa seismicity from 1980 to 23 August 2014 fits the sequence well and suggests that low‐productivity sequences are typical of this area. Utilizing regional variations in productivity could improve operational earthquake forecasting (OEF) by improving the model used immediately after the mainshock. We show this by comparing the daily rate of M≥2 aftershocks to forecasts made with the generic California model (Reasenberg and Jones, 1989; hereafter, RJ89), RJ89 models with productivity updated daily, a generic California ETAS model, an ETAS model based on premainshock seismicity, and ETAS models updated daily following the mainshock. RJ89 models for which only the productivity is updated provide better forecasts than the generic RJ89 California model, and the Napa‐specific ETAS models forecast the aftershock rates more accurately than either generic model. Therefore, forecasts that use localized initial parameters and that rapidly update the productivity may be better for OEF than using a generic model and/or updating all parameters.
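
    As background for the forecast comparison described above, a minimal sketch of a Reasenberg-Jones-style aftershock rate is shown below; the parameter values are only indicative of a generic California model, not the Napa-specific or daily-updated fits discussed in the paper.

```python
def rj_daily_rate(t_days, mag_main, mag_min, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg & Jones (1989)-style aftershock rate
    lambda(t, >=M) = 10**(a + b*(Mmain - M)) * (t + c)**(-p).
    Parameter values are indicative of a 'generic California' model only."""
    return 10 ** (a + b * (mag_main - mag_min)) * (t_days + c) ** (-p)

# Expected M>=2 aftershock rate one day after an M 6.0 mainshock (illustrative)
print(f"{rj_daily_rate(1.0, 6.0, 2.0):.1f} events/day")
```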

  6. An energy dependent earthquake frequency-magnitude distribution

    NASA Astrophysics Data System (ADS)

    Spassiani, I.; Marzocchi, W.

    2017-12-01

    The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been experimentally well validated in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish, because it takes time to recover the level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amount of available energy. Our study therefore aims to explore a possible modification of the classical G-R distribution by including the dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is, when integrating over infinite available energy, the G-R law must be recovered. Finally, we apply the proposed generalization of the G-R law to different seismic catalogs to show how it works and its differences from the classical G-R law.
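
    For reference, the classical G-R form that the proposed distribution must recover in the large-area limit can be written as follows; the energy-dependent modification itself is described only qualitatively in the abstract, so no specific functional form is reproduced here.

```latex
% Classical Gutenberg-Richter law (reference form):
N(\geq m) = 10^{\,a - b m},
\qquad
P(M \geq m \mid M \geq m_0) = 10^{-b\,(m - m_0)} .
```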

  7. M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model

    USGS Publications Warehouse

    Parsons, Thomas E.

    2006-01-01

     Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
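
    A back-of-the-envelope sketch of the renewal-model reasoning above: the recovery time is the coseismic stress drop divided by the tectonic stressing rate; the numbers below are illustrative, not the paper's finite-element results.

```python
def recovery_time_years(stress_drop_mpa, stressing_rate_mpa_per_yr):
    """Time for tectonic loading to recover a coseismic stress drop, used here
    as a proxy for earthquake recurrence (illustrative values only)."""
    return stress_drop_mpa / stressing_rate_mpa_per_yr

# e.g. a 3 MPa stress drop recovered at 0.02 MPa/yr -> ~150-year recurrence
print(recovery_time_years(3.0, 0.02))
```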

  8. Two critical tests for the Critical Point earthquake

    NASA Astrophysics Data System (ADS)

    Tzanis, A.; Vallianatos, F.

    2003-04-01

    release rate) event(s). Again, the absence of a concrete case history complicates anyone’s ability to make solid inferences. In conclusion, our observations can be considered to be critical tests of the critical point / stress transfer earthquake model. If the expected earthquakes occur, then it is possible that we have a powerful tool. If not, we should contemplate the possibility that this approach has limited predictive capacity and is unsafe in evaluating seismic hazard. The answer is pending and the question is open for discussion.

  9. Earthquake magnitude estimation using the τ c and P d method for earthquake early warning systems

    NASA Astrophysics Data System (ADS)

    Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang

    2013-10-01

    Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important and also the most difficult parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by KiK-net in Japan, and on aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τc and Pd methods. The standard deviations of the magnitude estimates from these two formulas are ±0.65 and ±0.56, respectively. The Pd value can also be used to estimate the peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. In order to ensure the stability and reliability of the magnitude estimation results, we propose a compatibility test based on the nature of these two parameters. The reliability of the early warning information is significantly improved through this test.
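
    A minimal sketch of the standard τc and Pd definitions used above, computed from the first few seconds of P-wave displacement and velocity; the regression coefficients that map these parameters to magnitude (fit to the KiK-net and Wenchuan data in the paper) are not reproduced here.

```python
import numpy as np

def tau_c_and_pd(disp, vel, dt, window_s=3.0):
    """tau_c = 2*pi*sqrt( int u^2 dt / int u'^2 dt ) and Pd = max |u| over the
    first window_s seconds after the P arrival (standard definitions)."""
    n = int(window_s / dt)
    u, v = np.asarray(disp)[:n], np.asarray(vel)[:n]
    r = np.trapz(u ** 2, dx=dt) / np.trapz(v ** 2, dx=dt)
    return 2.0 * np.pi * np.sqrt(r), np.abs(u).max()

# Synthetic check: for u(t) = A*sin(2*pi*f*t), tau_c should return ~1/f
dt, f = 0.01, 2.0
t = np.arange(0.0, 3.0, dt)
u = 1e-4 * np.sin(2 * np.pi * f * t)
v = 1e-4 * 2 * np.pi * f * np.cos(2 * np.pi * f * t)
tau_c, pd = tau_c_and_pd(u, v, dt)
print(f"tau_c ~ {tau_c:.2f} s (expected {1 / f:.2f} s), Pd = {pd:.1e} m")
```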

  10. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    NASA Astrophysics Data System (ADS)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m³/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time over which the data range for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and its inverse, the return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = Cx^(−β), where p(x) is the cumulative number of events with size equal to or greater than x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years
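
    A minimal sketch of fitting the cumulative power-law form p(x) = Cx^(−β) by log-log regression; the synthetic loss data and the simple estimator are illustrative, not the study's datasets or fitting procedure.

```python
import numpy as np

def fit_power_scaling(sizes):
    """Fit p(x) = C * x**(-beta), the cumulative number of events with size >= x,
    by linear regression in log-log space (simple illustrative estimator)."""
    x = np.sort(np.asarray(sizes, dtype=float))
    p = np.arange(len(x), 0, -1)          # count of events with size >= x
    slope, log_c = np.polyfit(np.log10(x), np.log10(p), 1)
    return -slope, 10 ** log_c            # (beta, C)

# Synthetic Pareto-distributed losses (illustrative only)
rng = np.random.default_rng(3)
losses = (rng.pareto(a=0.9, size=2000) + 1.0) * 1e6
beta, c = fit_power_scaling(losses)
print(f"estimated scaling exponent beta ~ {beta:.2f}")
```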

  11. Hydraulic Fracturing Completion Volume is Associated with Induced Earthquake Productivity in the Duvernay Play

    NASA Astrophysics Data System (ADS)

    Schultz, R.; Atkinson, G. M.; Eaton, D. W. S.; Gu, Y. J.; Kao, H.

    2017-12-01

    A sharp increase in the frequency of earthquakes near Fox Creek, Alberta began in December 2013 as a result of hydraulic fracturing completions in the Duvernay Formation. Using a newly compiled hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We find that induced earthquakes are associated with pad completions that used larger injection volumes (10⁴-10⁵ m³) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have limited or insignificant correlation with the seismic response. Further findings suggest that geological susceptibilities play a prominent role in seismic productivity, as evidenced by spatial correlations in the seismicity patterns. Together, volume and geological susceptibilities account for 96% of the variability in the induced earthquake rate near Fox Creek. We suggest this result is fit by a modified Gutenberg-Richter earthquake frequency-magnitude distribution which provides a conceptual framework with which to forecast induced seismicity hazard.

  12. Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts

    NASA Astrophysics Data System (ADS)

    Christensen, Hannah; Moroz, Irene; Palmer, Tim

    2015-04-01

    Forecast verification is important across scientific disciplines as it provides a framework for evaluating the performance of a forecasting system. In the atmospheric sciences, probabilistic skill scores are often used for verification as they provide a way of unambiguously ranking the performance of different probabilistic forecasts. In order to be useful, a skill score must be proper -- it must encourage honesty in the forecaster, and reward forecasts which are reliable and which have good resolution. A new score, the Error-spread Score (ES), is proposed which is particularly suitable for evaluation of ensemble forecasts. It is formulated with respect to the moments of the forecast. The ES is confirmed to be a proper score, and is therefore sensitive to both resolution and reliability. The ES is tested on forecasts made using the Lorenz '96 system, and found to be useful for summarising the skill of the forecasts. The European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system (EPS) is evaluated using the ES. Its performance is compared to a perfect statistical probabilistic forecast -- the ECMWF high resolution deterministic forecast dressed with the observed error distribution. This generates a forecast that is perfectly reliable if considered over all time, but which does not vary from day to day with the predictability of the atmospheric flow. The ES distinguishes between the dynamically reliable EPS forecasts and the statically reliable dressed deterministic forecasts. Other skill scores are tested and found to be comparatively insensitive to this desirable forecast quality. The ES is used to evaluate seasonal range ensemble forecasts made with the ECMWF System 4. The ensemble forecasts are found to be skilful when compared with climatological or persistence forecasts, though this skill is dependent on region and time of year.

  13. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  14. Relating stress models of magma emplacement to volcano-tectonic earthquakes

    NASA Astrophysics Data System (ADS)

    Vargas-Bracamontes, D.; Neuberg, J.

    2007-12-01

    Among the various types of seismic signals linked to volcanic processes, volcano-tectonic earthquakes are probably the earliest precursors of volcanic eruptions. Understanding their relationship with magma emplacement can provide insight into the mechanisms of magma transport at depth and assist in the ultimate goal of forecasting eruptions. Volcano-tectonic events have been observed to occur on faults that experience increases in Coulomb stress changes as the result of magma intrusions. To simulate stress changes associated with magmatic injections, we test different models of volcanic sources in an elastic half-space. For each source model, we look at several aspects that influence the stress conditions of the magmatic system such as the regional tectonic setting, the effect of varying the elastic parameters of the media, the evolution of the magma with time, as well as the volume and rheology of the ascending magma.

  15. Rapid estimate of earthquake source duration: application to tsunami warning.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier

    2016-04-01

    We present a method for estimating the source duration of the fault rupture, based on the high-frequency envelope of teleseismic P waves, inspired by the original work of Ni et al. (2005). The main interest of this seismic parameter is to detect abnormally slow ruptures that are characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated by comparison with two other independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process, which determines the source time function (M. Vallée et al., 2011). The estimated source duration is also compared with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, numerical simulations of tsunamis depend strongly on the source estimate: the better the source estimate, the better the tsunami forecast. The source duration is not directly injected into the numerical tsunami simulations, because the kinematics of the source are presently ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake occurring in the shallower part of the subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will be smaller while the slip is larger, like a 'compact' source (Okal and Hébert, 2007). Conversely, a rapid 'snappy' earthquake with poor tsunami excitation will be characterized by a higher rigidity modulus and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J

  16. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior—an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  17. A Simple Model for the Earthquake Cycle Combining Self-Organized Criticality with Critical Point Behavior

    NASA Astrophysics Data System (ADS)

    Newman, W. I.; Turcotte, D. L.

    2002-12-01

    We have studied a hybrid model combining the forest-fire model with the site-percolation model in order to better understand the earthquake cycle. We consider a square array of sites. At each time step, a "tree" is dropped on a randomly chosen site and is planted if the site is unoccupied. When a cluster of "trees" spans the array (a percolating cluster), all the trees in the cluster are removed ("burned") in a "fire." The removal of the cluster is analogous to a characteristic earthquake and planting "trees" is analogous to increasing the regional stress. The clusters are analogous to the metastable regions of a fault over which an earthquake rupture can propagate once triggered. We find that the frequency-area statistics of the metastable regions are power-law with a negative exponent of two (as in the forest-fire model). This is analogous to the Gutenberg-Richter distribution of seismicity. This "self-organized critical behavior" can be explained in terms of an inverse cascade of clusters. Individual trees move from small to larger clusters until they are destroyed. This inverse cascade of clusters is self-similar and the power-law distribution of cluster sizes has been shown to have an exponent of two. We have quantified the forecasting of the spanning fires using error diagrams. The assumption that "fires" (earthquakes) are quasi-periodic has moderate predictability. The density of trees gives an improved degree of predictability, while the size of the largest cluster of trees provides a substantial improvement in forecasting a "fire."
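
    A toy implementation of the hybrid model described above, assuming nearest-neighbour site percolation and using scipy.ndimage.label to detect spanning clusters; the grid size and number of steps are arbitrary choices, not those of the study.

```python
import numpy as np
from scipy.ndimage import label

def step(grid, rng):
    """One step of the hybrid model: plant a 'tree' at a random site if empty;
    if a cluster then spans the array, remove it (a 'fire', i.e. a characteristic
    earthquake) and return its size; otherwise return 0. Toy implementation."""
    n = grid.shape[0]
    i, j = rng.integers(n), rng.integers(n)
    grid[i, j] = 1
    labels, n_clusters = label(grid)          # 4-connected clusters
    for lab in range(1, n_clusters + 1):
        rows, cols = np.where(labels == lab)
        spans = (rows.min() == 0 and rows.max() == n - 1) or \
                (cols.min() == 0 and cols.max() == n - 1)
        if spans:
            size = rows.size
            grid[labels == lab] = 0
            return size
    return 0

rng = np.random.default_rng(5)
grid = np.zeros((64, 64), dtype=int)
fires = [s for s in (step(grid, rng) for _ in range(20000)) if s > 0]
print(f"{len(fires)} spanning 'fires', largest {max(fires) if fires else 0} sites")
```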

  18. An Integrated Monitoring System of Pre-earthquake Processes in Peloponnese, Greece

    NASA Astrophysics Data System (ADS)

    Karastathis, V. K.; Tsinganos, K.; Kafatos, M.; Eleftheriou, G.; Ouzounov, D.; Mouzakiotis, E.; Papadopoulos, G. A.; Voulgaris, N.; Bocchini, G. M.; Liakopoulos, S.; Aspiotis, T.; Gika, F.; Tselentis, A.; Moshou, A.; Psiloglou, B.

    2017-12-01

    One of the controversial issues in contemporary seismology is whether radon accumulation monitoring can provide reliable earthquake forecasting. Although there are many examples in the literature showing radon increases before earthquakes, skepticism arises from the instability of the measurements, false alarms, difficulties in interpretation caused by weather influences (e.g. rainfall) and the lack of an irrefutable theoretical background for the phenomenon. We have developed and extensively tested a multi-parameter network aimed at studying pre-earthquake processes, operating as part of an integrated monitoring system in the high-seismicity area of the Western Hellenic Arc (SW Peloponnese, Greece). The prototype consists of four components: a real-time monitoring system of radon accumulation, consisting of three gamma-radiation detectors [NaI(Tl) scintillators]; a nine-station seismic array to monitor the microseismicity in the offshore area of the Hellenic Arc, with data processing based on F-K and beam-forming techniques; real-time weather monitoring systems for air temperature, relative humidity, precipitation and pressure; and thermal radiation emission from AVHRR/NOAA-18 polar-orbit satellite observations. The project revolves around the idea of jointly studying the emission of radon, which has proven in many cases to be a reliable indicator of the possible time of an event, together with the accurate location of foreshock activity detected by the seismic array, which can be a more reliable indicator of the possible position of an event. In parallel, a satellite thermal-anomaly detection technique has been used for monitoring larger-magnitude events (a possible indicator for strong events, M ≥ 5.0). The first year of operation revealed a number of pre-seismic radon variation anomalies before several local earthquakes (M > 3.6). The radon increases systematically before the larger events. Details about the overall performance

  19. Fractal analysis of the spatial distribution of earthquakes along the Hellenic Subduction Zone

    NASA Astrophysics Data System (ADS)

    Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter

    2014-05-01

    The Hellenic Subduction Zone (HSZ) is the most seismically active region in Europe. Many destructive earthquakes have taken place along the HSZ in the past. The evolution of such active regions is expressed through seismicity and is characterized by complex phenomenology. Understanding the tectonic evolution process and the physical state of subducting regimes is crucial in earthquake prediction. In recent years there has been growing interest in an approach to seismicity based on the science of complex systems (Papadakis et al., 2013; Vallianatos et al., 2012). In this study we calculate the fractal dimension of the spatial distribution of earthquakes along the HSZ and aim to understand the significance of the obtained values for the tectonic and geodynamic evolution of this area. We use the external seismic sources provided by Papaioannou and Papazachos (2000) to create a dataset for the subduction zone. Following these authors, we define five seismic zones. We then construct an earthquake dataset based on the updated and extended earthquake catalogue for Greece and the adjacent areas by Makropoulos et al. (2012), covering the period 1976-2009. The fractal dimension of the spatial distribution of earthquakes is calculated for each seismic zone and for the HSZ as a unified system using the box-counting method (Turcotte, 1997; Robertson et al., 1995; Caneva and Smirnov, 2004). Moreover, the variation of the fractal dimension is demonstrated in different time windows. These spatiotemporal variations could be used as an additional index to inform us about the physical state of each seismic zone. The use of the fractal dimension as a precursor in earthquake forecasting appears to be a very interesting direction for future work. Acknowledgements: Giorgos Papadakis wishes to acknowledge the Greek State Scholarships Foundation (IKY). References: Caneva, A., Smirnov, V., 2004. Using the fractal dimension of earthquake distributions and the
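
    A minimal box-counting sketch of the kind of fractal-dimension estimate described above; the flat grid binning, box sizes, and synthetic epicentre cloud are illustrative assumptions, not the authors' processing of the Makropoulos et al. (2012) catalogue.

```python
import numpy as np

def box_counting_dimension(lon, lat, box_sizes_deg):
    """Estimate the box-counting dimension of an epicentre set: count occupied
    boxes N(s) for several box sizes s and fit log N(s) = -D*log s + const."""
    counts = []
    for s in box_sizes_deg:
        ix = np.floor(np.asarray(lon) / s).astype(int)
        iy = np.floor(np.asarray(lat) / s).astype(int)
        counts.append(len(set(zip(ix.tolist(), iy.tolist()))))
    slope, _ = np.polyfit(np.log(box_sizes_deg), np.log(counts), 1)
    return -slope

# Example with a synthetic uniform cloud of epicentres (D should come out near 2)
rng = np.random.default_rng(7)
lon = 20.0 + 6.0 * rng.random(5000)
lat = 34.0 + 3.0 * rng.random(5000)
print(f"D ~ {box_counting_dimension(lon, lat, [0.1, 0.2, 0.4, 0.8, 1.6]):.2f}")
```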

  20. Automated Magnitude Measures, Earthquake Source Modeling, VFM Discriminant Testing and Summary of Current Research.

    DTIC Science & Technology

    1979-02-01

    Systems, Science and Software report SSS-R-79-3933: Automated Magnitude Measures, Earthquake Source Modeling, VFM Discriminant Testing and Summary of Current Research. Authors: T. C. Bache, S. M. Day, J. M.

  1. Characteristics of foreshock activity inferred from the JMA earthquake catalog

    NASA Astrophysics Data System (ADS)

    Tamaribuchi, Koji; Yagi, Yuji; Enescu, Bogdan; Hirano, Shiro

    2018-05-01

    We investigated the foreshock activity characteristics using the Japan Meteorological Agency Unified Earthquake Catalog for the last 20 years. Using the nearest-neighbor distance approach, we systematically and objectively classified the earthquakes into clustered and background seismicity. We further categorized the clustered events into foreshocks, mainshocks, and aftershocks and analyzed their statistical features such as the b-value of the frequency-magnitude distribution. We found that the b-values of the foreshocks are lower than those of the aftershocks. This b-value difference suggested that not only the stochastic cascade effect but also the stress changes/aseismic processes may contribute to the mainshock-triggering process. However, forecasting the mainshock based on b-value analysis may be difficult. In addition, the rate of foreshock occurrence in all clusters (with two or more events) was nearly constant (30-40%) over a wide magnitude range. The difference in the magnitude, time, and epicentral distance between the mainshock and largest foreshock followed a power law. We inferred that the distinctive characteristics of foreshocks can be better revealed using the improved catalog, which includes the micro-earthquake information.
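
    A minimal sketch of the maximum-likelihood b-value estimate (Aki 1965) that underlies the foreshock/aftershock comparison above; the synthetic magnitudes and completeness threshold are illustrative, and the clustering/declustering step of the paper is omitted.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965; Utsu binning correction via dm):
    b = log10(e) / (mean(M) - (m_c - dm/2)) for events with M >= m_c."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic foreshock/aftershock sub-catalogues with different true b-values
# (continuous magnitudes, so no binning correction is applied: dm=0.0)
rng = np.random.default_rng(11)
fore = 1.0 + rng.exponential(scale=np.log10(np.e) / 0.8, size=2000)  # b ~ 0.8
aft = 1.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=2000)   # b ~ 1.0
print(f"b(foreshocks) ~ {b_value_mle(fore, 1.0, dm=0.0):.2f}, "
      f"b(aftershocks) ~ {b_value_mle(aft, 1.0, dm=0.0):.2f}")
```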

  2. Dynamics of folding: Impact of fault bend folds on earthquake cycles

    NASA Astrophysics Data System (ADS)

    Sathiakumar, S.; Barbot, S.; Hubbard, J.

    2017-12-01

    Earthquakes in subduction zones and subaerial convergent margins are some of the largest in the world. So far, forecasts of future earthquakes have primarily relied on assessing past earthquakes to look for seismic gaps and slip deficits. However, the roles of fault geometry and off-fault plasticity are typically overlooked. We use structural geology (fault-bend folding theory) to inform fault modeling in order to better understand how deformation is accommodated on the geological time scale and through the earthquake cycle. Fault bends in megathrusts, like those proposed for the Nepal Himalaya, will induce folding of the upper plate. This introduces changes in the slip rate on different fault segments, and therefore on the loading rate at the plate interface, profoundly affecting the pattern of earthquake cycles. We develop numerical simulations of slip evolution under rate-and-state friction and show that this effect introduces segmentation of the earthquake cycle. In crustal dynamics, it is challenging to describe the dynamics of fault-bend folds, because the deformation is accommodated by small amounts of slip parallel to bedding planes ("flexural slip"), localized on axial surfaces, i.e. folding axes pinned to fault bends. We use dislocation theory to describe the dynamics of folding along these axial surfaces, using analytic solutions that provide displacement and stress kernels to simulate the temporal evolution of folding and assess the effects of folding on earthquake cycles. Studies of the 2015 Gorkha earthquake, Nepal, have shown that fault geometry can affect earthquake segmentation. Here, we show that in addition to the fault geometry, the actual geology of the rocks in the hanging wall of the fault also affects critical parameters, including the loading rate on parts of the fault, based on fault-bend folding theory. Because loading velocity controls the recurrence time of earthquakes, these two effects together are likely to have a strong impact on the

  3. Validation Test Report for the 1/8 deg Global Navy Coastal Ocean Model Nowcast/Forecast System

    DTIC Science & Technology

    2007-01-24

    Validation Test Report for the 1/8° Global Navy Coastal Ocean Model Nowcast/Forecast System. Charlie N. Barron, A. Birol Kara, Robert C. Rhodes, Clark Rowley.

  4. Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model

    NASA Astrophysics Data System (ADS)

    Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza

    2017-08-01

    Producing accurate seismic hazard maps and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to being interpretable, fuzzy predictors can capture both nonlinear and chaotic behavior of the data, even when the number of observations is limited. The earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. According to our results, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
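
    The Molchan procedure referenced above scores a forecast by the trade-off between the fraction of space-time placed under alarm and the fraction of target earthquakes missed. A minimal sketch, assuming equal-area cells and hypothetical hazard scores and event counts:

        import numpy as np

        def molchan_curve(scores, events):
            """scores: hazard score per cell (higher = more hazardous);
            events: number of target earthquakes observed in each cell.
            Returns (tau, nu): alarm fraction and miss rate as the alarm
            threshold is lowered cell by cell."""
            order = np.argsort(scores)[::-1]          # alarm the highest-score cells first
            hits = np.cumsum(events[order])           # earthquakes captured by the alarm set
            tau = np.arange(1, len(scores) + 1) / len(scores)
            nu = 1.0 - hits / events.sum()
            return tau, nu

        # usage with synthetic values
        rng = np.random.default_rng(0)
        scores = rng.random(1000)
        events = rng.poisson(0.05 * (1 + scores), size=1000)
        tau, nu = molchan_curve(scores, events)
        # a skillful forecast plots below the random-guessing diagonal nu = 1 - tau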

  5. Coulomb Failure Stress Accumulation in Nepal After the 2015 Mw 7.8 Gorkha Earthquake: Testing Earthquake Triggering Hypothesis and Evaluating Seismic Hazards

    NASA Astrophysics Data System (ADS)

    Xiong, N.; Niu, F.

    2017-12-01

    A Mw 7.8 earthquake struck Gorkha, Nepal, on April 25, 2015, resulting in more than 8000 deaths and 3.5 million homeless. The earthquake initiated 70 km west of Kathmandu and propagated eastward, rupturing an area of approximately 150 km by 60 km. However, the earthquake failed to fully rupture the locked fault beneath the Himalaya, suggesting that the regions south of Kathmandu and west of the current rupture are still locked and that a much more powerful earthquake might occur in the future. Therefore, the seismic hazard of the unruptured region is of great concern. In this study, we investigated the Coulomb failure stress (CFS) accumulation on the unruptured fault transferred by the Gorkha earthquake and some nearby historical great earthquakes. First, we calculated the co-seismic CFS changes of the Gorkha earthquake on the nodal planes of 16 large aftershocks to quantitatively examine whether they were brought closer to failure by the mainshock. It is shown that at least 12 of the 16 aftershocks were encouraged by an increase in CFS of 0.1-3 MPa. The correspondence between the distribution of off-fault aftershocks and the pattern of increased CFS also validates the applicability of the earthquake triggering hypothesis in the thrust regime of Nepal. With this validation in hand, we calculated the co-seismic CFS change on the locked region imparted by the Gorkha earthquake and historical great earthquakes. A newly proposed ramp-flat-ramp-flat fault geometry model was employed, and the source parameters of historical earthquakes were computed with an empirical scaling relationship. A broad region south of Kathmandu and west of the current rupture was shown to be positively stressed, with CFS changes roughly ranging between 0.01 and 0.5 MPa. The maximum CFS increase (>1 MPa) was found in the updip segment south of the current rupture, implying a high seismic hazard. Since the locked region may be additionally stressed by the post-seismic relaxation of the lower
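
    The Coulomb failure stress change used in studies of this kind is commonly defined as

    \[
    \Delta \mathrm{CFS} = \Delta\tau + \mu'\,\Delta\sigma_n,
    \qquad
    \mu' = \mu\,(1 - B),
    \]

    where Δτ is the shear stress change resolved in the slip direction of the receiver fault, Δσ_n the normal stress change (positive for unclamping), μ the friction coefficient, and B Skempton's coefficient; this standard definition is given for orientation, and the paper's exact parameter choices may differ.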

  6. Forecasting probabilistic seismic shaking for greater Tokyo from 400 years of intensity observations

    USGS Publications Warehouse

    Bozkurt, S.B.; Stein, R.S.; Toda, S.

    2007-01-01

    The long recorded history of earthquakes in Japan affords an opportunity to forecast seismic shaking exclusively from past shaking. We calculate the time-averaged (Poisson) probability of severe shaking by using more than 10,000 intensity observations recorded since AD 1600 in a 350 km-wide box centered on Tokyo. Unlike other hazard-assessment methods, source and site effects are included without modeling, and we do not need to know the size or location of any earthquake nor the location and slip rate of any fault. The two key assumptions are that the slope of the observed frequency-intensity relation at every site is the same, and that the 400-year record is long enough to encompass the full range of seismic behavior. Tests we conduct here suggest that both assumptions are sound. The resulting 30-year probability of IJMA ≥ 6 shaking (approximately PGA ≥ 0.4 g or MMI ≥ IX) is 30%-40% in Tokyo, Kawasaki, and Yokohama, and 10%-15% in Chiba and Tsukuba. This result means that there is a 30% chance that 4 million people will be subjected to IJMA ≥ 6 shaking during an average 30-year period. We also produce exceedance maps of PGA for building-code regulations, and calculate short-term hazard associated with a hypothetical catastrophe bond. Our results resemble an independent assessment developed from conventional seismic hazard analysis for greater Tokyo. © 2007, Earthquake Engineering Research Institute.
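
    The time-averaged (Poisson) probabilities quoted above follow from the annual exceedance rate λ(I0) estimated from each site's frequency-intensity relation; in the standard form

    \[
    P(\text{at least one } I \ge I_0 \text{ in } T \text{ years}) = 1 - e^{-\lambda(I_0)\,T},
    \]

    a 30-year probability of 30%-40% corresponds to an annual rate of roughly 0.012-0.017 per year (this relation is standard and is given here only to make the quoted numbers concrete).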

  7. A prototype operational earthquake loss model for California based on UCERF3-ETAS – A first look at valuation

    USGS Publications Warehouse

    Field, Edward; Porter, Keith; Milner, Kevin

    2017-01-01

    We present a prototype operational loss model based on UCERF3-ETAS, the third Uniform California Earthquake Rupture Forecast with an Epidemic Type Aftershock Sequence (ETAS) component. As such, UCERF3-ETAS represents the first earthquake forecast to relax fault segmentation assumptions and to include multi-fault ruptures, elastic rebound, and spatiotemporal clustering, all of which seem important for generating realistic and useful aftershock statistics. UCERF3-ETAS is nevertheless an approximation of the system, so usefulness will vary and potential value needs to be ascertained in the context of each application. We examine this question with respect to statewide loss estimates, exemplifying how risk can be elevated by orders of magnitude due to triggered events following various scenario earthquakes. Two important considerations are the probability gains, relative to loss likelihoods in the absence of main shocks, and the rapid decay of gains with time. Significant uncertainties and model limitations remain, so we hope this paper will inspire similar analyses with respect to other risk metrics to help ascertain whether operationalization of UCERF3-ETAS would be worth the considerable resources required.

  8. Forecasting forecast skill

    NASA Technical Reports Server (NTRS)

    Kalnay, Eugenia; Dalcher, Amnon

    1987-01-01

    It is shown that it is possible to predict the skill of numerical weather forecasts - a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite-data-impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems. When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is due to the fact that such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts, and does not allow for discrimination between them. However, when regional verifications were used, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.

  9. Integrated study of geophysical and biological anomalies before earthquakes (seismic and non-seismic), in Austria and Indonesia

    NASA Astrophysics Data System (ADS)

    Straka, Wolfgang; Assef, Rizkita; Faber, Robert; Ferasyi, Reza

    2015-04-01

    Earthquakes are commonly seen as unpredictable. Even when scientists believe an earthquake is likely, it is still hard to understand the indications observed, as well as their theoretical and practical implications. There is some controversy surrounding the concept of using animals as precursors of earthquakes. Nonetheless, several institutes at the University of Natural Resources and Life Sciences and the Vienna University of Technology (both Vienna, Austria), together with Syiah Kuala University (Banda Aceh) and Terramath Indonesia (Buleleng), both in Indonesia, cooperate in a long-term project, funded by Red Bull Media House, Salzburg, Austria, which aims to take a decisive step from anecdotal to scientific evidence of those interdependencies and to show their possible use in forecasting seismic hazard on a short-term basis. Though no conclusive research has yet been published, one idea underlying this study is that even if animals do not respond to specific geophysical precursors with enough notice to enable earthquake forecasting on that basis, they may at least enhance, in conjunction with other indications, the degree of certainty of a prediction of an impending earthquake. In Indonesia, indeed, before the great earthquakes of 2004 and 2005, ominous geophysical as well as biological phenomena occurred (but were recognized as precursors only in retrospect). Numerous comparable stories can be told from other times and regions. Nearly 2000 perceptible earthquakes (> M3.5) occur each year in Indonesia, and in 2007 the government launched a program, focused on West Sumatra, for investigating earthquake precursors. Therefore, Indonesia is an excellent target area for a study concerning possible interconnections between geophysical and biological earthquake precursors. Geophysical and atmospheric measurements and behavioral observation of several animal species (elephant, domestic cattle, water buffalo, chicken, rat, catfish) are conducted in three areas

  10. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  11. A simple Lagrangian forecast system with aviation forecast potential

    NASA Technical Reports Server (NTRS)

    Petersen, R. A.; Homan, J. H.

    1983-01-01

    A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.

  12. The Ordered Network Structure and Prediction Summary for M≥7 Earthquakes in Xinjiang Region of China

    NASA Astrophysics Data System (ADS)

    Men, Ke-Pei; Zhao, Kai

    2014-12-01

    M ≥ 7 earthquakes have shown obvious commensurability and orderliness in the Xinjiang region of China and its adjacent areas since 1800. The main ordered intervals are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Guided by the information forecasting theory of Wen-Bo Weng and building on previous research results, we combine ordered network structure analysis with complex network techniques to summarize predictions of M ≥ 7 earthquakes, adding new information to further optimize the network and constructing 2D and 3D ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region over the past 210 years. Based on this, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. We also present a new prediction: the next two M ≥ 7 earthquakes in this region will probably occur around 2019-2020 and 2025-2026. The results suggest that large earthquakes occurring in a defined region can be predicted; the method of ordered network structure analysis produces satisfactory results for the mid- to long-term prediction of M ≥ 7 earthquakes.

  13. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014, the Earthworm Based Earthquake Alarm Reporting (eBEAR) system has been in operation and has been used to issue warnings to schools. In 2015 the system started to provide warnings to the public in Taiwan via television and cell phones. Online performance of the eBEAR system indicates that the average reporting times afforded by the system are approximately 15 s and 28 s for inland and offshore earthquakes, respectively. The eBEAR system can therefore, on average, provide more warning time than the current EEW system (3.2 s and 5.5 s for inland and offshore earthquakes, respectively). However, offshore earthquakes were usually located poorly because only P-wave arrivals are used in the eBEAR system. Additionally, in the early stage of an earthquake early warning, only a few stations are available, and the poor station coverage may explain why offshore earthquakes are difficult to locate accurately. In Geiger's inversion procedure for earthquake location, an initial hypocenter and origin time must be supplied to the location program. For the initial hypocenter, we defined a set of test locations in the offshore area instead of using the average of the locations of the triggered stations. We ran 20 instances of Geiger's method concurrently, each with a different pre-defined initial position, to locate earthquakes. We assume that if the pre-defined initial position of an instance is close to the true earthquake location, its processing time during the iteration procedure of Geiger's method should be less than that of the others. The results show that using pre-defined trial hypocenters in the inversion procedure improves the accuracy of offshore earthquake locations. This matters especially for the EEW system: in its initial stage, using only 3 or 5 stations to locate earthquakes may lead to poor results because of poor station coverage. In this study, the pre-defined trial locations provide a feasible way to improve the estimations of
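
    A minimal sketch of one Geiger-style location, assuming a homogeneous half-space with constant P velocity; the station geometry, arrival times, and starting trial hypocenter are hypothetical stand-ins for the pre-defined initial positions described above.

        import numpy as np

        def geiger_locate(stations, t_obs, x0, v=6.0, n_iter=10):
            """stations: (N, 3) x, y, z in km; t_obs: (N,) P arrival times in s;
            x0: initial guess [x, y, z, t0]. Iterative linearized least squares."""
            m = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                d = np.linalg.norm(stations - m[:3], axis=1)      # source-station distances
                r = t_obs - (m[3] + d / v)                        # travel-time residuals
                # partial derivatives of arrival time w.r.t. x, y, z, t0
                G = np.hstack([-(stations - m[:3]) / (v * d[:, None]), np.ones((len(d), 1))])
                dm, *_ = np.linalg.lstsq(G, r, rcond=None)        # least-squares update
                m += dm
                if np.linalg.norm(dm[:3]) < 1e-4:                 # convergence on location
                    break
            return m

        # usage: synthetic example with 8 stations and a known source
        rng = np.random.default_rng(4)
        stations = np.column_stack([rng.uniform(-50, 50, 8), rng.uniform(-50, 50, 8), np.zeros(8)])
        true = np.array([10.0, -5.0, 12.0, 0.0])
        t_obs = true[3] + np.linalg.norm(stations - true[:3], axis=1) / 6.0
        solution = geiger_locate(stations, t_obs, x0=[0.0, 0.0, 8.0, 0.0])

    Running several such inversions from different pre-defined starting points and comparing their convergence behavior is the essence of the trial-hypocenter strategy described in the abstract.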

  14. Testing for the 'predictability' of dynamically triggered earthquakes in The Geysers geothermal field

    NASA Astrophysics Data System (ADS)

    Aiken, Chastity; Meng, Xiaofeng; Hardebeck, Jeanne

    2018-03-01

    The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet, it is unknown if dynamic triggering of earthquakes is 'predictable' or whether dynamic triggering could lead to a potential hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote triggering to determine whether or not triggering is in any way foreseeable. We find that, at present, triggering in The Geysers is not easily 'predictable' in terms of when and where based on observable physical conditions. However, triggered earthquake magnitude positively correlates with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock-aftershock sequences. Thus, we may be able to 'predict' what size earthquakes to expect at The Geysers following a large distant earthquake.

  15. Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    observations and several years of archived forecasts, overall empirical error distributions, termed 'overall error', were derived for each gauge for a range of relevant forecast lead times. b) The error distributions vary strongly with the hydrometeorological situation; therefore, a subdivision into the hydrological cases 'low flow', 'rising flood', 'flood', and 'flood recession' was introduced. c) For the sake of numerical compression, theoretical distributions were fitted to the empirical distributions using the method of moments. Here, the normal distribution was generally best suited. d) Further data compression was achieved by representing the distribution parameters as a function (second-order polynomial) of lead time. In general, the 'overall error' obtained from the above procedure is most useful in regions where large human impact occurs and where the influence of the meteorological forecast is limited. In upstream regions, however, forecast uncertainty is strongly dependent on the current predictability of the atmosphere, which is contained in the spread of an ensemble forecast. Including this dynamically in the hydrological forecast uncertainty estimation requires prior elimination of the contribution of the weather forecast to the 'overall error'. This was achieved by calculating long series of hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The resulting error distribution is termed 'model error' and can be applied to hydrological ensemble forecasts, where ensemble rainfall forecasts are used as forcing. The concept will be illustrated by examples (good and bad ones) covering a wide range of catchment sizes, hydrometeorological regimes and quality of hydrological model calibration. The methodology to combine the static and dynamic shares of uncertainty will be presented in part II of this study.

  16. Earthquake Early Warning in Japan - Result of recent two years -

    NASA Astrophysics Data System (ADS)

    Shimoyama, T.; Doi, K.; Kiyomoto, M.; Hoshiba, M.

    2009-12-01

    Japan Meteorological Agency (JMA) started to provide Earthquake Early Warning (EEW) to the general public in October 2007. This was preceded by the provision of EEW, beginning in August 2006, to a limited number of users who understand the technical limits of EEW and can utilize it for automatic control. Earthquake Early Warning in Japan means information on the estimated amplitude and arrival time of strong ground motion after a fault rupture has occurred; in other words, the EEW provided by JMA is defined as a forecast of strong ground motion issued before the strong motion arrives. EEW of JMA is intended to enable advance countermeasures against disasters caused by strong ground motion, by providing a warning message of anticipated strong ground motion before the S-wave arrival. However, because the available lead time is very short, specific measures and arrangements are needed to deliver EEW rapidly and to use it properly. - EEW is issued to the general public when a maximum seismic intensity of 5 lower (JMA scale) or greater is expected. - The EEW message contains the origin time, the epicentral region name, and the names of areas (each unit is about 1/3 to 1/4 of a prefecture) where seismic intensity 4 or greater is expected. Expected arrival time is not included because it differs substantially even within one unit area. - EEW is broadcast through the broadcasting media (TV, radio and City Administrative Disaster Management Radio), and is delivered to cellular phones through a cell broadcast system. For those who would like more precise estimates and information on smaller earthquakes at the locations of their properties, JMA allows designated private companies to provide forecasts of strong ground motion, containing the estimated seismic intensity as well as the S-wave arrival time at arbitrary places, under JMA’s technical assurance. From October 2007 to August 2009, JMA issued 11 warnings to the general public expecting seismic intensity “5 lower” or greater, including M=7.2 inland

  17. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  18. Application of Geostatistical Methods and Machine Learning for spatio-temporal Earthquake Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Schaefer, A. M.; Daniell, J. E.; Wenzel, F.

    2014-12-01

    Earthquake clustering is an increasingly important part of general earthquake research, especially in terms of seismic hazard assessment and earthquake forecasting and prediction approaches. Foreshocks, aftershocks, mainshocks, and secondary mainshocks are identified and defined using a point-based spatio-temporal clustering algorithm originating from classical machine learning. This can be further applied for declustering purposes, to separate background seismicity from triggered seismicity. The results are interpreted and processed to assemble 3D (x, y, t) earthquake clustering maps based on smoothed seismicity records in space and time. In addition, multi-dimensional Gaussian functions are used to capture clustering parameters for spatial distribution and dominant orientations. Clusters are further processed using methodologies originating from geostatistics, which have mostly been applied and developed in mining projects during the last decades. A 2.5D variogram analysis is applied to identify spatio-temporal homogeneity in terms of earthquake density and energy output. The results are then interpolated using kriging to provide an accurate mapping solution for clustering features. As a case study, seismic data of New Zealand and the United States are used, covering events since the 1950s, from which an earthquake cluster catalogue is assembled for most of the major events, including a detailed analysis of the Landers and Christchurch sequences.
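
    A minimal sketch of the empirical semivariogram underlying such a variogram analysis, here isotropic and applied to a hypothetical earthquake-density field (the paper's 2.5D space-time version would add a temporal lag dimension):

        import numpy as np

        def empirical_variogram(coords, values, lags, tol):
            """coords: (N, 2) positions; values: (N,) field values (e.g. smoothed
            earthquake density); lags: lag-bin centers; tol: half bin width."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # pairwise distances
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2                   # pairwise semivariances
            gamma = []
            for h in lags:
                mask = (d > h - tol) & (d <= h + tol) & (d > 0)
                gamma.append(sq[mask].mean() if mask.any() else np.nan)
            return np.array(gamma)

        # usage with synthetic data: the fitted variogram range and sill would feed a kriging model
        rng = np.random.default_rng(1)
        coords = rng.uniform(0, 100, size=(300, 2))
        values = np.sin(coords[:, 0] / 20.0) + 0.1 * rng.standard_normal(300)
        gamma = empirical_variogram(coords, values, lags=np.arange(5, 60, 5), tol=2.5)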

  19. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results of the earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed by the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we investigate the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to observe any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
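
    For reference, the temporal ETAS conditional intensity referred to above is commonly written as (a standard parameterization; the exact form used by the authors may differ):

    \[
    \lambda(t \mid \mathcal{H}_t) = \mu + \sum_{i:\,t_i < t} \frac{K\, e^{\alpha (m_i - m_c)}}{(t - t_i + c)^{p}},
    \]

    where μ is the background rate, m_c the completeness magnitude, and K, α, c, p the triggering parameters. Stochastic declustering assigns each event a probability of belonging to the background (μ term) or to the triggered cascade (sum term), which is how mainshocks and aftershocks are separated probabilistically.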

  20. Sensitivity of Coulomb stress changes to slip models of source faults: A case study for the 2011 Mw 9.0 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.

    2017-12-01

    Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfer and to retrospectively explain stress triggering among earthquake sequences, realistic and reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient, and Skempton's coefficient need to be considered exhaustively. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models, from which we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that the near-field Coulomb stress changes differ considerably among slip models, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nankai, Tonankai and Tokai are insignificant, approximately only 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.
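
    The Coulomb rate-and-state (CRS) seismicity-rate calculation mentioned above is usually based on Dieterich's expression for the rate following a stress step; one standard form (given for orientation, not necessarily the authors' exact implementation) is

    \[
    R(t) = \frac{r}{\,1 + \left[\exp\!\left(-\dfrac{\Delta \mathrm{CFS}}{A\sigma}\right) - 1\right]\exp\!\left(-\dfrac{t}{t_a}\right)},
    \]

    where r is the background seismicity rate, Aσ the constitutive parameter, and t_a the aftershock relaxation time; uncertainty in ΔCFS therefore propagates directly into the forecast rate R(t) and its standard error.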

  1. Development and application of an atmospheric-hydrologic-hydraulic flood forecasting model driven by TIGGE ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Bao, Hongjun; Zhao, Linna

    2012-02-01

    A coupled atmospheric-hydrologic-hydraulic ensemble flood forecasting model, driven by The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) data, has been developed for flood forecasting over the Huaihe River. The incorporation of numerical weather prediction (NWP) information into flood forecasting systems may increase forecast lead time from a few hours to a few days. A single NWP model forecast from a single forecast center, however, is insufficient, as it involves considerable non-predictable uncertainties and leads to a high number of false alarms. The availability of global ensemble NWP systems through TIGGE offers a new opportunity for flood forecasting. The Xinanjiang model used for hydrological rainfall-runoff modeling and the one-dimensional unsteady flow model applied to channel flood routing are coupled with ensemble weather predictions based on the TIGGE data from the Canadian Meteorological Centre (CMC), the European Centre for Medium-Range Weather Forecasts (ECMWF), the UK Met Office (UKMO), and the US National Centers for Environmental Prediction (NCEP). The developed ensemble flood forecasting model is applied to the 2007 flood season as a test case. The test case covers the upper reaches of the Huaihe River above Lutaizi station, with flood diversion and retarding areas. The input flood discharge hydrograph from the main channel to the flood diversion area is estimated with a fixed split ratio of the main channel discharge. The flow inside the flood retarding area is modeled as a reservoir using the water balance method. The Muskingum method is used for flood routing in the flood diversion area. A probabilistic discharge and flood inundation forecast is provided as the end product to study the potential benefits of using the TIGGE ensemble forecasts. The results demonstrate satisfactory flood forecasting with clear signals of probability of floods up to a
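
    A minimal sketch of the Muskingum routing step mentioned above; the storage constant K, weighting factor X, time step, and the inflow hydrograph are hypothetical values, not those calibrated in the study.

        import numpy as np

        def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0, outflow0=None):
            """Route an inflow hydrograph through a reach with Muskingum
            parameters K (h) and X (-) at time step dt (h)."""
            denom = 2.0 * K * (1.0 - X) + dt
            c0 = (dt - 2.0 * K * X) / denom
            c1 = (dt + 2.0 * K * X) / denom
            c2 = (2.0 * K * (1.0 - X) - dt) / denom      # c0 + c1 + c2 = 1
            out = np.empty_like(inflow, dtype=float)
            out[0] = inflow[0] if outflow0 is None else outflow0
            for i in range(1, len(inflow)):
                out[i] = c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[i - 1]
            return out

        # usage: route a simple triangular flood hydrograph (m^3/s)
        inflow = np.concatenate([np.linspace(50, 500, 10), np.linspace(500, 50, 20)])
        outflow = muskingum_route(inflow)   # attenuated and delayed peak

    In an ensemble setting, the same routing would be applied to each TIGGE-driven runoff member to yield the probabilistic discharge forecast.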

  2. Triggering of repeating earthquakes in central California

    USGS Publications Warehouse

    Wu, Chunquan; Gomberg, Joan; Ben-Naim, Eli; Johnson, Paul

    2014-01-01

    Dynamic stresses carried by transient seismic waves have been found capable of triggering earthquakes instantly in various tectonic settings. Delayed triggering may be even more common, but the mechanisms are not well understood. Catalogs of repeating earthquakes, earthquakes that recur repeatedly at the same location, provide ideal data sets to test the effects of transient dynamic perturbations on the timing of earthquake occurrence. Here we employ a catalog of 165 families containing ~2500 total repeating earthquakes to test whether dynamic perturbations from local, regional, and teleseismic earthquakes change recurrence intervals. The distance to the earthquake generating the perturbing waves is a proxy for the relative potential contributions of static and dynamic deformations, because static deformations decay more rapidly with distance. Clear changes followed the nearby 2004 Mw 6 Parkfield earthquake, so we study only repeaters prior to its origin time. We apply a Monte Carlo approach to compare the observed number of shortened recurrence intervals following dynamic perturbations with the distribution of this number estimated for randomized perturbation times. We examine the comparison for a series of dynamic stress peak amplitude and distance thresholds. The results suggest a weak correlation between dynamic perturbations in excess of ~20 kPa and shortened recurrence intervals, for both nearby and remote perturbations.
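
    A minimal sketch of the Monte Carlo comparison described above: count recurrence intervals that are both 'shortened' and preceded by a dynamic perturbation, then compare with the same count under randomized perturbation times. All inputs, the 30-day association window, and the below-median definition of 'shortened' are hypothetical simplifications.

        import numpy as np

        def randomization_test(start_times, intervals, perturb_times,
                               window=30.0, n_sim=10000, seed=0):
            """start_times: start of each recurrence interval (days);
            intervals: observed recurrence intervals (days);
            perturb_times: occurrence times of perturbing earthquakes."""
            rng = np.random.default_rng(seed)
            shortened = intervals < np.median(intervals)       # crude 'shortened' criterion

            def count(ptimes):
                perturbed = np.zeros(len(start_times), dtype=bool)
                for tp in ptimes:
                    perturbed |= (start_times >= tp) & (start_times <= tp + window)
                return np.sum(shortened & perturbed)

            observed = count(perturb_times)
            t_min, t_max = start_times.min(), start_times.max()
            null = np.array([count(rng.uniform(t_min, t_max, size=len(perturb_times)))
                             for _ in range(n_sim)])
            p_value = np.mean(null >= observed)                # one-sided significance
            return observed, p_value

    Repeating the test over a grid of dynamic-stress and distance thresholds, as in the study, amounts to recomputing the perturbation list for each threshold pair.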

  3. Increasing critical sensitivity of the Load/Unload Response Ratio before large earthquakes with identified stress accumulation pattern

    NASA Astrophysics Data System (ADS)

    Yu, Huai-zhong; Shen, Zheng-kang; Wan, Yong-ge; Zhu, Qing-yong; Yin, Xiang-chu

    2006-12-01

    The Load/Unload Response Ratio (LURR) method was proposed for short-to-intermediate-term earthquake prediction [Yin, X.C., Chen, X.Z., Song, Z.P., Yin, C., 1995. A New Approach to Earthquake Prediction — The Load/Unload Response Ratio (LURR) Theory, Pure Appl. Geophys., 145, 701-715]. This method is based on measuring the ratio between the Benioff strains released during periods of loading and unloading, corresponding to the Coulomb failure stress change induced by Earth tides on optimally oriented faults. According to the method, the LURR time series usually climbs to an anomalously high peak prior to the occurrence of a large earthquake. Previous studies have indicated that the size of the critical seismogenic region selected for LURR measurements has a great influence on the evaluation of LURR. In this study, we replace the circular region usually adopted in LURR practice with the area within which the tectonic stress change would most affect the Coulomb stress on a potential seismogenic fault of a future event. The Coulomb stress change before a hypothetical earthquake is calculated based on a simple back-slip dislocation model of the event. This new algorithm, combining the LURR method with our choice of identified area of increased Coulomb stress, is devised to improve the sensitivity of LURR as a measure of the criticality of stress accumulation before a large earthquake. Retrospective tests of this algorithm on four large earthquakes that occurred in California over the last two decades show remarkable enhancement of the LURR precursory anomalies. For some strong events of lesser magnitude that occurred in the same neighborhoods and during the same time periods, significant anomalies are found if circular areas are used, and are not found if increased Coulomb stress areas are used for LURR data selection. The unique feature of this algorithm may provide stronger constraints on forecasts of the size and location of future large events.
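
    The LURR parameter itself, as described above, is the ratio of Benioff strain released during loading to that released during unloading; one common form is

    \[
    Y = \frac{\displaystyle\sum_{i \in \text{loading}} E_i^{1/2}}{\displaystyle\sum_{j \in \text{unloading}} E_j^{1/2}},
    \]

    where E is the seismic energy of each event (so E^{1/2} is the Benioff strain) and the loading and unloading periods are defined by the sign of the tidally induced Coulomb failure stress change on the chosen fault orientation; a value of Y well above 1 is the anomalous peak the method looks for.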

  4. Predicted Surface Displacements for Scenario Earthquakes in the San Francisco Bay Region

    USGS Publications Warehouse

    Murray-Moraleda, Jessica R.

    2008-01-01

    In the immediate aftermath of a major earthquake, the U.S. Geological Survey (USGS) will be called upon to provide information on the characteristics of the event to emergency responders and the media. One such piece of information is the expected surface displacement due to the earthquake. In conducting probabilistic hazard analyses for the San Francisco Bay Region, the Working Group on California Earthquake Probabilities (WGCEP) identified a series of scenario earthquakes involving the major faults of the region, and these were used in their 2003 report (hereafter referred to as WG03) and the recently released 2008 Uniform California Earthquake Rupture Forecast (UCERF). Here I present a collection of maps depicting the expected surface displacement resulting from those scenario earthquakes. The USGS has conducted frequent Global Positioning System (GPS) surveys throughout northern California for nearly two decades, generating a solid baseline of interseismic measurements. Following an earthquake, temporary GPS deployments at these sites will be important to augment the spatial coverage provided by continuous GPS sites for recording postseismic deformation, as will the acquisition of Interferometric Synthetic Aperture Radar (InSAR) scenes. The information provided in this report allows one to anticipate, for a given event, where the largest displacements are likely to occur. This information is valuable both for assessing the need for further spatial densification of GPS coverage before an event and prioritizing sites to resurvey and InSAR data to acquire in the immediate aftermath of the earthquake. In addition, these maps are envisioned to be a resource for scientists in communicating with emergency responders and members of the press, particularly during the time immediately after a major earthquake before displacements recorded by continuous GPS stations are available.

  5. Earthquake triggering by seismic waves following the landers and hector mine earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.A.; Bodin, P.; Harris, R.A.

    2001-01-01

    The proximity and similarity of the 1992, magnitude 7.3 Landers and 1999, magnitude 7.1 Hector Mine earthquakes in California permit testing of earthquake triggering hypotheses not previously possible. The Hector Mine earthquake confirmed inferences that transient, oscillatory 'dynamic' deformations radiated as seismic waves can trigger seismicity rate increases, as proposed for the Landers earthquake [1-6]. Here we quantify the spatial and temporal patterns of the seismicity rate changes [7]. The seismicity rate increase was to the north for the Landers earthquake and primarily to the south for the Hector Mine earthquake. We suggest that rupture directivity results in elevated dynamic deformations north and south of the Landers and Hector Mine faults, respectively, as evident in the asymmetry of the recorded seismic velocity fields. Both dynamic and static stress changes seem important for triggering in the near field with dynamic stress changes dominating at greater distances. Peak seismic velocities recorded for each earthquake suggest the existence of, and place bounds on, dynamic triggering thresholds. These thresholds vary from a few tenths to a few MPa in most places, depend on local conditions, and exceed inferred static thresholds by more than an order of magnitude. At some sites, the onset of triggering was delayed until after the dynamic deformations subsided. Physical mechanisms consistent with all these observations may be similar to those that give rise to liquefaction or cyclic fatigue.

  6. Design and Prototype Implementation of non-Triggered Database-driven Real-time Tsunami Forecast System using Multi-index Method

    NASA Astrophysics Data System (ADS)

    Yamamoto, N.; Aoi, S.; Suzuki, W.; Hirata, K.; Takahashi, N.; Kunugi, T.; Nakamura, H.

    2016-12-01

    We have launched a new project to develop a real-time tsunami inundation forecast system for the Pacific coast of Chiba prefecture (Kujukuri-Sotobo region), Japan (Aoi et al., 2015, AGU). In this study, we design a database-driven real-time tsunami forecast system using the multi-index method (Yamamoto et al., 2016, EPS) and implement a prototype system. In the previous study (Yamamoto et al., 2015, AGU), we assumed that the origin time of the tsunami was known before making a forecast based on comparing observed ocean-bottom pressure waveforms with calculated waveforms stored in the Tsunami Scenario Bank (TSB). As shown in the figure, we assume the scenario origin times by defining the scenario elapsed time τp used to compare observed and calculated waveforms. In this design, when several appropriate tsunami scenarios are selected by multiple indices (two variance reductions and a correlation coefficient), the system can make a tsunami forecast for the target coastal region using the selected tsunami scenarios, without any triggering information derived from observed seismic and/or tsunami data. In addition, we define the time range Tq shown in the figure for masking perturbations contaminated by ocean-acoustic and seismic waves on the observed pressure records (Saito, 2015, JpGU). Following the proposed design, we implement a prototype real-time tsunami inundation forecast system dedicated to the target coastal region, using ocean-bottom pressure data from the Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench (S-net) (Kanazawa et al., 2012, JpGU; Uehira et al., 2015, IUGG), which is constructed by the National Research Institute for Earth Science and Disaster Resilience (NIED). For the prototype system, we construct a prototype TSB using interplate earthquake fault models located along the Japan Trench (Mw 7.6-9.8), the Sagami Trough (Mw 7.6-8.6), and the Nankai Trough (Mw 7.6-8.6) as well as intraplate earthquake fault models (Mw 7.6-8.6) within
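
    A minimal sketch of two of the matching indices named above, variance reduction and the correlation coefficient between an observed pressure waveform and a scenario waveform from the bank; the waveforms and the exact normalization are hypothetical (the system's 'two variance reductions' are presumably normalized differently from one another).

        import numpy as np

        def variance_reduction(obs, calc):
            """1 means a perfect waveform match; 0 or below indicates a poor one."""
            return 1.0 - np.sum((obs - calc) ** 2) / np.sum(obs ** 2)

        def correlation_coefficient(obs, calc):
            return np.corrcoef(obs, calc)[0, 1]

        # usage with synthetic 1-Hz pressure waveforms
        t = np.arange(0.0, 600.0)
        obs = np.exp(-((t - 300.0) / 60.0) ** 2)            # observed pressure pulse
        calc = 0.9 * np.exp(-((t - 310.0) / 60.0) ** 2)     # one scenario from the bank
        vr = variance_reduction(obs, calc)
        cc = correlation_coefficient(obs, calc)
        # scenarios exceeding preset thresholds on all indices would be selected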

  7. Testing for the ‘predictability’ of dynamically triggered earthquakes in Geysers Geothermal Field

    USGS Publications Warehouse

    Aiken, Chastity; Meng, Xiaofeng; Hardebeck, Jeanne L.

    2018-01-01

    The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet, it is unknown if dynamic triggering of earthquakes is ‘predictable’ or whether dynamic triggering could lead to a potential hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote triggering to determine whether or not triggering is in any way foreseeable. We find that, at present, triggering in The Geysers is not easily ‘predictable’ in terms of when and where based on observable physical conditions. However, triggered earthquake magnitude positively correlates with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock–aftershock sequences. Thus, we may be able to ‘predict’ what size earthquakes to expect at The Geysers following a large distant earthquake.

  8. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed Central

    Kanamori, H

    1996-01-01

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding. PMID:11607657

  9. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed

    Kanamori, H

    1996-04-30

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

  10. Earthquakes and sea level - Space and terrestrial metrology on a changing planet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilham, R.

    1991-02-01

    A review is presented of the stability and scale of crustal deformation metrology which has particular relevance to monitoring deformation associated with sea level and earthquakes. Developments in space geodesy and crustal deformation metrology in the last two decades have the potential to acquire a homogeneous global data set for monitoring relative horizontal and vertical motions of the earth's surface to within several millimeters. New tools discussed for forecasting sea level rise and damaging earthquakes include: very long baseline interferometry, satellite laser ranging, the principles of GPS geodesy, and new sea level sensors. Space geodesy permits a unified global basis for future metrology of the earth, and the continued availability of the GPS is currently fundamental to this unification.

  11. Sun, Moon and Earthquakes

    NASA Astrophysics Data System (ADS)

    Kolvankar, V. G.

    2013-12-01

    During a study conducted to find the effect of Earth tides on the occurrence of earthquakes, for small areas [typically 1000 km × 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of the Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00 to 24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences of magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]: the left-hand panel provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.98, Mb 9.0, EQ count 376); the right-hand panel plots (EMD+SEM) vs. GMT timings for the same data, with all 376 events, including the main event, faithfully following the straight-line curve.

  12. Uncertainty forecasts improve weather-related decisions and attenuate the effects of forecast error.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2012-03-01

    Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather warning system is used. The work reported here tested the relative benefits of several forecast formats, comparing decisions made with and without uncertainty forecasts. In three experiments, participants assumed the role of a manager of a road maintenance company in charge of deciding whether to pay to salt the roads and avoid a potential penalty associated with icy conditions. Participants used overnight low temperature forecasts accompanied in some conditions by uncertainty estimates and in others by decision advice comparable to categorical warnings. Results suggested that uncertainty information improved decision quality overall and increased trust in the forecast. Participants with uncertainty forecasts took appropriate precautionary action and withheld unnecessary action more often than did participants using deterministic forecasts. When error in the forecast increased, participants with conventional forecasts were reluctant to act. However, this effect was attenuated by uncertainty forecasts. Providing categorical decision advice alone did not improve decisions. However, combining decision advice with uncertainty estimates resulted in the best performance overall. The results reported here have important implications for the development of forecast formats to increase compliance with severe weather warnings as well as other domains in which one must act in the face of uncertainty. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  13. Real-time Ensemble Flow Forecasts for a 2017 Mock Operation Test Trial of Forecast Informed Reservoir Operations for Lake Mendocino in Mendocino County, California

    NASA Astrophysics Data System (ADS)

    Delaney, C.; Mendoza, J.; Jasperse, J.; Hartman, R. K.; Whitin, B.; Kalansky, J.

    2017-12-01

    Forecast informed reservoir operations (FIRO) is a methodology that incorporates short to mid-range precipitation and flow forecasts to inform the flood operations of reservoirs. The Ensemble Forecast Operations (EFO) alternative is a probabilistic approach of FIRO that incorporates 15-day ensemble streamflow predictions (ESPs) made by NOAA's California-Nevada River Forecast Center (CNRFC). With the EFO approach, release decisions are made to manage forecasted risk of reaching critical operational thresholds. A water management model was developed for Lake Mendocino, a 111,000 acre-foot reservoir located near Ukiah, California, to conduct a mock operation test trial of the EFO alternative for 2017. Lake Mendocino is a dual use reservoir, which is owned and operated for flood control by the United States Army Corps of Engineers and is operated for water supply by the Sonoma County Water Agency. Due to recent changes in the operations of an upstream hydroelectric facility, this reservoir has suffered from water supply reliability issues since 2007. The operational trial utilized real-time ESPs prepared by the CNRFC and observed flow information to simulate hydrologic conditions in Lake Mendocino and a 50-mile downstream reach of the Russian River to the City of Healdsburg. Results of the EFO trial demonstrate a 6% increase in reservoir storage at the end of the trial period (May 10) relative to observed conditions. Additionally, model results show no increase in flows above flood stage for points downstream of Lake Mendocino. Results of this investigation and other studies demonstrate that the EFO alternative may be a viable flood control operations approach for Lake Mendocino and warrants further investigation through additional modeling and analysis.

  14. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth caused by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  15. A Test Case for the Source Inversion Validation: The 2014 ML 5.5 Orkney, South Africa Earthquake

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.; Ogasawara, H.; Boettcher, M. S.

    2017-12-01

    The ML 5.5 earthquake of August 5, 2014 occurred on a near-vertical strike-slip fault below abandoned and active gold mines near Orkney, South Africa. A dense network of surface and in-mine seismometers recorded the earthquake and its aftershock sequence. In-situ stress measurements and rock samples through the damage zone and rupture surface are anticipated to be available from the "Drilling into Seismogenic Zones of M2.0-M5.5 Earthquakes in South African gold mines" project (DSeis), which is currently progressing toward the rupture zone (Science, doi: 10.1126/science.aan6905). As of 24 July, 95% of the drilled core had been recovered from a 427 m section of the first hole, drilled from 2.9 km depth, with minimal core discing and borehole breakouts. A second hole is planned to intersect the fault at greater depth. Absolute differential stress will be measured along the holes, and the frictional characteristics of the recovered core will be determined in the lab. Surface seismic reflection data and exploration drilling from the surface down to the mining horizon at 3 km depth are also available to calibrate the velocity structure above the mining horizon and to image reflective geological boundaries and major faults below the mining horizon. The remarkable quality and range of geophysical data available for the Orkney earthquake make this event an ideal test case for the Source Inversion Validation community, using actual seismic data to determine the spatial and temporal evolution of earthquake rupture. We invite anyone with an interest in kinematic modeling to develop a rupture model for the Orkney earthquake. Seismic recordings of the earthquake and information on the faulting geometry can be found in Moyer et al. (2017, doi: 10.1785/0220160218). A workshop supported by the Southern California Earthquake Center will be held in the spring of 2018 to compare kinematic models. Those interested in participating in the modeling exercise and the workshop should contact the authors for additional

  16. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically consider threshold model specifications using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with an external threshold variable specification produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
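
    As an illustration of this kind of out-of-sample comparison, the sketch below fits a simple AR(p) benchmark by least squares and scores rolling one-step forecasts with RMSE. The series, lag order, and train/test split are placeholders, not the study's actual data or model specifications.

        import numpy as np

        def fit_ar(y, p):
            """Least-squares estimate of AR(p) coefficients (intercept first,
            then lag-1 ... lag-p coefficients)."""
            X = np.column_stack([np.ones(len(y) - p)] +
                                [y[p - k - 1:len(y) - k - 1] for k in range(p)])
            coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
            return coef

        def one_step_forecasts(y, p, n_test):
            """Re-fit on an expanding window and forecast one step ahead."""
            preds = []
            for t in range(len(y) - n_test, len(y)):
                coef = fit_ar(y[:t], p)
                lags = y[t - p:t][::-1]          # most recent observation first
                preds.append(coef[0] + coef[1:] @ lags)
            return np.array(preds)

        rng = np.random.default_rng(1)
        series = 5.0 + np.cumsum(rng.normal(0, 0.1, 400))   # placeholder series
        n_test = 50
        forecasts = one_step_forecasts(series, p=3, n_test=n_test)
        rmse = np.sqrt(np.mean((series[-n_test:] - forecasts) ** 2))
        print(f"out-of-sample RMSE: {rmse:.4f}")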

  17. Hydraulic fracturing volume is associated with induced earthquake productivity in the Duvernay play.

    PubMed

    Schultz, R; Atkinson, G; Eaton, D W; Gu, Y J; Kao, H

    2018-01-19

    A sharp increase in the frequency of earthquakes near Fox Creek, Alberta, began in December 2013 in response to hydraulic fracturing. Using a hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We show that induced earthquakes are associated with completions that used larger injection volumes (10^4 to 10^5 cubic meters) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have an insignificant association with seismic response. Further findings suggest that geological factors play a prominent role in seismic productivity, as evidenced by spatial correlations. Together, volume and geological factors account for ~96% of the variability in the induced earthquake rate near Fox Creek. This result is quantified by a seismogenic index-modified frequency-magnitude distribution, providing a framework to forecast induced seismicity. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
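
    A seismogenic-index style forecast of the kind referenced above can be sketched as follows; the index, b-value, and injected volumes used here are illustrative placeholders, not the Duvernay values reported in the paper.

        import numpy as np

        def expected_event_count(volume_m3, sigma, b, m_min):
            """Seismogenic-index forecast: N(M >= m_min) = V * 10**(sigma - b*m_min).

            volume_m3 : injected fluid volume in cubic meters
            sigma     : seismogenic index of the site (illustrative value here)
            b         : Gutenberg-Richter b-value
            m_min     : magnitude threshold of interest
            """
            return volume_m3 * 10.0 ** (sigma - b * m_min)

        # Illustrative completions spanning 10^4 to 10^5 cubic meters
        volumes = np.array([1e4, 3e4, 1e5])
        print(expected_event_count(volumes, sigma=-2.0, b=1.0, m_min=2.0))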

  18. Intermediate-term forecasting of aftershocks from an early aftershock sequence: Bayesian and ensemble forecasting approaches

    NASA Astrophysics Data System (ADS)

    Omi, Takahiro; Ogata, Yosihiko; Hirata, Yoshito; Aihara, Kazuyuki

    2015-04-01

    Because aftershock occurrences can cause significant seismic risks for a considerable time after the main shock, prospective forecasting of intermediate-term aftershock activity as soon as possible is important. The epidemic-type aftershock sequence (ETAS) model with the maximum likelihood estimate effectively reproduces general aftershock activity, including secondary or higher-order aftershocks, and can be employed for the forecasting. However, because we cannot always expect accurate parameter estimation from incomplete early aftershock data where many events are missing, such forecasting using only a single estimated parameter set (plug-in forecasting) can frequently perform poorly. Therefore, here we propose Bayesian forecasting that combines the forecasts by the ETAS model with various probable parameter sets given the data. By conducting forecasting tests of 1-month aftershock activity based on the first 1 day of data after the main shock, as an example of early intermediate-term forecasting, we show that the Bayesian forecasting performs better than the plug-in forecasting on average in terms of the log-likelihood score. Furthermore, to improve forecasting of large aftershocks, we apply a nonparametric (NP) model using magnitude data during the learning period and compare its forecasting performance with that of the Gutenberg-Richter (G-R) formula. We show that the NP forecast performs better than the G-R formula in some cases but worse in other cases. Therefore, robust forecasting can be obtained by employing an ensemble forecast that combines the two complementary forecasts. Our proposed method is useful for a stable unbiased intermediate-term assessment of aftershock probabilities.
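
    The core idea, averaging forecasts over many probable parameter sets rather than trusting one plug-in estimate, can be sketched as below. The rate function is a simplified Omori-type stand-in for a full ETAS intensity, and the posterior samples and weights are synthetic, not output of the authors' estimation procedure.

        import numpy as np

        def omori_rate(t, k, c, p):
            """Aftershock rate at time t (days) for one parameter set
            (a simplified stand-in for a full ETAS intensity)."""
            return k / (t + c) ** p

        def plugin_forecast(t_grid, params):
            """Expected number of aftershocks over the grid for a single
            parameter set (plug-in forecast)."""
            dt = np.diff(t_grid)
            return np.sum(omori_rate(t_grid[:-1], *params) * dt)

        def bayesian_forecast(t_grid, param_samples, weights):
            """Weighted average of plug-in forecasts over parameter samples."""
            forecasts = np.array([plugin_forecast(t_grid, p) for p in param_samples])
            return np.average(forecasts, weights=weights)

        # Synthetic posterior samples of (k, c, p) with equal weights
        rng = np.random.default_rng(2)
        samples = np.column_stack([rng.lognormal(3.0, 0.3, 200),    # k
                                   rng.lognormal(-2.0, 0.2, 200),   # c
                                   rng.normal(1.1, 0.05, 200)])     # p
        t_grid = np.linspace(1.0, 31.0, 301)   # forecast days 1-31 after main shock
        w = np.ones(len(samples)) / len(samples)
        print("plug-in :", plugin_forecast(t_grid, samples.mean(axis=0)))
        print("Bayesian:", bayesian_forecast(t_grid, samples, w))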

  19. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    NASA Astrophysics Data System (ADS)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare peak ground acceleration (PGA) predicted from first-alert solutions with that recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.

  20. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration (PGA) and peak ground velocity (PGV) from a previous study by Eshaghi et al. (Bulletin of the Seismological Society of America, 103, 2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for the average shear-wave velocity in the uppermost 30 m (Vs30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates from PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using
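
    A simplified sketch of magnitude estimation by inverting a GMPE of the generic form log10(PGV) = c0 + c1*M + c2*log10(R) is given below; the coefficients, station distances, and PGV values are placeholders and are not the GMPEs or data of this study.

        import numpy as np

        # Placeholder GMPE coefficients (NOT the coefficients derived in the study)
        C0, C1, C2 = -2.5, 0.8, -1.3

        def magnitude_from_pgv(pgv_cm_s, dist_km):
            """Invert log10(PGV) = C0 + C1*M + C2*log10(R) for M at each station,
            then average over stations for a network estimate."""
            m_station = (np.log10(pgv_cm_s) - C0 - C2 * np.log10(dist_km)) / C1
            return m_station.mean(), m_station.std()

        # Synthetic station observations (PGV in cm/s, hypocentral distance in km)
        pgv = np.array([12.0, 30.0, 8.0, 55.0])
        dist = np.array([180.0, 120.0, 250.0, 90.0])
        m_hat, m_spread = magnitude_from_pgv(pgv, dist)
        print(f"network magnitude estimate: {m_hat:.2f} +/- {m_spread:.2f}")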

  1. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    DTIC Science & Technology

    2008-09-01

    explosions from earthquakes, using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites ...Battone et al., 2002). For example, in Figure 1 we compare an earthquake and an explosion at each of four major test sites (rows), bandpass filtered...explosions as the frequency increases. Note also there are interesting differences between the test sites, indicating that emplacement conditions (depth

  2. Monitoring and Modeling: The Future of Volcanic Eruption Forecasting

    NASA Astrophysics Data System (ADS)

    Poland, M. P.; Pritchard, M. E.; Anderson, K. R.; Furtney, M.; Carn, S. A.

    2016-12-01

    Eruption forecasting typically uses monitoring data from geology, gas geochemistry, geodesy, and seismology to assess the likelihood of future eruptive activity. Occasionally, months to years of warning are possible from specific indicators (e.g., deep LP earthquakes, elevated CO2 emissions, and aseismic deformation) or a buildup in one or more monitoring parameters. More often, observable changes in unrest occur immediately before eruption, as magma is rising toward the surface. In some cases, little or no detectable unrest precedes eruptive activity. Eruption forecasts are usually based on the experience of volcanologists studying the activity, but two developing fields offer a potential leap beyond this practice. First, remote sensing data, which can track thermal, gas, and ash emissions, as well as surface deformation, are increasingly available, allowing statistically significant research into the characteristics of unrest. For example, analysis of hundreds of volcanoes indicates that deformation is a more common pre-eruptive phenomenon than thermal anomalies, and that most episodes of satellite-detected unrest are not immediately followed by eruption. Such robust datasets inform the second development—probabilistic models of eruption potential, especially those that are based on physical-chemical models of the dynamics of magma accumulation and ascent. Both developments are essential for refining forecasts and reducing false positives. For example, many caldera systems have not erupted but are characterized by unrest that, in another context, would elicit strong concern from volcanologists. More observations of this behavior and better understanding of the underlying physics of unrest will improve forecasts of such activity. While still many years from implementation as a forecasting tool, probabilistic physical-chemical models incorporating satellite data offer a complement to expert assessments that, together, can form a powerful forecasting approach.

  3. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  4. 12 May 2008 M = 7.9 Wenchuan, China, earthquake calculated to increase failure stress and seismicity rate on three major fault systems

    USGS Publications Warehouse

    Toda, S.; Lin, J.; Meghraoui, M.; Stein, R.S.

    2008-01-01

    The Wenchuan earthquake on the Longmen Shan fault zone devastated cities of Sichuan, claiming at least 69,000 lives. We calculate that the earthquake also brought the Xianshuihe, Kunlun, and Min Jiang faults, 150-400 km from the mainshock rupture in the eastern Tibetan Plateau, 0.2-0.5 bars closer to Coulomb failure. Because some portions of these stressed faults have not ruptured in more than a century, the earthquake could trigger or hasten additional M > 7 earthquakes, potentially subjecting regions from Kangding to Daofu and Maqin to Rangtag to strong shaking. We use the calculated stress changes and the observed background seismicity to forecast the rate and distribution of damaging shocks. The earthquake probability in the region is estimated to be 57-71% for M ≥ 6 shocks during the next decade, and 8-12% for M ≥ 7 shocks. These are up to twice the probabilities for the decade before the Wenchuan earthquake struck. Copyright 2008 by the American Geophysical Union.
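
    A rough sketch of how a background rate and a static stress change can be turned into decadal occurrence probabilities is shown below, using a Poisson window probability and a Dieterich-style rate scaling; the background rate, stress change, and A*sigma value are illustrative, and the permanent rate scaling ignores the time decay of the perturbation, so this is not the calculation performed in the paper.

        import numpy as np

        def poisson_probability(rate_per_yr, years):
            """Probability of at least one event in the window (Poisson model)."""
            return 1.0 - np.exp(-rate_per_yr * years)

        def stressed_rate(background_rate, dcfs_bar, a_sigma_bar=0.4):
            """Rate-and-state style amplification of the seismicity rate by a
            static Coulomb stress step (illustrative parameters, no time decay)."""
            return background_rate * np.exp(dcfs_bar / a_sigma_bar)

        bg_rate_m6 = 0.05   # illustrative background rate of M >= 6 shocks per year
        dcfs = 0.3          # bars, within the 0.2-0.5 bar range quoted above
        print("before the stress step:", poisson_probability(bg_rate_m6, 10))
        print("after the stress step :", poisson_probability(stressed_rate(bg_rate_m6, dcfs), 10))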

  5. Induced earthquake magnitudes are as large as (statistically) expected

    USGS Publications Warehouse

    Van Der Elst, Nicholas; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas; Hosseini, S. Mehran

    2016-01-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
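
    Test (1) above, the scaling of the largest observed magnitude with the log of the number of induced events, follows from the sampling statistics of an unbounded Gutenberg-Richter distribution; a small sketch with illustrative b-value and completeness magnitude, compared against simulated catalogs:

        import numpy as np

        def expected_largest_magnitude(n_events, b_value, m_min):
            """Mean of the largest of n_events Gutenberg-Richter samples:
            m_min + (ln N + Euler gamma) / (b ln 10), roughly m_min + log10(N)/b."""
            return m_min + (np.log(n_events) + np.euler_gamma) / (b_value * np.log(10.0))

        def simulate_largest(n_events, b_value, m_min, rng):
            """Draw one G-R catalog by inverse-CDF sampling and return its maximum."""
            u = rng.random(n_events)
            return (m_min - np.log10(1.0 - u) / b_value).max()

        rng = np.random.default_rng(3)
        for n in (100, 1000, 10000):
            sims = [simulate_largest(n, 1.0, 0.5, rng) for _ in range(200)]
            print(n, round(expected_largest_magnitude(n, 1.0, 0.5), 2),
                  round(float(np.mean(sims)), 2))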

  6. Collapse and Earthquake Swarm After North Korea's 3 September 2017 Nuclear Test

    NASA Astrophysics Data System (ADS)

    Tian, Dongdong; Yao, Jiayuan; Wen, Lianxing

    2018-05-01

    North Korea's 3 September 2017 nuclear test was followed by several small seismic events, with one eight-and-a-half minutes after the test and three on and after 23 September 2017. Seismic analysis reveals that the first event is a near vertical on-site collapse toward the nuclear test center from 440 ± 260 m northwest of the test site, with its seismic source best represented by a single force with a dip angle of 70°-75° and an azimuth of 150°, and the later events are an earthquake swarm located 8.4 ± 1.7 km north of the test site within a region of 520 m, with a focal depth of at least 2.4 km and a focal mechanism of nearly pure strike slip along the north-south direction with a high dip angle of 50°-90°. The occurrence of the on-site collapse calls for continued monitoring of any leaks of radioactive materials from the test site.

  7. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  8. Holocene behavior of the Brigham City segment: implications for forecasting the next large-magnitude earthquake on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    Personius, Stephen F.; DuRoss, Christopher B.; Crone, Anthony J.

    2012-01-01

    The Brigham City segment (BCS), the northernmost Holocene-active segment of the Wasatch fault zone (WFZ), is considered a likely location for the next big earthquake in northern Utah. We refine the timing of the last four surface-rupturing (~Mw 7) earthquakes at several sites near Brigham City (BE1, 2430±250; BE2, 3490±180; BE3, 4510±530; and BE4, 5610±650 cal yr B.P.) and calculate mean recurrence intervals (1060-1500 yr) that are greatly exceeded by the elapsed time (~2500 yr) since the most recent surface-rupturing earthquake (MRE). An additional rupture observed at the Pearsons Canyon site (PC1, 1240±50 cal yr B.P.) near the southern segment boundary is probably spillover rupture from a large earthquake on the adjacent Weber segment. Our seismic moment calculations show that the PC1 rupture reduced accumulated moment on the BCS by about 22%, a value that may have been enough to postpone the next large earthquake. However, our calculations suggest that the segment currently has accumulated more than twice the moment accumulated in the three previous earthquake cycles, so we suspect that additional interactions with the adjacent Weber segment contributed to the long elapsed time since the MRE on the BCS. Our moment calculations indicate that the next earthquake is not only overdue, but could be larger than the previous four earthquakes. Displacement data show higher rates of latest Quaternary slip (~1.3 mm/yr) along the southern two-thirds of the segment. The northern third likely has experienced fewer or smaller ruptures, which suggests to us that most earthquakes initiate at the southern segment boundary.

  9. Earthquakes induced by fluid injection and explosion

    USGS Publications Warehouse

    Healy, J.H.; Hamilton, R.M.; Raleigh, C.B.

    1970-01-01

    Earthquakes generated by fluid injection near Denver, Colorado, are compared with earthquakes triggered by nuclear explosion at the Nevada Test Site. Spatial distributions of the earthquakes in both cases are compatible with the hypothesis that variation of fluid pressure in preexisting fractures controls the time distribution of the seismic events in an "aftershock" sequence. We suggest that the fluid pressure changes may also control the distribution in time and space of natural aftershock sequences and of earthquakes that have been reported near large reservoirs. © 1970.

  10. Insignificant solar-terrestrial triggering of earthquakes

    USGS Publications Warehouse

    Love, Jeffrey J.; Thomas, Jeremy N.

    2013-01-01

    We examine the claim that solar-terrestrial interaction, as measured by sunspots, solar wind velocity, and geomagnetic activity, might play a role in triggering earthquakes. We count the number of earthquakes having magnitudes that exceed chosen thresholds in calendar years, months, and days, and we order these counts by the corresponding rank of annual, monthly, and daily averages of the solar-terrestrial variables. We measure the statistical significance of the difference between the earthquake-number distributions below and above the median of the solar-terrestrial averages by χ2 and Student's t tests. Across a range of earthquake magnitude thresholds, we find no consistent and statistically significant distributional differences. We also introduce time lags between the solar-terrestrial variables and the number of earthquakes, but again no statistically significant distributional difference is found. We cannot reject the null hypothesis of no solar-terrestrial triggering of earthquakes.
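
    The median-split comparison described above can be sketched compactly; only the structure of the test is meant to mirror the paper, while the synthetic sunspot numbers, annual earthquake counts, and thresholds below are invented.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        years = 100
        sunspots = rng.gamma(2.0, 40.0, years)       # synthetic solar-terrestrial variable
        quake_counts = rng.poisson(15.0, years)      # synthetic annual earthquake counts

        median = np.median(sunspots)
        below = quake_counts[sunspots <= median]
        above = quake_counts[sunspots > median]

        # Student's t test on mean annual counts below vs. above the median
        t_stat, t_p = stats.ttest_ind(below, above, equal_var=False)

        # Chi-square test on total counts, with expectations proportional to group size
        observed = np.array([below.sum(), above.sum()])
        expected = np.array([len(below), len(above)]) / years * observed.sum()
        chi2_stat, chi2_p = stats.chisquare(observed, f_exp=expected)

        print(f"t test p = {t_p:.3f}, chi-square p = {chi2_p:.3f}")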

  11. USGS: Science to understand and forecast change in coastal ecosystems

    USGS Publications Warehouse

    Myers, M.

    2007-01-01

    The multidisciplinary approach of the US Geological Survey (USGS), a principal science agency of the US Department of the Interior (DOI), to addressing the complex and cumulative impacts of human activities and natural events on US coastal ecosystems has been considered remarkable for understanding and forecasting these changes. The USGS helps explain geologic, hydrologic, and biologic systems and their connectivity across landscapes and seascapes along the coastline. The USGS coastal science programs effectively deliver science and information to other scientists, managers, policy makers, and the public. The USGS provides scientific expertise, capabilities, and services to collaborative federal, regional, and state-led efforts, in line with the goals of the Ocean Action Plan (OAP) and the Ocean Research Priorities Plan (ORPP). The organization is a leader in understanding terrestrial and marine environmental hazards such as earthquakes, tsunamis, floods, and landslides and in assessing and forecasting coastal impacts using various specialized visualization techniques.

  12. Tsunami field survey in French Polynesia of the 2015 Chilean earthquake Mw = 8.2 and what we learned.

    NASA Astrophysics Data System (ADS)

    Jamelot, Anthony; Reymond, Dominique; Savigny, Jonathan; Hyvernaud, Olivier

    2016-04-01

    The tsunami generated by the magnitude Mw=8.2 earthquake near the coast of central Chile on 16 September 2015 was observed on 7 tide gauges distributed over the five archipelagoes composing French Polynesia, a territory as large as Europe. We summarize all observations of the tsunami and the field surveys done in Tahiti (Society Islands) and Hiva-Oa (Marquesas Islands) to evaluate the preliminary tsunami forecast tool (MERIT) and the detailed tsunami forecast tool (COASTER) of the French Polynesian Tsunami Warning Center. The preliminary tool forecast a maximal tsunami height between 0.5 m and 2.3 m over the Marquesas Islands, but only the island of Hiva-Oa had a tsunami forecast greater than 1 meter, especially in Tahauku Bay, well known for its local response due to its resonance properties. In Tahauku Bay, the tide gauge located at the entrance of the bay recorded a maximal tsunami height above mean sea level of ~1.7 m; at the bottom of the bay we measured a run-up of about 2.8 m at 388 m inland from the shoreline in the river bed, and a run-up of 2.5 m located 155 m inland. The multi-grid simulation over Tahiti was done one hour after the origin time of the earthquake and gave a very localized tsunami impact on the north shore. Our forecast indicated inundation about 10 m inland, which led civil authorities to evacuate 6 houses. This was the first operational use of this new fine grid covering the northern part of Tahiti, which is not protected by a coral reef, so we paid close attention to the feedback from the alert, which confirmed the forecast that the maximal height would arrive 1 hour after the first arrival. The tsunami warning system forecasts both strong and low impacts well, as long as we have an early, robust description of the seismic parameters and fine grids of about 10 m spatial resolution to simulate tsunami impact. As of January 2016, we are able to forecast tsunami heights for 72 points located over 35 islands of French Polynesia.

  13. Communicating likelihoods and probabilities in forecasts of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas

    2014-02-01

    The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in

  14. Forecasting induced seismicity rate and Mmax using calibrated numerical models

    NASA Astrophysics Data System (ADS)

    Dempsey, D.; Suckale, J.

    2016-12-01

    At Groningen, The Netherlands, several decades of induced seismicity from gas extraction have culminated in an M 3.6 event (mid 2012). From a public safety and commercial perspective, it is desirable to anticipate future seismicity outcomes at Groningen. One way to quantify earthquake risk is Probabilistic Seismic Hazard Analysis (PSHA), which requires an estimate of the future seismicity rate and its magnitude frequency distribution (MFD). This approach is effective at quantifying risk from tectonic events because the seismicity rate, once measured, is almost constant over timescales of interest. In contrast, rates of induced seismicity vary significantly over building lifetimes, largely in response to changes in injection or extraction. Thus, the key to extending PSHA to induced earthquakes is to estimate future changes of the seismicity rate in response to some proposed operating schedule. Numerical models can describe the physical link between fluid pressure, effective stress change, and the earthquake process (triggering and propagation). However, models with predictive potential of individual earthquakes face the difficulty of characterizing specific heterogeneity - stress, strength, roughness, etc. - at locations of interest. Modeling catalogs of earthquakes provides a means of averaging over this uncertainty, focusing instead on the collective features of the seismicity, e.g., its rate and MFD. The model we use incorporates fluid pressure and stress changes to describe nucleation and crack-like propagation of earthquakes on stochastically characterized 1D faults. This enables simulation of synthetic catalogs of induced seismicity from which the seismicity rate, location and MFD are extracted. A probability distribution for Mmax - the largest event in some specified time window - is also computed. Because the model captures the physics linking seismicity to changes in the reservoir, earthquake observations and operating information can be used to calibrate a
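
    For a given forecast seismicity rate and magnitude-frequency distribution, the Mmax distribution mentioned above can be written in closed form under Poisson-in-time, Gutenberg-Richter assumptions; the rate, b-value, and window below are illustrative, not the calibrated Groningen model.

        import numpy as np

        def p_mmax_le(m, rate_per_yr, years, b_value, m_min):
            """P(Mmax <= m) in a time window, assuming a Poisson process in time
            and an unbounded Gutenberg-Richter magnitude distribution."""
            exceed_prob = 10.0 ** (-b_value * (m - m_min))   # P(M > m | an event occurs)
            return np.exp(-rate_per_yr * years * exceed_prob)

        m_grid = np.arange(2.0, 5.6, 0.5)
        probs = p_mmax_le(m_grid, rate_per_yr=20.0, years=10.0, b_value=1.0, m_min=1.5)
        for m, p in zip(m_grid, probs):
            print(f"P(Mmax <= {m:.1f} within 10 yr) = {p:.3f}")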

  15. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by a t statistic or a z statistic, provided the significance level is within the 10% range. Through theoretical proofs and a simulation study, we have shown that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true marginally when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI), and Average Lending Rate (ALR) of Malaysia.
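
    A minimal illustration of averaging forecast weights from different weighting schemes before combining model forecasts is sketched below; the two schemes used (equal weights and inverse-MSE weights) are generic stand-ins rather than the paper's variance, simple, and standard-error model averaging schemes, and the data are synthetic.

        import numpy as np

        def inverse_mse_weights(errors):
            """Weights proportional to 1 / in-sample MSE of each model."""
            w = 1.0 / np.mean(errors ** 2, axis=0)
            return w / w.sum()

        def equal_weights(errors):
            return np.full(errors.shape[1], 1.0 / errors.shape[1])

        rng = np.random.default_rng(5)
        truth = rng.normal(0, 1, 200)
        # Three competing model forecasts: truth plus model-specific noise
        forecasts = truth[:, None] + rng.normal(0.0, [0.5, 1.0, 2.0], (200, 3))
        in_err = forecasts[:100] - truth[:100, None]       # pseudo in-sample errors

        w_sets = [equal_weights(in_err), inverse_mse_weights(in_err)]
        w_avg = np.mean(w_sets, axis=0)                    # forecast weight averaging

        for name, w in [("equal", w_sets[0]), ("inv-MSE", w_sets[1]), ("averaged", w_avg)]:
            mse = np.mean((forecasts[100:] @ w - truth[100:]) ** 2)
            print(f"{name:>9}: pseudo out-of-sample MSE = {mse:.3f}")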

  16. AgRISTARS: Foreign commodity production forecasting. Project test reports document, volume 1. [using North Dakota, South Dakota, Montana, and Minnesota

    NASA Technical Reports Server (NTRS)

    Waggoner, J. T.; Phinney, D. E. (Principal Investigator)

    1981-01-01

    Foreign Commodity Production Forecasting testing activities through June 1981 are documented. A log of test reports is presented. Standard documentation sets are included for each test. The documentation elements presented in each set are summarized.

  17. A statistical investigation of z test and ROC curve on seismo-ionospheric anomalies in TEC associated earthquakes in Taiwan during 1999-2014

    NASA Astrophysics Data System (ADS)

    Shih, A. L.; Liu, J. Y. G.

    2015-12-01

    A median-based method and a z test are employed to find characteristics of the seismo-ionospheric precursor (SIP) in the total electron content (TEC) of the global ionosphere map (GIM) associated with 129 M≥5.5 earthquakes in Taiwan during 1999-2014. Results show that both negative and positive anomalies in the GIM TEC, statistically significant under the z test, appear a few days before the earthquakes. The receiver operating characteristic (ROC) curve is further applied to see whether the SIPs exist in Taiwan.
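
    A sketch of a median-based anomaly test of this general type is given below; the window length, bound width, and synthetic TEC values are placeholders rather than the thresholds used in the study.

        import numpy as np

        def tec_anomalies(tec, window=15, k=1.5):
            """Flag days whose TEC falls outside median +/- k * interquartile
            spread of the preceding `window` days (a simple median-based bound).
            Returns (index, sign) pairs: +1 positive anomaly, -1 negative."""
            flags = []
            for i in range(window, len(tec)):
                past = tec[i - window:i]
                med = np.median(past)
                spread = np.quantile(past, 0.75) - np.quantile(past, 0.25)
                if tec[i] > med + k * spread:
                    flags.append((i, +1))
                elif tec[i] < med - k * spread:
                    flags.append((i, -1))
            return flags

        rng = np.random.default_rng(6)
        days = np.arange(60)
        tec = 30 + 5 * np.sin(2 * np.pi * days / 27) + rng.normal(0, 1.5, 60)
        tec[45] += 12.0          # inject a positive anomaly
        print(tec_anomalies(tec))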

  18. Application of GPS Technologies to study Pre-earthquake processes. A review and future prospects

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Liu, J. Y. G.; Ouzounov, D.; Hernandez-Pajares, M.; Hattori, K.; Krankowski, A.; Zakharenkova, I.; Cherniak, I.

    2016-12-01

    We present the progress achieved with GPS TEC technologies in the study of pre-seismic ionospheric anomalies appearing a few days before strong earthquakes. Starting from the first case studies, such as the 17 August 1999 M7.6 Izmit earthquake in Turkey, the technology has been developed and converted into global near-real-time monitoring of seismo-ionospheric effects, which is now used in the multiparameter nowcast and forecast of strong earthquakes. Development of techniques for identifying seismo-ionospheric anomalies was carried out in parallel with the development of the physical mechanism explaining the generation of these anomalies. It was established that the seismo-ionospheric anomalies have a self-similarity property, are dependent on local time, and are persistent for at least 4 hours; the deviation from the undisturbed level can be either positive or negative, depending on the lead time (in days) to the impending earthquake and on the longitude of the anomaly relative to the epicenter longitude. Low-latitude and near-equatorial earthquakes demonstrate a magnetically conjugated effect, while middle- and high-latitude earthquakes demonstrate a single anomaly over the earthquake preparation zone. From the anomaly morphology, the physical mechanism was derived within the framework of the more complex Lithosphere-Atmosphere-Ionosphere-Magnetosphere Coupling concept. In addition to the multifactor analysis of the GPS TEC time series, the GIM map technology was also applied, clearly showing the locality of the seismo-ionospheric anomalies and the correspondence of their spatial size to the Dobrovolsky estimate of the earthquake preparation zone radius. Application of ionospheric tomography techniques permitted study not only of total electron content variations but also of the modification of the vertical distribution of electron concentration in the ionosphere before earthquakes. The statistical check of the ionospheric precursors passed the

  19. Investigation of the TEC Changes in the vicinity of the Earthquake Preparation Zone

    NASA Astrophysics Data System (ADS)

    Ulukavak, Mustafa; Yalcinkaya, Mualla

    2016-04-01

    Recently, investigation of ionospheric anomalies before earthquakes has attracted considerable attention. Total Electron Content (TEC) data have been used to monitor changes in the ionosphere; hence, researchers use TEC changes before strong earthquakes to monitor ionospheric anomalies. In this study, GPS-TEC variations obtained from GNSS stations in the vicinity of the earthquake preparation zone were investigated. The Nidra earthquake (M6.5), which occurred in north-western Greece on November 17, 2015 (38.755°N, 20.552°E), was selected for this study. First, the equation proposed by Dobrovolsky et al. (1979) was used to calculate the radius of the earthquake preparation zone. International GNSS Service (IGS) stations in the region were classified with respect to the radius of the earthquake preparation zone. The observation data of each station were obtained from the Crustal Dynamics Data and Information System (CDDIS) archive to estimate GPS-TEC variations between 16 October 2015 and 16 December 2015. Global Ionosphere Map (GIM) products obtained from the IGS were used to check the robustness of the GPS-TEC variations. Possible anomalies were analyzed for each GNSS station by using the 15-day moving median method. In order to analyze these pre-earthquake ionospheric anomalies, we investigated three indices (Kp, F10.7 and Dst) related to space weather conditions between 16 October 2015 and 16 December 2015. Solar and geomagnetic indices were obtained from the National Oceanic and Atmospheric Administration (NOAA), the Canadian Space Weather Forecast Centre (CSWFC), and the Data Analysis Center for Geomagnetism and Space Magnetism, Graduate School of Science, Kyoto University (WDC). This study aims to investigate the possible effects of the earthquake on the TEC variations.
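
    The Dobrovolsky et al. (1979) relation used to classify the stations is commonly written as R = 10^(0.43 M) km for the preparation-zone radius. The sketch below applies it to the event above; the station coordinates are invented for illustration.

        import math

        def dobrovolsky_radius_km(magnitude):
            """Earthquake preparation zone radius, R = 10**(0.43*M) km
            (Dobrovolsky et al., 1979)."""
            return 10.0 ** (0.43 * magnitude)

        def epicentral_distance_km(lat1, lon1, lat2, lon2):
            """Great-circle distance via the haversine formula (Earth radius 6371 km)."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * 6371.0 * math.asin(math.sqrt(a))

        epi_lat, epi_lon, mag = 38.755, 20.552, 6.5
        radius = dobrovolsky_radius_km(mag)
        stations = {"STA1": (39.6, 19.9), "STA2": (41.3, 23.4)}   # invented coordinates
        for name, (lat, lon) in stations.items():
            d = epicentral_distance_km(epi_lat, epi_lon, lat, lon)
            side = "inside" if d <= radius else "outside"
            print(f"{name}: {d:.0f} km from the epicenter, {side} the {radius:.0f} km zone")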

  20. Quantifying Earthquake Collapse Risk of Tall Steel Braced Frame Buildings Using Rupture-to-Rafters Simulations

    NASA Astrophysics Data System (ADS)

    Mourhatch, Ramses

    This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an artificial correction-free approach to broadband ground motion simulation. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis. As a first step, kinematic source inversions of past earthquakes in the magnitude range of 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California. Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) seismograms computed from the kinematic source models using the spectral element method, producing broadband seismograms. Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions and subjected to these ground motions, are conducted. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with

  1. Middle school students' earthquake content and preparedness knowledge - A mixed method study

    NASA Astrophysics Data System (ADS)

    Henson, Harvey, Jr.

    The purpose of this study was to assess the effect of earthquake instruction on students' earthquake content and preparedness knowledge. This study used an innovative direct instruction on earthquake science content and concepts with an inquiry-based group activity on earthquake safety, followed by an earthquake simulation and preparedness video, to help middle school students understand and prepare for the regional seismic threat. A convenience sample of 384 sixth and seventh grade students at two small middle schools in southern Illinois was used in this study. Qualitative information was gathered using open-ended survey questions, classroom observations, and semi-structured interviews. Quantitative data were collected using a 21-item content questionnaire administered to test students' General Earthquake Knowledge, Local Earthquake Knowledge, and Earthquake Preparedness Knowledge before and after instruction. A pre-test and post-test Likert-scale survey with 21 items was used to collect students' perceptions and attitudes. Qualitative data analysis included quantification of student responses to the open-ended questions and thematic analysis of observation notes and interview transcripts. Quantitative datasets were analyzed using descriptive and inferential statistical methods, including t tests to evaluate the differences in mean scores between paired groups before and after interventions and one-way analysis of variance (ANOVA) to test for differences between mean scores of the comparison groups. Significant mean differences between groups were further examined using a Dunnett's C post hoc statistical analysis. Integration and interpretation of the qualitative and quantitative results of the study revealed a significant increase in general, local and preparedness earthquake knowledge among middle school students after the interventions. The findings specifically indicated that these students felt most aware and prepared for an earthquake after an

  2. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

  3. Possible seasonality in large deep-focus earthquakes

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Shearer, Peter M.

    2015-09-01

    Large deep-focus earthquakes (magnitude > 7.0, depth > 500 km) have exhibited strong seasonality in their occurrence times since the beginning of global earthquake catalogs. Of 60 such events from 1900 to the present, 42 have occurred in the middle half of each year. The seasonality appears strongest in the northwest Pacific subduction zones and weakest in the Tonga region. Taken at face value, the surplus of northern hemisphere summer events is statistically significant, but due to the ex post facto hypothesis testing, the absence of seasonality in smaller deep earthquakes, and the lack of a known physical triggering mechanism, we cannot rule out that the observed seasonality is just random chance. However, we can make a testable prediction of seasonality in future large deep-focus earthquakes, which, given likely earthquake occurrence rates, should be verified or falsified within a few decades. If confirmed, deep earthquake seasonality would challenge our current understanding of deep earthquakes.
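
    The headline statistic, 42 of 60 events falling in the middle half of the year, can be checked with a one-sided binomial test under the null hypothesis that occurrence times are uniform through the year; a short sketch:

        from scipy.stats import binom

        n_events, n_middle_half = 60, 42
        p_null = 0.5                      # middle half of the year under uniformity

        # One-sided probability of observing at least 42 of 60 by chance
        p_value = binom.sf(n_middle_half - 1, n_events, p_null)
        print(f"P(X >= {n_middle_half} | n={n_events}, p=0.5) = {p_value:.5f}")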

  4. Testing stress shadowing effects at the South American subduction zone

    NASA Astrophysics Data System (ADS)

    Roth, F.; Dahm, T.; Hainzl, S.

    2017-11-01

    The seismic gap hypothesis assumes that a characteristic earthquake is followed by a long period with a reduced occurrence probability for the next large event on the same fault segment, as a consequence of the induced stress shadow. The gap model is commonly accepted by geologists and is often used for time-dependent seismic hazard estimations. However, systematic and rigorous tests to verify the seismic gap model have often failed so far, which might be partially related to limited data and overly tight model assumptions. In this study, we relax the assumption of a characteristic size and location of repeating earthquakes and analyse one of the best available data sets, namely the historical record of major earthquakes along a 3000 km long linear segment of the South American subduction zone. To test whether a stress shadow effect is observable, we compiled a comprehensive catalogue of mega-thrust earthquakes along this plate boundary from 1520 to 2015 containing 174 earthquakes with Mw > 6.5. In our new testing approach, we analyse the time span between an earthquake and the last event that ruptured the epicentre location, where we consider the impact of the uncertainties of epicentres and rupture extensions. Assuming uniform boundary conditions along the trench, we compare the distribution of these recurrence times with simple recurrence models. We find that the distribution is in all cases close to exponential, corresponding to a random (Poissonian) process, despite some tendencies for clustering of the Mw ≥ 7 events and a weak quasi-periodicity of the Mw ≥ 8 earthquakes, respectively. To verify whether the absence of a clear stress shadow signal is related to physical assumptions or data uncertainties, we perform simulations of a physics-based stochastic earthquake model considering rate- and state-dependent earthquake nucleation, which are adapted to the observations with regard to the number of events, spatial extent, size distribution and
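
    A sketch of the kind of distributional comparison described above: collect recurrence times and test them against an exponential (Poissonian) model with a Kolmogorov-Smirnov test. The recurrence times below are synthetic, not the compiled South American catalogue, and fitting the scale from the data makes the nominal p-value approximate.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Synthetic recurrence times (years) standing in for the compiled catalogue
        recurrence_yr = rng.exponential(scale=80.0, size=120)

        # KS test against an exponential with the observed mean recurrence time
        mean_rt = recurrence_yr.mean()
        ks_stat, p_value = stats.kstest(recurrence_yr, "expon", args=(0.0, mean_rt))
        print(f"mean recurrence = {mean_rt:.1f} yr, KS p-value = {p_value:.3f}")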

  5. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better

  6. The Cape Mendocino, California, earthquakes of April 1992: Subduction at the triple junction

    USGS Publications Warehouse

    Oppenheimer, D.; Beroza, G.; Carver, G.; Dengler, L.; Eaton, J.; Gee, L.; Gonzalez, F.; Jayko, A.; Li, W.H.; Lisowski, M.; Magee, M.; Marshall, G.; Murray, M.; McPherson, R.; Romanowicz, B.; Satake, K.; Simpson, R.; Somerville, P.; Stein, R.; Valentine, D.

    1993-01-01

    The 25 April 1992 magnitude 7.1 Cape Mendocino thrust earthquake demonstrated that the North America—Gorda plate boundary is seismogenic and illustrated hazards that could result from much larger earthquakes forecast for the Cascadia region. The shock occurred just north of the Mendocino Triple Junction and caused strong ground motion and moderate damage in the immediate area. Rupture initiated onshore at a depth of 10.5 kilometers and propagated up-dip and seaward. Slip on steep faults in the Gorda plate generated two magnitude 6.6 aftershocks on 26 April. The main shock did not produce surface rupture on land but caused coastal uplift and a tsunami. The emerging picture of seismicity and faulting at the triple junction suggests that the region is likely to continue experiencing significant seismicity.

  7. Simulation of the Burridge-Knopoff model of earthquakes with variable range stress transfer.

    PubMed

    Xia, Junchao; Gould, Harvey; Klein, W; Rundle, J B

    2005-12-09

    Simple models of earthquake faults are important for understanding the mechanisms for their observed behavior, such as Gutenberg-Richter scaling and the relation between large and small events, which is the basis for various forecasting methods. Although cellular automaton models have been studied extensively in the long-range stress transfer limit, this limit has not been studied for the Burridge-Knopoff model, which includes more realistic friction forces and inertia. We find that the latter model with long-range stress transfer exhibits qualitatively different behavior than both the long-range cellular automaton models and the usual Burridge-Knopoff model with nearest-neighbor springs, depending on the nature of the velocity-weakening friction force. These results have important implications for our understanding of earthquakes and other driven dissipative systems.

  8. Numerical Simulation of Stress evolution and earthquake sequence of the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Dong, Peiyu; Hu, Caibo; Shi, Yaolin

    2015-04-01

    The India-Eurasia collision produces N-S compression and results in large thrust faults along the southern edge of the Tibetan Plateau. Differential eastward flow of the lower crust of the plateau leads to large strike-slip faults and normal faults within the plateau. From 1904 to 2014, more than 30 earthquakes of Mw > 6.5 occurred sequentially in this distinctive tectonic environment. How did the stresses evolve during the last 110 years, and how did the earthquakes interact with each other? Can this knowledge help us forecast future seismic hazards? In this study, we simulate the evolution of the stress field and the earthquake sequence in the Tibetan Plateau over the last 110 years with a 2-D finite element model. Given an initial state of stress, the boundary condition was constrained by present-day GPS observations, which were assumed to apply at a constant rate during the 110 years. We calculated the stress evolution year by year, and an earthquake would occur if the stress exceeded the crustal strength. Stress changes due to each large earthquake in the sequence were calculated and contributed to the stress evolution. A key issue is the choice of the initial stress state of the modeling, which is actually unknown. Usually, in the study of earthquake triggering, people assume the initial stress is zero and only calculate the stress changes caused by large earthquakes - the Coulomb failure stress changes (ΔCFS). To some extent, this simplified method is a powerful tool because it can reveal which fault, or which part of a fault, becomes relatively more risky or safer. Nonetheless, it does not utilize all the information available to us. The earthquake sequence, though far from complete, reveals some information about the stress state in the region. If the entire region is close to a self-organized critical or subcritical state, earthquake stress drops provide an estimate of the lower limit of the initial state. For locations where no earthquakes occurred during the period, the initial stress has to be

  9. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  10. Tsunami early warning in the Mediterranean: role, structure and tricks of pre-computed tsunami simulation databases and matching/forecasting algorithms

    NASA Astrophysics Data System (ADS)

    Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano

    2014-05-01

    The general idea that pre-computed simulated scenario databases can play a key role in conceiving tsunami early warning systems is commonly accepted by now, but it was only in the last decade that it started to be applied to the Mediterranean region, gaining particular impetus from initiatives like the GDACS and from recently concluded EU-funded projects such as TRIDEC and NearToWarn. With reference to these two projects, and with the possibility of further developing this research line in the frame of the FP7 ASTARTE project, we discuss some results we obtained regarding two major topics, namely the strategies applicable to building the tsunami scenario database, and the design and performance assessment of a timely and "reliable" elementary-scenario combination algorithm to be run in real time. As for the first theme, we take advantage of the experience gained in the test areas of Western Iberia, Rhodes (Greece) and Cyprus to illustrate the criteria with which a "Matching Scenario Database" (MSDB) can be built. These involve 1) the choice of the main tectonic tsunamigenic sources (or areas), 2) their tessellation with matrices of elementary faults, whose dimensions depend heavily on the particular study area and must be a compromise between the need to represent the tsunamigenic area in sufficient detail and the need to limit the number of scenarios to be simulated, 3) the computation of the scenarios themselves, and 4) the choice of the relevant simulation outputs and the standardisation of their formats. Regarding the matching/forecast algorithm, we want it to select and combine the MSDB elements based on the initial earthquake magnitude and location estimate, and to produce a forecast of (at least) the tsunami arrival time, amplitude and period at the closest tide-level sensors and at all needed forecast points. We discuss the performance of the algorithm in terms of the time needed to produce the forecast after the earthquake is detected. In particular, we analyse the

  11. Volcano-earthquake interaction at Mauna Loa volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, Thomas R.; Amelung, Falk

    2006-05-01

    The activity at Mauna Loa volcano, Hawaii, is characterized by eruptive fissures that propagate into the Southwest Rift Zone (SWRZ) or into the Northeast Rift Zone (NERZ) and by large earthquakes at the basal decollement fault. In this paper we examine the historic eruption and earthquake catalogues, and we test the hypothesis that the events are interconnected in time and space. Earthquakes in the Kaoiki area occur in sequence with eruptions from the NERZ, and earthquakes in the Kona and Hilea areas occur in sequence with eruptions from the SWRZ. Using three-dimensional numerical models, we demonstrate that elastic stress transfer can explain the observed volcano-earthquake interaction. We examine stress changes due to typical intrusions and earthquakes. We find that intrusions change the Coulomb failure stress along the decollement fault so that NERZ intrusions encourage Kaoiki earthquakes and SWRZ intrusions encourage Kona and Hilea earthquakes. On the other hand, earthquakes decompress the magma chamber and unclamp part of the Mauna Loa rift zone, i.e., Kaoiki earthquakes encourage NERZ intrusions, whereas Kona and Hilea earthquakes encourage SWRZ intrusions. We discuss how changes of the static stress field affect the occurrence of earthquakes as well as the occurrence, location, and volume of dikes and of associated eruptions and also the lava composition and fumarolic activity.

  12. Weather Forecasting Systems and Methods

    NASA Technical Reports Server (NTRS)

    Mecikalski, John (Inventor); MacKenzie, Wayne M., Jr. (Inventor); Walker, John Robert (Inventor)

    2014-01-01

    A weather forecasting system has weather forecasting logic that receives raw image data from a satellite. The raw image data has values indicative of light and radiance data from the Earth as measured by the satellite, and the weather forecasting logic processes such data to identify cumulus clouds within the satellite images. For each identified cumulus cloud, the weather forecasting logic applies interest field tests to determine a score indicating the likelihood of the cumulus cloud forming precipitation and/or lightning in the future within a certain time period. Based on such scores, the weather forecasting logic predicts in which geographic regions the identified cumulus clouds will produce precipitation and/or lightning during the time period. Such predictions may then be used to provide a weather map, thereby providing users with a graphical illustration of the areas predicted to be affected by precipitation within the time period.

  13. Modeled Forecasts of Dengue Fever in San Juan, Puerto Rico Using NASA Satellite Enhanced Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Morin, C.; Quattrochi, D. A.; Zavodsky, B.; Case, J.

    2015-12-01

    Dengue fever (DF) is an important mosquito-transmitted disease that is strongly influenced by meteorological and environmental conditions. Recent research has focused on forecasting DF case numbers based on meteorological data. However, these forecasting tools have generally relied on empirical models that require long DF time series to train. Additionally, their accuracy has been tested retrospectively, using past meteorological data. Consequently, the operational utility of the forecasts is still in question because the error associated with weather and climate forecasts is not reflected in the results. Using up-to-date weekly dengue case numbers for model parameterization and weather forecast data as meteorological input, we produced weekly forecasts of DF cases in San Juan, Puerto Rico. Each week, the past weeks' case counts were used to re-parameterize a process-based DF model driven with updated weather forecast data to generate forecasts of DF case numbers. Real-time weather forecast data were produced using the Weather Research and Forecasting (WRF) numerical weather prediction (NWP) system enhanced with additional high-resolution NASA satellite data. This methodology was conducted in a weekly iterative process, with each DF forecast being evaluated using county-level DF cases reported by the Puerto Rico Department of Health. The one-week DF forecasts were accurate, especially considering the two sources of model error. First, weather forecasts were sometimes inaccurate and generally produced lower than observed temperatures. Second, the DF model was often overly influenced by the previous week's DF case numbers, though this phenomenon could be lessened by increasing the number of simulations included in the forecast. Although these results are promising, we would like to develop a methodology to produce longer range forecasts so that public health workers can better prepare for dengue epidemics.

  14. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communication between the domains of each parent-child pair as waves approach coastal waters. Even with the pre-computation, the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest-resolution Digital Elevation Models (DEMs) used by ATFM have a resolution of 1/3 arc-second. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results, with the long-term aim of tsunami forecasts from source to high-resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs and will make possible the future inclusion of new physics, such as a non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  15. A 30-year history of earthquake crisis communication in California and lessons for the future

    NASA Astrophysics Data System (ADS)

    Jones, L.

    2015-12-01

    The first statement from the US Geological Survey to the California Office of Emergency Services quantifying the probability of a possible future earthquake was made in October 1985, about the probability (approximately 5%) that a M4.7 earthquake located directly beneath the Coronado Bay Bridge in San Diego would be a foreshock to a larger earthquake. In the 30 years since, publication of aftershock advisories has become routine, and formal statements about the probability of a larger event have been developed in collaboration with the California Earthquake Prediction Evaluation Council (CEPEC) and sent to CalOES more than a dozen times. Most of these were subsequently released to the public. These communications have spanned a variety of approaches, with and without quantification of the probabilities, and using different ways to express the spatial extent and the magnitude distribution of possible future events. The USGS is re-examining its approach to aftershock probability statements and to operational earthquake forecasting with the goal of creating pre-vetted automated statements that can be released quickly after significant earthquakes. All of the previous formal advisories were written during the earthquake crisis. The time to create and release a statement became shorter with experience, from 18 hours after the triggering event for the first public advisory (following the 1988 Lake Elsman earthquake), but was never less than 2 hours. As was done for the Parkfield experiment, the process will be reviewed by CEPEC and NEPEC (National Earthquake Prediction Evaluation Council) so the statements can be sent to the public automatically. This talk will review the advisories, the variations in wording, and the public response, and compare these with social science research about successful crisis communication, to create recommendations for future advisories.

  16. Adaptive time-variant models for fuzzy-time-series forecasting.

    PubMed

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
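
    The window-adaptation idea can be illustrated with a deliberately simplified stand-in: instead of fuzzy rules, a toy moving-average forecaster whose window length is chosen by minimizing one-step-ahead error on the training series. This is only a sketch of the adaptive-window concept, not the ATVF model itself; the synthetic series and candidate window sizes are made up.

      import numpy as np

      def window_forecast(series, w):
          """Toy forecaster: predict the next value as the mean of the last w observations."""
          return np.mean(series[-w:])

      def select_window(train, candidate_windows=range(2, 11)):
          """Pick the window size minimizing one-step-ahead squared error on the training series."""
          best_w, best_err = None, np.inf
          for w in candidate_windows:
              errs = [train[i] - window_forecast(train[:i], w) for i in range(w, len(train))]
              mse = np.mean(np.square(errs))
              if mse < best_err:
                  best_w, best_err = w, mse
          return best_w

      rng = np.random.default_rng(0)
      train = np.cumsum(rng.normal(size=200)) + 100.0   # synthetic series
      w = select_window(train)
      print("selected window:", w, "next forecast:", window_forecast(train, w))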

  17. Demonstration of the Cascadia G‐FAST geodetic earthquake early warning system for the Nisqually, Washington, earthquake

    USGS Publications Warehouse

    Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan

    2016-01-01

    A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two‐stage approach to EEW: (1) detection and initial characterization using strong‐motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large‐magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G‐FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G‐FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor‐driven finite‐fault‐slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G‐FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey's ShakeAlert EEW system.

  18. Crowd-Sourced Global Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.

    2014-12-01

    Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
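
    The GNSS-plus-accelerometer fusion mentioned above can be sketched with a minimal one-dimensional Kalman filter in which the accelerometer drives the state prediction and the noisy GNSS displacement provides the update. The noise levels, synthetic displacement history, and sampling rate are all assumed for illustration; an operational implementation would need calibrated error models and bias handling.

      import numpy as np

      def fuse_gnss_accel(gnss_disp, accel, dt, sigma_gnss=0.5, sigma_acc=0.05):
          """Minimal 1-D Kalman filter: accelerometer drives the state prediction,
          noisy GNSS displacement corrects it. Noise levels are assumed, not calibrated."""
          F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition for [disp, vel]
          B = np.array([0.5 * dt**2, dt])                # acceleration (control) input
          H = np.array([[1.0, 0.0]])                     # we observe displacement only
          Q = sigma_acc**2 * np.outer(B, B)              # process noise from accelerometer noise
          R = np.array([[sigma_gnss**2]])                # GNSS measurement noise
          x, P, out = np.zeros(2), np.eye(2), []
          for z, a in zip(gnss_disp, accel):
              x = F @ x + B * a                          # predict with accelerometer
              P = F @ P @ F.T + Q
              K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
              x = x + (K @ (np.array([z]) - H @ x)).ravel()  # update with GNSS
              P = (np.eye(2) - K @ H) @ P
              out.append(x[0])
          return np.array(out)

      # Synthetic example: 10 s of a 0.2 m step-like displacement sampled at 10 Hz
      dt, n = 0.1, 100
      true_disp = 0.2 / (1 + np.exp(-0.5 * (np.arange(n) - 50)))
      true_acc = np.gradient(np.gradient(true_disp, dt), dt)
      rng = np.random.default_rng(1)
      gnss = true_disp + rng.normal(0, 0.5, n)           # noisy, C/A-code-like GNSS displacement
      acc = true_acc + rng.normal(0, 0.05, n)            # noisy MEMS accelerometer
      est = fuse_gnss_accel(gnss, acc, dt)
      print(f"final displacement estimate: {est[-1]:.3f} m (true 0.2 m)")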

  19. If pandas scream, an earthquake is coming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magida, P.

    Feature article: Use of the behavior of animals to predict weather has spanned several ages and dozens of countries. While animals may behave in diverse ways to indicate weather changes, they all tend to behave in more or less the same way before earthquakes. The geophysical community in the U.S. has begun testing animal behavior before earthquakes. It has been determined that animals have the potential to act as accurate geosensors to detect earthquakes before they occur. (5 drawings)

  20. Reflections from the interface between seismological research and earthquake risk reduction

    NASA Astrophysics Data System (ADS)

    Sargeant, S.

    2012-04-01

    Scientific understanding of earthquakes and their attendant hazards is vital for the development of effective earthquake risk reduction strategies. Within the global disaster reduction policy framework (the Hyogo Framework for Action, overseen by the UN International Strategy for Disaster Reduction), the anticipated role of science and scientists is clear, with respect to risk assessment, loss estimation, space-based observation, early warning and forecasting. The importance of information sharing and cooperation, cross-disciplinary networks and developing technical and institutional capacity for effective disaster management is also highlighted. In practice, the degree to which seismological information is successfully delivered to and applied by individuals, groups or organisations working to manage or reduce the risk from earthquakes is variable. The challenge for scientists is to provide fit-for-purpose information that can be integrated simply into decision-making and risk reduction activities at all levels of governance and at different geographic scales, often by a non-technical audience (i.e. people without any seismological/earthquake engineering training). The interface between seismological research and earthquake risk reduction (defined here in terms of both the relationship between the science and its application, and the scientist and other risk stakeholders) is complex. This complexity is a function of a range of issues that arise relating to communication, multidisciplinary working, politics, organisational practices, inter-organisational collaboration, working practices, sectoral cultures, individual and organisational values, worldviews and expectations. These factors can present significant obstacles to scientific information being incorporated into the decision-making process. The purpose of this paper is to present some personal reflections on the nature of the interface between the worlds of seismological research and risk reduction, and the

  1. The influence of one earthquake on another

    NASA Astrophysics Data System (ADS)

    Kilb, Deborah Lyman

    1999-12-01

    Part one of my dissertation examines the initiation of earthquake rupture. We study the initial subevent (ISE) of the Mw 6.7 1994 Northridge, California earthquake to distinguish between two end-member hypotheses of an organized and predictable earthquake rupture initiation process or, alternatively, a random process. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both end-member models, and do not allow us to distinguish between them. However, further tests show the ISE's waveform characteristics are similar to those of typical nearby small earthquakes (i.e., dynamic ruptures). The second part of my dissertation examines aftershocks of the M 7.1 1989 Loma Prieta, California earthquake to determine if theoretical models of static Coulomb stress changes correctly predict the fault plane geometries and slip directions of Loma Prieta aftershocks. Our work shows individual aftershock mechanisms cannot be successfully predicted because a similar degree of predictability can be obtained using a randomized catalogue. This result is probably a function of combined errors in the models of mainshock slip distribution, background stress field, and aftershock locations. In the final part of my dissertation, we test the idea that earthquake triggering occurs when properties of a fault and/or its loading are modified by Coulomb failure stress changes that may be transient and oscillatory (i.e., dynamic) or permanent (i.e., static). We propose a triggering threshold failure stress change exists, above which the earthquake nucleation process begins although failure need not occur instantaneously. We test these ideas using data from the 1992 M 7.4 Landers earthquake and its aftershocks. Stress changes can be categorized as either dynamic (generated during the passage of seismic waves), static (associated with permanent fault offsets

  2. The Earthquake Early Warning System In Southern Italy: Performance Tests And Next Developments

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Elia, L.; Martino, C.; Colombelli, S.; Emolo, A.; Festa, G.; Iannaccone, G.

    2011-12-01

    PRESTo (PRobabilistic and Evolutionary early warning SysTem) is the software platform for Earthquake Early Warning (EEW) in Southern Italy that integrates recent algorithms for real-time earthquake location, magnitude estimation and damage assessment into a highly configurable and easily portable package. The system is under active experimentation based on the Irpinia Seismic Network (ISNet). PRESTo processes the live streams of 3C acceleration data for P-wave arrival detection and, while an event is occurring, promptly performs event detection and provides location and magnitude estimates and peak ground shaking predictions at target sites. The earthquake location is obtained by an evolutionary, real-time probabilistic approach based on an equal differential time formulation. At each time step, it uses information from both triggered and not-yet-triggered stations. Magnitude estimation exploits an empirical relationship that correlates magnitude with the filtered peak displacement (Pd) measured over the first 2-4 s of the P signal. Peak ground-motion parameters at any distance can then be estimated by ground motion prediction equations. Alarm messages containing the updated estimates of these parameters can thus reach target sites before the destructive waves, enabling automatic safety procedures. Using the real-time data streaming from the ISNet network, PRESTo has produced a bulletin for about a hundred low-magnitude events that occurred during the last two years. Meanwhile, the performance of the EEW system was assessed off-line by playing back records of moderate and large events from Italy, Spain and Japan and synthetic waveforms for large historical events in Italy. These tests have shown that, when a dense seismic network is deployed in the fault area, PRESTo produces reliable estimates of earthquake location and size within 5-6 s of the event origin time (To). Estimates are provided as probability density functions whose uncertainty typically decreases with time.
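
    Magnitude-from-Pd relations of the kind used by such systems generally take the form log10(Pd) = A + B*M + C*log10(R/R0); the sketch below simply inverts that form for magnitude. The coefficients and the example observation are illustrative placeholders, not the calibration used by PRESTo.

      import math

      def magnitude_from_pd(pd_cm, dist_km, A=-3.5, B=0.7, C=-1.3):
          """Invert an empirical scaling law log10(Pd) = A + B*M + C*log10(R/10 km)
          for magnitude. Pd is the filtered peak displacement (cm) over the first
          seconds of the P wave; the coefficients are illustrative placeholders."""
          return (math.log10(pd_cm) - A - C * math.log10(dist_km / 10.0)) / B

      # Example: Pd = 0.8 cm observed at 40 km hypocentral distance
      print(f"M ~ {magnitude_from_pd(0.8, 40.0):.1f}")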

  3. Drought Monitoring and Forecasting Using the Princeton/U Washington National Hydrologic Forecasting System

    NASA Astrophysics Data System (ADS)

    Wood, E. F.; Yuan, X.; Roundy, J. K.; Lettenmaier, D. P.; Mo, K. C.; Xia, Y.; Ek, M. B.

    2011-12-01

    Extreme hydrologic events in the form of droughts or floods are a significant source of social and economic damage in many parts of the world. Having sufficient warning of extreme events allows managers to prepare for and reduce the severity of their impacts. A hydrologic forecast system can give seasonal predictions that can be used by managers to make better decisions; however, there is still much uncertainty associated with such a system. Therefore it is important to understand the forecast skill of the system before transitioning to operational usage. Seasonal reforecasts (1982-2010) from the NCEP Climate Forecast System (both version 1 (CFS) and version 2 (CFSv2)), Climate Prediction Center (CPC) outlooks and the European Seasonal Interannual Prediction (EUROSIP) system are assessed for forecasting skill in drought prediction across the U.S., both singularly and as a multi-model system. The Princeton/U Washington national hydrologic monitoring and forecast system is being implemented at NCEP/EMC via their Climate Test Bed as the experimental hydrological forecast system to support U.S. operational drought prediction. Using our system, the seasonal forecasts are bias corrected, downscaled and used to drive the Variable Infiltration Capacity (VIC) land surface model to give seasonal forecasts of hydrologic variables with lead times of up to six months. Results are presented for a number of events, with particular focus on the Apalachicola-Chattahoochee-Flint (ACF) River Basin in the South Eastern United States, which has experienced a number of severe droughts in recent years and is a pilot study basin for the National Integrated Drought Information System (NIDIS). The performance of the VIC land surface model is evaluated using observational forcing when compared to observed streamflow. The effectiveness of the forecast system to predict streamflow and soil moisture is evaluated when compared with observed streamflow and modeled soil moisture driven by

  4. Low-frequency earthquakes in Shikoku, Japan, and their relationship to episodic tremor and slip.

    PubMed

    Shelly, David R; Beroza, Gregory C; Ide, Satoshi; Nakamula, Sho

    2006-07-13

    Non-volcanic seismic tremor was discovered in the Nankai trough subduction zone in southwest Japan and subsequently identified in the Cascadia subduction zone. In both locations, tremor is observed to coincide temporally with large, slow slip events on the plate interface downdip of the seismogenic zone. The relationship between tremor and aseismic slip remains uncertain, however, largely owing to difficulty in constraining the source depth of tremor. In southwest Japan, a high quality borehole seismic network allows identification of coherent S-wave (and sometimes P-wave) arrivals within the tremor, whose sources are classified as low-frequency earthquakes. As low-frequency earthquakes comprise at least a portion of tremor, understanding their mechanism is critical to understanding tremor as a whole. Here, we provide strong evidence that these earthquakes occur on the plate interface, coincident with the inferred zone of slow slip. The locations and characteristics of these events suggest that they are generated by shear slip during otherwise aseismic transients, rather than by fluid flow. High pore-fluid pressure in the immediate vicinity, as implied by our estimates of seismic P- and S-wave speeds, may act to promote this transient mode of failure. Low-frequency earthquakes could potentially contribute to seismic hazard forecasting by providing a new means to monitor slow slip at depth.

  5. Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey

    USGS Publications Warehouse

    Parsons, T.

    2004-01-01

    New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M≥7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30 year probability of a M ≥ 7 earthquake affecting Istanbul. The 30 year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.
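
    The Poisson part of these calculations is easy to reproduce: under a Poisson model the probability of at least one event in T years at annual rate lambda is 1 - exp(-lambda*T). The sketch below inverts the 21% 30-year Istanbul figure quoted above to recover the implied annual rate; the time-dependent and stress-transfer parts of the study are not reproduced here.

      import math

      def poisson_prob(rate_per_yr, years):
          """Probability of at least one event in the window, assuming a Poisson process."""
          return 1.0 - math.exp(-rate_per_yr * years)

      def rate_from_prob(prob, years):
          """Invert: annual rate implied by a Poisson exceedance probability."""
          return -math.log(1.0 - prob) / years

      # Numbers quoted in the abstract: 30-yr Poisson probability of ~21% at Istanbul
      rate_ist = rate_from_prob(0.21, 30.0)
      print(f"implied annual rate near Istanbul: {rate_ist:.4f} /yr "
            f"(mean recurrence ~{1 / rate_ist:.0f} yr)")
      # Consistency check: converting back recovers the quoted probability
      print(f"30-yr probability: {poisson_prob(rate_ist, 30.0):.2f}")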

  6. Rapid earthquake characterization using MEMS accelerometers and volunteer hosts following the M 7.2 Darfield, New Zealand, Earthquake

    USGS Publications Warehouse

    Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.

    2014-01-01

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.

  7. Methodology for earthquake rupture rate estimates of fault networks: example for the western Corinth rift, Greece

    NASA Astrophysics Data System (ADS)

    Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien

    2017-10-01

    Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many places of the world, seismological and geodetic information along fault networks is often not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single-fault or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~15 mm yr-1 north

  8. Geological evidence for Holocene earthquakes and tsunamis along the Nankai-Suruga Trough, Japan

    NASA Astrophysics Data System (ADS)

    Garrett, Ed; Fujiwara, Osamu; Garrett, Philip; Heyvaert, Vanessa M. A.; Shishikura, Masanobu; Yokoyama, Yusuke; Hubert-Ferrari, Aurélia; Brückner, Helmut; Nakamura, Atsunori; De Batist, Marc

    2016-04-01

    The Nankai-Suruga Trough, lying immediately south of Japan's densely populated and highly industrialised southern coastline, generates devastating great earthquakes (magnitude > 8). Intense shaking, crustal deformation and tsunami generation accompany these ruptures. Forecasting the hazards associated with future earthquakes along this >700 km long fault requires a comprehensive understanding of past fault behaviour. While the region benefits from a long and detailed historical record, palaeoseismology has the potential to provide a longer-term perspective and additional insights. Here, we summarise the current state of knowledge regarding geological evidence for past earthquakes and tsunamis, incorporating literature originally published in both Japanese and English. This evidence comes from a wide variety of sources, including uplifted marine terraces and biota, marine and lacustrine turbidites, liquefaction features, subsided marshes and tsunami deposits in coastal lakes and lowlands. We enhance available results with new age modelling approaches. While publications describe proposed evidence from >70 sites, only a limited number provide compelling, well-dated evidence. The best available records allow us to map the most likely rupture zones of eleven earthquakes occurring during the historical period. Our spatiotemporal compilation suggests the AD 1707 earthquake ruptured almost the full length of the subduction zone and that earthquakes in AD 1361 and 684 were predecessors of similar magnitude. Intervening earthquakes were of lesser magnitude, highlighting variability in rupture mode. Recurrence intervals for ruptures of a single seismic segment range from less than 100 to more than 450 years during the historical period. Over longer timescales, palaeoseismic evidence suggests intervals ranging from 100 to 700 years. However, these figures reflect thresholds of evidence creation and preservation as well as genuine recurrence intervals. At present, we have

  9. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  10. Using regional pore-fluid pressure response following the 3 Sep 2016 Mw 5.8 Pawnee, Oklahoma earthquake to constrain far-field seismicity rate forecasts

    NASA Astrophysics Data System (ADS)

    Kroll, K.; Murray, K. E.; Cochran, E. S.

    2016-12-01

    The 3 Sep 2016 Mw 5.8 Pawnee, Oklahoma earthquake was the largest event in the recorded history of the state. Widespread shaking from the event was felt in seven central U.S. states and caused damage as far away as Oklahoma City (~115 km SSW). The Pawnee earthquake occurred soon after the deployment of a subsurface pore-fluid pressure monitoring network in Aug 2016. Eight pressure transducers were installed downhole in inactive saltwater disposal wells that were completed in the basal sedimentary zone (the Arbuckle Group). The transducers are located in Alfalfa, Grant, and Payne Counties at distances of 48 to 140 km from the Pawnee earthquake. We observed coseismic fluid pressure changes in all monitoring wells, indicating a large-scale poroelastic response in the Arbuckle. Two wells in Payne County lie in a zone of volumetric compression 48-52 km SSE of the rupture and experienced a co-seismic rise in fluid pressures that we conclude was related to poroelastic rebound of the Arbuckle reservoir. We compare measurements of the pore-fluid pressure change to estimated values given by the product of the volumetric strain, a Skempton's coefficient of 0.33, and a bulk modulus of 25 GPa for fractured granitic basement rocks. We explore the possibility that the small increase in pore-fluid pressure may increase the rate of seismicity in regions outside of the mainshock region. We test this hypothesis by supplementing the Oklahoma Geological Survey earthquake catalog with semi-automated detections of smaller magnitude (M < 2.6) earthquakes on seismic stations located in the vicinity of the wells. Using the events that occurred in the week before the mainshock (27 Aug to 3 Sep 2016) as the background seismicity rate and the estimated pore-fluid pressure increase, we use a rate-state model to predict the seismicity rate change in the week following the event. We then compare the model predictions to the observed seismicity in the week following the Pawnee earthquake
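
    The pore-pressure estimate described above is a one-line calculation, dP ≈ B * K * d(epsilon_v), with Skempton's coefficient B = 0.33 and bulk modulus K = 25 GPa as stated in the abstract. The volumetric strain value in the sketch below is a hypothetical illustration, not a measurement from the study.

      # Undrained pore-pressure change, as described in the abstract:
      #   dP ≈ B * K * d(epsilon_v)
      B = 0.33            # Skempton's coefficient (dimensionless)
      K = 25e9            # bulk modulus, Pa
      d_eps_v = -2e-7     # volumetric strain (negative = compression), assumed value

      dP = B * K * abs(d_eps_v)   # magnitude of the pressure change, Pa
      print(f"estimated pore-pressure change: {dP / 1e3:.2f} kPa")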

  11. Detection of Traveling Ionospheric Disturbances Induced by 2010 Mindanao Earthquakes

    NASA Astrophysics Data System (ADS)

    Shahbazi, A.; Park, J.; Huang, C.

    2017-12-01

    Earthquakes precipitate anomalous variations in the concentration of free electrons/ions in the ionosphere, known as traveling ionospheric disturbances (TIDs). TIDs can be detected from the Total Electron Content (TEC), which can be extracted from the ionospheric delay along the ray path of the GNSS signal between a satellite and a receiver. In this study, we utilized the GNSS-derived TEC observed by the Communication/Navigation Outage Forecasting System (C/NOFS), which is a Low Earth Orbit (LEO) satellite. As a case study, we detected the ionospheric perturbations triggered by the 2010 Mindanao earthquakes in the Moro Gulf, southern Philippines. Since this sequence of earthquakes occurred at depths of about 600 km, low detectability of the TID signature was expected, even though the magnitudes of the foreshock, mainshock and aftershock were Mb 7.3, 7.6, and 7.5, respectively. Hence, we introduced a novel filtering scheme to assess the performance of space-based TEC observations in identifying earthquake-induced TIDs and to cope with the challenge of investigating this sequence of deep earthquakes. The proposed approach suppresses the dominant trend of TEC with the Hodrick-Prescott (H-P) filter, identifies the extrema of the residual signal as potential TIDs, and associates them with the seismic waves. Considering the propagation mechanism of the seismic waves given in the literature, whereby the wave propagates upward from the earthquake epicenter to the upper atmosphere and then moves horizontally through the ionosphere, we applied a first-order linear regression model to estimate the propagation velocity of the TIDs. Our experimental results give a vertical propagation velocity of 0.980 km/s and a horizontal propagation velocity through the ionosphere of 1.066 km/s with a standard deviation of 0.364 km/s. The correlation coefficient of the detected TIDs in this model is 0.78, indicating that the detected TIDs are well correlated with the event
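
    A minimal sketch of the two processing steps described above (suppressing the slow TEC trend with a Hodrick-Prescott filter, then estimating an apparent propagation velocity from a first-order linear fit) is shown below. The synthetic TEC series, the smoothing parameter, the detection threshold, and the distance/delay pairs are all assumed for illustration.

      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      # Synthetic slant-TEC series: a smooth background trend plus a short wave-like
      # perturbation standing in for a TID signature (values and lambda are illustrative).
      t = np.arange(0.0, 600.0, 1.0)                    # seconds
      tec = 20 + 0.005 * t + 0.3 * np.exp(-((t - 300) / 40)**2) * np.sin(2 * np.pi * t / 60)
      tec += np.random.default_rng(2).normal(0, 0.02, t.size)

      cycle, trend = hpfilter(tec, lamb=1e6)            # suppress the dominant TEC trend
      tid_candidates = t[np.abs(cycle) > 3 * np.std(cycle[:100])]  # extrema of the residual
      print(f"{tid_candidates.size} samples flagged as TID candidates")

      # Velocity from a first-order linear fit of detection distance vs. arrival delay
      # (hypothetical distances and delays for several receivers, km and s):
      dist = np.array([150.0, 320.0, 480.0, 640.0])
      delay = np.array([140.0, 300.0, 450.0, 610.0])
      v, intercept = np.polyfit(delay, dist, 1)
      print(f"apparent horizontal propagation velocity ~ {v:.2f} km/s")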

  12. A stochastic automata network for earthquake simulation and hazard estimation

    NASA Astrophysics Data System (ADS)

    Belubekian, Maya Ernest

    1998-11-01

    This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches to studying earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses results from dislocation theory to estimate the slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after the largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in a period of long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses the bootstrap statistical method for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto

  13. Multi-parameter Observations and Validation of Pre-earthquake Atmospheric Signals

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Mogi, T.; Kafatos, M.

    2014-12-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. We are exploring the potential of pre-seismic atmospheric and ionospheric signals to alert for large earthquakes. To achieve this, we validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (OLR), electron concentration in the ionosphere (GPS/TEC), VHF-band radio waves, radon/ion activities, air temperature and seismicity patterns) that were found to be associated with earthquakes. The science rationale for multidisciplinary analysis is based on the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) concept [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually named short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two regions with high seismicity, Taiwan and Japan, for 2003-2009. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show anomalous OLR behavior before all of these events with no false negatives; the false alarm ratio for false positives is less than 25%. (2) Prospective testing using multiple parameters with potential for M5.5+ events. The initial testing shows a systematic appearance of atmospheric anomalies in advance (days) of the M5.5+ events for Taiwan and Japan (Honshu and Hokkaido areas). Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies one to several days prior to the largest earthquakes. This feature could be further studied and tested to advance the multi-sensor detection of pre-earthquake atmospheric signals.

  14. Probabilistic Forecasting of Life and Economic Losses due to Natural Disasters

    NASA Astrophysics Data System (ADS)

    Barton, C. C.; Tebbens, S. F.

    2014-12-01

    The magnitude of natural hazard events such as hurricanes, tornadoes, earthquakes, and floods is traditionally measured by wind speed, energy release, or discharge. In this study we investigate the scaling of the magnitude of individual events of the 20th and 21st centuries in terms of economic and life losses in the United States and worldwide. Economic losses are subdivided into insured and total losses. Some data sets are inflation or population adjusted. Forecasts associated with these events are of interest to insurance, reinsurance, and emergency management agencies. Plots of cumulative size-frequency distributions of economic and life loss are well fit by power functions and thus exhibit self-similar scaling. This self-similar scaling property permits use of frequent small events to estimate the rate of occurrence of less frequent larger events. Examining the power scaling behavior of loss data for disasters permits: forecasting the probability of occurrence of a disaster over a wide range of years (1 to 10 to 1,000 years); comparing losses associated with one type of disaster to another; comparing disasters in one region to similar disasters in another region; and measuring the effectiveness of planning and mitigation strategies. In the United States, life losses due to floods and tornadoes have cumulative-frequency distributions with steeper slopes, indicating that frequent smaller events contribute the majority of losses. In contrast, life losses due to hurricanes and earthquakes have shallower slopes, indicating that the few larger events contribute the majority of losses. Disaster planning and mitigation strategies should incorporate these differences.
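
    A minimal version of the power-law (self-similar) scaling analysis can be sketched by fitting the cumulative size-frequency distribution N(>=L) = C * L**(-b) in log-log space and converting the fit into an exceedance rate. The synthetic loss catalogue, the record length, and the plain least-squares fit are assumptions for illustration; the study's actual datasets and fitting choices may differ.

      import numpy as np

      def fit_power_law(losses):
          """Fit a cumulative size-frequency distribution N(>=L) = C * L**(-b)
          by ordinary least squares in log-log space (a simple illustration; more
          robust estimators exist for heavy-tailed data)."""
          losses = np.sort(np.asarray(losses, dtype=float))[::-1]
          n_exceed = np.arange(1, losses.size + 1)          # rank = number of events >= L
          slope, logC = np.polyfit(np.log10(losses), np.log10(n_exceed), 1)
          return -slope, 10**logC

      def exceedance_rate(loss, b, C, record_years):
          """Expected number of events per year with losses >= loss."""
          return C * loss**(-b) / record_years

      # Hypothetical loss catalogue (billions USD) spanning 50 years of record
      rng = np.random.default_rng(3)
      losses = (rng.pareto(1.0, 200) + 1) * 0.1
      b, C = fit_power_law(losses)
      print(f"b ~ {b:.2f}; events/yr with loss >= $10B ~ {exceedance_rate(10.0, b, C, 50):.3f}")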

  15. Application and analysis of debris-flow early warning system in Wenchuan earthquake-affected area

    NASA Astrophysics Data System (ADS)

    Liu, D. L.; Zhang, S. J.; Yang, H. J.; Zhao, L. Q.; Jiang, Y. H.; Tang, D.; Leng, X. P.

    2016-02-01

    The activity of debris flows (DFs) in the Wenchuan earthquake-affected area increased significantly after the earthquake on 12 May 2008, threatening the lives and property of local people. A physics-based early warning system (EWS) for DF forecasting was developed and applied in this earthquake area. This paper introduces an application of the system in the Wenchuan earthquake-affected area and analyzes the prediction results via a comparison to the DF events triggered by the strong rainfall events reported by the local government. The prediction accuracy and efficiency were first compared with those of a contribution-factor-based system currently used by the weather bureau of Sichuan province, using the storm of 17 August 2012 as a case study. The comparison shows that the false negative rate and false positive rate of the new system are, respectively, 19% and 21% lower than those of the contribution-factor-based system. Consequently, the new system achieves clearly higher prediction accuracy, with higher operational efficiency. At the invitation of the weather bureau of Sichuan province, the authors upgraded the DF prediction system to this new system before the 2013 monsoon season in the Wenchuan earthquake-affected area. Two prediction cases, on 9 July 2013 and 10 July 2014, were chosen to further demonstrate that the new EWS has high stability, efficiency, and prediction accuracy.

  16. Testing for scale-invariance in extreme events, with application to earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.

    2009-04-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes, for example, 'characteristic'? Do they 'know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case, what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in Earth and environmental sciences. Using frequency data, however, introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and, more generally, the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best-fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converges to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly onto a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a 'characteristic'-looking extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of 'eyeball' fits unconsciously (but wrongly in
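
    The least-squares bias issue raised above can be illustrated on a synthetic Gutenberg-Richter sample by comparing the maximum-likelihood (Aki) b-value estimate with an ordinary least-squares fit to the cumulative frequency-magnitude curve, whose points are strongly correlated. This is a generic illustration under assumed parameters, not the ETAS-based analysis of the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      Mc, b_true, n = 3.0, 1.0, 500
      beta = b_true * np.log(10)
      mags = Mc + rng.exponential(1.0 / beta, n)          # synthetic Gutenberg-Richter sample

      # Maximum-likelihood (Aki) estimate
      b_ml = np.log10(np.e) / (mags.mean() - Mc)

      # Least-squares fit to the cumulative frequency-magnitude curve
      m_sorted = np.sort(mags)[::-1]
      n_exceed = np.arange(1, n + 1)
      slope, _ = np.polyfit(m_sorted, np.log10(n_exceed), 1)
      b_ls = -slope

      print(f"true b = {b_true}, ML estimate = {b_ml:.2f}, least-squares estimate = {b_ls:.2f}")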

  17. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning purposes, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua for large earthquakes in the subduction zone.
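
    The fault-model step can be caricatured as: seismic moment from Mw, rupture dimensions from a scaling relation, and average slip from M0 = mu * A * D with a rigidity that depends on depth (lower for shallow tsunami earthquakes). Every coefficient in the sketch below is an illustrative assumption, not the scaling relations or rigidity profile used in the paper.

      import math

      def fault_model_from_mw(mw, depth_km):
          """Very rough fault-model sketch: moment from Mw, a generic length/width
          scaling relation, and slip from M0 = mu * A * D with a depth-dependent
          rigidity. All coefficients are illustrative assumptions."""
          m0 = 10 ** (1.5 * mw + 9.1)                    # seismic moment, N*m
          length_km = 10 ** (-2.44 + 0.59 * mw)          # assumed scaling for rupture length
          width_km = 0.5 * length_km                     # assumed aspect ratio
          mu = 10e9 if depth_km < 15 else 30e9           # lower rigidity at shallow depth (Pa)
          area = length_km * width_km * 1e6              # m^2
          slip = m0 / (mu * area)                        # average slip, m
          return length_km, width_km, slip

      L, W, D = fault_model_from_mw(7.7, 10.0)
      print(f"L ~ {L:.0f} km, W ~ {W:.0f} km, average slip ~ {D:.1f} m")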

  18. Prioritizing earthquake and tsunami alerting efforts

    NASA Astrophysics Data System (ADS)

    Allen, R. M.; Allen, S.; Aranha, M. A.; Chung, A. I.; Hellweg, M.; Henson, I. H.; Melgar, D.; Neuhauser, D. S.; Nof, R. N.; Strauss, J. A.

    2015-12-01

    The timeline of hazards associated with earthquakes ranges from seconds for the strong shaking at the epicenter, to minutes for strong shaking at more distant locations in big quakes, to tens of minutes for a local tsunami. Earthquake and tsunami warning systems must therefore include very fast initial alerts, while also taking advantage of the available time in bigger and tsunami-generating quakes. At the UC Berkeley Seismological Laboratory we are developing a suite of algorithms to provide the fullest possible information about earthquake shaking and tsunami inundation from seconds to minutes after a quake. The E-larmS algorithm uses the P-wave to rapidly detect an earthquake and issue a warning. It is currently issuing alerts to test users in as little as 3 sec after the origin time. Development of a new waveform detector may lead to even faster alerts. G-larmS uses permanent deformation estimates from GNSS stations to estimate the geometry and extent of the rupture underway, providing more accurate ground shaking estimates in big (M>~7) earthquakes. It performed well in the M6.0 2014 Napa earthquake. T-larmS is a new algorithm designed to extend alert capabilities to tsunami inundation. Rapid estimates of source characteristics for subduction zone events can be used to warn not only of the shaking hazard but also of the local tsunami inundation hazard. These algorithms are being developed, implemented and tested with a focus on the western US, but are also now being tested in other parts of the world including Israel, Turkey, Korea and Chile. Beta users in the Bay Area are receiving the alerts and beginning to implement automated actions. They also provide feedback on user needs, which has led to the development of the MyEEW smartphone app. This app allows beta users to receive the alerts on their cell phones. All these efforts feed into our ongoing assessment of directions and priorities for future development and implementation efforts.

  19. On near-source earthquake triggering

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2009-01-01

    When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≤ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of presurface wave arrivals shows that M ≥ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.

  20. Seismic Forecasting of Eruptions at Dormant StratoVolcanoes

    NASA Astrophysics Data System (ADS)

    White, R. A.

    2015-12-01

    Seismic monitoring data provide important constraints on tracking magmatic ascent and eruption. Based on direct experience with over 25, and review of over 10 additional, eruption sequences at 24 volcanoes, we have identified 4 phases of precursory seismicity. 1) Deep (>20 km) low frequency (DLF) earthquakes occur near the base of the crust as magma rises toward crustal reservoirs. This seismicity is the most difficult to observe, owing to generally small magnitudes (M<2.5) and the significant depth. 2) Distal volcano-tectonic (DVT) earthquakes occur on tectonic faults at distances of 2 to 30+ km laterally from (not beneath) the eventual eruption site as magma intrudes into and rises out of upper crustal reservoirs to depths of 2-3 km. A survey of 111 eruptions of 83 previously dormant volcanoes (including all eruptions of VEI >4 since 1955) shows they were all preceded by significant DVT seismicity, usually felt. This DVT seismicity is easily observed owing to magnitudes generally reaching M>3.5. The cumulative DVT energy correlates with the intruding magma volume. 3) Low frequency (LF) earthquakes, LF tremor and contained explosions occur as magma interacts with the shallow hydrothermal system (<2 km depth), while the distal seismicity dies off. 4) Shortly after this, repetitive self-similar proximal seismicity may occur and may dominate the seismic records as magma rises to the surface. We present some examples of this seismic progression to demonstrate that data from a single short-period vertical station are often sufficient to forecast eruption onsets.

  1. The 2011 M = 9.0 Tohoku oki earthquake more than doubled the probability of large shocks beneath Tokyo

    USGS Publications Warehouse

    Toda, Shinji; Stein, Ross S.

    2013-01-01

    The Kanto seismic corridor surrounding Tokyo has hosted four to five M ≥ 7 earthquakes in the past 400 years. Immediately after the Tohoku earthquake, the seismicity rate in the corridor jumped 10-fold, while the rate of normal focal mechanisms dropped in half. The seismicity rate decayed for 6–12 months, after which it steadied at three times the pre-Tohoku rate. The seismicity rate jump and decay to a new rate, as well as the focal mechanism change, can be explained by the static stress imparted by the Tohoku rupture and postseismic creep to Kanto faults. We therefore fit the seismicity observations to a rate/state Coulomb model, which we use to forecast the time-dependent probability of large earthquakes in the Kanto seismic corridor. We estimate a 17% probability of a M ≥ 7.0 shock over the 5 year prospective period 11 March 2013 to 10 March 2018, two-and-a-half times the probability had the Tohoku earthquake not struck.
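
    The rate/state forecasting step can be sketched with a Dieterich-type seismicity-rate response to a static stress step, integrated over the prospective window and converted to a Poisson probability. The background rate, stress step, A*sigma, and aftershock duration below are illustrative values, not the parameters fit in the study.

      import numpy as np

      def dieterich_rate(t_yr, r_background, dCFF, A_sigma, t_a):
          """Seismicity-rate response to a static Coulomb stress step (rate/state,
          Dieterich-type closed form). Rates per year; stresses in Pa."""
          gamma = (np.exp(-dCFF / A_sigma) - 1.0) * np.exp(-t_yr / t_a) + 1.0
          return r_background / gamma

      def prob_at_least_one(expected_number):
          """Poisson probability of one or more events given an expected number."""
          return 1.0 - np.exp(-expected_number)

      # Illustrative numbers: background 0.02 M>=7 events/yr, 0.1 MPa stress step,
      # A*sigma = 0.05 MPa, aftershock duration t_a = 30 yr, 5-yr prospective window.
      t = np.linspace(0.0, 5.0, 501)
      rate = dieterich_rate(t, 0.02, 0.1e6, 0.05e6, 30.0)
      expected = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))   # trapezoidal integral
      print(f"5-yr probability of at least one M>=7 event: {prob_at_least_one(expected):.0%}")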

  2. Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.

    2015-04-01

    Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and have not evaluated the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, the method provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
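
    The deterministic core of the FFM can be sketched as follows: for Voight's relation with exponent alpha = 2, the inverse precursor rate decreases linearly in time, so a straight-line fit of 1/rate against time crosses zero at the forecast failure (eruption) time. The synthetic sequence below is illustrative, and the sketch omits the Bayesian treatment and automatic event classification that are the paper's contribution.

      import numpy as np

      def ffm_failure_time(t, rate):
          """Classical FFM forecast: with Voight's exponent alpha = 2, the inverse
          event rate decreases linearly with time, so the x-intercept of a linear
          fit of 1/rate versus t gives the forecast failure time."""
          inv_rate = 1.0 / np.asarray(rate, dtype=float)
          slope, intercept = np.polyfit(t, inv_rate, 1)
          return -intercept / slope                      # x-intercept = forecast t_f

      # Synthetic precursory sequence accelerating toward t_f = 10 days
      t = np.linspace(0.0, 8.0, 40)                      # observation times (days)
      true_tf = 10.0
      rate = 50.0 / (true_tf - t)                        # events/day, alpha = 2 behaviour
      rate *= np.random.default_rng(5).lognormal(0.0, 0.1, t.size)   # observational scatter
      print(f"forecast failure time: {ffm_failure_time(t, rate):.1f} days (true {true_tf})")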

  3. Development and validation of a regional coupled forecasting system for S2S forecasts

    NASA Astrophysics Data System (ADS)

    Sun, R.; Subramanian, A. C.; Hoteit, I.; Miller, A. J.; Ralph, M.; Cornuelle, B. D.

    2017-12-01

    Accurate and efficient forecasting of oceanic and atmospheric circulation is essential for a wide variety of high-impact societal needs, including weather extremes; environmental protection and coastal management; management of fisheries and marine conservation; water resources; and renewable energy. Effective forecasting relies on high model fidelity and accurate initialization of the models with the observed state of the coupled ocean-atmosphere-land system. A regional coupled ocean-atmosphere model, with the Weather Research and Forecasting (WRF) model and the MITgcm ocean model coupled using the ESMF (Earth System Modeling Framework) coupling framework, has been developed to resolve mesoscale air-sea feedbacks. The regional coupled model allows oceanic mixed-layer heat and momentum to interact with the atmospheric boundary layer dynamics at mesoscale and submesoscale spatiotemporal regimes, thus capturing feedbacks that are otherwise not resolved in coarse-resolution global coupled forecasting systems or in regional uncoupled forecasting systems. The model is tested in two scenarios: the mesoscale-eddy-rich Red Sea and western Indian Ocean region, and the mesoscale eddies and fronts of the California Current System. Recent studies show evidence for air-sea interactions involving the oceanic mesoscale in these two regions which can enhance predictability on subseasonal timescales. We will present results from this newly developed regional coupled ocean-atmosphere model for forecasts over the Red Sea region as well as the California Current region. The forecasts will be validated against in situ observations in the region as well as reanalysis fields.

  4. Automation of energy demand forecasting

    NASA Astrophysics Data System (ADS)

    Siddique, Sanzad

    Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model-searching process for econometric models. Further improvements in the accuracy of energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using machine learning methods are also presented. The novel search-based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model-searching technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate an improved forecasting accuracy achieved by using the novel machine learning techniques introduced in this thesis. This thesis presents an analysis of how the machine learning techniques learn domain knowledge. The learned domain knowledge is used to improve the forecast accuracy.
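
    The model-search idea can be illustrated with a toy example: enumerate a small candidate space (here, autoregressive models of increasing order) and select the one minimizing an information criterion on a synthetic demand series. This is a generic stand-in under assumed settings, not the search algorithm or model families developed in the thesis.

      import numpy as np

      def fit_ar(series, p):
          """Ordinary least-squares fit of an AR(p) model; returns coefficients and residual variance."""
          y = series[p:]
          X = np.column_stack([series[p - k - 1: len(series) - k - 1] for k in range(p)])
          X = np.column_stack([np.ones(len(y)), X])
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ coef
          return coef, np.var(resid)

      def search_ar_order(series, max_p=12):
          """Toy model search: pick the AR order minimizing AIC over a candidate space."""
          best, n = None, len(series)
          for p in range(1, max_p + 1):
              _, var = fit_ar(series, p)
              aic = n * np.log(var) + 2 * (p + 1)
              if best is None or aic < best[1]:
                  best = (p, aic)
          return best

      # Synthetic monthly demand signal: trend + seasonality + noise
      rng = np.random.default_rng(6)
      m = np.arange(240)
      demand = 100 + 0.2 * m + 10 * np.sin(2 * np.pi * m / 12) + rng.normal(0, 2, m.size)
      p, aic = search_ar_order(demand)
      print(f"selected AR order: {p} (AIC = {aic:.1f})")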

  5. Improving the RST Approach for Earthquake Prone Areas Monitoring: Results of Correlation Analysis among Significant Sequences of TIR Anomalies and Earthquakes (M>4) occurred in Italy during 2004-2014

    NASA Astrophysics Data System (ADS)

    Tramutoli, V.; Coviello, I.; Filizzola, C.; Genzano, N.; Lisi, M.; Paciello, R.; Pergola, N.

    2015-12-01

    Looking toward the assessment of a multi-parametric system for dynamically updating seismic hazard estimates and earthquake short-term (from days to weeks) forecasts, a preliminary step is to identify those parameters (chemical, physical, biological, etc.) whose anomalous variations can be, to some extent, associated with the complex process of preparation of a large earthquake. Among the different parameters, fluctuations of the Earth's thermally emitted radiation, as measured by satellite sensors operating in the Thermal Infra-Red (TIR) spectral range, have long been proposed as potential earthquake precursors. Since 2001, a general approach called Robust Satellite Techniques (RST) has been used to discriminate anomalous thermal signals, possibly associated with seismic activity, from normal fluctuations of the Earth's thermal emission related to other causes (e.g. meteorological) that are independent of earthquake occurrence. Thanks to its full exportability to different satellite packages, RST has been implemented on TIR images acquired by polar (e.g. NOAA-AVHRR, EOS-MODIS) and geostationary (e.g. MSG-SEVIRI, NOAA-GOES/W, GMS-5/VISSR) satellite sensors, in order to verify the presence (or absence) of TIR anomalies in the presence (absence) of earthquakes (with M>4) in different seismogenic areas around the world (e.g. Italy, Turkey, Greece, California, Taiwan, etc.). In this paper, a refined RST (Robust Satellite Techniques) data analysis approach and the RETIRA (Robust Estimator of TIR Anomalies) index were used to identify Significant Sequences of TIR Anomalies (SSTAs) during eleven years (from May 2004 to December 2014) of TIR satellite records, collected over Italy by the geostationary satellite sensor MSG-SEVIRI. On the basis of specific validation rules (mainly based on physical models and on results obtained by applying the RST approach to several earthquakes around the world), the level of space-time correlation among SSTAs and earthquakes (with M≥4
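
    A much-simplified, RETIRA-like index can be sketched as the deviation of the current TIR field from a pixel-wise long-term reference, normalised by the long-term variability. The real RST/RETIRA implementation works on spatially differenced signals with strict homogeneity rules for the reference dataset; the arrays below are synthetic stand-ins, so this is only a conceptual sketch.

    ```python
    import numpy as np

    # Simplified, RETIRA-like anomaly index on synthetic TIR imagery.
    years, ny, nx = 10, 50, 50
    reference_stack = np.random.normal(290.0, 2.0, (years, ny, nx))  # historical TIR (K)
    current_image = np.random.normal(290.0, 2.0, (ny, nx))
    current_image[20:23, 30:33] += 8.0                               # injected warm anomaly

    mu = reference_stack.mean(axis=0)        # pixel-wise long-term mean
    sigma = reference_stack.std(axis=0)      # pixel-wise long-term variability

    retira_like = (current_image - mu) / sigma
    anomalous = retira_like > 3.0            # flag pixels exceeding 3 sigma-equivalents
    print(f"{anomalous.sum()} anomalous pixels flagged")
    ```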

  6. Risk communication on earthquake prediction studies: Possible pitfalls of science communication

    NASA Astrophysics Data System (ADS)

    Oki, S.; Koketsu, K.

    2012-04-01

    The ANSA web news item titled "'No L'Aquila quake risk' experts probed in Italy", published in June 2010, came as a shock to the Japanese seismological community. For the six months preceding the L'Aquila earthquake of 6 April 2009, seismicity in the region had been active. After the activity intensified further and reached magnitude 4 on 30 March, the government convened the Major Risks Committee, which is part of the Civil Protection Department and is tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. According to the ANSA report, the committee did not stress the risk of a damaging earthquake at the press conference held after its meeting. Six days later, however, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3 June of the following year, prosecutors opened an investigation after victims complained that far more people would have fled their homes that night had the Major Risks Committee not issued reassurances the previous week. The lessons from this affair are of significant importance. Science communication is now commonplace, and greater efforts are being made to reach out to the public and policy makers; but when we deal with disaster science, a much larger proportion of the task is risk communication. A similar incident occurred with the BSE outbreak in the late 1980s. Many of the measures taken on the advice of the Southwood Committee were laudable, but for one: science at the time could not show whether or not BSE was transmissible to humans, and the committee minutes state that "it is unlikely to infect humans". Read thoroughly, the minutes do refer to the risk, but because it was not stressed, the government launched a campaign declaring that "UK beef is safe". In the presentation, we review the L'Aquila affair, drawing on our interviews with some of the committee members and the Civil Protection Department, and also introduce

  7. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    NASA Astrophysics Data System (ADS)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several quasi-real-time tests have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM, and the body-wave FFM have been implemented in real time at the CSN; all of these algorithms run automatically and are triggered by the W-phase point-source inversion. Fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake, and the 2016 Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each of them we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.
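
    The scaling-law step mentioned above can be sketched as follows, assuming a generic relation of the form log10(dimension) = a + b·Mw; the coefficients below are illustrative placeholders, not the values adopted at the CSN, and the analytical run-up formula is not reproduced here.

    ```python
    # Hedged sketch: predefine fault length L and width W (km) from the
    # moment magnitude with log-linear scaling laws. Placeholder coefficients.
    A_LEN, B_LEN = -2.5, 0.6      # illustrative length coefficients
    A_WID, B_WID = -0.9, 0.35     # illustrative width coefficients

    def fault_dimensions(mw):
        """Return (length_km, width_km) for a subduction-zone earthquake."""
        length = 10 ** (A_LEN + B_LEN * mw)
        width = 10 ** (A_WID + B_WID * mw)
        return length, width

    for mw in (8.8, 8.2, 8.3, 7.6):   # the four Chilean events mentioned above
        L, W = fault_dimensions(mw)
        print(f"Mw {mw}: L ~ {L:.0f} km, W ~ {W:.0f} km")
    ```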

  8. NASA Products to Enhance Energy Utility Load Forecasting

    NASA Technical Reports Server (NTRS)

    Lough, G.; Zell, E.; Engel-Cox, J.; Fungard, Y.; Jedlovec, G.; Stackhouse, P.; Homer, R.; Biley, S.

    2012-01-01

    Existing energy load forecasting tools rely upon historical load and forecasted weather to predict load within energy company service areas. The shortcomings of load forecasts are often the result of weather forecasts that are not at a fine enough spatial or temporal resolution to capture local-scale weather events. This project aims to improve the performance of load forecasting tools through the integration of high-resolution, weather-related NASA Earth Science data, such as temperature, relative humidity, and wind speed. Three companies are participating in operational testing: one natural gas company and two electric providers. Operational results comparing load forecasts with and without NASA weather forecasts have been generated since March 2010. We have worked with end users at the three companies to refine the selection of weather forecast information and to optimize load forecast model performance. The project will conclude in 2012 by transitioning the documented improvements from the inclusion of NASA forecasts into sustained use by energy utilities nationwide in a variety of load forecasting tools. In addition, Battelle has consulted with energy companies nationwide to document their information needs for long-term planning, in light of climate change and regulatory impacts.

  9. Long-term predictability of regions and dates of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    parameters and seismic events. A further development of the H-104 method is the plotting of H-104 trajectories in two-dimensional time coordinates. The method provides the dates of future earthquakes for several (3-4) sequential time intervals that are multiples of 104 days. The H-104 method could be used together with the empirical scheme for short-term earthquake prediction, reducing the date uncertainty. Using the H-104 method, the following long-term forecast of seismic activity has been developed. 1. The total number of M6+ earthquakes expected in the time frames: - 10.01-07.02: 14; - 08.02-08.03: 17; - 09.03-06.04: 9. 3. The potential days of M6+ earthquakes expected in the period of 10.01.2016-06.04.2016 are the following: - in January: 17, 18, 23, 24, 26, 28, 31; - in February: 01, 02, 05, 12, 15, 18, 20, 23; - in March: 02, 04, 05, 07 (M7+ is possible), 09, 10, 17 (M7+ is possible), 19, 20 (M7+ is possible), 23 (M7+ is possible), 30; - in April: 02, 06. The work was financially supported by the Ministry of Education and Science of the Russian Federation (contract No. 14.577.21.0109, project UID RFMEFI57714X0109).

  10. Earthquake Relocation in the Middle East with Geodetically-Calibrated Events

    NASA Astrophysics Data System (ADS)

    Brengman, C.; Barnhart, W. D.

    2017-12-01

    Regional and global earthquake catalogs in tectonically active regions commonly contain mislocated earthquakes that impede efforts to address first order characteristics of seismogenic strain release and to monitor anthropogenic seismic events through the Comprehensive Nuclear-Test-Ban Treaty. Earthquake mislocations are particularly limiting in the plate boundary zone between the Arabia and Eurasia plates of Iran, Pakistan, and Turkey where earthquakes are commonly mislocated by 20+ kilometers and hypocentral depths are virtually unconstrained. Here, we present preliminary efforts to incorporate calibrated earthquake locations derived from Interferometric Synthetic Aperture Radar (InSAR) observations into a relocated catalog of seismicity in the Middle East. We use InSAR observations of co-seismic deformation to determine the locations, geometries, and slip distributions of small to moderate magnitude (M4.8+) crustal earthquakes. We incorporate this catalog of calibrated event locations, along with other seismologically-calibrated earthquake locations, as "priors" into a fully Bayesian multi-event relocation algorithm that relocates all teleseismically and regionally recorded earthquakes over the time span 1970-2017, including calibrated and uncalibrated events. Our relocations are conducted using cataloged phase picks and BayesLoc. We present a suite of sensitivity tests for the time span of 2003-2014 to explore the impacts of our input parameters (i.e., how a point source is defined from a finite fault inversion) on the behavior of the event relocations, potential improvements to depth estimates, the ability of the relocation to recover locations outside of the time span in which there are InSAR observations, and the degree to which our relocations can recover "known" calibrated earthquake locations that are not explicitly included as a-priori constraints. Additionally, we present a systematic comparison of earthquake relocations derived from phase picks of two

  11. A Prototype Regional GSI-based EnKF-Variational Hybrid Data Assimilation System for the Rapid Refresh Forecasting System: Dual-Resolution Implementation and Testing Results

    NASA Astrophysics Data System (ADS)

    Pan, Yujie; Xue, Ming; Zhu, Kefeng; Wang, Mingjun

    2018-05-01

    A dual-resolution (DR) version of a regional ensemble Kalman filter (EnKF)-3D ensemble variational (3DEnVar) coupled hybrid data assimilation system is implemented as a prototype for the operational Rapid Refresh forecasting system. The DR 3DEnVar system combines a high-resolution (HR) deterministic background forecast with lower-resolution (LR) EnKF ensemble perturbations, used for the flow-dependent background error covariance, to produce an HR analysis. The computational cost is substantially reduced by running the ensemble forecasts and EnKF analyses at LR. The DR 3DEnVar system is tested with 3-h cycles over a 9-day period using a 40-km/~13-km grid spacing combination. The HR forecasts from the DR hybrid analyses are compared with forecasts launched from HR Gridpoint Statistical Interpolation (GSI) 3D variational (3DVar) analyses, and with single LR hybrid analyses interpolated to the HR grid. With the DR 3DEnVar system, a 90% weight for the ensemble covariance yields the lowest forecast errors, and the DR hybrid system clearly outperforms the HR GSI 3DVar. Humidity and wind forecasts are also better than those launched from interpolated LR hybrid analyses, but the temperature forecasts are slightly worse. The humidity forecasts are improved the most. For precipitation forecasts, the DR 3DEnVar always outperforms the HR GSI 3DVar. It also outperforms the LR 3DEnVar, except during the initial forecast period and at lower thresholds.
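
    The covariance blending at the heart of a hybrid 3DEnVar system can be illustrated with a toy sketch: the background error covariance is a weighted sum of a static matrix and a flow-dependent ensemble sample covariance, with the ensemble weight corresponding to the 90% value quoted above. This is a conceptual illustration, not the GSI/EnKF implementation.

    ```python
    import numpy as np

    # Toy sketch of hybrid covariance blending:
    # B_hybrid = (1 - w) * B_static + w * B_ensemble.
    n_state, n_members = 20, 40
    rng = np.random.default_rng(1)

    B_static = np.eye(n_state)                              # static (climatological) covariance
    perturbations = rng.normal(size=(n_members, n_state))   # LR ensemble perturbations
    B_ensemble = np.cov(perturbations, rowvar=False)        # flow-dependent sample covariance

    w = 0.9                                                 # ensemble covariance weight
    B_hybrid = (1.0 - w) * B_static + w * B_ensemble

    print("trace of hybrid covariance:", np.trace(B_hybrid).round(2))
    ```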

  12. Seasonal forecast of St. Louis encephalitis virus transmission, Florida.

    PubMed

    Shaman, Jeffrey; Day, Jonathan F; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-05-01

    Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill-verification analyses may be applied to test the predictability of an empirical disease forecast model.

  13. Seasonal Forecast of St. Louis Encephalitis Virus Transmission, Florida

    PubMed Central

    Day, Jonathan F.; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-01-01

    Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill verification analyses may be applied to test the predictability of an empirical disease forecast model. PMID:15200812
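
    The empirical step described in these two records can be pictured with a minimal sketch: regress a transmission proxy on modeled land-surface wetness, then apply the fitted relation to a new season's wetness. The numbers below are synthetic placeholders, not the Indian River County data.

    ```python
    import numpy as np

    # Hedged sketch: fit a simple linear relationship between modeled wetness
    # and a transmission proxy, then use it to issue a seasonal forecast.
    wetness = np.array([0.20, 0.35, 0.50, 0.42, 0.60, 0.28, 0.55, 0.47])   # wetness index
    cases = np.array([1, 3, 8, 5, 12, 2, 10, 6], dtype=float)              # transmission proxy

    slope, intercept = np.polyfit(wetness, cases, 1)          # empirical relationship

    wetness_forecast = 0.52                                   # this season's modeled wetness
    cases_forecast = slope * wetness_forecast + intercept
    print(f"forecast transmission level: {cases_forecast:.1f}")
    ```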

  14. Earthquakes for Kids

    MedlinePlus

    ... across a fault to learn about past earthquakes. Science Fair Projects A GPS instrument measures slow movements of the ground. Become an Earthquake Scientist Cool Earthquake Facts Today in Earthquake History A scientist stands in ...

  15. Assessing the Utility of and Improving USGS Earthquake Hazards Program Products

    NASA Astrophysics Data System (ADS)

    Gomberg, J. S.; Scott, M.; Weaver, C. S.; Sherrod, B. L.; Bailey, D.; Gibbons, D.

    2010-12-01

    A major focus of the USGS Earthquake Hazards Program (EHP) has been the development and implementation of products and information meant to improve earthquake hazard assessment, mitigation, and response for a myriad of users. Many of these products rely on the data and efforts of the EHP and its partner scientists who are building the Advanced National Seismic System (ANSS). We report on a project meant to assess the utility of many of these products and this information, conducted collaboratively by EHP scientists and Pierce County Department of Emergency Management staff. We have conducted focus-group listening sessions with members of the engineering, business, medical, media, risk management, and emergency response communities, and have participated in the planning and implementation of earthquake exercises in the Pacific Northwest. Thus far we have learned that EHP and ANSS products satisfy many of the needs of engineers and some planners, and the information is widely used by the media and the general public. However, some important communities, particularly county and local emergency management and business communities, do not use these products despite their intended application for those purposes. We have learned that products need to convey the impact of earthquakes more clearly, in everyday terms. Users also want products (e.g. maps, forecasts, etc.) that can be incorporated into the tools and systems they use regularly. Rather than simply building products and posting them on websites, products need to be actively marketed and training provided. We suggest that engaging users prior to and during product development will enhance their usage and effectiveness.

  16. Hazard Assessment and Early Warning of Tsunamis: Lessons from the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2012-12-01

    Tsunami hazard assessments and long-term forecasts of earthquakes have not considered such triggering or the simultaneous occurrence of different types of earthquakes. The large tsunami at the Fukushima nuclear power station was due to the combination of deep and shallow slip. Disaster prevention for low-frequency but large-scale hazards must be considered. The Japanese government has established a general policy for two levels of tsunami: L1 and L2. The L2 tsunamis are the largest possible tsunamis, with a low frequency of occurrence, but they cause devastating disasters when they do occur. For such events, saving people's lives is the first priority, and soft measures such as tsunami hazard maps, evacuation facilities, and disaster education will be prepared. The L1 tsunamis are expected to occur more frequently, typically once in a few decades, and for these, hard countermeasures such as breakwaters must be prepared to protect the lives and property of residents as well as economic and industrial activities.

  17. Long-Term RST Analysis of Anomalous TIR Sequences in Relation with Earthquakes Occurred in Greece in the Period 2004-2013

    NASA Astrophysics Data System (ADS)

    Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Lisi, Mariano; Paciello, Rossana; Pergola, Nicola; Vallianatos, Filippos; Tramutoli, Valerio

    2016-01-01

    Real-time integration of multi-parametric observations is expected to accelerate the process toward improved, and operationally more effective, systems for time-Dependent Assessment of Seismic Hazard (t-DASH) and earthquake short-term (from days to weeks) forecasts. However, a very preliminary step in this direction is the identification of those parameters (chemical, physical, biological, etc.) whose anomalous variations can be, to some extent, associated with the complex process of preparation of major earthquakes. In this paper one of these parameters (the Earth's emitted radiation in the Thermal InfraRed spectral region) is considered for its possible correlation with M ≥ 4 earthquakes that occurred in Greece between 2004 and 2013. The Robust Satellite Technique (RST) data analysis approach and the Robust Estimator of TIR Anomalies (RETIRA) index were used to preliminarily define, and then to identify, significant sequences of TIR anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager on board the Meteosat Second Generation satellite. Taking into account the physical models proposed to justify the existence of a correlation between TIR anomalies and earthquake occurrence, specific validation rules (in line with the ones used by the Collaboratory for the Study of Earthquake Predictability—CSEP—Project) were defined to drive a retrospective correlation analysis. The analysis shows that more than 93% of all identified SSTAs occur within the predefined space-time window around the time and location of (M ≥ 4) earthquakes, with a false positive rate smaller than 7%. Molchan error diagram analysis shows that such a correlation is far from being achievable by chance, notwithstanding the large number of missed events due to frequent space/time data gaps produced by the presence of clouds over the scene. Achieved results, and particularly the very low rate of false positives registered
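
    A Molchan error-diagram point of the kind used in this validation can be sketched with synthetic data: compute the fraction of space-time occupied by alarms (tau) and the fraction of target earthquakes missed (nu); points well below the diagonal nu = 1 − tau indicate skill beyond random guessing. The inputs below are synthetic, not the SEVIRI/Greece dataset.

    ```python
    import numpy as np

    # Hedged sketch of a single Molchan error-diagram point.
    rng = np.random.default_rng(2)
    n_cells = 10_000
    alarm = rng.random(n_cells) < 0.07                 # alarm declared in ~7% of cells
    quake = np.zeros(n_cells, dtype=bool)
    quake[rng.choice(n_cells, 40, replace=False)] = True

    tau = alarm.mean()                                 # alarm space-time fraction
    nu = (~alarm & quake).sum() / quake.sum()          # miss rate
    print(f"tau = {tau:.3f}, nu = {nu:.3f}, random-guess nu would be ~{1 - tau:.3f}")
    ```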

  18. Scientific, Engineering, and Financial Factors of the 1989 Human-Triggered Newcastle Earthquake in Australia

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2006-12-01

    earthquakes recorded in Australia between 1967 and 1999. In conclusion, the increasing number and size of geoengineering activities, such as coal mining near Newcastle or planned carbon dioxide Geosequestration initiatives, represent a growing hazard potential, which can negatively affect socio-economic growth and sustainable development. Finally, hazard and risk degrees, based on geomechanical-mathematical models, can be forecasted in space and over time for urban planning in order to prevent economic losses of human-triggered earthquakes in the future.

  19. Eruption-induced modifications to volcanic seismicity at Ruapehu, New Zealand, and its implications for eruption forecasting

    USGS Publications Warehouse

    Bryan, C.J.; Sherburn, S.

    2003-01-01

    Broadband seismic data collected on Ruapehu volcano, New Zealand, in 1994 and 1998 show that the 1995-1996 eruptions of Ruapehu resulted in a significant change in the frequency content of tremor and volcanic earthquakes at the volcano. The pre-eruption volcanic seismicity was characterized by several independent dominant frequencies, with a 2 Hz spectral peak dominating the strongest tremor and volcanic earthquakes and higher frequencies forming the background signal. The post-eruption volcanic seismicity was dominated by a 0.8-1.4 Hz spectral peak not seen before the eruptions. The 2 Hz and higher frequency signals remained, but were subordinate to the 0.8-1.4 Hz energy. That the dominant frequencies of volcanic tremor and volcanic earthquakes were identical during the individual time periods prior to and following the 1995-1996 eruptions suggests that during each of these time periods the volcanic tremor and earthquakes were generated by the same source process. The overall change in the frequency content, which occurred during the 1995-1996 eruptions and remains as of the time of the writing of this paper, most likely resulted from changes in the volcanic plumbing system and has significant implications for forecasting and real-time assessment of future eruptive activity at Ruapehu.

  20. Communicating uncertainty in hydrological forecasts: mission impossible?

    NASA Astrophysics Data System (ADS)

    Ramos, Maria-Helena; Mathevet, Thibault; Thielen, Jutta; Pappenberger, Florian

    2010-05-01

    Cascading uncertainty through meteo-hydrological modelling chains for forecasting and integrated flood risk assessment is an essential step to improve the quality of hydrological forecasts. Although the best methodology to quantify the total predictive uncertainty in hydrology is still debated, there is common agreement that one must avoid uncertainty misrepresentation and miscommunication, as well as misinterpretation of information by users. Several recent studies point out that uncertainty, when properly explained and defined, is no longer unwelcome among emergency response organizations, users of flood risk information, and the general public. However, efficient communication of uncertain hydro-meteorological forecasts is far from being a resolved issue. This study focuses on the interpretation and communication of uncertain hydrological forecasts, based on (uncertain) meteorological forecasts and (uncertain) rainfall-runoff modelling approaches, to decision-makers such as operational hydrologists and water managers in charge of flood warning and scenario-based reservoir operation. An overview of the typical flow of uncertainties and risk-based decisions in hydrological forecasting systems is presented. The challenges related to the extraction of meaningful information from probabilistic forecasts, and the test of its usefulness in assisting operational flood forecasting, are illustrated with the help of two case studies: 1) a study on the use and communication of probabilistic flood forecasting within the European Flood Alert System; 2) a case study on the use of probabilistic forecasts by operational forecasters from the hydroelectricity company EDF in France. These examples show that attention must be paid to initiatives that promote or reinforce the active participation of expert forecasters in the forecasting chain. The practice of face-to-face forecast briefings, focusing on sharing how forecasters interpret, describe and perceive the model output forecasted

  1. Stress history controls the spatial pattern of aftershocks: case studies from strike-slip earthquakes

    NASA Astrophysics Data System (ADS)

    Utkucu, Murat; Durmuş, Hatice; Nalbant, Süleyman

    2017-09-01

    Earthquake ruptures perturb the stress within the surrounding crustal volume and, as is now widely accepted, these stress perturbations correlate strongly with the subsequent seismicity. Here we document five mainshock-aftershock sequences generated by strike-slip faults in different tectonic environments of the world in order to demonstrate that stress changes resulting from large earthquakes occurring decades earlier affect the spatial distribution of the aftershocks of the current mainshocks. The studied mainshock-aftershock sequences are the 15 October 1979 Imperial Valley earthquake (Mw = 6.4) in southern California; the 27 November 1979 Khuli-Boniabad (Mw = 7.1), 10 May 1997 Qa'enat (Mw = 7.2), and 31 March 2006 Silakhor (Mw = 6.1) earthquakes in Iran; and the 13 March 1992 Erzincan earthquake (Mw = 6.7) in Turkey. These are the only mainshocks we have been able to find in the literature that are characterized by dense and strong aftershock activity along and beyond one end of their ruptures, while only sparse aftershocks of relatively lower magnitude are reported for the other end. It is shown that stress changes resulting from earlier mainshock(s) that are close in both time and space may be the reason behind the observed aftershock patterns. The largest aftershocks of the mainshocks studied tend to occur inside the stress-increased lobes that were also stressed by the background earthquakes, and not inside the stress-increased lobes that fall into the stress shadow of the background earthquakes. We suggest that the stress shadows of previous mainshocks may persist in the crust for decades and suppress the aftershock distribution of current mainshocks. Considering active research on the use of Coulomb stress change maps as a practical tool to forecast the spatial distribution of upcoming aftershocks in near-real time for earthquake risk mitigation purposes, it is further suggested
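
    The quantity mapped in such Coulomb stress studies is the Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the slip direction, Δσn the normal stress change (positive for unclamping), and μ′ an effective friction coefficient. The sketch below only evaluates this formula for illustrative numbers; real applications derive the stress tensor from dislocation models (e.g. Okada solutions), which are not reproduced here.

    ```python
    # Hedged sketch of the Coulomb failure stress change:
    # dCFS = d_tau + mu_eff * d_sigma_n (positive values promote failure).
    def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
        """Return the Coulomb failure stress change in MPa."""
        return d_tau_mpa + mu_eff * d_sigma_n_mpa

    # A receiver fault loaded by 0.05 MPa of shear and unclamped by 0.02 MPa:
    print(f"dCFS = {coulomb_stress_change(0.05, 0.02):.3f} MPa (positive = promoted)")
    ```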

  2. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions on both long and short time scales. The preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  3. Characterizing Time Series Data Diversity for Wind Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong

    Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve forecasting accuracy. However, it is challenging to accurately compare the true forecasting performance of different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of the 1D interval, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.
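
    The diversity metric described above can be sketched, under the assumption of a few simple stand-in characteristic indices, as: compute indices per series, standardise them, project onto the first three principal components, and take the volume of the 3D convex hull of the projected points. This assumes scipy is available and is not the paper's exact six-index implementation.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    # Hedged sketch of a convex-hull diversity score for a set of time series.
    rng = np.random.default_rng(3)
    series = rng.normal(size=(200, 24 * 30))            # 200 synthetic month-long series

    def characteristics(x):
        diffs = np.diff(x)
        return np.array([x.mean(), x.std(), x.min(), x.max(),
                         diffs.std(), np.abs(diffs).mean()])   # six simple stand-in indices

    feats = np.array([characteristics(s) for s in series])
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)   # standardise

    # PCA via SVD, keep the first three components.
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    pcs = feats @ vt[:3].T

    diversity = ConvexHull(pcs).volume                  # 3D convex-hull volume
    print(f"diversity score: {diversity:.2f}")
    ```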

  4. Improved Anvil Forecasting

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2000-01-01

    This report describes the outcome of Phase 1 of the AMU's Improved Anvil Forecasting task. Forecasters in the 45th Weather Squadron and the Spaceflight Meteorology Group have found that anvil forecasting is a difficult task when predicting LCC and FR violations. The purpose of this task is to determine the technical feasibility of creating an anvil-forecasting tool. Work on this study was separated into three steps: literature search, forecaster discussions, and determination of technical feasibility. The literature search revealed no existing anvil-forecasting techniques. However, there appears to be growing interest in anvils in recent years. If this interest continues to grow, more information will be available to aid in developing a reliable anvil-forecasting tool. The forecaster discussion step revealed an array of methods on how better forecasting techniques could be developed. The forecasters have ideas based on sound meteorological principles and personal experience in forecasting and analyzing anvils. Based on the information gathered in the discussions with the forecasters, the conclusion of this report is that it is technically feasible at this time to develop an anvil forecasting technique that will significantly contribute to the confidence in anvil forecasts.

  5. Seismic dynamics in advance and after the recent strong earthquakes in Italy and New Zealand

    NASA Astrophysics Data System (ADS)

    Nekrasova, A.; Kossobokov, V. G.

    2017-12-01

    We consider seismic events as a sequence of avalanches in the self-organized system of blocks and faults of the Earth's lithosphere and characterize earthquake series with the distribution of the control parameter η = τ × 10^(B·(5−M)) × L^C of the Unified Scaling Law for Earthquakes, USLE (where τ is the inter-event time, B is analogous to the Gutenberg-Richter b-value, and C is the fractal dimension of the seismic locus). A systematic analysis of earthquake series in Central Italy and New Zealand, 1993-2017, suggests the existence, in the long term, of different rather steady levels of seismic activity characterized by near-constant values of η, which, in the mid term, intermittently switch at times of transition associated with strong catastrophic events. During such a transition, seismic activity may, in the short term, follow different scenarios with inter-event time scaling of different kinds, including constant, logarithmic, power-law, exponential rise/decay, or a mixture of these. The results do not support the presence of universality in seismic energy release. The observed variability of seismic activity before and after strong (M6.0+) earthquakes in Italy and significant (M7.0+) earthquakes in New Zealand provides important constraints on modelling realistic earthquake sequences by geophysicists and can be used to improve local seismic hazard assessments, including earthquake forecast/prediction methodologies. The transitions of seismic regime that started in Central Italy and New Zealand in 2016 are still in progress and require special attention and geotechnical monitoring. It would be premature to make any definitive conclusions about the level of seismic hazard, which is evidently high at this particular moment in both regions. The study was supported by the Russian Science Foundation, Grant No. 16-17-00093.
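
    A minimal sketch of the USLE control parameter follows, evaluating η = τ · 10^(B·(5−M)) · L^C for a few inter-event intervals. The parameter values are illustrative, not the Central Italy or New Zealand estimates.

    ```python
    import numpy as np

    # Hedged sketch of the USLE control parameter eta for inter-event pairs.
    def usle_eta(tau_days, magnitude, locus_size_km, b_value=1.0, c_fractal=1.2):
        """Control parameter eta of the Unified Scaling Law for Earthquakes."""
        return tau_days * 10 ** (b_value * (5.0 - magnitude)) * locus_size_km ** c_fractal

    taus = np.array([2.0, 15.0, 40.0])      # days between successive events
    mags = np.array([4.2, 3.6, 5.1])
    etas = [usle_eta(t, m, locus_size_km=50.0) for t, m in zip(taus, mags)]
    print(np.round(etas, 2))
    ```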

  6. Pulverization provides a mechanism for the nucleation of earthquakes at low stress on strong faults

    USGS Publications Warehouse

    Felzer, Karen R.

    2014-01-01

    An earthquake occurs when rock that has been deformed under stress rebounds elastically along a fault plane (Gilbert, 1884; Reid, 1911), radiating seismic waves through the surrounding earth. Rupture along the entire fault surface does not occur spontaneously at the same time, however. Rather, the rupture starts in one tiny area, the rupture nucleation zone, and spreads sequentially along the fault. Like a row of dominoes, one bit of rebounding fault triggers the next. This triggering is understood to occur because of the large dynamic stresses at the tip of an active seismic rupture. The importance of these crack-tip stresses is a central question in earthquake physics. The crack-tip stresses are minimally important, for example, in the time-predictable earthquake model (Shimazaki and Nakata, 1980), which holds that prior to rupture stresses are comparable to fault strength in many locations on the future rupture plane, with bits of variation. The stress/strength ratio is highest at some point, which is where the earthquake nucleates. This model does not require any special conditions or processes at the nucleation site; the whole fault is essentially ready for rupture at the same time. The fault-tip stresses ensure that the rupture occurs as a single rapid earthquake, but the fact that fault-tip stresses are high is not particularly relevant since the stress at most points does not need to be raised by much. Under this model it should technically be possible to forecast earthquakes based on the stress-renewal concept, or on estimates of when the fault as a whole will reach the critical stress level, a practice used in official hazard mapping (Field, 2008). This model also indicates that physical precursors may be present and detectable, since stresses are unusually high over a significant area before a large earthquake.

  7. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    conditions for response and gauge the effect of each variable on the relationship between earthquakes and volcanic activity. Finally, a volcanic forecast model will be assessed to evaluate the use of earthquakes as a precursory indicator to volcanic activity. If proven, the relationship between earthquakes and volcanic activity has the potential to aid our understanding of the conditions that influence triggering following an earthquake and provide vital clues for volcanic activity prediction and the identification of precursors. Hill-Butler, C.; Blackett, M.; Wright, R. and Trodd, N. (2014) Global Heat Flux Response to Large Earthquakes in the 21st Century. Geology in preparation. Kaufman, Y. J.; Justice, C.; Flynn, L.; Kendall, J.; Prins, E.; Ward, D. E.; Menzel, P. and Setzer, A. (1998) Monitoring Global Fires from EOS-MODIS. Journal of Geophysical Research 103, 32,215-32,238 Wright, R.; Blackett, M. and Hill-Butler, C. (2014) Some observations regarding the thermal flux from Earth's erupting volcanoes for the period 2000 to 2014. Geophysical Research Letters in review.

  8. Evidence for a twelfth large earthquake on the southern Hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event have resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since A.D. 91 (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined, at ±10 yr. Since ~1300 A.D. the mean rate has increased slightly, but it is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support to the use of time-dependent renewal models, rather than assuming a random process, to forecast earthquakes, at least for the southern Hayward fault.
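
    How a renewal model converts a mean recurrence interval and a coefficient of variation into a conditional 30-year probability can be sketched as below, assuming a lognormal renewal distribution for illustration; this is not necessarily the distribution the authors used, and the result is not expected to reproduce their 29% figure exactly.

    ```python
    import numpy as np
    from scipy.stats import lognorm

    # Hedged sketch of a conditional 30-yr probability from a lognormal renewal model.
    mean_ri, cov = 161.0, 0.40                   # mean recurrence interval (yr) and CV
    sigma = np.sqrt(np.log(1.0 + cov**2))        # lognormal shape from the CV
    mu = np.log(mean_ri) - 0.5 * sigma**2        # lognormal scale from the mean
    dist = lognorm(s=sigma, scale=np.exp(mu))

    elapsed = 2010 - 1868                        # years since the last (1868) rupture
    window = 30.0
    p_cond = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)
    print(f"conditional 30-yr probability: {p_cond:.0%}")
    ```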

  9. The analysis on the characteristics of North Korea 6th nuclear test, collapse and induced earthquakes

    NASA Astrophysics Data System (ADS)

    Park, J. H.; Park, Y. K.; Kim, T. S.; Kim, G.; Cho, C.; Kim, I.

    2017-12-01

    North Korea (NK) conducted its 6th underground nuclear test (UNT), with a magnitude one order larger than the previous ones, on 3 September 2017. Using correlated waveform comparison, the epicenter of the 6th NK UNT was estimated at 41.3020°N, 129.0795°E, located about 200 m north of the 5th NK UNT site. The body-wave magnitude was calculated as mb 5.7 through our routine process, which measures the maximum P-wave amplitude at frequencies above 1 Hz using stations around the Korean Peninsula; however, this could be underestimated if the source energy spectrum of the UNT radiated dominantly below 1 Hz. Considering the source spectrum of the 6th NK UNT, we applied a 2nd-order Butterworth bandpass filter between 0.1 and 1 Hz to the P wave and measured the 6th/5th UNT amplitude ratio. Instead of the ratio of 6-7 obtained from the raw P waves, the filtered amplitude ratio was 10-12 at several stations. After cross-checking the band-pass-filtered amplitude ratios against the previous NK UNTs, we finalized the magnitude of the 6th NK UNT as mb 6.1. A collapse earthquake occurred about 8 minutes 32 seconds after the 6th NK UNT, with an epicenter estimated to lie within 1 km of the test site. The similarity of the waveforms to those of two mine collapses in South Korea, together with moment tensor inversion, indicated that the source mechanism was very similar to a mine collapse. Three further earthquakes were detected and their locations and magnitudes analyzed; we interpret these as induced by the NK UNT from accumulated tectonic stress. The collapse event's waveforms are very different from those of the induced earthquakes.
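
    The band-pass step and the magnitude-ratio argument can be sketched with synthetic data: apply a 2nd-order Butterworth filter between 0.1 and 1 Hz (zero-phase, via scipy) and convert an amplitude ratio into a magnitude difference with Δmb = log10(ratio). The waveform and numbers are illustrative only, not actual Korean-peninsula records or the authors' full procedure.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    # Hedged sketch: band-pass a synthetic P-wave record and convert an
    # amplitude ratio into a magnitude difference.
    fs = 20.0                                       # sampling rate (Hz)
    t = np.arange(0, 60, 1.0 / fs)
    rng = np.random.default_rng(4)
    trace = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.normal(size=t.size)

    b, a = butter(2, [0.1, 1.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, trace)                # zero-phase band-pass, 0.1-1 Hz

    amp_ratio_6th_over_5th = 11.0                   # within the reported 10-12 range
    delta_mb = np.log10(amp_ratio_6th_over_5th)
    print(f"filtered peak amplitude: {np.abs(filtered).max():.2f}")
    print(f"implied magnitude difference: {delta_mb:.2f} mb units")
    ```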

  10. Dynamic 3D simulations of earthquakes on en echelon faults

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    1999-01-01

    One of the mysteries of earthquake mechanics is why earthquakes stop. This process determines the difference between small and devastating ruptures. One possibility is that fault geometry controls earthquake size. We test this hypothesis using a numerical algorithm that simulates spontaneous rupture propagation in a three-dimensional medium and apply our knowledge to two California fault zones. We find that the size difference between the 1934 and 1966 Parkfield, California, earthquakes may be the product of a stepover at the southern end of the 1934 earthquake and show how the 1992 Landers, California, earthquake followed physically reasonable expectations when it jumped across en echelon faults to become a large event. If there are no linking structures, such as transfer faults, then strike-slip earthquakes are unlikely to propagate through stepovers >5 km wide. Copyright 1999 by the American Geophysical Union.

  11. Forecasting natural hazards, performance of scientists, ethics, and the need for transparency

    PubMed Central

    Guzzetti, Fausto

    2016-01-01

    Landslides are one of several natural hazards. Like other natural hazards, landslides are difficult to predict, and their forecasts are uncertain. The uncertainty depends on the poor understanding of the phenomena that control the slope failures, and on the inherent complexity and chaotic nature of the landslides. This is similar to other natural hazards, including hurricanes, earthquakes, volcanic eruptions, floods, and droughts. Due to the severe impact of landslides on the population, the environment, and the economy, forecasting landslides is of scientific interest and of societal relevance, and scientists attempting to forecast landslides face known and new problems intrinsic to the multifaceted interactions between science, decision-making, and society. The problems include deciding on the authority and reliability of individual scientists and groups of scientists, and evaluating the performances of individual scientists, research teams, and their institutions. Related problems lie in the increasing subordination of research scientists to politics and decision-makers, and in the conceptual and operational models currently used to organize and pay for research, based on apparently objective criteria and metrics, considering science as any other human endeavor, and favoring science that produces results of direct and immediate application. The paper argues that the consequences of these problems have not been considered fully. PMID:27695154

  12. Forecasting natural hazards, performance of scientists, ethics, and the need for transparency.

    PubMed

    Guzzetti, Fausto

    2016-10-20

    Landslides are one of several natural hazards. Like other natural hazards, landslides are difficult to predict, and their forecasts are uncertain. The uncertainty depends on the poor understanding of the phenomena that control the slope failures, and on the inherent complexity and chaotic nature of the landslides. This is similar to other natural hazards, including hurricanes, earthquakes, volcanic eruptions, floods, and droughts. Due to the severe impact of landslides on the population, the environment, and the economy, forecasting landslides is of scientific interest and of societal relevance, and scientists attempting to forecast landslides face known and new problems intrinsic to the multifaceted interactions between science, decision-making, and society. The problems include deciding on the authority and reliability of individual scientists and groups of scientists, and evaluating the performances of individual scientists, research teams, and their institutions. Related problems lie in the increasing subordination of research scientists to politics and decision-makers, and in the conceptual and operational models currently used to organize and pay for research, based on apparently objective criteria and metrics, considering science as any other human endeavor, and favoring science that produces results of direct and immediate application. The paper argues that the consequences of these problems have not been considered fully.

  13. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  14. Earthquake simulator tests and associated study of an 1/6-scale nine-story RC model

    NASA Astrophysics Data System (ADS)

    Sun, Jingjiang; Wang, Tao; Qi, Hu

    2007-09-01

    Earthquake simulator tests of a 1/6-scale nine-story reinforced concrete frame-wall model are described in this paper. The test results and the associated numerical simulation are summarized and discussed. Based on the test data, a relationship between maximum inter-story drift and damage state is established. Equations describing the variation of structural characteristics (natural frequency and equivalent stiffness) with overall drift are derived by data fitting; these can be used to estimate the structural damage state when the structural characteristics can be measured. A comparison of the analytical and experimental results shows that both the commonly used equivalent beam and fiber element models can simulate the nonlinear seismic response of structures very well. Finally, conclusions relating to the seismic design and damage evaluation of RC structures are presented.

  15. Strategic crisis and risk communication during a prolonged natural hazard event: lessons learned from the Canterbury earthquake sequence

    NASA Astrophysics Data System (ADS)

    Wein, A. M.; Potter, S.; Becker, J.; Doyle, E. E.; Jones, J. L.

    2015-12-01

    While communication products are developed for monitoring and forecasting hazard events, less thought may have been given to crisis and risk communication plans. During larger (and rarer) events responsible science agencies may find themselves facing new and intensified demands for information and unprepared for effectively resourcing communications. In a study of the communication of aftershock information during the 2010-12 Canterbury Earthquake Sequence (New Zealand), issues are identified and implications for communication strategy noted. Communication issues during the responses included reliability and timeliness of communication channels for immediate and short decision time frames; access to scientists by those who needed information; unfamiliar emergency management frameworks; information needs of multiple audiences, audience readiness to use the information; and how best to convey empathy during traumatic events and refer to other information sources about what to do and how to cope. Other science communication challenges included meeting an increased demand for earthquake education, getting attention on aftershock forecasts; responding to rumor management; supporting uptake of information by critical infrastructure and government and for the application of scientific information in complex societal decisions; dealing with repetitive information requests; addressing diverse needs of multiple audiences for scientific information; and coordinating communications within and outside the science domain. For a science agency, a communication strategy would consider training scientists in communication, establishing relationships with university scientists and other disaster communication roles, coordinating messages, prioritizing audiences, deliberating forecasts with community leaders, identifying user needs and familiarizing them with the products ahead of time, and practicing the delivery and use of information via scenario planning and exercises.

  16. Analog earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  17. Weather Forecaster Understanding of Climate Models

    NASA Astrophysics Data System (ADS)

    Bol, A.; Kiehl, J. T.; Abshire, W. E.

    2013-12-01

    Weather forecasters, particularly those in broadcasting, are the primary conduit to the public for information on climate and climate change. However, many weather forecasters remain skeptical of model-based climate projections. To address this issue, The COMET Program developed an hour-long online lesson on how climate models work, targeting an audience of weather forecasters. The module draws on forecasters' pre-existing knowledge of weather, climate, and numerical weather prediction (NWP) models. In order to measure learning outcomes, quizzes were given before and after the lesson. Preliminary results show large learning gains. For all people who took both the pre- and post-tests (n=238), scores improved from 48% to 80%. Similar pre/post improvement occurred for National Weather Service employees (51% to 87%, n=22) and college faculty (50% to 90%, n=7). We believe these results indicate a fundamental misunderstanding among many weather forecasters of (1) the difference between weather and climate models, (2) how researchers use climate models, and (3) how they interpret model results. The quiz results indicate that efforts to educate the public about climate change need to include weather forecasters, a vital link between the research community and the general public.

  18. Natural Time, Nowcasting and the Physics of Earthquakes: Estimation of Seismic Risk to Global Megacities

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Luginbuhl, Molly; Giguere, Alexis; Turcotte, Donald L.

    2018-02-01

    Natural Time ("NT") refers to the concept of using small earthquake counts, for example of M > 3 events, to mark the intervals between large earthquakes, for example M > 6 events. The term was first used by Varotsos et al. (2005) and later by Holliday et al. (2006) in their studies of earthquakes. In this paper, we discuss ideas and applications arising from the use of NT to understand earthquake dynamics, in particular by use of the idea of nowcasting. Nowcasting differs from forecasting, in that the goal of nowcasting is to estimate the current state of the system, rather than the probability of a future event. Rather than focus on an individual earthquake faults, we focus on a defined local geographic region surrounding a particular location. This local region is considered to be embedded in a larger regional setting from which we accumulate the relevant statistics. We apply the nowcasting idea to the practical development of methods to estimate the current state of risk for dozens of the world's seismically exposed megacities, defined as cities having populations of over 1 million persons. We compute a ranking of these cities based on their current nowcast value, and discuss the advantages and limitations of this approach. We note explicitly that the nowcast method is not a model, in that there are no free parameters to be fit to data. Rather, the method is simply a presentation of statistical data, which the user can interpret. Among other results, we find, for example, that the current nowcast ranking of the Los Angeles region is comparable to its ranking just prior to the January 17, 1994 Northridge earthquake.

  19. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include the root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. As in studies of other metropolitan econometric forecasts for areas with similar demographic and labor market characteristics, model predictive performance for the municipal water aggregates in this effort is mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing different labor and demographic conditions, would also be helpful.
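
    The benchmark comparison described above can be sketched with synthetic series: compute the root-mean-square error of a model forecast and of a naive random-walk forecast, and form a Theil U-type ratio (model RMSE over benchmark RMSE, with values below 1 favouring the model). This illustrates the metrics only, not the study's four-pronged F-test procedure.

    ```python
    import numpy as np

    # Hedged sketch of RMSE and a Theil U-type ratio against a random-walk benchmark.
    rng = np.random.default_rng(6)
    actual = 100 + np.cumsum(rng.normal(0, 1, 48))       # monthly consumption (arbitrary units)
    model_fcst = actual + rng.normal(0, 0.8, 48)         # econometric forecast with error
    rw_fcst = np.r_[actual[0], actual[:-1]]              # random-walk benchmark: last observed value

    def rmse(forecast, observed):
        return np.sqrt(np.mean((forecast - observed) ** 2))

    theil_u = rmse(model_fcst, actual) / rmse(rw_fcst, actual)
    print(f"model RMSE = {rmse(model_fcst, actual):.2f}, "
          f"benchmark RMSE = {rmse(rw_fcst, actual):.2f}, U = {theil_u:.2f}")
    ```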

  20. Evolving forecasting classifications and applications in health forecasting

    PubMed Central

    Soyiri, Ireneous N; Reidpath, Daniel D

    2012-01-01

    Health forecasting forewarns the health community about future health situations and disease episodes so that health systems can better allocate resources and manage demand. The tools used for developing and measuring the accuracy and validity of health forecasts commonly are not defined although they are usually adapted forms of statistical procedures. This review identifies previous typologies used in classifying the forecasting methods commonly used in forecasting health conditions or situations. It then discusses the strengths and weaknesses of these methods and presents the choices available for measuring the accuracy of health-forecasting models, including a note on the discrepancies in the modes of validation. PMID:22615533