Sample records for earthquake prediction method

  1. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity; the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (>6.5R) can be predicted within a very good accuracy window (±1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a triggering planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all 8R+ earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3 days, i.e., ±1 day of the target date) for >6.5R events. The successes are counted for each of the 86 earthquake seeds and the MFDL method is compared with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with
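    The abstract does not give the exact recipe for turning FDL numbers into target dates. A minimal sketch of the general idea, assuming target dates are produced by offsetting the seed date by Fibonacci and Lucas numbers of days (the day unit, the combination rule, and all names are illustrative assumptions, and the "Dual" series is omitted because its definition is not given here):

    ```python
    from datetime import date, timedelta

    def fibonacci(n_terms):
        """First n_terms Fibonacci numbers: 1, 2, 3, 5, 8, ..."""
        seq = [1, 2]
        while len(seq) < n_terms:
            seq.append(seq[-1] + seq[-2])
        return seq[:n_terms]

    def lucas(n_terms):
        """First n_terms Lucas numbers: 1, 3, 4, 7, 11, ..."""
        seq = [1, 3]
        while len(seq) < n_terms:
            seq.append(seq[-1] + seq[-2])
        return seq[:n_terms]

    def fdl_target_dates(seed, n_terms=15):
        """Candidate future dates: the seed date shifted forward by
        Fibonacci/Lucas numbers of days (unit and rule are assumptions)."""
        offsets = sorted(set(fibonacci(n_terms) + lucas(n_terms)))
        return [seed + timedelta(days=k) for k in offsets]

    def is_hit(target, quake_date, window_days=1):
        """A 'hit' if the earthquake falls within +/- window_days of a target."""
        return abs((quake_date - target).days) <= window_days

    # Hypothetical seed date; under the MFDL variant the seed would instead
    # be the planetary-trigger date preceding the earthquake.
    targets = fdl_target_dates(date(1960, 5, 16))
    print(any(is_hit(t, date(1960, 5, 22)) for t in targets))
    ```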

  2. On some methods for assessing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.; Peresan, A.

    2017-09-01

    A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted to the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based version of PG leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

  3. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
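    A hedged sketch of the simulation test described above: score the prediction on many synthetic catalogs and report the fraction that matches or beats the observed success count. The simulator below is a homogeneous Poisson process with no clustering; the paper's point is that substituting a clustered simulator (an ETAS-like model, say) for simulate_poisson_catalog raises that fraction. All names and parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_poisson_catalog(rate, duration):
        """Event times from a homogeneous Poisson process (no clustering)."""
        n = rng.poisson(rate * duration)
        return np.sort(rng.uniform(0, duration, n))

    def n_successes(event_times, alarm_starts, alarm_len):
        """Count events falling inside any alarm window."""
        return sum(
            np.any((alarm_starts <= t) & (t < alarm_starts + alarm_len))
            for t in event_times
        )

    def significance(observed_hits, alarm_starts, alarm_len,
                     rate, duration, n_sims=10_000):
        """Fraction of simulated catalogs scoring >= the observed hit count."""
        sims = [n_successes(simulate_poisson_catalog(rate, duration),
                            alarm_starts, alarm_len) for _ in range(n_sims)]
        return np.mean(np.array(sims) >= observed_hits)

    # Hypothetical test: 10 alarms of 30 days each in a 10-year window,
    # 8 observed hits, ~20 events expected over the test period
    alarms = rng.uniform(0, 3650, 10)
    print(significance(8, alarms, 30.0, rate=20 / 3650, duration=3650))
    ```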

  4. Earthquake prediction evaluation standards applied to the VAN Method

    NASA Astrophysics Data System (ADS)

    Jackson, David D.

    Earthquake prediction research must meet certain standards before it can be suitably evaluated for potential application in decision making. For methods that result in a binary (on or off) alarm condition, requirements include (1) a quantitative description of observables that trigger an alarm, (2) a quantitative description, including ranges of time, location, and magnitude, of the predicted earthquakes, (3) documented evidence of all previous alarms, (4) a complete list of predicted earthquakes, (5) a complete list of unpredicted earthquakes. The VAN technique [Varotsos and Lazaridou, 1991; Varotsos et al., 1996] has not yet been stated as a testable hypothesis. It fails criteria (1) and (2) so it is not ready to be evaluated properly. Although telegrams were transmitted in advance of claimed successes, these telegrams did not fully specify the predicted events, and all of the published statistical evaluations involve many subjective ex post facto decisions. Lacking a statistically demonstrated relationship to earthquakes, a candidate prediction technique should satisfy several plausibility criteria, including: (1) a reasonable relationship between the location of the candidate precursor and that of the predicted earthquake, (2) some demonstration that the candidate precursory observations are related to stress, strain, or other quantities related to earthquakes, and (3) the existence of co-seismic as well as pre-seismic variations of the candidate precursor. The VAN technique meets none of these criteria.

  5. A note on evaluating VAN earthquake predictions

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-Akis; Melis, Nicos S.

    The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized strict statistical laws and ignore the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method is still shown to be a matter of a "chancy" association between precursors and earthquakes if we apply the same procedure proposed by Mulargia and Gasperini [1992] in evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, while magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.

  6. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach thus provides a new means of predicting long-period strong ground motion.
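    The core technical step behind the virtual earthquake approach, recovering an impulse response between station pairs from the ambient field, can be sketched as a stack of noise cross-correlations. This is a schematic reading only: the published method additionally corrects amplitudes and scales the responses to virtual sources on the fault, none of which is attempted here, and all names and parameters are assumptions.

    ```python
    import numpy as np

    def noise_greens_function(u_a, u_b, fs, n_win=100):
        """Estimate the inter-station impulse response (empirical Green's
        function) by stacking cross-correlations of ambient-noise windows."""
        win = len(u_a) // n_win
        acc = np.zeros(2 * win - 1)
        for k in range(n_win):
            a = u_a[k * win:(k + 1) * win]
            b = u_b[k * win:(k + 1) * win]
            acc += np.correlate(a - a.mean(), b - b.mean(), mode="full")
        lags = np.arange(-win + 1, win) / fs
        return lags, acc / n_win

    # Two synthetic noise records sharing a component delayed by 0.5 s
    rng = np.random.default_rng(1)
    s = rng.standard_normal(100_000)
    rec_a = s + 0.2 * rng.standard_normal(100_000)
    rec_b = np.roll(s, 50) + 0.2 * rng.standard_normal(100_000)
    lags, gf = noise_greens_function(rec_a, rec_b, fs=100.0)
    print(lags[np.argmax(gf)])   # ~ -0.5 s apparent travel time
    ```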

  7. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  8. Scoring annual earthquake predictions in China

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Jiang, Changsheng

    2012-02-01

    The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions that are of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these earthquake predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using the Poisson model, which is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes as the reference model, we evaluate the CEA predictions based on (1) a partial score that evaluates whether the alarmed regions are issued on the basis of information that differs from the reference model (knowledge of the average seismicity level) and (2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but the overall performance is close to that of the reference model.
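    A note on the "fair rule" mentioned above (the exact convention is an assumption here, based on the gambling-score construction described in entry 11 of the following list): if the reference model assigns probability p0 to the predicted outcome, a forecaster staking one reputation point receives (1 - p0)/p0 points on success and loses the stake on failure, so the expected gain under the reference model is p0·(1 - p0)/p0 - (1 - p0) = 0.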

  9. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, Susan E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: a moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  10. Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  11. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
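    A minimal sketch of the three comparisons, assuming the forecast is expressed as expected Poisson counts per space-time-magnitude bin (the binning and the use of scipy are assumptions of this sketch, not the paper's):

    ```python
    import numpy as np
    from scipy.stats import poisson

    def n_test(n_observed, n_predicted):
        """Test (i): two-sided Poisson probability of a total count as
        extreme as n_observed when n_predicted events are expected."""
        lo = poisson.cdf(n_observed, n_predicted)
        hi = poisson.sf(n_observed - 1, n_predicted)
        return min(1.0, 2 * min(lo, hi))

    def log_likelihood(rates, counts):
        """Test (ii) ingredient: joint Poisson log-likelihood of the
        observed bin counts under the forecast rates."""
        return np.sum(poisson.logpmf(counts, rates))

    def likelihood_ratio(rates_h1, rates_h0, counts):
        """Test (iii): hypothesis log-likelihood minus that of the null."""
        return log_likelihood(rates_h1, counts) - log_likelihood(rates_h0, counts)

    # Hypothetical 4-bin forecast evaluated against a uniform null
    counts = np.array([3, 0, 1, 0])
    print(n_test(counts.sum(), 2.5))
    print(likelihood_ratio(np.array([2.0, 0.2, 0.2, 0.1]),
                           np.full(4, 0.625), counts))
    ```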

  12. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  13. A Cooperative Test of the Load/Unload Response Ratio Proposed Method of Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Trotta, J. E.; Tullis, T. E.

    2004-12-01

    The Load/Unload Response Ratio (LURR) method is a proposed technique to predict earthquakes that was first put forward by Yin in 1984 (Yin, 1987). LURR is based on the idea that when a region is near failure, there is an increase in the rate of seismic activity during loading of the tidal cycle relative to the rate of seismic activity during unloading of the tidal cycle. Typically the numerator of the LURR ratio is the number, or the sum of some measure of the size (e.g. Benioff strain), of small earthquakes that occur during loading of the tidal cycle, whereas the denominator is the same as the numerator except it is calculated during unloading. The LURR method suggests that this ratio should increase in the months to a year preceding a large earthquake. Regions near failure have tectonic stresses nearly high enough for a large earthquake to occur, so it seems more likely that smaller earthquakes in the region would be triggered when the tidal stresses add to the tectonic ones. However, until recently even the most careful studies suggested that the effect of tidal stresses on earthquake occurrence is very small and difficult to detect. New studies have shown that there is a tidal triggering effect on shallow thrust faults in areas with strong tides from ocean loading (Tanaka et al., 2002; Cochran et al., 2004). We have been conducting an independent test of the LURR method, since there would be important scientific and social implications if the LURR method were proven to be a robust method of earthquake prediction. Smith and Sammis (2003) also undertook a similar study. Following both the parameters of Yin et al. (2000) and the somewhat different ones of Smith and Sammis (2003), we have repeated calculations of LURR for the Northridge and Loma Prieta earthquakes in California. Though we have followed both sets of parameters closely, we have been unable to reproduce either set of results. A general agreement was made at the recent ACES Workshop in China between research
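    A schematic version of the ratio, assuming "loading" means a positive tidal stress rate and using summed Benioff strain as the size measure (common choices in LURR studies, though the exact stress metric and phase definition vary between papers):

    ```python
    import numpy as np

    def lurr(event_times, magnitudes, tidal_stress_rate):
        """Load/Unload Response Ratio: summed Benioff strain of events
        occurring while tidal stress increases, divided by the same sum
        while it decreases. tidal_stress_rate(t) > 0 means loading."""
        rates = tidal_stress_rate(np.asarray(event_times))
        # Benioff strain ~ sqrt(energy), with log10 E = 1.5 M + 4.8
        sizes = np.sqrt(10 ** (1.5 * np.asarray(magnitudes) + 4.8))
        loading = sizes[rates > 0].sum()
        unloading = sizes[rates < 0].sum()
        return loading / unloading if unloading > 0 else np.inf

    # Toy example: M2 tidal constituent (12.42 h) and five small events
    period_h = 12.42
    stress_rate = lambda t: np.cos(2 * np.pi * t / period_h)
    times = np.array([1.0, 2.0, 7.5, 14.0, 20.0])   # hours, hypothetical
    mags = np.array([2.1, 2.4, 3.0, 2.2, 2.8])
    print(lurr(times, mags, stress_rate))
    ```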

  14. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    NASA Astrophysics Data System (ADS)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work done in the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed, and the cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced into the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of the maximum and minimum lateral stresses, in excellent agreement with the theoretical value, G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be the same as in all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained in the major international stress measurement work of the Hot Dry Rock Projects conducted in the USA, England and Germany are in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  15. Purposes and methods of scoring earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2010-12-01

    There are two purposes in studies of earthquake prediction or forecasting: one is to give a systematic estimation of earthquake risks in a particular region and period in order to advise governments and enterprises on reducing disasters; the other is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first purpose a complete score is necessary, while for the latter a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known model, is necessary. This study reviews different scoring methods for evaluating the performance of earthquake predictions and forecasts. In particular, the recently developed gambling scoring method shows its capacity for finding good points in an earthquake prediction algorithm or model that are not in a reference model, even if its overall performance is no better than that of the reference model.

  16. Intermediate-term earthquake prediction

    USGS Publications Warehouse

    Knopoff, L.

    1990-01-01

    The problems in predicting earthquakes have been attacked by phenomenological methods from pre-historic times to the present. The associations of presumed precursors with large earthquakes have often been remarked upon. The difficulty in identifying whether such correlations are due to chance coincidence or are real precursors is that usually one notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or is in fact absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.

  17. Prototype operational earthquake prediction system

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce, into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

  18. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward-predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
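    A hedged sketch of this kind of null-hypothesis calculation. Random assignment is reduced here to independent hits with probability equal to the alerted fraction of space-time, a simplification of the paper's scheme, and the alerted fraction below is hypothetical, so the output will not reproduce the quoted 2.87% or 53% figures.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def random_alarm_success(n_quakes, alerted_fraction, n_hits, n_trials=100_000):
        """Null hypothesis for an alarm-based test: each earthquake falls
        inside the alerted space-time with probability alerted_fraction,
        independently. Returns the fraction of trials with >= n_hits hits."""
        sims = rng.binomial(n_quakes, alerted_fraction, size=n_trials)
        return np.mean(sims >= n_hits)

    # Hypothetical: 40% of space-time alerted, 8 of 10 earthquakes predicted
    print(random_alarm_success(n_quakes=10, alerted_fraction=0.4, n_hits=8))
    ```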

  19. Earthquake predictions using seismic velocity ratios

    USGS Publications Warehouse

    Sherburne, R. W.

    1979-01-01

    Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions in reducing human and economic losses, and the value of long-range earthquake prediction to planning, is obvious. Less clear are the long-range economic and social impacts of earthquake prediction on a specific area. The general consensus among scientists and government officials, however, is that the quest for earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.

  20. Earthquake Prediction is Coming

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Describes (1) several methods used in earthquake research, including P:S velocity ratio studies and dilatancy models; and (2) techniques for gathering base-line data for prediction using seismographs, tiltmeters, laser beams, magnetic field changes, folklore, and animal behavior. The mysterious Palmdale (California) bulge is discussed. (CS)

  1. A test to evaluate the earthquake prediction algorithm, M8

    USGS Publications Warehouse

    Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.

    1992-01-01

    A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: (1) An earthquake prediction technique should be presented as a well-documented, logical algorithm that can be used by investigators without restrictions. (2) The algorithm should be coded in a common programming language and implementable on widely available computer systems. (3) A test of the earthquake prediction technique should involve future predictions with a black-box version of the algorithm in which potentially adjustable parameters are fixed in advance; the source of the input data must be defined, and ambiguities in these data must be resolved automatically by the algorithm. (4) At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post-predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm

  2. The nature of earthquake prediction

    USGS Publications Warehouse

    Lindh, A.G.

    1991-01-01

    Earthquake prediction is inherently statistical. Although some people continue to think of earthquake prediction as the specification of the time, place, and magnitude of a future earthquake, it has been clear for at least a decade that this is an unrealistic and unreasonable definition. The reality is that earthquake prediction starts from long-term forecasts of place and magnitude, with very approximate time constraints, and progresses, at least in principle, to a gradual narrowing of the time window as data and understanding permit. Primitive long-term forecasts are clearly possible at this time on a few well-characterized fault systems. Tightly focused monitoring experiments aimed at short-term prediction are already underway in Parkfield, California, and in the Tokai region of Japan; only time will tell how much progress will be possible.

  3. VAN method of short-term earthquake prediction shows promise

    NASA Astrophysics Data System (ADS)

    Uyeda, Seiya

    Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

  4. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
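    The histogram exercise can be sketched as follows: compute interevent times from a catalog (or blockquake record) and compare their empirical distribution with the exponential distribution implied by a Poisson process of the same mean rate. The catalog below is synthetic, and the summary statistic is a choice of this sketch rather than the course's.

    ```python
    import numpy as np

    def interevent_cdf_comparison(event_times):
        """Empirical CDF of interevent times vs. the exponential CDF of a
        Poisson process with the same mean rate."""
        dt = np.sort(np.diff(np.sort(event_times)))
        empirical = np.arange(1, len(dt) + 1) / len(dt)
        poissonian = 1.0 - np.exp(-dt / dt.mean())
        return dt, empirical, poissonian

    # Synthetic "catalog": 400 events with a 9-day mean interevent time
    times = np.cumsum(np.random.default_rng(3).exponential(9.0, 400))
    dt, emp, poi = interevent_cdf_comparison(times)
    print(np.max(np.abs(emp - poi)))   # Kolmogorov-Smirnov-style distance
    ```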

  5. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    contribution for the prediction of the number (and concentration) of induced landslides. This, for instance, partly explains why the Wenchuan 2008 earthquake triggered far more landslides than the Nepal 2015 earthquake. Moreover, according to our prediction the most severe earthquake-triggered landslide event would have been the Assam 1950 earthquake (India), followed by the 2008 Wenchuan earthquake. Regarding the overall performance of our prediction method it can be seen that the number of landslides is overestimated for a series of earthquakes, while the size of the affected area is often underestimated. Especially for older events the incompleteness of the published catalogues can partly explain the overestimation of the landslide numbers. The underestimation of the affected area however is real and must be attributed to particular remote effects of earthquakes.

  6. Four Examples of Short-Term and Imminent Prediction of Earthquakes

    NASA Astrophysics Data System (ADS)

    Zeng, Zuoxun; Liu, Genshen; Wu, Dabin; Sibgatulin, Victor

    2014-05-01

    We show here four examples of short-term and imminent prediction of earthquakes in China last year: the Nima earthquake (Ms5.2), the Minxian earthquake (Ms6.6), the Nantou earthquake (Ms6.7) and the Dujiangyan earthquake (Ms4.1). Imminent prediction of the Nima earthquake (Ms5.2): based on the comprehensive analysis of the prediction of Victor Sibgatulin using natural electromagnetic pulse anomalies, the prediction of Song Song and Song Kefu using observation of a precursory halo, and an observation of the locations of a degasification of the earth in Naqu, Tibet by Zeng Zuoxun himself, the first author made a prediction for an earthquake of around Ms 6 within 10 days in the area of the degasification point (31.5N, 89.0E) at 0:54 on May 8th, 2013. He supplied another degasification point (31N, 86E) for the epicenter prediction at 8:34 of the same day. At 18:54:30 on May 15th, 2013, an earthquake of Ms5.2 occurred in Nima County, Naqu, China. Imminent prediction of the Minxian earthquake (Ms6.6): at 7:45 on July 22nd, 2013, an earthquake of magnitude Ms6.6 occurred at the border between Minxian and Zhangxian of Dingxi City (34.5N, 104.2E), Gansu Province. We review the imminent prediction process and basis for this earthquake using the fingerprint method. Nine- or 15-channel anomalous component-time curves can be output from the SW monitor for earthquake precursors. These components include geomagnetism, geoelectricity, crustal stresses, resonance, and crustal inclination. When we compress the time axis, the output curves become different geometric images. The precursor images differ for earthquakes in different regions; alike or similar images correspond to earthquakes in a certain region. According to seven years of observation of the precursor images and their corresponding earthquakes, we usually obtain the fingerprint 6 days before the corresponding earthquake. Magnitude prediction requires comparison between the amplitudes of the fingerprints from the same

  7. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2. Laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.

    2012-02-01

    The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.
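    A schematic of the model comparison under the simplest reading: in a time-predictable model the next interval is proportional to the previous slip, in a slip-predictable model the next slip is proportional to the preceding interval, and the "fixed" models use running means. Units, the constant stressing rate, and the toy data are assumptions of this sketch.

    ```python
    import numpy as np

    def model_predictions(slips, intervals, stressing_rate=1.0):
        """Next-interval/next-slip predictions from four recurrence models
        for a series of stick-slip events (schematic, arbitrary units)."""
        time_pred = slips[:-1] / stressing_rate    # reload previous stress drop
        slip_pred = stressing_rate * intervals     # slip grows with interval
        fixed_interval = np.full(len(intervals), intervals.mean())
        fixed_slip = np.full(len(intervals), slips[1:].mean())
        return time_pred, slip_pred, fixed_interval, fixed_slip

    def rms_error(predicted, observed):
        return np.sqrt(np.mean((predicted - observed) ** 2))

    # Toy series with nearly constant slip and recurrence, as for repeaters
    rng = np.random.default_rng(4)
    slips = 1.0 + 0.1 * rng.standard_normal(50)
    intervals = 1.0 + 0.1 * rng.standard_normal(49)
    tp, sp, fi, fs = model_predictions(slips, intervals)
    print(rms_error(tp, intervals), rms_error(fi, intervals))
    ```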

  8. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  9. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  10. Stigma in science: the case of earthquake prediction.

    PubMed

    Joffe, Helene; Rossetto, Tiziana; Bradley, Caroline; O'Connor, Cliodhna

    2018-01-01

    This paper explores how earthquake scientists conceptualise earthquake prediction, particularly given the conviction of six earthquake scientists for manslaughter (subsequently overturned) on 22 October 2012 for having given inappropriate advice to the public prior to the L'Aquila earthquake of 6 April 2009. In the first study of its kind, semi-structured interviews were conducted with 17 earthquake scientists and the transcribed interviews were analysed thematically. The scientists primarily denigrated earthquake prediction, showing strong emotive responses and distancing themselves from earthquake 'prediction' in favour of 'forecasting'. Earthquake prediction was regarded as impossible and harmful. The stigmatisation of the subject is discussed in the light of research on boundary work and stigma in science. The evaluation reveals how mitigation becomes the more favoured endeavour, creating a normative environment that disadvantages those who continue to pursue earthquake prediction research. Recommendations are made for communication with the public on earthquake risk, with a focus on how scientists portray uncertainty. © 2018 The Author(s). Disasters © Overseas Development Institute, 2018.

  11. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
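    A minimal sketch of the discrete-case bookkeeping, assuming the fair-odds payoff implied by "a fair rule" (the function name and the binary-outcome simplification are mine; the continuous point-process extension is not attempted):

    ```python
    import numpy as np

    def gambling_score(outcomes, reference_probs, stake=1.0):
        """Reputation-point tally for binary predictions. The forecaster
        stakes `stake` points per prediction; the reference model (the
        house) pays fair odds, so a success on an event it deems unlikely
        earns more. Expected score under the reference model is zero."""
        outcomes = np.asarray(outcomes, dtype=bool)
        p0 = np.asarray(reference_probs, dtype=float)
        gains = np.where(outcomes, stake * (1.0 - p0) / p0, -stake)
        return gains.sum()

    # Three predictions judged against reference probabilities 0.1, 0.5, 0.02
    print(gambling_score([True, False, True], [0.1, 0.5, 0.02]))  # 9 - 1 + 49
    ```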

  12. Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikitake, T.

    1979-08-07

    The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area to be under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister will be empowered to take various actions which cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurrence. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster in June 1979. An extremely dense observation network has been constructed over the area.

  13. Earthquake Prediction in a Big Data World

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2016-12-01

    The digital revolution that started just about 15 years ago has already pushed the global information storage capacity beyond 5,000 exabytes (in optimally compressed bytes) per year. Open data in a Big Data World provides unprecedented opportunities for enhancing studies of the Earth System. However, it also opens wide avenues for deceptive associations in inter- and transdisciplinary data and for misleading predictions based on so-called "precursors". Earthquake prediction is not an easy task; it implies a delicate application of statistics. So far, none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Regretfully, in many cases of seismic hazard assessment (SHA), from term-less to time-dependent (probabilistic PSHA or deterministic DSHA), and of short-term earthquake forecasting (StEF), the claims of a high potential of the method are based on a flawed application of statistics and are therefore hardly suitable for communication to decision makers. Self-testing must be done in advance of claiming prediction of hazardous areas and/or times. The necessity and possibility of applying the simple tools of earthquake prediction strategies, in particular the Error Diagram, introduced by G.M. Molchan in the early 1990s, and the Seismic Roulette null hypothesis as a metric of the alerted space, is evident. The set of errors, i.e., the rates of failure and of the alerted space-time volume, can easily be compared to random guessing, which comparison permits evaluating the effectiveness of the SHA method and determining the optimal choice of parameters with regard to a given cost-benefit function. This and other information obtained in such simple testing may supply us with realistic estimates of the confidence and accuracy of SHA predictions and, if reliable but not necessarily perfect, with related recommendations on the level of risks for decision making in regard to engineering design, insurance
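    A sketch of the error-diagram tooling mentioned above: rank space-time cells by an alarm function and, for each threshold, record the miss rate against the alerted fraction, the pair Molchan's diagram plots; random guessing falls on the diagonal nu = 1 - tau. The grid, weights, and data below are hypothetical.

    ```python
    import numpy as np

    def molchan_curve(alarm_scores, event_cells, cell_weights):
        """Molchan error diagram: fraction of missed events (nu) against
        alerted fraction of space-time (tau) as the alarm threshold drops."""
        order = np.argsort(alarm_scores)[::-1]       # most alarming first
        tau = np.cumsum(cell_weights[order])
        nu = 1.0 - np.cumsum(np.isin(order, event_cells)) / len(event_cells)
        return tau, nu

    # Hypothetical 1000-cell space-time grid with 20 target earthquakes
    rng = np.random.default_rng(5)
    scores = rng.random(1000)
    events = rng.choice(1000, 20, replace=False)
    weights = np.full(1000, 1 / 1000)
    tau, nu = molchan_curve(scores, events, weights)
    print(tau[100], nu[100])   # alerted fraction and miss rate at one threshold
    ```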

  14. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

  15. Triggering Factor of Strong Earthquakes and Its Prediction Verification

    NASA Astrophysics Data System (ADS)

    Ren, Z. Q.; Ren, S. H.

    After 30 years' research, we have found that great earthquakes are triggered by the tide-generation force of the moon. It is not the tide-generation force of the classical viewpoint, but a non-classical one, which we call TGFR (Tide-Generation Forces' Resonance). TGFR strongly depends on the tide-generation force at the time of the strange astronomical points (SAP). The SAP mostly occur when the moon and another celestial body are arranged with the earth along a straight line (with the same apparent right ascension or a 180° difference); the other SAP are the turning points of the moon's motion relative to the earth. Moreover, TGFR has four different types of effective areas. Our study indicates that a majority of earthquakes are triggered by the rare superimposition of TGFR effective areas. In China, the great earthquakes in the plain area of Hebei Province, Taiwan, Yunnan Province and Sichuan Province are triggered by decompression TGFR; other earthquakes, in Gansu Province, Ningxia Province and northwest of Beijing, are triggered by compression TGFR. The great earthquakes in Japan, California and southeast Europe are also triggered by compression TGFR, while in other parts of the world, such as the Philippines, Central American countries and West Asia, great earthquakes are triggered by decompression TGFR. We have carried out experimental imminent prediction combining the TGFR method with other earthquake-impending signals, such as those suggested by Professor Li Junzhi. The success ratio is about 40% (from our forecast reports to the China Seismological Administration). Thus we could say that great earthquakes can be predicted (including imminent earthquake prediction). Key words: imminent prediction; triggering factor; TGFR (Tide-Generation Forces' Resonance); TGFR compression; TGFR compression zone; TGFR decompression; TGFR decompression zone

  16. Long-term predictability of regions and dates of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that dates of earthquakes with M>5.5 can be determined several months in advance of the event. The magnitude and region of an approaching earthquake can be specified within a month before the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity for the century time frame; this date analysis can be performed 15-20 years in advance. The data are verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods having different prediction horizons. Days of potential earthquakes with M5.5+ are determined using astronomical data: earthquakes occur on days of oppositions of Solar System planets (arranged in a single line), and the strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed by practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of minimum daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (RAMES method). The difference between predicted and actual dates is no more than one day. This indicator is registered 104 days before the earthquake, so it was called Harmonic 104, or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period, so this fact gives insight into the correlation between the anomalies of Earth orientation

  17. Earthquake prediction; new studies yield promising results

    USGS Publications Warehouse

    Robinson, R.

    1974-01-01

    On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.

  18. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolating its trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims. It suggests rules and recipes for adequate classification, comparison and optimization of earthquake predictions. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  19. Signals of ENPEMF Used in Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Hao, G.; Dong, H.; Zeng, Z.; Wu, G.; Zabrodin, S. M.

    2012-12-01

    The signals of the Earth's natural pulse electromagnetic field (ENPEMF) are a combination of the abnormal crustal magnetic field pulses affected by the earthquake, the induced field of the Earth's endogenous magnetic field, the induced magnetic field of the exogenous variation magnetic field, geomagnetic pulsation disturbance, and other energy coupling processes between the Sun and the Earth. As an instantaneous disturbance of the variation field of natural geomagnetism, ENPEMF can be used to predict earthquakes. This theory was introduced by A. A. Vorobyov, who expressed the hypothesis that pulses can arise not only in the atmosphere but within the Earth's crust due to processes of tectonic-to-electric energy conversion (Vorobyov, 1970; Vorobyov, 1979). The global field time scale of ENPEMF signals has specific stability. Although the wave curves may not overlap completely at different regions, the smoothed diurnal ENPEMF patterns always exhibit the same trend per month. This feature is a good reference for observing abnormalities of the Earth's natural magnetic field in a specific region. The frequencies of the ENPEMF signals generally lie in the kHz range, of which the 5-25 kHz band can be applied to monitor earthquakes. In Wuhan, the best observation frequency is 14.5 kHz. Two special devices are placed along the S-N and W-E directions. A dramatic variation of the pulse waveforms obtained from the instruments relative to the normal reference envelope diagram indicates a high possibility of an earthquake. The proposed ENPEMF-based earthquake detection method can improve the geodynamic monitoring effect and enrich earthquake prediction methods. We suggest that prospective further research concerns the exact source composition of ENPEMF signals, the distinction between noise and useful signals, and the effect of the Earth's gravity tide and solid tidal wave. This method may also provide a promising application in

  20. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    nucleation in these experiments is consistent with observations and theory of Dieterich and Kilgore (1996). Precursory strains can typically be detected after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically-based models of accelerating slip. Near failure, the time to failure t is approximately inversely proportional to the precursory slip rate V. Based on a least squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
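
    As a minimal illustration of the inverse relation between time to failure t and precursory slip rate V described above, the Python sketch below fits t = k/V by least squares in log space and reports the residual scatter in log t; the slip rates and times are invented for illustration, not data from these experiments.

      import numpy as np

      # Illustrative data only: precursory slip rate V (arbitrary units) and
      # observed times-to-failure t for several laboratory events.
      V = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
      t_obs = np.array([98.0, 52.0, 19.0, 11.0, 4.8, 2.1, 0.9])

      # Near failure t ~ k / V, i.e. log t = log k - log V. Fit log k by least
      # squares with the slope fixed at -1 (the inverse-proportionality assumption).
      log_k = np.mean(np.log10(t_obs) + np.log10(V))

      def predict_time_to_failure(v):
          """Predict time to failure from the current precursory slip rate v."""
          return 10.0 ** (log_k - np.log10(v))

      # Residual scatter in log t quantifies the prediction error,
      # cf. the standard deviation of ~0.14 quoted in the abstract.
      resid = np.log10(t_obs) - np.log10(predict_time_to_failure(V))
      print("sigma(log t) =", resid.std(ddof=1))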

  1. Geochemical challenge to earthquake prediction.

    PubMed Central

    Wakita, H

    1996-01-01

    The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented. PMID:11607665

  2. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not the common understanding in China when the M 7.9 Wenchuan earthquake struck Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in this densely populated region of the country and with the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue an imminent warning and evacuation orders. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save lives in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  3. Assessing the capability of numerical methods to predict earthquake ground motion: the Euroseistest verification and validation project

    NASA Astrophysics Data System (ADS)

    Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.

    2009-12-01

    During the last decades, an important effort has been dedicated to developing accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and the increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project, an ongoing international collaborative work organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and the USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, with sediments reaching about 400 m depth and a surface S-wave velocity of 200 m/s). The prime target is to simulate 8 local earthquakes with magnitudes from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close to each other and differ from the other predictions
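
    The sketch below is not the project's quantitative time-frequency goodness-of-fit criteria; it is a simplified stand-in combining two common ingredients, zero-lag waveform correlation and an envelope misfit via the analytic signal, applied to toy traces.

      import numpy as np
      from scipy.signal import hilbert

      def simple_gof(ref, syn):
          """Crude goodness-of-fit between a reference and a synthetic trace.

          Not the time-frequency criteria used by the project; just waveform
          cross-correlation plus an envelope misfit from the Hilbert transform.
          """
          ref = np.asarray(ref, float)
          syn = np.asarray(syn, float)
          # Zero-lag normalized cross-correlation (waveform/phase agreement).
          cc = np.dot(ref, syn) / (np.linalg.norm(ref) * np.linalg.norm(syn))
          # Envelope misfit (amplitude agreement).
          env_ref = np.abs(hilbert(ref))
          env_syn = np.abs(hilbert(syn))
          em = np.linalg.norm(env_syn - env_ref) / np.linalg.norm(env_ref)
          return cc, em

      # Toy usage: two slightly different damped sinusoids.
      t = np.arange(0, 20, 0.01)
      ref = np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.0 * t)
      syn = np.exp(-0.25 * t) * np.sin(2 * np.pi * 1.05 * t)
      print(simple_gof(ref, syn))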

  4. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
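
    The quoted significance levels can be reproduced approximately under a simple null hypothesis in which each target earthquake falls inside the alarmed volume independently, with probability equal to the alarm's space-time fraction; the binomial sketch below is an illustration of that logic, not the authors' exact measure-based test.

      from math import comb

      def alarm_significance(n_events, n_hits, tau):
          """Confidence that n_hits successes beat chance, given alarms
          occupying a fraction tau of the space-time volume considered.

          Null hypothesis: each target event independently falls inside an
          alarm with probability tau; p-value is the binomial tail P(X >= n_hits).
          """
          p_value = sum(
              comb(n_events, k) * tau**k * (1 - tau) ** (n_events - k)
              for k in range(n_hits, n_events + 1)
          )
          return 1.0 - p_value

      # Figures quoted in the abstract: M8 predicted all 5 M8.0+ events with
      # alarms covering 36% of the volume (significance ~0.994, "beyond 99%"),
      # and 10 of 19 M7.5+ events with 40% coverage (significance ~0.81).
      print(alarm_significance(5, 5, 0.36))
      print(alarm_significance(19, 10, 0.40))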

  5. 76 FR 69761 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ... DEPARTMENT OF THE INTERIOR U.S. Geological Survey National Earthquake Prediction Evaluation... 96-472, the National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1\\1/2\\-day meeting.... Geological Survey on proposed earthquake predictions, on the completeness and scientific validity of the...

  6. 76 FR 19123 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey, Interior. ACTION: Notice of meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council... proposed earthquake predictions, on the completeness and scientific validity of the available data related...

  7. The October 1992 Parkfield, California, earthquake prediction

    USGS Publications Warehouse

    Langbein, J.

    1992-01-01

    A magnitude 4.7 earthquake occurred near Parkfield, California, on October 20, 1992, at 05:28 UTC (October 19 at 10:28 p.m. local, or Pacific Daylight Time). This moderate shock, interpreted as a potential foreshock of a damaging earthquake on the San Andreas fault, triggered long-standing federal, state and local government plans to issue a public warning of an imminent magnitude 6 earthquake near Parkfield. Although the predicted earthquake did not take place, sophisticated suites of instruments deployed as part of the Parkfield Earthquake Prediction Experiment recorded valuable data associated with an unusual series of events. This article describes the geological aspects of these events, which occurred near Parkfield in October 1992. The accompanying article, an edited version of a press conference by Richard Andrews, the Director of the California Office of Emergency Services (OES), describes the governmental response to the prediction.

  8. On Earthquake Prediction in Japan

    PubMed Central

    UYEDA, Seiya

    2013-01-01

    Japan's National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author's view, are mainly interested in securing funds for seismology, on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision was further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with the private sector. PMID:24213204

  9. A numerical simulation strategy on occupant evacuation behaviors and casualty prediction in a building during earthquakes

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Yu, Xiaohui; Zhang, Yanjuan; Zhai, Changhai

    2018-01-01

    Casualty prediction in a building during earthquakes benefits the economic loss estimation performed in the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties during earthquakes, few current studies consider occupant movements in the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The simulation of occupant evacuation is verified against a recorded evacuation process from a school classroom in the real-life 2013 Ya'an earthquake in China. The occupant casualties in a building under earthquakes are evaluated by coupling the building collapse process simulated by the finite element method, the occupant evacuation simulation, and the casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake is provided to demonstrate the effect of occupant movements on casualty prediction.
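
    The paper's refined cellular automata model is not reproduced here; the following heavily simplified sketch only illustrates the basic CA idea of occupants stepping toward an exit on a grid, with made-up geometry and movement rules.

      import numpy as np

      # Grid cells: 0 = empty, 1 = occupant; one designated exit cell absorbs
      # occupants. Each time step, every occupant tries to move one cell
      # toward the exit (vertical move first, then horizontal).
      def step(grid, exit_pos):
          rows, cols = grid.shape
          new = grid.copy()
          occupants = list(zip(*np.where(grid == 1)))
          # Process occupants nearest the exit first to reduce conflicts.
          occupants.sort(key=lambda p: abs(p[0] - exit_pos[0]) + abs(p[1] - exit_pos[1]))
          for r, c in occupants:
              dr = int(np.sign(exit_pos[0] - r))
              dc = int(np.sign(exit_pos[1] - c))
              for tr, tc in ((r + dr, c), (r, c + dc)):
                  if (tr, tc) == exit_pos:
                      new[r, c] = 0          # occupant leaves the building
                      break
                  if 0 <= tr < rows and 0 <= tc < cols and new[tr, tc] == 0:
                      new[tr, tc], new[r, c] = 1, 0
                      break
          return new

      grid = np.zeros((6, 8), int)
      grid[1:4, 5:7] = 1                     # a cluster of occupants
      exit_pos = (5, 0)
      t = 0
      while (grid == 1).any():
          grid = step(grid, exit_pos)
          t += 1
      print("evacuation completed in", t, "steps")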

  10. Understanding earthquake from the granular physics point of view — Causes of earthquake, earthquake precursors and predictions

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    We treat the Earth's crust and mantle as large-scale discrete matter based on the principles of granular physics and existing experimental observations. The main outcomes are: a granular model of the structure and movement of the Earth's crust and mantle is established; the formation mechanism of the tectonic forces that cause earthquakes, and a model of propagation for precursory information, are proposed; the properties of seismic precursory information and its relevance to earthquake occurrence are illustrated, and the principle of ways to detect effective seismic precursors is elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena which were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Due to the discrete nature of the Earth's crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and a new understanding, different from the traditional seismological viewpoint, is obtained.

  11. Possibility of Earthquake-prediction by analyzing VLF signals

    NASA Astrophysics Data System (ADS)

    Ray, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta

    2016-07-01

    Prediction of seismic events is one of the most challenging jobs for the scientific community. The conventional way to predict earthquakes is to monitor crustal structure movements, though this method has not yet yielded satisfactory results and fails to give any short-term prediction. Recently, it has been noticed that prior to any seismic event a huge amount of energy is released, which may create disturbances in the lower part of the D-layer/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the waveguide formed by the lower ionosphere and the Earth's surface, this signal may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find out the correlations, if any, between VLF signal anomalies and seismic activities. We have carried out both case-by-case studies and a statistical analysis using a whole year of data. In both approaches we found that the night-time amplitude of VLF signals fluctuated anomalously three days before the seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards night a few days before major seismic events. We calculated the D-layer preparation time and D-layer disappearance time from the VLF signals and observed that both become anomalously high 1-2 days before seismic events. We also found strong evidence indicating that it may be possible to predict the location of the epicenters of earthquakes in the future by analyzing VLF signals along multiple propagation paths.
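
    A minimal sketch of one of the analyses described above, flagging days whose night-time VLF amplitude departs anomalously from the recent background; the trailing-window z-score rule and all values are assumptions for illustration, not the authors' criteria.

      import numpy as np

      def nighttime_anomalies(night_amp, window=30, k=2.0):
          """Flag days whose night-time VLF amplitude is anomalous.

          night_amp : daily mean night-time signal amplitude (1-D array).
          A day is flagged if it departs from the trailing `window`-day mean
          by more than k standard deviations (a simple stand-in criterion).
          """
          flags = []
          for i in range(window, len(night_amp)):
              past = night_amp[i - window:i]
              mu, sigma = past.mean(), past.std()
              if sigma > 0 and abs(night_amp[i] - mu) > k * sigma:
                  flags.append(i)
          return flags

      # Toy series: quiet background with a hypothetical pre-seismic dip on day 77.
      rng = np.random.default_rng(0)
      amp = 50 + rng.normal(0, 1, 100)
      amp[77] -= 8
      print(nighttime_anomalies(amp))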

  12. Material contrast does not predict earthquake rupture propagation direction

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    2005-01-01

    Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.

  13. Testing an Earthquake Prediction Algorithm: The 2016 New Zealand and Chile Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2017-05-01

    The 13 November 2016 M7.8 earthquake 54 km NNE of Amberley, New Zealand, and the 25 December 2016 M7.6 earthquake 42 km SW of Puerto Quellon, Chile, happened outside the area of the ongoing real-time global testing of the intermediate-term middle-range earthquake prediction algorithm M8, accepted in 1992 for the M7.5+ range. Naturally, over the past two decades, the level of registration of earthquakes worldwide has grown significantly and by now is sufficient for diagnosis of times of increased probability (TIPs) by the M8 algorithm over the entire territory of New Zealand and Southern Chile, down to below 40°S. The mid-2016 update of the M8 predictions determines TIPs in the additional circles of investigation (CIs) where the two earthquakes happened. Thus, after 50 semiannual updates in the real-time prediction mode, we (1) confirm the statistically approved high confidence of the M8-MSc predictions and (2) conclude that the territory of the Global Test of the algorithms M8 and MSc could be expanded, with an apparently necessary revision of the 1992 settings.

  14. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecast and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model for usual cases or the Omori-Utsu formula for the case of forecasting aftershocks, which gives the probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, like a gambler, starts with a certain number of reputation points and bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he performs an NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return that he gains from this bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No". In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when
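
    A short sketch of the scoring rule as described above: a correct "Yes" bet returns (1-p0)/p0 points and a losing bet costs the 1 point staked. The symmetric return p0/(1-p0) for correct "No" bets is our assumption, implied by the same fairness argument but not stated explicitly in the abstract.

      def gambling_score(bets, p0, outcomes):
          """Total reputation change for a sequence of yes/no forecasts.

          bets     : list of "yes", "no", or None (NA-prediction, no bet)
          p0       : reference-model probabilities that >= 1 event occurs
                     in each space-time-magnitude window
          outcomes : list of booleans, True if an event occurred

          A correct "yes" bet returns (1-p0)/p0 points, a correct "no" bet
          p0/(1-p0) points (assumed symmetric rule), and a wrong bet loses
          the 1 point staked, so the expected gain is zero if the
          reference model is true.
          """
          total = 0.0
          for bet, p, occurred in zip(bets, p0, outcomes):
              if bet is None:
                  continue
              if bet == "yes":
                  total += (1 - p) / p if occurred else -1.0
              else:  # bet == "no"
                  total += p / (1 - p) if not occurred else -1.0
          return total

      # A winning "yes" bet on a rare window (p0 = 0.1) gains 9 points;
      # losing the same bet would cost only the 1 point staked.
      print(gambling_score(["yes", "no", None], [0.1, 0.2, 0.3], [True, False, True]))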

  15. Gambling scores for earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang

    2010-04-01

    This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecast and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.

  16. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2011-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, have been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, the performance of the models has been affected very much. In addition, owing to a problem with the authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. We will also discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  17. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2013-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, have been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, the performance of the models has been affected very much. In addition, owing to a problem with the authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. We will also discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  18. Measurement of neutron and charged particle fluxes toward earthquake prediction

    NASA Astrophysics Data System (ADS)

    Maksudov, Asatulla U.; Zufarov, Mars A.

    2017-12-01

    In this paper, we describe a possible method for predicting earthquakes based on the simultaneous recording of the intensity of fluxes of neutrons and charged particles by detectors commonly used in nuclear physics. These low-energy particles originate from radioactive nuclear processes in the Earth's crust. Variations in the particle flux intensity can be a precursor of an earthquake. A description is given of an electronic installation that records the fluxes of charged particles in the radial direction, which are a possible response to the accumulated tectonic stresses in the Earth's crust. The obtained results showed an increase in the intensity of the fluxes for 10 or more hours before the occurrence of an earthquake. The previous version of the installation was able to indicate the possibility of an earthquake (Maksudov et al. in Instrum Exp Tech 58:130-131, 2015), but did not give information about the direction of the epicenter location. In this regard, the installation was modified by adding eight directional detectors. With the upgraded setup, we have received both predictive signals and signals determining the direction of the location of the forthcoming earthquake, starting 2-3 days before its origin.

  19. Earthquake prediction in seismogenic areas of the Iberian Peninsula based on computational intelligence

    NASA Astrophysics Data System (ADS)

    Morales-Esteban, A.; Martínez-Álvarez, F.; Reyes, J.

    2013-05-01

    A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only very few and very recent studies have applied them to earthquake prediction. Two kinds of predictions are provided in this study: a) the probability of an earthquake of magnitude equal to or larger than a preset threshold occurring within the next 7 days; b) the probability of an earthquake within a limited magnitude interval occurring during the next 7 days. First, the physical fundamentals related to earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the chosen configuration is justified. Then, the ANNs are trained in both areas: the Alborán Sea and the Western Azores-Gibraltar fault. Later, the ANNs are tested in both areas for a period of time immediately subsequent to the training period. Statistical tests are provided showing meaningful results. Finally, the ANNs are compared to other well-known classifiers, showing quantitatively and qualitatively better results. The authors expect that the results obtained will encourage researchers to conduct further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; application of ANNs to earthquake prediction; use of geophysical information modeling the soil behavior as the ANNs' input data; successful analysis of one region with large seismic activity.
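
    The paper's ANN configuration and input set are not reproduced here; the sketch below merely illustrates prediction task (a) as a binary classifier using scikit-learn's MLPClassifier on synthetic features (a b-value estimate, mean magnitude, and event count), all of which are illustrative assumptions.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(42)
      n = 500
      X = np.column_stack([
          rng.normal(1.0, 0.2, n),   # b-value estimate for the window (assumed feature)
          rng.normal(3.0, 0.5, n),   # mean magnitude of recent events (assumed feature)
          rng.poisson(5, n),         # number of events in the window (assumed feature)
      ])
      # Synthetic labels: higher rate and lower b-value raise the odds of a target event.
      logit = 0.5 * (X[:, 2] - 5) - 2.0 * (X[:, 0] - 1.0) - 1.0
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
      ann.fit(X[:400], y[:400])                      # train on the earlier period
      proba = ann.predict_proba(X[400:])[:, 1]       # test on the subsequent period
      print("mean predicted 7-day probability:", proba.mean())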

  20. Discussion of New Approaches to Medium-Short-Term Earthquake Forecast in Practice of The Earthquake Prediction in Yunnan

    NASA Astrophysics Data System (ADS)

    Hong, F.

    2017-12-01

    Reviewing years of practice of earthquake prediction in the Yunnan area, it is widely considered that fixed-point earthquake precursory anomalies mainly reflect field information. An increase in the amplitude and number of precursory anomalies can help to determine the origin time of earthquakes; however, it is difficult to obtain the spatial relevance between earthquakes and precursory anomalies, so we can hardly predict the spatial locations of earthquakes from precursory anomalies alone. Past practice has shown that seismic activity is superior to precursory anomalies in predicting earthquake locations, since increased seismicity was observed before 80% of M≥6.0 earthquakes in the Yunnan area. Mobile geomagnetic anomalies have also turned out to be helpful in predicting earthquake locations in recent years; for instance, the occurrence time and area forecast from the 1-year-scale geomagnetic anomalies before the M6.5 Ludian earthquake in 2014 were shorter and smaller than those derived from the seismicity enhancement region. Based on past work, the author believes that the medium-short-term earthquake forecast level, as well as the objective understanding of seismogenic mechanisms, could be substantially improved by densely deploying observation arrays and capturing the dynamic process of physical property changes in the enhancement regions of medium to small earthquakes.

  1. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries.

    PubMed

    Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
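
    A rough sketch of the feature-construction step described above: per-year event counts, moving-average rates, and a label indicating whether next year's maximum magnitude exceeds the regional median. The exact indicators and the M-IFN classifier from the paper are not reproduced; the toy catalog is synthetic.

      import numpy as np

      def yearly_features(years, mags, ma_window=3):
          """Build per-year features from an earthquake catalog of one region.

          Returns (feature_rows, labels): label is 1 if next year's maximum
          magnitude exceeds the region's median of yearly maxima. The
          moving-average event count echoes the 'new seismic indicators'
          idea; the paper's actual feature set differs.
          """
          yr = np.arange(years.min(), years.max() + 1)
          counts = np.array([(years == y).sum() for y in yr])
          ymax = np.array([mags[years == y].max() if (years == y).any() else 0.0
                           for y in yr])
          median_max = np.median(ymax)
          rows, labels = [], []
          for i in range(ma_window, len(yr) - 1):
              ma_count = counts[i - ma_window:i].mean()   # moving-average rate
              rows.append([ma_count, ymax[i], counts[i]])
              labels.append(int(ymax[i + 1] > median_max))
          return np.array(rows), np.array(labels)

      # Toy catalog: (year, magnitude) pairs for a single seismic region.
      rng = np.random.default_rng(1)
      years = rng.integers(1983, 2011, 400)
      mags = 3.0 + rng.exponential(0.5, 400)
      X, y = yearly_features(years, mags)
      print(X.shape, "positive rate:", y.mean())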

  2. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries

    PubMed Central

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006–2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year. PMID:26812351

  3. 75 FR 63854 - National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... DEPARTMENT OF THE INTERIOR Geological Survey National Earthquake Prediction Evaluation Council...: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council (NEPEC) will hold a 2... proposed earthquake predictions, on the completeness and scientific validity of the available data related...

  4. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  5. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.

    2016-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe, with 442 models under evaluation. The California testing center, started by SCEC on September 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as

  6. Earthquake prediction: the interaction of public policy and science.

    PubMed Central

    Jones, L M

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656

  7. Study on China’s Earthquake Prediction by Mathematical Analysis and its Application in Catastrophe Insurance

    NASA Astrophysics Data System (ADS)

    Jianjun, X.; Bingjie, Y.; Rongji, W.

    2018-03-01

    The purpose of this paper is to improve the level of catastrophe insurance. First, earthquake predictions were carried out using a mathematical analysis method. Second, the policies and models of foreign catastrophe insurance were compared. Third, suggestions on catastrophe insurance for China were discussed. Further study should pay more attention to earthquake prediction by introducing big data.

  8. The Ordered Network Structure and Prediction Summary for M≥7 Earthquakes in Xinjiang Region of China

    NASA Astrophysics Data System (ADS)

    Men, Ke-Pei; Zhao, Kai

    2014-12-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang of China and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Guided by the information forecasting theory of Wen-Bo Weng, based on previous research results, and combining ordered network structure analysis with complex network technology, we focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, and add new information to further optimize the network, hence constructing the 2D- and 3D-ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region during the past 210 years. On this basis, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction opinion is presented that the next two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.
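
    As a toy illustration of the commensurability idea only (not the paper's ordered-network algorithm), the sketch below scores candidate future years by how many past event dates they follow at one of the "orderly" intervals; the event years in the catalog are invented.

      # Orderly interval values quoted in the abstract (years); range values
      # such as 11-12 a are represented by their midpoints here.
      ORDERLY_YEARS = [30, 60, 90, 11.5, 42, 18.5, 5.5]
      past_events = [1902, 1906, 1911, 1914, 1920, 1931, 1944, 1949,
                     1955, 1974, 1985, 1996, 2003, 2008, 2014]   # toy catalog

      def score_candidate(year, events, periods, tol=1.0):
          """Count (event, period) pairs with |year - event - period| <= tol."""
          return sum(1 for e in events for p in periods if abs(year - e - p) <= tol)

      candidates = range(2015, 2031)
      ranked = sorted(candidates,
                      key=lambda y: -score_candidate(y, past_events, ORDERLY_YEARS))
      print(ranked[:5])   # most commensurable candidate years first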

  9. Using remote sensing to predict earthquake impacts

    NASA Astrophysics Data System (ADS)

    Fylaktos, Asimakis; Yfantidou, Anastasia

    2017-09-01

    Natural hazards like earthquakes can result in enormous property damage and human casualties in mountainous areas. Italy has always been exposed to numerous earthquakes, mostly concentrated in its central and southern regions. Last year, two seismic events occurred near Norcia (central Italy), which led to substantial loss of life and extensive damage to properties, infrastructure and cultural heritage. This research utilizes remote sensing products and GIS software to provide a database of information. We used both SAR images from Sentinel 1A and optical imagery from Landsat 8 to examine the differences in topography with the aid of the multi-temporal monitoring technique. This technique is suited to the observation of any surface deformation. The database is a cluster of information regarding the consequences of the earthquakes, grouped into categories such as property and infrastructure damage, regional rifts, cultivation loss, landslides and surface deformations, all mapped in GIS software. Relevant organizations can use these data to calculate the financial impact of such earthquakes. In the future, we can enrich this database with more regions and enhance the variety of its applications. For instance, we could predict the future impacts of any type of earthquake in several areas and design a preliminary emergency model for immediate evacuation and quick recovery response. It is important to know how the surface moves in particular geographical regions like Italy, Cyprus and Greece, where earthquakes are so frequent. We are not able to predict earthquakes, but using data from this research, we may assess the damage that could be caused in the future.

  10. Sociological aspects of earthquake prediction

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Henry Spall talked recently with Denis Mileti, who is in the Department of Sociology, Colorado State University, Fort Collins, Colo. Dr. Mileti is a sociologist involved with research programs that study the socioeconomic impact of earthquake prediction.

  11. Earthquake prediction in Japan and natural time analysis of seismicity

    NASA Astrophysics Data System (ADS)

    Uyeda, S.; Varotsos, P.

    2011-12-01

    The M9 super-giant earthquake with huge tsunami devastated East Japan on 11 March 2011, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was predicted neither short-term nor long-term. Seismologists were shocked because it was not even considered possible at the East Japan subduction zone. However, it was not the only unpredicted earthquake. In fact, throughout several decades of the National Earthquake Prediction Project, not a single earthquake was predicted. In reality, practically no effective research has been conducted on the most important, short-term prediction. This happened because the Japanese National Project was devoted to the construction of elaborate seismic networks, which was not the best way for short-term prediction. After the Kobe disaster, in order to parry the mounting criticism of their history of no success, they defiantly changed their policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which meant obtaining "more funding for no prediction research". The public were and are not informed about this change. Obviously, earthquake prediction will be possible only when reliable precursory phenomena are caught, and we have insisted this would most likely be done through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake has an adverse effect for us, although its epicenter was far offshore, out of the range of the operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, from ordinary seismological catalogs. In the frame of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced: the variance of natural time χ weighted by the normalised energy release at χ. In the case that Seismic Electric Signals
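
    A minimal sketch of the order parameter described above, assuming the standard natural time definitions: for N events, χ_k = k/N and weights p_k = Q_k/ΣQ, so that κ1 = Σ p_k χ_k² − (Σ p_k χ_k)². The toy catalog below is synthetic.

      import numpy as np

      def kappa1(energies):
          """Variance of natural time chi weighted by normalized energy release.

          For N events, the k-th event gets natural time chi_k = k/N and
          weight p_k = Q_k / sum(Q), Q_k being its energy (or moment).
          """
          q = np.asarray(energies, float)
          n = len(q)
          chi = np.arange(1, n + 1) / n
          p = q / q.sum()
          return np.sum(p * chi**2) - np.sum(p * chi) ** 2

      # Toy example: magnitudes from an exponential distribution, converted to
      # energy-like weights via Q ~ 10**(1.5 M) (standard moment-magnitude scaling).
      rng = np.random.default_rng(0)
      mags = 4.0 + rng.exponential(0.45, 200)
      print("kappa_1 =", kappa1(10 ** (1.5 * mags)))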

  12. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, the wave is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since the seismic wave propagates in the 3-D sphere of the earth, 2-D space modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory, and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and overprediction is alleviated when using the 3-D space model.
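
    A strongly simplified 1-D sketch of the general idea of combining forward wave-energy propagation with data assimilation. It uses a diffusion approximation in place of the paper's radiative transfer theory and nudges the model toward observations at station cells; every grid size, coefficient, and station position is an illustrative assumption, not the paper's setup.

      import numpy as np

      nx, dt, dx = 200, 0.1, 1.0
      D_true, D_model, absorption, nudge = 2.5, 2.0, 0.01, 0.5
      truth = np.zeros(nx); truth[50] = 1000.0   # "real" wavefield (unknown in practice)
      model = np.zeros(nx); model[50] = 800.0    # imperfect model initialization
      stations = [60, 80, 120]                   # grid indices with seismometers

      def diffuse(E, D):
          # Diffusion step with periodic boundaries (np.roll) plus absorption.
          lap = np.roll(E, 1) - 2 * E + np.roll(E, -1)
          return E + dt * (D * lap / dx**2 - absorption * E)

      for step in range(200):
          truth = diffuse(truth, D_true)
          model = diffuse(model, D_model)
          for i in stations:                     # assimilate observed energies
              model[i] = (1 - nudge) * model[i] + nudge * truth[i]

      print("peak location, truth vs model:", np.argmax(truth), np.argmax(model))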

  13. Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.

    2012-12-01

    Constructing source models of huge subduction earthquakes is a quite important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of plural strong motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that the SMGA size has an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be assumed from that empirical relation, given the seismic moment of an anticipated earthquake. Concerning the positioning of the SMGAs, information on fault segments is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained, and the Nojima, Suma, and Suwayama segments each have one SMGA in the SMGA modeling (e.g., Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which shows the applicability of the empirical scaling relationship for the SMGA. The positions of two SMGAs are in the Miyagi-Oki segment, and the other two SMGAs are in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all SMGAs correspond to the historical source areas of the 1930s. Those SMGAs do not overlap the huge-slip area in the shallower part of the source fault estimated from teleseismic data, long-period strong motion data, and/or geodetic data during the 2011 mainshock. This fact shows that the huge-slip area does not contribute to strong ground motion generation (10-0.1 s). The information on fault segments in the subduction zone, or
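
    A hedged sketch of using a moment-area scaling to set the total SMGA size for a scenario event, assuming self-similar scaling (area ∝ M0^(2/3)); the coefficient C below is a placeholder for illustration, not the published Irikura-Miyake regression value.

      # Placeholder coefficient, km^2 per (N*m)^(2/3); a real application would
      # take this from the published empirical regression.
      C = 5e-11

      def smga_area(m0_newton_meters):
          """Total strong-motion generation area (km^2) for seismic moment M0,
          under the assumed self-similar scaling A = C * M0**(2/3)."""
          return C * m0_newton_meters ** (2.0 / 3.0)

      # Scenario: Mw 8.0, with M0 = 10**(1.5*Mw + 9.1) N*m
      # (the standard moment-magnitude relation).
      mw = 8.0
      m0 = 10 ** (1.5 * mw + 9.1)
      print(f"M0 = {m0:.2e} N*m, SMGA area ~ {smga_area(m0):.0f} km^2")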

  14. Study of Earthquake Disaster Prediction System of Langfang city Based on GIS

    NASA Astrophysics Data System (ADS)

    Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei

    2017-07-01

    In this paper, according to the status of China's need to improve its earthquake disaster prevention ability, an implementation plan for an earthquake disaster prediction system for Langfang city based on GIS is put forward. Based on a GIS spatial database, coordinate transformation technology, GIS spatial analysis technology and PHP development technology, a seismic damage factor algorithm is used to predict the damage to the city under earthquake disasters of different intensities. The earthquake disaster prediction system of Langfang city adopts a B/S (browser/server) architecture and provides two-dimensional visualization of the damage degree and its spatial distribution, comprehensive query and analysis, and efficient auxiliary decision-making functions to determine seismically weak areas in the city and support rapid warning. The system has realized the transformation of the city's earthquake disaster reduction work from static planning to dynamic management, and improved the city's earthquake and disaster prevention capability.

  15. Risk and return: evaluating Reverse Tracing of Precursors earthquake predictions

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2010-09-01

    In 2003, the Reverse Tracing of Precursors (RTP) algorithm attracted the attention of seismologists and international news agencies when researchers claimed two successful predictions of large earthquakes. These researchers had begun applying RTP to seismicity in Japan, California, the eastern Mediterranean and Italy; they have since applied it to seismicity in the northern Pacific, Oregon and Nevada. RTP is a pattern recognition algorithm that uses earthquake catalogue data to declare alarms, and these alarms indicate that RTP expects a moderate to large earthquake in the following months. The spatial extent of alarms is highly variable and each alarm typically lasts 9 months, although the algorithm may extend alarms in time and space. We examined the record of alarms and outcomes since the prospective application of RTP began, and in this paper we report on the performance of RTP to date. To analyse these predictions, we used a recently developed approach based on a gambling score, and we used a simple reference model to estimate the prior probability of target earthquakes for each alarm. Formally, we believe that RTP investigators did not rigorously specify the first two 'successful' predictions in advance of the relevant earthquakes; because this issue is contentious, we consider analyses with and without these alarms. When we included contentious alarms, RTP predictions demonstrate statistically significant skill. Under a stricter interpretation, the predictions are marginally unsuccessful.

  16. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    NASA Astrophysics Data System (ADS)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-07-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled interplate records used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (significant and bracketed) as functions of earthquake magnitude, hypocentral distance, and site conditions (rock and soil sites), using data compiled from the intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled ground motion dataset consists of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (the latter to account for zero durations) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with both the magnitude and the hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the
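
    The fixed-effects part of such a duration model can be sketched as follows. The functional form, coefficients and synthetic data are illustrative assumptions, not the published model; a full NLME fit would add random effects per event.

```python
import numpy as np
from scipy.optimize import curve_fit

def duration_model(X, c1, c2, c3, c4):
    """Illustrative fixed-effects form: ln(D) = c1 + c2*M + c3*ln(R) + c4*S.

    M: moment magnitude, R: hypocentral distance (km), S: 0 for rock, 1 for
    soil.  A common shape for such models, not the published functional form.
    """
    M, R, S = X
    return c1 + c2 * M + c3 * np.log(R) + c4 * S

# Synthetic example data standing in for the intraplate records:
rng = np.random.default_rng(0)
M = rng.uniform(3.0, 6.5, 200)
R = rng.uniform(4.0, 1000.0, 200)
S = rng.integers(0, 2, 200).astype(float)
lnD = -1.0 + 0.6 * M + 0.3 * np.log(R) + 0.2 * S + rng.normal(0, 0.3, 200)

coeffs, _ = curve_fit(duration_model, (M, R, S), lnD)
print("fitted coefficients:", np.round(coeffs, 2))
```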

  17. Predictability of population displacement after the 2010 Haiti earthquake

    PubMed Central

    Lu, Xin; Bengtsson, Linus; Holme, Petter

    2012-01-01

    Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 d before to 341 d after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and the size of people's movement trajectories grew after the earthquake. These findings, in combination with the disorder present after the disaster, suggest that people's movements would have become less predictable. Instead, the predictability of people's trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake were highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city and the time of their return both followed a skewed, fat-tailed distribution. The findings suggest that population movements during disasters may be significantly more predictable than previously thought. PMID:22711804

  18. Earthquake Forecasting Methodology Catalogue - A collection and comparison of the state-of-the-art in earthquake forecasting and prediction methodologies

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2015-04-01

    Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The methods have been categorized into time-independent, time-dependent and hybrid methods, the last group comprising methods that use additional data beyond historical earthquake statistics. Such a categorization is necessary to distinguish purely statistical approaches, in which historical earthquake data are the only direct data source, from algorithms that incorporate further information, e.g. spatial data on fault distributions, or that incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods that can be applied, for example, in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g. the determination of the completeness magnitude or whether the modified Omori law was used. Target temporal scales are identified, as well as the publication history. All these aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state-of-the-art.

  19. Earthquake Hazard Analysis Methods: A Review

    NASA Astrophysics Data System (ADS)

    Sari, A. M.; Fakhrurrozi, A.

    2018-02-01

    Earthquakes are among the natural disasters with the most significant risks and damage. Countries such as China, Japan, and Indonesia are located on active continental plate boundaries and experience earthquakes more frequently than other countries. Several methods of earthquake hazard analysis have been applied, for example analyzing seismic zones and earthquake hazard micro-zonation, using the Neo-Deterministic Seismic Hazard Analysis (N-DSHA) method, and using remote sensing. In application, it is necessary to review the effectiveness of each technique in advance. Considering time efficiency and data accuracy, remote sensing is used as a reference to assess earthquake hazard accurately and quickly, as it requires only limited time and supports sound decision-making shortly after a disaster. Exposed areas and areas potentially vulnerable to earthquake hazards can be easily analyzed using remote sensing. Technological developments in remote sensing, such as GeoEye-1, provide added value and excellence in the use of remote sensing as one of the methods for assessing earthquake risk and damage. Furthermore, the use of this technique is expected to be considered in designing policies for disaster management in particular, and can reduce the risk of natural disasters such as earthquakes in Indonesia.

  20. Sun-earth environment study to understand earthquake prediction

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.

    2007-05-01

    Earthquake prediction may be possible by looking into the location of active sunspots before they hurl energy towards the Earth. Earth is a restless planet, and the restlessness occasionally turns deadly. Of all natural hazards, earthquakes are the most feared. For centuries scientists working in seismically active regions have noted premonitory signals. Changes in the thermosphere, ionosphere, atmosphere and hydrosphere are noted before changes in the geosphere. Historical records tell of changes of the water level in wells, of strange weather, of ground-hugging fog, and of unusual behaviour of animals (attributed to changes in the magnetic field of the Earth) that seem to feel the approach of a major earthquake. With the advent of modern science and technology, the understanding of these pre-earthquake signals has become strong enough to develop a methodology of earthquake prediction. A correlation between earth-directed coronal mass ejections (CMEs) from active sunspots and earthquakes has been developed as a precursor. Occasional changes in the local magnetic field and in planetary indices (Kp values) in the lower atmosphere are accompanied by the formation of haze and a reduction of moisture in the air. Large patches, often tens to hundreds of thousands of square kilometres in size, are seen in night-time infrared satellite images where the land surface temperature seems to fluctuate rapidly. Perturbations in the ionosphere at 90-120 km altitude have been observed before the occurrence of earthquakes. These changes affect the transmission of radio waves, and radio blackouts have been observed due to CMEs. Another heliophysical parameter, electron flux (Eflux), has been monitored before the occurrence of earthquakes. More than a hundred case studies show that the atmospheric temperature increases and then suddenly drops before the occurrence of an earthquake. These changes are being monitored using the Solar and Heliospheric Observatory (SOHO)

  1. The susceptibility analysis of landslides induced by earthquake in Aso volcanic area, Japan, scoping the prediction

    NASA Astrophysics Data System (ADS)

    Kubota, Tetsuya; Takeda, Tsuyoshi

    2017-04-01

    The Kumamoto earthquake of April 16th 2016 in Kumamoto prefecture, Kyushu Island, Japan, with an intense seismic magnitude of M7.3 (maximum acceleration = 1316 gal in the Aso volcanic region), yielded countless landslides and debris flows that caused serious damage and casualties in the area, especially in the Aso volcanic mountain range. Hence, field investigation and numerical slope stability analyses were conducted to delve into the characteristics, or the predictive factors, of the landslides induced by this earthquake. For the numerical analysis, the Finite Element Method (FEM) and CSSDP (Critical Slip Surface analysis by Dynamic Programming theory, based on the limit equilibrium method) were applied to landslide slopes for which seismic accelerations were observed. These numerical methods automatically detect the landslide slip surface with the minimum Fs (factor of safety). The results and information obtained through this investigation and analysis were integrated to predict landslide-susceptible slopes in volcanic areas under earthquakes and subsequent rainfall, considering geologic-geomorphologic features, geotechnical characteristics of the landslides, and vegetation effects on slope stability. Based on the FEM and CSSDP results, the landslides that occurred in this earthquake on mild-gradient slopes on ridges have a safety factor of approximately Fs = 2.20 without rainfall or earthquake (Fs >= 1.0 corresponds to a stable slope without landsliding) and Fs = 1.78-2.10 with the most severe rainfall in the past, while they have approximately Fs = 0.40 under the seismic forces of this earthquake (818 gal horizontal, -320 gal vertical, as observed in the earthquake). This indicates that only in the case of earthquakes are landslides in volcanic sediment apt to occur on mild-gradient slopes as well as on ridges with convex cross sections. Consequently, the following results are obtained. 1) At volcanic
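
    The contrast between the static and seismic factors of safety can be illustrated with a simple pseudostatic infinite-slope calculation. This closed form is only a stand-in sketch; the paper used FEM and CSSDP with automatic slip-surface search, and the soil parameters below are invented.

```python
import math

def pseudostatic_fs(c_kpa, phi_deg, unit_weight, depth, slope_deg, kh=0.0):
    """Pseudostatic factor of safety for an infinite slope (sketch).

    c_kpa: cohesion (kPa), phi_deg: friction angle (deg), unit_weight:
    kN/m^3, depth: slip-surface depth (m), slope_deg: slope angle (deg),
    kh: horizontal seismic coefficient (peak acceleration / g).
    """
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    w = unit_weight * depth  # overburden weight per unit area of slope
    driving = w * (math.sin(b) + kh * math.cos(b))
    resisting = c_kpa + w * (math.cos(b) - kh * math.sin(b)) * math.tan(phi)
    return resisting / driving

print(pseudostatic_fs(10.0, 30.0, 16.0, 2.0, 20.0, kh=0.0))   # static: ~2.5
print(pseudostatic_fs(10.0, 30.0, 16.0, 2.0, 20.0, kh=0.83))  # shaking: ~0.6
```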

  2. Spatio-Temporal Fluctuations of the Earthquake Magnitude Distribution: Robust Estimation and Predictive Power

    NASA Astrophysics Data System (ADS)

    Olsen, S.; Zaliapin, I.

    2008-12-01

    We establish positive correlation between the local spatio-temporal fluctuations of the earthquake magnitude distribution and the occurrence of regional earthquakes. In order to accomplish this goal, we develop a sequential Bayesian statistical estimation framework for the b-value (slope of the Gutenberg-Richter's exponential approximation to the observed magnitude distribution) and for the ratio a(t) between the earthquake intensities in two non-overlapping magnitude intervals. The time-dependent dynamics of these parameters is analyzed using Markov Chain Models (MCM). The main advantage of this approach over the traditional window-based estimation is its "soft" parameterization, which allows one to obtain stable results with realistically small samples. We furthermore discuss a statistical methodology for establishing lagged correlations between continuous and point processes. The developed methods are applied to the observed seismicity of California, Nevada, and Japan on different temporal and spatial scales. We report an oscillatory dynamics of the estimated parameters, and find that the detected oscillations are positively correlated with the occurrence of large regional earthquakes, as well as with small events with magnitudes as low as 2.5. The reported results have important implications for further development of earthquake prediction and seismic hazard assessment methods.
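
    A conjugate sequential update for the b-value along these lines can be sketched as follows, assuming exponentially distributed magnitude excesses above the completeness magnitude; this is one standard "soft" Bayesian estimator, not necessarily the exact framework of the abstract.

```python
import numpy as np

def update_beta_posterior(magnitudes, m_c, alpha0=1.0, rate0=1.0):
    """Sequential Bayesian update for beta = b * ln(10) (sketch).

    Above the completeness magnitude m_c, Gutenberg-Richter implies
    (m - m_c) ~ Exponential(beta), so a Gamma(alpha0, rate0) prior on beta
    is conjugate and stays stable for realistically small samples.
    """
    excess = np.asarray(magnitudes) - m_c
    alpha = alpha0 + len(excess)     # shape grows with sample size
    rate = rate0 + excess.sum()      # rate grows with total excess magnitude
    beta_mean = alpha / rate
    return beta_mean / np.log(10.0)  # posterior-mean b-value

mags = [2.6, 3.1, 2.8, 4.0, 2.7, 3.3, 2.9, 3.5]
print(f"b ~ {update_beta_posterior(mags, m_c=2.5):.2f}")
```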

  3. Testing for the 'predictability' of dynamically triggered earthquakes in The Geysers geothermal field

    NASA Astrophysics Data System (ADS)

    Aiken, Chastity; Meng, Xiaofeng; Hardebeck, Jeanne

    2018-03-01

    The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet, it is unknown whether dynamic triggering of earthquakes is 'predictable' or whether it could pose a hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote it, to determine whether or not triggering is in any way foreseeable. We find that, at present, triggering at The Geysers is not easily 'predictable' in terms of when and where, based on observable physical conditions. However, triggered earthquake magnitude correlates positively with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock-aftershock sequences. Thus, we may be able to 'predict' what size earthquakes to expect at The Geysers following a large distant earthquake.
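
    The peak dynamic stress imparted by a passing seismic wave is commonly approximated from peak ground velocity. A minimal sketch, with typical crustal values for shear modulus and shear-wave speed assumed rather than taken from the paper:

```python
def peak_dynamic_stress(pgv_m_s, shear_modulus_pa=3.0e10, vs_m_s=3500.0):
    """Plane-wave estimate of peak dynamic stress: sigma ~ G * PGV / Vs.

    A standard order-of-magnitude approximation in triggering studies; the
    defaults (G = 30 GPa, Vs = 3.5 km/s) are typical crustal assumptions.
    Returns stress in kPa.
    """
    return shear_modulus_pa * pgv_m_s / vs_m_s / 1000.0

# A teleseismic surface wave with PGV of 0.5 cm/s imparts roughly:
print(f"{peak_dynamic_stress(0.005):.0f} kPa")  # ~43 kPa
```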

  4. Earthquake prediction research at the Seismological Laboratory, California Institute of Technology

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Nevertheless, basic earthquake-related information has always been of consuming interest to the public and the media in this part of California (fig. 2). So it is not surprising that earthquake prediction continues to be a significant research program at the laboratory. Several of the current projects related to prediction are discussed below.

  5. Dim prospects for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Geller, Robert J.

    I was misquoted in C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].

  6. Turning the rumor of May 11, 2011 earthquake prediction In Rome, Italy, into an information day on earthquake hazard

    NASA Astrophysics Data System (ADS)

    Amato, A.; Cultrera, G.; Margheriti, L.; Nostro, C.; Selvaggi, G.; INGVterremoti Team

    2011-12-01

    A devastating earthquake had been predicted for May 11, 2011 in Rome. This prediction was never released officially by anyone, but it grew up on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions. Indeed, a planetary alignment was expected around May 11, 2011, and this contributed to the credibility of the earthquake prediction among the public. During the previous months, INGV was overwhelmed with requests for information about this supposed prediction from Roman inhabitants and tourists. Given the considerable media impact of this expected earthquake, INGV decided to organize an Open Day at its headquarters in Rome for people who wanted to learn more about Italian seismicity and earthquakes as natural phenomena. The Open Day was preceded by a press conference two days before, in which we talked about the prediction, presented the Open Day, and had a scientific discussion with journalists about earthquake prediction and, more generally, about the real problem of seismic risk in Italy. About 40 journalists from newspapers, local and national TV stations, press agencies and web news outlets attended the press conference, and hundreds of articles appeared in the following days, advertising the May 11 Open Day. The INGV opened to the public all day long (9am-9pm) with the following program: i) meetings with INGV researchers to discuss scientific issues; ii) visits to the seismic monitoring room, open 24/7 all year; iii) guided tours through interactive exhibitions on earthquakes and the Earth's deep structure; iv) lectures on general topics from the social impact of rumors to seismic risk reduction; v) 13 new videos on the channel YouTube.com/INGVterremoti to explain the earthquake process and give updates on various aspects of seismic monitoring in Italy; vi) distribution of books and brochures. Surprisingly, more than 3000 visitors came to visit INGV

  7. Earthquake prediction with electromagnetic phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayakawa, Masashi, E-mail: hayakawa@hi-seismo-em.jp; Advanced Wireless & Communications Research Center, UEC, Chofu Tokyo; Earthquake Analysis Laboratory, Information Systems Inc., 4-8-15, Minami-aoyama, Minato-ku, Tokyo, 107-0062

    Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, and it is considered one of the most important and urgent topics for humankind. If such short-term prediction is realized, casualties will be drastically reduced. Unlike conventional seismic measurement, we have proposed the use of electromagnetic phenomena as precursors to EQs, and extensive progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, ionospheric perturbation as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

  8. Gambling score in earthquake prediction analysis

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to evaluate the forecaster's skill with a new characteristic, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the parametrization of the GS and use the M8 prediction algorithm to illustrate the difficulties of the new approach in the analysis of prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts, and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events, because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.
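
    For comparison, the success-count significance mentioned at the end of the abstract can be sketched with a simple binomial tail, assuming each target event is captured by a random alarm with probability equal to the (rate-weighted) alarm fraction tau:

```python
from math import comb

def success_significance(n_success, n_targets, tau):
    """P-value for the number-of-successes statistic (sketch).

    Under the random-guessing null, each of n_targets events falls inside
    the alarm volume independently with probability tau (the space-time
    alarm fraction weighted by the target rate measure).  Returns
    P(successes >= n_success).
    """
    return sum(comb(n_targets, k) * tau**k * (1 - tau)**(n_targets - k)
               for k in range(n_success, n_targets + 1))

# e.g. 5 of 6 targets predicted with 30% of weighted space-time on alarm:
print(f"p = {success_significance(5, 6, 0.30):.4f}")  # ~0.011
```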

  9. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D(c) apply to faults in nature. However, scaling of D(c) is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
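
    The constitutive law itself is compact enough to state in a few lines. The sketch below uses the standard Dieterich form with typical laboratory parameter values (assumed, not taken from the paper); the steady-state limit illustrates the velocity weakening that underlies nucleation.

```python
import math

def rate_state_friction(v, theta, mu0=0.6, a=0.010, b=0.015, dc=1e-5, v0=1e-6):
    """Rate- and state-dependent friction coefficient (Dieterich form).

    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc).  Parameter values are typical
    laboratory numbers; a - b < 0 gives the velocity-weakening behaviour
    needed for earthquake nucleation.
    """
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

# At steady state theta = Dc / V, so friction weakens as V increases:
for v in (1e-6, 1e-4, 1e-2):
    print(f"V = {v:.0e} m/s -> mu_ss = {rate_state_friction(v, theta=1e-5 / v):.4f}")
```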

  10. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666

  11. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  12. Empirical prediction for travel distance of channelized rock avalanches in the Wenchuan earthquake area

    NASA Astrophysics Data System (ADS)

    Zhan, Weiwei; Fan, Xuanmei; Huang, Runqiu; Pei, Xiangjun; Xu, Qiang; Li, Weile

    2017-06-01

    Rock avalanches are extremely rapid, massive, flow-like movements of fragmented rock. The travel path of a rock avalanche may in some cases be confined by channels; these are referred to as channelized rock avalanches. Channelized rock avalanches are potentially dangerous because their travel distance is difficult to predict. In this study, we constructed a dataset with detailed characteristic parameters of 38 channelized rock avalanches triggered by the 2008 Wenchuan earthquake, using visual interpretation of remote sensing imagery, field investigation and literature review. Based on this dataset, we assessed the influence of different factors on the runout distance and developed prediction models for channelized rock avalanches using the multivariate regression method. The results suggest that the movement of channelized rock avalanches is dominated by landslide volume, total relief and channel gradient. The performance of both models was then tested on an independent validation dataset of eight rock avalanches induced by the 2008 Wenchuan earthquake, the Ms 7.0 Lushan earthquake and heavy rainfall in 2013, showing acceptably good prediction results. Therefore, the travel-distance prediction models for channelized rock avalanches constructed in this study are applicable and reliable for predicting the runout of similar rock avalanches in other regions.
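
    The regression idea can be sketched directly: a log-linear model of runout on the three controls the abstract identifies. The synthetic data below merely stand in for the 38-event Wenchuan dataset, and the fitted coefficients are not those of the paper.

```python
import numpy as np

def fit_runout_model(volume_m3, relief_m, channel_grad, runout_m):
    """Fit log-linear runout model: log L = b0 + b1*logV + b2*logH + b3*logJ.

    Mirrors the multivariate-regression idea in the abstract, with the
    predictor set matching the reported controls (volume, total relief,
    channel gradient); coefficients come from whatever data is supplied.
    """
    X = np.column_stack([np.ones(len(volume_m3)), np.log10(volume_m3),
                         np.log10(relief_m), np.log10(channel_grad)])
    y = np.log10(runout_m)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic stand-in data (the real dataset has 38 Wenchuan rock avalanches):
rng = np.random.default_rng(1)
V = 10 ** rng.uniform(5, 8, 38)          # volume, m^3
H = rng.uniform(300, 1500, 38)           # total relief, m
J = rng.uniform(0.1, 0.6, 38)            # channel gradient
L = 10 ** (0.1 + 0.25 * np.log10(V) + 0.3 * np.log10(H) - 0.2 * np.log10(J)
           + rng.normal(0, 0.05, 38))
print(np.round(fit_runout_model(V, H, J, L), 2))
```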

  13. The earthquake prediction experiment at Parkfield, California

    USGS Publications Warehouse

    Roeloffs, E.; Langbein, J.

    1994-01-01

    Since 1985, a focused earthquake prediction experiment has been in progress along the San Andreas fault near the town of Parkfield in central California. Parkfield has experienced six moderate earthquakes since 1857 at average intervals of 22 years, the most recent a magnitude 6 event in 1966. The probability of another moderate earthquake soon appears high, but studies assigning it a 95% chance of occurring before 1993 now appear to have been oversimplified. The identification of a Parkfield fault "segment" was initially based on geometric features in the surface trace of the San Andreas fault, but more recent microearthquake studies have demonstrated that those features do not extend to seismogenic depths. On the other hand, geodetic measurements are consistent with the existence of a "locked" patch on the fault beneath Parkfield that has presently accumulated a slip deficit equal to the slip in the 1966 earthquake. A magnitude 4.7 earthquake in October 1992 brought the Parkfield experiment to its highest level of alert, with a 72-hour public warning that there was a 37% chance of a magnitude 6 event. However, this warning proved to be a false alarm. Most data collected at Parkfield indicate that strain is accumulating at a constant rate on this part of the San Andreas fault, but some interesting departures from this behavior have been recorded. Here we outline the scientific arguments bearing on when the next Parkfield earthquake is likely to occur and summarize geophysical observations to date.
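
    The kind of time-dependent probability statement quoted above can be sketched with a renewal model. The lognormal form and the coefficient of variation below are assumptions; sensitivity to exactly such choices is part of why the 95%-by-1993 figure proved oversimplified.

```python
import numpy as np
from scipy import stats

def conditional_probability(t_elapsed, horizon, mean=22.0, cov=0.4):
    """Conditional probability of the next Parkfield-type event (sketch).

    Lognormal renewal model with the ~22-year mean interval quoted in the
    abstract; the coefficient of variation (cov) is an assumed value.
    Returns P(T <= t + horizon | T > t).
    """
    sigma = np.sqrt(np.log(1.0 + cov**2))   # lognormal shape from CoV
    mu = np.log(mean) - 0.5 * sigma**2      # scale such that E[T] = mean
    cdf = lambda t: stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))
    return (cdf(t_elapsed + horizon) - cdf(t_elapsed)) / (1.0 - cdf(t_elapsed))

# Probability in the next 5 years, given 27 years elapsed since 1966:
print(f"{conditional_probability(27.0, 5.0):.2f}")
```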

  14. The Parkfield earthquake prediction of October 1992; the emergency services response

    USGS Publications Warehouse

    Andrews, R.

    1992-01-01

    The science of earthquake prediction is interesting and worthy of support. In many respects the ultimate payoff of earthquake prediction or earthquake forecasting is how the information can be used to enhance public safety and public preparedness. This is a particularly important issue here in California where we have such a high level of seismic risk historically, and currently, as a consequence of activity in 1989 in the San Francisco Bay Area, in Humboldt County in April of this year (1992), and in southern California in the Landers-Big Bear area in late June of this year (1992). We are currently very concerned about the possibility of a major earthquake, one or more, happening close to one of our metropolitan areas. Within that context, the Parkfield experiment becomes very important. 

  15. Earthquake prediction rumors can help in building earthquake awareness: the case of May the 11th 2011 in Rome (Italy)

    NASA Astrophysics Data System (ADS)

    Amato, A.; Arcoraci, L.; Casarotti, E.; Cultrera, G.; Di Stefano, R.; Margheriti, L.; Nostro, C.; Selvaggi, G.; May-11 Team

    2012-04-01

    Banner headlines in an Italian newspaper on May 11, 2011 read: "Absence boom in offices: the urban legend in Rome becomes psychosis". This was the effect of a large-magnitude earthquake prediction in Rome for May 11, 2011. This prediction was never officially released, but it grew up on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this increased the credibility of the earthquake prediction. Given the echo of this earthquake prediction, INGV decided to organize on May 11 (the same day the earthquake was predicted to happen) an Open Day at its headquarters in Rome to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded by a press conference two days before, attended by about 40 journalists from newspapers, local and national TV stations, press agencies and web news magazines. Hundreds of articles appeared in the following two days, advertising the May 11 Open Day. On May 11 the INGV headquarters was peacefully invaded by over 3,000 visitors from 9am to 9pm: families, students, civil protection groups and many journalists. The program included conferences on a wide variety of subjects (from the social impact of rumors to seismic risk reduction) and distribution of books and brochures, in addition to several activities: meetings with INGV researchers to discuss scientific issues, visits to the seismic monitoring room (open 24/7 all year), and guided tours through interactive exhibitions on earthquakes and the Earth's deep structure. During the same day, thirteen new videos were also posted on our youtube/INGVterremoti channel to explain the earthquake process and hazard, and to provide periodic real-time updates on seismicity in Italy. On May 11 no large earthquake happened in Italy. The initiative, built up in a few weeks, had a very large feedback

  16. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes vary significantly in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions over the long and short term. The preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and future time. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  17. Landscape scale prediction of earthquake-induced landsliding based on seismological and geomorphological parameters.

    NASA Astrophysics Data System (ADS)

    Marc, O.; Hovius, N.; Meunier, P.; Rault, C.

    2017-12-01

    In tectonically active areas, earthquakes are an important trigger of landslides, with significant impact on hillslope and river evolution. However, detailed prediction of landslide locations and properties for a given earthquake remains difficult. In contrast, we propose landscape-scale analytical prediction of bulk coseismic landsliding, that is, total landslide area and volume (Marc et al., 2016a), as well as the regional area within which most landslides must distribute (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and could therefore be implemented in landscape evolution models aiming to engage with erosion dynamics at the scale of the seismic cycle. To assess the model we compiled and normalized estimates of total landslide volume, total landslide area and the regional area affected by landslides for 40, 17 and 83 earthquakes, respectively. We found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model predicts the landslide areas and associated volumes within a factor of 2 for about 70% of the cases in our databases. The prediction of the regional area affected does not require a calibration for landscape steepness and is within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment. Some of the outliers seem related to exceptional rock mass strength in the epicentral area, or to shaking duration and other seismic source complexities ignored by the model. Applications include prediction of the mass balance of earthquakes and

  18. Space-Time Earthquake Prediction: The Error Diagrams

    NASA Astrophysics Data System (ADS)

    Molchan, G.

    2010-08-01

    The quality of earthquake prediction is usually characterized by a two-dimensional diagram n versus τ, where n is the rate of failures-to-predict and τ is a characteristic of the space-time alarm. Unlike the time prediction case, the quantity τ is not defined uniquely. We start from the case in which τ is a vector with components related to the local alarm times and find a simple structure of the space-time diagram in terms of local time diagrams. This key result is used to analyze the usual 2-d error sets {n, τ_w} in which τ_w is a weighted mean of the τ components and w is the weight vector. We suggest a simple algorithm to find the (n, τ_w) representation of all random guess strategies, the set D, and prove that there exists a unique choice of w for which D degenerates to the diagonal n + τ_w = 1. We also find a confidence zone for D on the (n, τ_w) plane for the case when the local target rates are known only roughly. These facts are important for the correct interpretation of (n, τ_w) diagrams when we discuss the prediction capability of the data or of prediction methods.
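
    A minimal sketch of one point of the (n, τ_w) diagram, under the abstract's definitions (n = fraction of missed targets, τ_w = w-weighted mean of local alarm fractions); the example numbers are invented.

```python
def error_diagram_point(alarm_fraction_by_region, miss_flags, weights):
    """One (n, tau_w) point of the space-time error diagram (sketch).

    n is the fraction of target earthquakes missed; tau_w is the weighted
    mean of the local alarm-time fractions with weight vector w.  Random
    guessing traces the diagonal n + tau_w = 1 only for the unique choice
    of w identified in the abstract.
    """
    n = sum(miss_flags) / len(miss_flags)
    tau_w = sum(w * tau for w, tau in zip(weights, alarm_fraction_by_region))
    return n, tau_w

# Three regions with local alarm fractions and weights; 1 of 4 targets missed:
print(error_diagram_point([0.2, 0.4, 0.1], [0, 1, 0, 0], [0.5, 0.3, 0.2]))
```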

  19. The Fusion of Financial Analysis and Seismology: Statistical Methods from Financial Market Analysis Applied to Earthquake Data

    NASA Astrophysics Data System (ADS)

    Ohyanagi, S.; Dileonardo, C.

    2013-12-01

    As a natural phenomenon, earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends, were tested against earthquake data. Earthquakes above Mw 4.0 located off the shore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spreading of the candlesticks prior to the occurrence of events greater than Mw 6.0. The second is a convergence of the Bollinger Band, which implies a positive or negative change in the trend of earthquake occurrence. Both patterns match general models for the buildup and release of strain through the earthquake cycle, and agree with the characteristics of candlestick chart and Bollinger Band analysis. These results show a high correlation between patterns in earthquake occurrence and trend analysis by these two statistical methods, and support the appropriateness of applying these financial analysis methods to the analysis of earthquake occurrence.
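
    The Bollinger Band construction transplants directly to monthly event counts. The sketch below uses the conventional financial defaults (a 12-sample window and ±2 standard deviations), which are assumptions rather than the authors' settings.

```python
import numpy as np

def bollinger_bands(counts, window=12, k=2.0):
    """Rolling mean +/- k standard deviations over monthly event counts.

    A direct transplant of the financial indicator to seismicity, as in the
    abstract; window and k are the conventional defaults.  Returns
    (mid, upper, lower) arrays.
    """
    counts = np.asarray(counts, dtype=float)
    mid = np.array([counts[max(0, i - window + 1): i + 1].mean()
                    for i in range(len(counts))])
    sd = np.array([counts[max(0, i - window + 1): i + 1].std()
                   for i in range(len(counts))])
    return mid, mid + k * sd, mid - k * sd

monthly = [3, 5, 2, 4, 6, 3, 2, 8, 4, 3, 5, 2, 9, 12, 4, 3]
mid, hi, lo = bollinger_bands(monthly)
print(np.round(hi - lo, 1))  # band width: convergence signals a trend change
```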

  20. Can We Predict Earthquakes?

    ScienceCinema

    Johnson, Paul

    2018-01-16

    The only thing we know for sure about earthquakes is that one will happen again very soon. Earthquakes pose a vital yet puzzling set of research questions that have confounded scientists for decades, but new ways of looking at seismic information and innovative laboratory experiments are offering tantalizing clues to what triggers earthquakes — and when.

  1. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
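
    The overall shape of such a discrimination cascade can be caricatured in a few lines. This toy sketch invents its categories and thresholds; the operational procedure layers Flinn-Engdahl regions, slab interface models and focal-mechanism criteria that are not reproduced here.

```python
def gmpe_class(tectonic_regime, depth_km, mechanism=None):
    """Toy version of the discrimination cascade (sketch, not USGS logic).

    Regime is decided first; within subduction zones, depth and mechanism
    criteria then separate intraslab, interface and other source types.
    Categories and the 60 km threshold are illustrative assumptions.
    """
    if tectonic_regime == "stable_continental":
        return "stable-continental GMPE class"
    if tectonic_regime == "active_crustal":
        return "active-crustal GMPE class"
    if tectonic_regime == "subduction":
        if depth_km > 60:
            return "intraslab GMPE class"
        if mechanism == "thrust":
            return "interface GMPE class"
        return "upper-plate / outer-rise GMPE class"
    return "default global GMPE class"

print(gmpe_class("subduction", depth_km=35, mechanism="thrust"))
```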

  2. Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Kubo, H.

    2013-12-01

    Constructing the source model of huge subduction earthquakes is a very important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large-slip areas of heterogeneous slip models and of total SMGA size with seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA area to large-slip area with seismic moment. They concluded that this tendency would be caused by the difference in the period ranges of the source modeling analyses. In this paper, we try to construct a methodology for building source models for strong ground motion prediction for huge subduction earthquakes. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype of the source model for huge subduction earthquakes and validate it by strong ground motion modeling.

  3. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for the prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance. We started the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the official CSEP suite of tests for evaluating forecast performance. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. For the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of the rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance in magnitude forecasting. On the other hand, the observed spatial distribution is hardly consistent with most models when many earthquakes occur at one spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is improving the evaluation system for the 1-day class experiment so that forecasting and testing results are finished within one day. The first part of the special issue, titled Earthquake Forecast
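
    The flavor of the CSEP consistency tests can be illustrated with the number (N) test, which asks whether the observed event count is consistent with a Poisson forecast. A minimal sketch (the official suite also includes likelihood- and space-test variants not shown here):

```python
from scipy import stats

def n_test(forecast_rate, observed_count):
    """CSEP-style number test (sketch): is the observed event count
    consistent with a Poisson forecast?

    Returns the two one-sided Poisson tail probabilities; a forecast is
    commonly rejected if either tail probability is small.
    """
    p_under = stats.poisson.cdf(observed_count, forecast_rate)
    p_over = 1.0 - stats.poisson.cdf(observed_count - 1, forecast_rate)
    return p_under, p_over

print(n_test(forecast_rate=10.0, observed_count=16))  # too many events?
```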

  4. 78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... updates on past topics of discussion, including work with social and behavioral scientists on improving... probabilities; USGS collaborative work with the Collaboratory for Study of Earthquake Predictability (CSEP...

  5. Earthquake prediction in California using regression algorithms and cloud-based big data infrastructure

    NASA Astrophysics Data System (ADS)

    Asencio-Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez-Álvarez, F.

    2018-06-01

    Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results. In recent years, powerful computational techniques to analyze big data have emerged, making the analysis of massive datasets possible. These new methods make use of physical resources such as cloud-based architectures. California is known for being one of the regions with the highest seismic activity in the world, and many data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in the context of big data (a 1 GB catalog is used) in order to predict earthquake magnitude within the next seven days. The Apache Spark framework, the H2O library in the R language, and Amazon cloud infrastructure were used, reporting very promising results.
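
    A single-node sketch of the regression setup is shown below, with synthetic features standing in for catalog-derived seismicity indicators; the actual work ran ensembles of several algorithms on Spark/H2O over the full 1 GB catalog.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for catalog-derived features (b-value, rate changes,
# elapsed times, ...) and the target: maximum magnitude in the next 7 days.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 8))   # 8 hypothetical seismicity indicators
y = 3.0 + 0.4 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(0, 0.3, 5000)

# One member of what would be an ensemble of regressors:
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])
print("MAE:", np.abs(pred - y[4000:]).mean())
```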

  6. The Virtual Quake Earthquake Simulator: Earthquake Probability Statistics for the El Mayor-Cucapah Region and Evidence of Predictability in Simulated Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.

    2015-12-01

    We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric similar to those presented in Keilis-Borok (2002) and Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviors separately on independent fault sections; some fault sections exhibit activation-type triggering, while others are better characterized by quiescent-type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

  7. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    USGS Publications Warehouse

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
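
    The rigid-block analysis at the heart of the programs can be sketched in a few lines of numerical integration. This is a generic one-directional Newmark implementation driven by a synthetic record, not the report's Java code.

```python
import numpy as np

def newmark_displacement(accel_g, dt, ac_g):
    """Rigid-block Newmark sliding displacement (sketch).

    Integrates relative velocity whenever ground acceleration exceeds the
    critical (yield) acceleration ac_g; while sliding, the block
    decelerates at ac_g until it re-locks.  One-directional sliding, no
    decoupling: the simplest of the analyses in the report.
    """
    g = 9.81
    v, d = 0.0, 0.0
    for a in accel_g:
        if v > 0.0 or a > ac_g:        # block is sliding
            v += (a - ac_g) * g * dt   # relative acceleration
            v = max(v, 0.0)            # block cannot slide uphill
            d += v * dt
    return d                            # metres

# Synthetic 10 s record: 2 Hz sine with 0.4 g peak, yield acceleration 0.15 g
t = np.arange(0.0, 10.0, 0.005)
acc = 0.4 * np.sin(2.0 * np.pi * 2.0 * t)
print(f"Newmark displacement ~ {newmark_displacement(acc, 0.005, 0.15):.3f} m")
```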

  8. Statistical short-term earthquake prediction.

    PubMed

    Kagan, Y Y; Knopoff, L

    1987-06-19

    A statistical procedure, derived from a theoretical model of fracture growth, is used to identify a foreshock sequence while it is in progress. As a predictor, the procedure reduces the average uncertainty in the rate of occurrence for a future strong earthquake by a factor of more than 1000 when compared with the Poisson rate of occurrence. About one-third of all main shocks with local magnitude greater than or equal to 4.0 in central California can be predicted in this way, starting from a 7-year database that has a lower magnitude cutoff of 1.5. The time scale of such predictions is of the order of a few hours to a few days for foreshocks in the magnitude range from 2.0 to 5.0.

  9. Real-time earthquake monitoring using a search engine method.

    PubMed

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find the waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
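
    The idea can be sketched as a nearest-neighbour waveform lookup; the paper's contribution is the fast indexing that replaces the brute-force scan below by something thousands of times faster. Everything here (database, query, scoring) is synthetic.

```python
import numpy as np

def best_match(query, database):
    """Nearest-neighbour waveform lookup (sketch of the search-engine idea).

    Scans a database of template seismograms and returns the index whose
    normalized correlation with the query is highest.  In the real system,
    templates carry pre-computed source parameters (location, magnitude,
    focal mechanism) that are returned as the answer.
    """
    q = (query - query.mean()) / query.std()
    scores = [float(np.dot(q, (w - w.mean()) / w.std()) / len(q))
              for w in database]
    return int(np.argmax(scores)), max(scores)

rng = np.random.default_rng(3)
db = [rng.normal(size=512) for _ in range(100)]
query = db[42] + 0.2 * rng.normal(size=512)  # noisy copy of template 42
print(best_match(query, db))                  # -> (42, ~0.98)
```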

  10. A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.

    2015-12-01

    Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data is limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts show that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated for observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
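
    The trilinear geometrical-spreading term with a Moho-bounce plateau can be sketched as a piecewise function of distance. The hinge distances and slopes below are generic placeholders, not the calibrated Alberta coefficients.

```python
import numpy as np

def trilinear_spreading(r_km, b1=-1.3, b2=0.0, b3=-0.5, r1=70.0, r2=140.0):
    """Trilinear geometrical-spreading term, log10 G(R) (sketch).

    Direct decay out to r1, a flat Moho-bounce segment to r2, then slower
    decay beyond; slopes and hinge distances are illustrative placeholders.
    The piecewise form is continuous at both hinges.
    """
    r = np.asarray(r_km, dtype=float)
    return np.where(
        r <= r1, b1 * np.log10(r),
        np.where(r <= r2,
                 b1 * np.log10(r1) + b2 * np.log10(r / r1),
                 b1 * np.log10(r1) + b2 * np.log10(r2 / r1)
                 + b3 * np.log10(r / r2)))

for r in (10, 70, 100, 200):
    print(r, round(float(trilinear_spreading(r)), 2))
```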

  11. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    NASA Astrophysics Data System (ADS)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and provide a statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research contributes to improving seismic risk management by evaluating seismic hazard in North-East Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach, using a uniform earthquake distribution over the source area and a given magnitude range. This study aims to extend the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters from a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
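
    The probabilistic core of the method, the hazard curve λ(PGA > x), can be sketched by summing exceedance contributions over scenario sources with lognormal ground-motion variability. The source rates, medians and sigmas below are invented for illustration; a real PSHA integrates over the full discretized magnitude-distance source model.

```python
import numpy as np
from scipy import stats

def annual_exceedance(pga_levels, sources):
    """Cornell-style hazard curve (sketch): annual rate lambda(PGA > x).

    Each source is (annual_rate, median_pga_g, sigma_ln).  Integration over
    magnitude-distance distributions is collapsed into one scenario per
    source with lognormal ground-motion variability.
    """
    lam = np.zeros(len(pga_levels))
    for rate, median, sigma in sources:
        p_exceed = 1.0 - stats.norm.cdf(np.log(pga_levels), np.log(median), sigma)
        lam += rate * p_exceed
    return lam

levels = np.array([0.05, 0.1, 0.2, 0.4])
srcs = [(0.02, 0.15, 0.6), (0.005, 0.30, 0.6)]  # hypothetical scenarios
lam = annual_exceedance(levels, srcs)
print(1.0 - np.exp(-lam * 50))                   # 50-year exceedance prob.
```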

  12. Prediction of Earthquakes by Lunar Cycles

    NASA Astrophysics Data System (ADS)

    Rodriguez, G.

    2007-05-01

    Prediction of Earthquakes by Lunar Cycles. Author: Guillermo Rodriguez Rodriguez. Affiliation: geophysicist and astrophysicist, retired. I have presented this idea at many meetings (EGS, UGS, IUGG 1995, going back to 1980 and 1982-83, as well as AGU 2002 in Washington and 2003 in Nice). I work with three levels of approximation in time. First, earthquakes tend to happen on the same day of the year every 18 or 19 years (the Saros cycle), sometimes in the same place and sometimes somewhere very far away; at other times of the year the cycle can be 14, 26 or 32 years, or multiples of 18.61 years, especially 55, 93, 150, 224 or 300 years. This gives the day of the year. Second, within the cycle of one lunation (days from the date of the new moon), the great earthquakes happen at different intervals of days in successive lunations (approximately one month apart), as can be seen in the enclosed graphic. This gives the day of the month. Third, I have found that about every 28 days, at approximately the same hour and minute, the same longitude and the same latitude repeat in all earthquakes, including the little ones. This is very important because we could then propose precautions as simple as waiting in the street or in squares. Sometimes the cycles can be longer or shorter; this is my particular way of doing science. As a consequence of the first and second principles, we can look for correlations between years separated by cycles of the first type, for example 1984 and 2002 or 2003 and consecutive years, including 2007. For 30 years I have examined the dates. I grasp the pattern intuitively, but I have not yet been able to cast it in a scientific formalism.

  13. New predictive equations for Arias intensity from crustal earthquakes in New Zealand

    NASA Astrophysics Data System (ADS)

    Stafford, Peter J.; Berrill, John B.; Pettinga, Jarg R.

    2009-01-01

    Arias Intensity (Arias, MIT Press, Cambridge MA, pp 438-483, 1970) is an important measure of the strength of a ground motion, as it is able to simultaneously reflect multiple characteristics of the motion in question. Recently, the effectiveness of Arias Intensity as a predictor of the likelihood of damage to short-period structures has been demonstrated, reinforcing the utility of Arias Intensity for use in both structural and geotechnical applications. In light of this utility, Arias Intensity has begun to be considered as a ground-motion measure suitable for use in probabilistic seismic hazard analysis (PSHA) and earthquake loss estimation. It is therefore timely to develop predictive equations for this ground-motion measure. In this study, a suite of four predictive equations, each using a different functional form, is derived for the prediction of Arias Intensity from crustal earthquakes in New Zealand. The provision of a suite of models is included to allow for epistemic uncertainty to be considered within a PSHA framework. Coefficients are presented for four different horizontal-component definitions for each of the four models. The ground-motion dataset from which the equations are derived includes records from New Zealand crustal earthquakes as well as near-field records from worldwide crustal earthquakes. The predictive equations may be used to estimate Arias Intensity for moment magnitudes between 5.1 and 7.5 and for distances (both rjb and rrup) up to 300 km.

  14. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2015-12-01

    Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being

  15. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  16. Predicting Posttraumatic Stress Symptom Prevalence and Local Distribution after an Earthquake with Scarce Data.

    PubMed

    Dussaillant, Francisca; Apablaza, Mauricio

    2017-08-01

    After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict where the populations most in need are located. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake and to test whether there are algorithms that require few input data and are still reasonably predictive. A rich database of PTS symptoms, collected after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R2 for the most liberal specifications (in terms of numbers of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When only including peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form, predictive capacity was still good (adjusted R2 from 0.59 to 0.67 were obtained). Information about local poverty, household damage, and PGA can be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake. This can help improve the assignment of mental health personnel to the affected localities. Dussaillant F, Apablaza M. Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.
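
    A minimal sketch of the parsimonious specification described above, regressing a PTS-symptom outcome on PGA, poverty rate, and household damage with linear and quadratic terms; all data and coefficients are synthetic placeholders, not the Chilean survey data.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      pga = rng.uniform(0.05, 0.8, n)      # peak ground acceleration (g)
      poverty = rng.uniform(0.0, 0.4, n)   # local poverty rate
      damage = rng.uniform(0.0, 0.6, n)    # fraction of damaged households

      # Hypothetical data-generating process, for illustration only:
      prevalence = (0.1 + 0.3 * pga + 0.4 * poverty + 0.5 * damage
                    - 0.2 * damage ** 2 + rng.normal(0.0, 0.03, n))

      # Design matrix with linear and quadratic terms, fit by least squares.
      X = np.column_stack([np.ones(n), pga, pga ** 2, poverty, poverty ** 2,
                           damage, damage ** 2])
      beta, *_ = np.linalg.lstsq(X, prevalence, rcond=None)
      pred = X @ beta
      r2 = 1.0 - np.sum((prevalence - pred) ** 2) / np.sum(
          (prevalence - prevalence.mean()) ** 2)
      print("R2:", round(float(r2), 3))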

  17. Real-time earthquake monitoring using a search engine method

    PubMed Central

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-01-01

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake’s parameters in <1 s after receiving the long-period surface wave data. PMID:25472861
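
    A toy illustration of the search-engine idea: precompute a database of candidate waveforms and return the best-correlated entry for an observed record. The real system replaces this brute-force scan with a fast approximate search over synthetic seismograms; everything below is a synthetic stand-in.

      import numpy as np

      rng = np.random.default_rng(1)
      n_db, n_samp = 10000, 256
      # Stand-in database of waveforms, one row per candidate
      # (location, depth, focal mechanism) combination:
      database = rng.normal(size=(n_db, n_samp))
      database /= np.linalg.norm(database, axis=1, keepdims=True)
      candidates = rng.uniform(size=(n_db, 3))   # stand-in source parameters

      # A noisy "observed" record that should match entry 1234:
      observed = database[1234] + 0.1 * rng.normal(size=n_samp)
      observed /= np.linalg.norm(observed)

      # Correlation-based similarity; the best match indexes the source.
      scores = database @ observed
      best = int(np.argmax(scores))
      print(best, round(float(scores[best]), 3), candidates[best])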

  18. Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults.

    PubMed

    McGuire, Jeffrey J; Boettcher, Margaret S; Jordan, Thomas H

    2005-03-24

    East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion.

  19. Induced earthquake during the 2016 Kumamoto earthquake (Mw7.0): Importance of real-time shake monitoring for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2016-12-01

    The sequence of the 2016 Kumamoto earthquakes (Mw6.2 on April 14, Mw7.0 on April 16, and many aftershocks) caused devastating damage in Kumamoto and Oita prefectures, Japan. During the Mw7.0 event, just after the direct S waves passed central Oita, another M6-class event occurred there, more than 80 km away from the Mw7.0 event. The M6 event is interpreted as an induced earthquake, but it brought stronger shaking to central Oita than the Mw7.0 event did. We discuss this induced earthquake from the viewpoint of Earthquake Early Warning (EEW). In terms of ground shaking such as PGA and PGV at central Oita, the Mw7.0 event produced much smaller amplitudes than the M6 induced earthquake (for example, a factor of 8 smaller in PGA at station OIT009), so the two events are easy to discriminate. However, the PGD of the Mw7.0 event is larger than that of the induced earthquake, and it appears just before the occurrence of the induced earthquake. It is quite difficult to recognize the induced earthquake from displacement waveforms alone, because the displacement is strongly contaminated by that of the preceding Mw7.0 event. In many EEW methods (including the current JMA EEW system), magnitude is used to predict ground shaking through a Ground Motion Prediction Equation (GMPE), and the magnitude is often estimated from displacement. However, a displacement-based magnitude is not necessarily the best quantity for predicting ground shaking such as PGA and PGV. In the case of the induced earthquake during the Kumamoto sequence, a displacement magnitude could not be estimated because of the strong contamination; indeed, the JMA EEW system did not recognize the induced earthquake. One of the important lessons learned from eight years of EEW operation concerns multiple simultaneous earthquakes, such as the aftershocks of the 2011 Mw9.0 Tohoku earthquake. Based on this lesson, we have proposed enhanced real-time monitoring of ground shaking itself instead of rapid estimation of

  20. A Method for Estimation of Death Tolls in Disastrous Earthquake

    NASA Astrophysics Data System (ADS)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    whether the districts are more urbanized or not. As far as existing research is concerned, no good and reliable relationship between mortality and the characteristics of ground motions has been established. We propose the concept of Equal Population Gaps to resolve the influence of urbanization on mortality in rural and urban districts and to decide the weighting function for each district. The relationship between the PGA Index and mortality determined in this study can be expressed as M = 28.9 / [1 + exp(1.67 - 0.0029 × PGA)], where M is the mortality in % and PGA is the PGA Index in gal. The corresponding curve matches the data reasonably well, with R2 = 0.91. We carry out the estimation for districts of different scales to verify the feasibility of the method. The PGA-Index-based mortality is particularly useful in real-time applications for death-toll prediction and assessment, a piece of information most critical for post-earthquake emergency response operations.
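
    The fitted curve above transcribes directly into code; the helper below (its name is ours) evaluates the predicted mortality for a few PGA Index values.

      import numpy as np

      def mortality_percent(pga_gal):
          # M = 28.9 / (1 + exp(1.67 - 0.0029 * PGA)), PGA Index in gal,
          # as fitted in the study (R2 = 0.91).
          pga_gal = np.asarray(pga_gal, dtype=float)
          return 28.9 / (1.0 + np.exp(1.67 - 0.0029 * pga_gal))

      # Predicted mortality (%) at PGA Index values of 200, 400 and 800 gal:
      print(mortality_percent([200, 400, 800]))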

  1. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process of strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model through computation of the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.

  2. Magnitude Estimation for Large Earthquakes from Borehole Recordings

    NASA Astrophysics Data System (ADS)

    Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.

    2012-12-01

    We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems based on strong ground motion prediction equations (GMPEs) in Japan. This method incorporates borehole strong motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV), we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination. Predicted magnitudes from PGA values (Mpga) and predicted magnitudes from PGV values (Mpgv) were defined. Mpga and Mpgv strongly correlate with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga when compared with the estimated magnitudes, and thus provides a more accurate early assessment of earthquake magnitude. We tested this new method by estimating the magnitude of the 2011 Tohoku earthquake and present the results of this estimation. PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that the incorporation of borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation and thus improves the performance of earthquake and tsunami early warning systems. (Figure: moment magnitude versus predicted magnitudes Mpga and Mpgv.)
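
    A minimal sketch of the inversion step behind Mpgv: solve a generic log-linear PGV prediction equation for magnitude at each station and average. The coefficients and observations are hypothetical placeholders, not the GMPE derived from the KiK-net data.

      import numpy as np

      # Hypothetical log-linear GMPE: log10(PGV) = c0 + c1*M + c2*log10(R).
      c0, c1, c2 = -4.5, 1.0, -1.4

      def mpgv(pgv_cm_s, r_km):
          # Invert the GMPE for magnitude at one station.
          return (np.log10(pgv_cm_s) - c0 - c2 * np.log10(r_km)) / c1

      # Hypothetical PGV observations (cm/s) at three borehole stations:
      pgv = np.array([3.2, 1.1, 0.6])
      r = np.array([60.0, 150.0, 260.0])   # hypocentral distances (km)
      print("Mpgv estimate:", round(float(np.mean(mpgv(pgv, r))), 2))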

  3. Feasibility study of short-term earthquake prediction using ionospheric anomalies immediately before large earthquakes

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.

    2017-12-01

    We showed that positive and negative electron density anomalies emerge above the fault immediately before it ruptures, 40/20/10 minutes before Mw9/8/7 earthquakes (Heki, 2011 GRL; Heki and Enomoto, 2013 JGR; He and Heki 2017 JGR). These signals are stronger for earthquakes with larger Mw and under higher background vertical TEC (total electron content) (Heki and Enomoto, 2015 JGR). The epicenter and the positive and negative anomalies align along the local geomagnetic field (He and Heki, 2016 GRL), suggesting that electric fields within the ionosphere are responsible for producing the anomalies (Kuo et al., 2014 JGR; Kelley et al., 2017 JGR). Here we consider the next Nankai Trough earthquake, which may occur within a few tens of years in Southwest Japan, and discuss whether we could recognize its preseismic signatures in TEC by real-time GNSS observations. During high geomagnetic activity, large-scale traveling ionospheric disturbances (LSTID) often propagate from the auroral ovals toward mid-latitude regions and leave signatures similar to preseismic anomalies. This is a main obstacle to using preseismic TEC changes for practical short-term earthquake prediction. In this presentation, we show that the same anomalies appeared 40 minutes before the mainshock above northern Australia, the geomagnetically conjugate point of the 2011 Tohoku-oki earthquake epicenter. This not only demonstrates that electric fields play a role in producing the preseismic TEC anomalies, but also offers a possibility to discriminate preseismic anomalies from those caused by LSTID. By monitoring TEC in the conjugate areas in the two hemispheres, we can recognize anomalies with simultaneous onset as those caused by within-ionosphere electric fields (e.g. preseismic anomalies, night-time MSTID) and anomalies without simultaneous onset as gravity-wave-origin disturbances (e.g. LSTID, daytime MSTID).

  4. Thermal Infrared Anomalies of Several Strong Earthquakes

    PubMed Central

    Wei, Congxin; Guo, Xiao; Qin, Manzhong

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors for earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes of magnitude Ms 7.0 and above, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall evolution of the anomalies comprises two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of “time-frequency relative power spectrum.” (2) Each case exhibits distinct characteristic periods and magnitudes of anomalous thermal radiation. (3) The thermal radiation anomalies are closely related to geological structure. (4) The anomalies have distinctive durations, spatial ranges, and morphologies. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting. PMID:24222728

  5. Thermal infrared anomalies of several strong earthquakes.

    PubMed

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors for earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes of magnitude Ms 7.0 and above, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall evolution of the anomalies comprises two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of "time-frequency relative power spectrum." (2) Each case exhibits distinct characteristic periods and magnitudes of anomalous thermal radiation. (3) The thermal radiation anomalies are closely related to geological structure. (4) The anomalies have distinctive durations, spatial ranges, and morphologies. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting.

  6. Volunteers in the earthquake hazard reduction program

    USGS Publications Warehouse

    Ward, P.L.

    1978-01-01

    With this in mind, I organized a small workshop for approximately 30 people on February 2 and 3, 1978, in Menlo Park, Calif. The purpose of the meeting was to discuss methods of involving volunteers in a meaningful way in earthquake research and in educating the public about earthquake hazards. The emphasis was on earthquake prediction research, but the discussions covered the whole earthquake hazard reduction program. Representatives attended from the earthquake research community, from groups doing socioeconomic research on earthquake matters, and from a wide variety of organizations that might sponsor volunteers.

  7. Real time numerical shake prediction incorporating attenuation structure: a case for the 2016 Kumamoto Earthquake

    NASA Astrophysics Data System (ADS)

    Ogiso, M.; Hoshiba, M.; Shito, A.; Matsumoto, S.

    2016-12-01

    Needless to say, heterogeneous attenuation structure is important for ground motion prediction, including earthquake early warning, that is, real-time ground motion prediction. Hoshiba and Ogiso (2015, AGU Fall Meeting) showed that heterogeneous attenuation and scattering structures lead to earlier and more accurate ground motion prediction in the numerical shake prediction scheme proposed by Hoshiba and Aoki (2015, BSSA). Hoshiba and Ogiso (2015) used an assumed heterogeneous structure; here we discuss its effect in the case of the 2016 Kumamoto earthquake, using a heterogeneous structure estimated from actual observation data. We applied Multiple Lapse Time Window Analysis (Hoshiba, 1993, JGR) to the seismic stations located in the western part of Japan to estimate the heterogeneous attenuation and scattering structure. The characteristics are similar to those in the previous work of Carcole and Sato (2010, GJI), e.g. strong intrinsic and scattering attenuation around the volcanoes in the central part of Kyushu and relatively weak heterogeneities elsewhere. A real-time ground motion prediction simulation for the 2016 Kumamoto earthquake was conducted using the numerical shake prediction scheme with 474 strong ground motion stations. Comparing snapshots of the predicted and observed wavefields showed a tendency toward underprediction around the volcanic area in spite of the heterogeneous structure. These facts indicate the necessity of improving the heterogeneous structure for the numerical shake prediction scheme. In this study, we used the waveforms of Hi-net, K-NET, and KiK-net stations operated by NIED for estimating the structure and conducting the ground motion prediction simulation. Part of this study was supported by the Earthquake Research Institute, the University of Tokyo cooperative research program and JSPS KAKENHI Grant Number 25282114.

  8. Predicting the spatial extent of liquefaction from geospatial and earthquake specific parameters

    USGS Publications Warehouse

    Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Deodatis, George; Ellingwood, Bruce R.; Frangopol, Dan M.

    2014-01-01

    The spatially extensive damage from the 2010-2011 Christchurch, New Zealand earthquake events is a reminder of the need for liquefaction hazard maps for anticipating damage from future earthquakes. Liquefaction hazard mapping has traditionally relied on detailed geologic mapping and expensive site studies. These traditional techniques are difficult to apply globally for rapid response or loss estimation. We have developed a logistic regression model to predict the probability of liquefaction occurrence in coastal sedimentary areas as a function of simple and globally available geospatial features (e.g., derived from digital elevation models) and standard earthquake-specific intensity data (e.g., peak ground acceleration). Some of the geospatial explanatory variables that we consider are taken from the hydrology community, which has a long tradition of using remotely sensed data as proxies for subsurface parameters. As a result of using high-resolution, remotely sensed, and spatially continuous data as proxies for important subsurface parameters such as soil density and soil saturation, and by using a probabilistic modeling framework, our liquefaction model inherently includes the natural spatial variability of liquefaction occurrence and provides an estimate of the spatial extent of liquefaction for a given earthquake. To provide a quantitative check on how the predicted probabilities relate to the spatial extent of liquefaction, we report the frequency of observed liquefaction features within a range of predicted probabilities. The percentage of liquefaction is the areal extent of observed liquefaction within a given probability contour. The regional model and the results show that there is a strong relationship between the predicted probability and the observed percentage of liquefaction. Visual inspection of the probability contours for each event also indicates that the pattern of liquefaction is well represented by the model.
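
    A minimal sketch of this modeling approach, assuming illustrative geospatial proxies (a wetness index and distance to water) plus PGA as predictors; the data-generating coefficients are synthetic and are not the published model.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 1000
      cti = rng.uniform(2.0, 14.0, n)        # wetness index: saturation proxy
      dist_water = rng.uniform(0.0, 20.0, n) # distance to water (km)
      pga = rng.uniform(0.05, 0.6, n)        # scenario shaking level (g)

      # Toy data-generating relationship, for illustration only:
      logit = -6.0 + 0.4 * cti - 0.15 * dist_water + 8.0 * pga
      liquefied = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      X = np.column_stack([cti, dist_water, pga])
      model = LogisticRegression(max_iter=1000).fit(X, liquefied)
      # Liquefaction probability for a new site under a scenario PGA:
      print(model.predict_proba([[10.0, 1.5, 0.35]])[0, 1])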

  9. Earthquake prediction using extinct monogenetic volcanoes: A possible new research strategy

    NASA Astrophysics Data System (ADS)

    Szakács, Alexandru

    2011-04-01

    Volcanoes are extremely effective transmitters of matter, energy and information from the deep Earth towards its surface. Their capacity as information carriers is far from fully exploited. Volcanic conduits can be viewed in general as rod-like or sheet-like vertical features of relatively homogeneous composition and structure crosscutting geological structures of far greater complexity and compositional heterogeneity. Information-carrying signals, such as earthquake precursor signals originating deep below the Earth's surface, are transmitted with much less loss of information through homogeneous vertically extended structures than through the horizontally segmented heterogeneous lithosphere or crust. Volcanic conduits can thus be viewed as upside-down "antennas" or waveguides which can be used as privileged pathways of any possible earthquake precursor signal. In particular, conduits of monogenetic volcanoes are promising transmitters of deep Earth information to be received and decoded at surface monitoring stations, because of the expected more homogeneous nature of their rock fill as compared to polygenetic volcanoes. Among monogenetic volcanoes, those with dominantly effusive activity appear to be the best candidates for privileged earthquake monitoring sites. In more detail, effusive monogenetic volcanic conduits filled with rocks of primitive parental magma composition, indicating direct ascent from sub-lithospheric magma-generating regions, are the most suitable. Further selection criteria may include the age of the volcanism considered and the presence of mantle xenoliths in surface volcanic products, indicating a direct and straightforward link between the deep lithospheric mantle and the surface through the conduit. Innovative earthquake prediction research strategies can be based and developed on these grounds by considering conduits of selected extinct monogenetic volcanoes and deep trans-crustal fractures as privileged emplacement sites of seismic monitoring stations

  10. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration (PGA) and peak ground velocity (PGV) from a previous study by Eshaghi et al. (Bulletin of the Seismological Society of America 103, 2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates from PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using

  11. On a report that the 2012 M 6.0 earthquake in Italy was predicted after seeing an unusual cloud formation

    USGS Publications Warehouse

    Thomas, J.N.; Masci, F; Love, Jeffrey J.

    2015-01-01

    Several recently published reports have suggested that semi-stationary linear-cloud formations might be causally precursory to earthquakes. We examine the report of Guangmeng and Jie (2013), who claim to have predicted the 2012 M 6.0 earthquake in the Po Valley of northern Italy after seeing a satellite photograph (a digital image) showing a linear-cloud formation over the eastern Apennine Mountains of central Italy. From inspection of 4 years of satellite images we find numerous examples of linear-cloud formations over Italy. A simple test shows no obvious statistical relationship between the occurrence of these cloud formations and earthquakes that occurred in and around Italy. All of the linear-cloud formations we have identified in satellite images, including that which Guangmeng and Jie (2013) claim to have used to predict the 2012 earthquake, appear to be orographic – formed by the interaction of moisture-laden wind flowing over mountains. Guangmeng and Jie (2013) have not clearly stated how linear-cloud formations can be used to predict the size, location, and time of an earthquake, and they have not published an account of all of their predictions (including any unsuccessful predictions). We are skeptical of the validity of the claim by Guangmeng and Jie (2013) that they have managed to predict any earthquakes.

  12. Shaking table test and dynamic response prediction on an earthquake-damaged RC building

    NASA Astrophysics Data System (ADS)

    Xianguo, Ye; Jiaru, Qian; Kangning, Li

    2004-12-01

    This paper presents the results from shaking table tests of a one-tenth-scale reinforced concrete (RC) building model. The test model represents a prototype building that was seriously damaged during the 1985 Mexico earthquake. The input ground excitation used during the test was from records obtained near the site of the prototype building during the 1985 and 1995 Mexico earthquakes. The tests showed that the damage pattern of the test model agreed well with that of the prototype building. An analytical prediction of the earthquake response was conducted for the prototype building using a sophisticated 3-D frame model. The input motion used for the dynamic analysis was the shaking table test measurements with similarity transformation. The comparison of the analytical results and the shaking table test results indicates that the response of the RC building to minor and moderate earthquakes can be predicted well. However, there are differences between the prediction and the actual response to the major earthquake.

  13. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
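
    A stripped-down sketch of the decomposition idea: model each observed log spectrum as an event term plus a station term and solve by alternating averages. The published method additionally estimates a distance-dependent path term and formal uncertainties, both omitted here.

      import numpy as np

      rng = np.random.default_rng(3)
      n_ev, n_st, n_f = 50, 20, 30           # events, stations, freq. bins
      true_src = rng.normal(size=(n_ev, 1, n_f))
      true_sta = rng.normal(size=(1, n_st, n_f))
      obs = true_src + true_sta + 0.1 * rng.normal(size=(n_ev, n_st, n_f))

      src = np.zeros((n_ev, 1, n_f))         # event (source) terms
      sta = np.zeros((1, n_st, n_f))         # station (receiver) terms
      for _ in range(50):                    # alternating least squares
          src = (obs - sta).mean(axis=1, keepdims=True)
          sta = (obs - src).mean(axis=0, keepdims=True)

      # The split is unique only up to a constant trade-off between terms;
      # the fit itself is judged by the residual:
      resid = obs - src - sta
      print("rms residual:", round(float(np.sqrt((resid ** 2).mean())), 3))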

  14. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes: the 1992 Nicaragua tsunami earthquake (Mw7.7), the 2001 El Salvador earthquake (Mw7.7), the 2004 El Astillero earthquake (Mw7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning purposes, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua for large earthquakes in the subduction zone.

  15. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).

  16. First Results of the Regional Earthquake Likelihood Models Experiment

    NASA Astrophysics Data System (ADS)

    Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.

    2010-08-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.

  17. Prospects for earthquake prediction and control

    USGS Publications Warehouse

    Healy, J.H.; Lee, W.H.K.; Pakiser, L.C.; Raleigh, C.B.; Wood, M.D.

    1972-01-01

    The San Andreas fault is viewed, according to the concepts of seafloor spreading and plate tectonics, as a transform fault that separates the Pacific and North American plates and along which relative movements of 2 to 6 cm/year have been taking place. The resulting strain can be released by creep, by earthquakes of moderate size, or (as near San Francisco and Los Angeles) by great earthquakes. Microearthquakes, as mapped by a dense seismograph network in central California, generally coincide with zones of the San Andreas fault system that are creeping. Microearthquakes are few and scattered in zones where elastic energy is being stored. Changes in the rate of strain, as recorded by tiltmeter arrays, have been observed before several earthquakes of about magnitude 4. Changes in fluid pressure may control timing of seismic activity and make it possible to control natural earthquakes by controlling variations in fluid pressure in fault zones. An experiment in earthquake control is underway at the Rangely oil field in Colorado, where the rates of fluid injection and withdrawal in experimental wells are being controlled. © 1972.

  18. Current affairs in earthquake prediction in Japan

    NASA Astrophysics Data System (ADS)

    Uyeda, Seiya

    2015-12-01

    As of mid-2014, the main organizations of the earthquake (EQ hereafter) prediction program, including the Seismological Society of Japan (SSJ) and the MEXT Headquarters for EQ Research Promotion, hold the official position that they neither can nor wish to make any short-term prediction. This is an extraordinary stance for responsible authorities when the nation, after the devastating 2011 M9 Tohoku EQ, most urgently needs whatever information may exist on forthcoming EQs. Japan's national project for EQ prediction started in 1965, but it has had no success. The main reason for this lack of success is the failure to capture precursors. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this stance has been further fortified by the 2011 M9 Tohoku mega-quake. This paper tries to explain how this situation came about and suggests that it may in fact be a legitimate one which should have come a long time ago. Actually, substantial positive changes are taking place now. Some promising signs are arising even from cooperation between researchers and the private sector, and there is a move to establish an "EQ Prediction Society of Japan". From now on, maintaining high scientific standards in EQ prediction will be of crucial importance.

  19. Earthquake magnitude estimation using the τc and Pd method for earthquake early warning systems

    NASA Astrophysics Data System (ADS)

    Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang

    2013-10-01

    Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important and also the most difficult parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by KiK-net in Japan and aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τc and Pd methods. The standard deviations of the magnitude estimates from these two relationships are ±0.65 and ±0.56, respectively. The Pd value can also be used to estimate peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. In order to ensure the stability and reliability of the magnitude estimation results, we propose a compatibility test based on the natures of these two parameters. The reliability of the early warning information is significantly improved through this test.
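
    For reference, a sketch of how the τc and Pd parameters are computed from the first seconds of P-wave displacement and velocity; the waveform and the magnitude-regression coefficients are placeholders, not the values fitted in this study.

      import numpy as np

      dt = 0.01                                   # sample interval (s)
      t = np.arange(0.0, 3.0, dt)                 # first 3 s of the P wave
      u = 0.02 * t * np.sin(2 * np.pi * 0.8 * t)  # toy displacement (cm)
      v = np.gradient(u, dt)                      # velocity by differencing

      r = np.sum(v ** 2) / np.sum(u ** 2)         # (dt cancels in the ratio)
      tau_c = 2.0 * np.pi / np.sqrt(r)            # predominant-period proxy (s)
      p_d = np.max(np.abs(u))                     # peak initial displacement

      # Hypothetical linear relations of the usual form M = a*log10(x) + b
      # (distance correction for Pd omitted for brevity):
      m_tau = 3.4 * np.log10(tau_c) + 5.4
      m_pd = 1.0 * np.log10(p_d) + 6.0
      print(f"tau_c={tau_c:.2f} s, Pd={p_d:.3f} cm, "
            f"M(tau_c)={m_tau:.1f}, M(Pd)={m_pd:.1f}")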

  20. Comparison of Ground Motion Prediction Equations (GMPE) for Chile and Canada With Recent Chilean Megathust Earthquakes

    NASA Astrophysics Data System (ADS)

    Herrera, C.; Cassidy, J. F.; Dosso, S. E.

    2017-12-01

    Ground-shaking assessment allows quantification of the hazards associated with the occurrence of earthquakes. Chile and western Canada are two areas that have experienced, and remain susceptible to, large crustal, in-slab and megathrust earthquakes that can significantly affect the population. In this context, we compare the current GMPEs used in the 2015 National Building Code of Canada and the most recent GMPEs calculated for Chile with observed accelerations generated by four recent Chilean megathrust earthquakes (MW ≥ 7.7) that have occurred during the past decade, which is essential to quantify how well current models predict observations of major events. We collected the 3-component waveform data of more than 90 stations from the Centro Sismologico Nacional and the Universidad de Chile, and processed them by removing the trend and applying a band-pass filter. Then, for each station, we obtained the Peak Ground Acceleration (PGA) and, using a damped response spectrum, calculated the Pseudo-Spectral Acceleration (PSA). Finally, we compared those observations with the most recent Chilean and Canadian GMPEs. Given the lack of geotechnical information for most of the Chilean stations, we also used a new method to obtain VS30 by inverting H/V ratios using a trans-dimensional Bayesian inversion, which allows us to improve the correction of observations according to soil conditions. As expected, our results show a good fit between observations and the Chilean GMPEs, but we observe that, although the shape of the Canadian GMPEs is coherent with the distribution of observations, in general they underpredict the observations for PGA and PSA at shorter periods for most of the considered earthquakes. An example of this can be seen in the attached figure for the case of the 2014 Iquique earthquake. These results present important implications related to the hazards associated with large earthquakes, especially for western Canada, where the probability of a
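
    A minimal sketch of the processing chain described (detrend, band-pass, then PGA and a 5%-damped spectral ordinate from a single-degree-of-freedom oscillator) applied to a synthetic record; the filter corners and oscillator period are arbitrary choices.

      import numpy as np
      from scipy.signal import butter, detrend, filtfilt

      fs = 100.0                                  # sampling rate (Hz)
      t = np.arange(0.0, 60.0, 1.0 / fs)
      rng = np.random.default_rng(5)
      acc = rng.normal(size=t.size) * np.exp(-((t - 20.0) / 8.0) ** 2)

      acc = detrend(acc)                          # remove the trend
      b, a = butter(4, [0.1, 20.0], btype="band", fs=fs)
      acc = filtfilt(b, a, acc)                   # zero-phase band-pass
      pga = float(np.abs(acc).max())              # peak ground acceleration

      def psa(acc, fs, period, damping=0.05):
          # Damped SDOF oscillator, semi-implicit Euler time stepping.
          w, dt = 2.0 * np.pi / period, 1.0 / fs
          u = v = umax = 0.0
          for ag in acc:
              a_rel = -ag - 2.0 * damping * w * v - w * w * u
              v += a_rel * dt
              u += v * dt
              umax = max(umax, abs(u))
          return w * w * umax                     # pseudo-spectral accel.

      print("PGA:", round(pga, 3), " PSA(T=0.3 s):", round(psa(acc, fs, 0.3), 3))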

  1. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of the earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 with 2,500 events. We also obtained source moment, corner frequency, and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model out of small-to-moderate-size earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
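
    A toy version of the representation-relation summation: a large event is synthesized by lag-summing a small-event record, standing in for an empirical Green's function, over fault subelements with rupture-propagation and travel-time delays. The geometry, velocities and waveform are illustrative assumptions.

      import numpy as np

      dt = 0.01
      t_egf = np.arange(0.0, 2.0, dt)
      # Stand-in for a small-event record used as an empirical Green's function:
      egf = np.exp(-t_egf / 0.2) * np.sin(2.0 * np.pi * 5.0 * t_egf)

      n_sub = 50                            # subelements along strike
      sub_x = np.linspace(0.0, 50.0, n_sub) # km along the fault
      v_rup, v_s = 2.8, 3.5                 # rupture and S-wave speeds (km/s)
      site_x = 80.0                         # site position beyond the fault end

      out = np.zeros(int(60.0 / dt))
      for x in sub_x:
          delay = x / v_rup + (site_x - x) / v_s   # rupture + travel time (s)
          i0 = int(delay / dt)
          out[i0:i0 + egf.size] += egf / n_sub     # equal slip per subelement
      print("peak synthetic amplitude:", round(float(np.abs(out).max()), 4))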

  2. Seismo-Lineament Analysis Method (SLAM) Applied to the South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Worrell, V. E.; Cronin, V. S.

    2014-12-01

    We used the seismo-lineament analysis method (SLAM; http://bearspace.baylor.edu/Vince_Cronin/www/SLAM/) to "predict" the location of the fault that produced the M 6.0 South Napa earthquake of 24 August 2014, using hypocenter and focal mechanism data from NCEDC (http://www.ncedc.org/ncedc/catalog-search.html) and a digital elevation model from the USGS National Elevation Dataset (http://viewer.nationalmap.gov/viewer/). The ground-surface trace of the causative fault (i.e., the Browns Valley strand of the West Napa fault zone; Bryant, 2000, 1982) and virtually all of the ground-rupture sites reported by the USGS and California Geological Survey (http://www.eqclearinghouse.org/2014-08-24-south-napa/) were located within the north-striking seismo-lineament. We also used moment tensors published online by the USGS and GCMT (http://comcat.cr.usgs.gov/earthquakes/eventpage/nc72282711#scientific_moment-tensor) as inputs to SLAM and found that their northwest-striking seismo-lineaments correlated spatially with the causative fault. We concluded that SLAM could have been used as soon as these mechanism solutions were available to help direct the search for the trace of the causative fault and possible rupture-related damage. We then considered whether the seismogenic fault could have been identified using SLAM prior to the 24 August event, based on the focal mechanisms of smaller prior earthquakes reported by the NCEDC or ISC (http://www.isc.ac.uk). Seismo-lineaments from three M~3.5 events from 1990 and 2012, located in the Vallejo-Crockett area, correlate spatially with the Napa County Airport strand of the West Napa fault and extend along strike toward the Browns Valley strand (Bryant, 2000, 1982). Hence, we might have used focal mechanisms from smaller earthquakes to establish that the West Napa fault is likely seismogenic prior to the South Napa earthquake. Early recognition that a fault with a mapped ground-surface trace is seismogenic, based on smaller earthquakes

  3. CSEP-Japan: The Japanese node of the collaboratory for the study of earthquake predictability

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2011-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access for researchers contributing earthquake forecast models applied to Japan. A total of 91 earthquake forecast models were submitted to the prospective experiment that started on 1 November 2009. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions: an area covering Japan including offshore regions, the Japanese mainland, and the Kanto district. We evaluate the performance of the models with the official suite of tests defined by CSEP. The experiments for the 1-day, 3-month, 1-year and 3-year forecasting classes have been implemented for 92 rounds, 4 rounds, 1 round and 0 rounds (now in progress), respectively. The results of the 3-month class gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observations are hardly consistent with the spatial distributions of most models in some cases where many earthquakes occurred at the same spot. Throughout the experiment, it has become clear that some properties of CSEP's evaluation tests, such as the L-test, show strong correlation with the N-test. We are now developing our own (cyber-)infrastructure to support the forecast experiment, as follows. (1) Japanese seismicity has changed since the 2011 Tohoku earthquake. The 3rd call for forecasting models was announced in order to promote model improvement for forecasting earthquakes after this event. We therefore provide the Japanese seismicity catalog maintained by JMA for modelers to study how seismicity
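
    For context, the consistency check behind the N-test mentioned above can be sketched in a few lines: under the Poisson assumption, the observed number of target earthquakes is compared with the forecast total. The numbers are illustrative.

      from scipy.stats import poisson

      n_fore = 12.4   # forecast's expected number of target earthquakes
      n_obs = 19      # number actually observed in the testing round

      delta1 = 1.0 - poisson.cdf(n_obs - 1, n_fore)  # P(X >= n_obs): too many?
      delta2 = poisson.cdf(n_obs, n_fore)            # P(X <= n_obs): too few?
      print(f"delta1={delta1:.3f}, delta2={delta2:.3f}")
      # Reject the forecast (one-sided tails) if delta1 or delta2 is very small.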

  4. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results of the earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed with the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes. A positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
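
    The core of the magnitude-correlation test can be sketched as follows: compare each mainshock magnitude with the mean magnitude of its aftershock sequence; under the null hypothesis of independent Gutenberg-Richter magnitudes the correlation should vanish. The catalog below is synthetic.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(4)
      b, n_main = 1.0, 300
      beta = b * np.log(10.0)                      # G-R exponential rate
      main_m = 4.0 + rng.exponential(1.0 / beta, n_main)
      # Aftershock magnitudes drawn independently of the mainshock (null case):
      mean_aft = np.array([
          (4.0 + rng.exponential(1.0 / beta, rng.integers(5, 50))).mean()
          for _ in range(n_main)
      ])
      r, pval = pearsonr(main_m, mean_aft)
      print(f"r={r:.3f}, p={pval:.3f}")            # expect r ~ 0 under the null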

  5. Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.

    2018-02-01

    Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in future.
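
    The inverse power-law acceleration invoked here is the classic failure forecast form, rate(t) = k (tf - t)^-p. The sketch below fits that form to synthetic binned counts by nonlinear least squares; the paper itself uses a Bayesian gamma point-process fit, so this is only a simplified illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit rate(t) = k * (tf - t)^(-p) to Poisson-scattered synthetic counts.
def rate(t, k, tf, p):
    return k * np.clip(tf - t, 1e-6, None) ** -p

t = np.linspace(0.0, 9.0, 30)                    # days; failure near t = 10
true = rate(t, k=20.0, tf=10.0, p=0.71)          # exponent from the abstract
obs = np.random.default_rng(0).poisson(true)     # synthetic observed counts

popt, _ = curve_fit(rate, t, obs, p0=(5.0, 11.0, 1.0), maxfev=10000)
k_hat, tf_hat, p_hat = popt
print(f"estimated failure time tf = {tf_hat:.2f}, exponent p = {p_hat:.2f}")
```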

  6. The Virtual Quake earthquake simulator: a simulation-based forecast of the El Mayor-Cucapah region and evidence of predictability in simulated earthquake sequences

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea

    2015-12-01

    In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric, and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA and northern Baja California Norte, Mexico.

  7. Living with earthquakes - development and usage of earthquake-resistant construction methods in European and Asian Antiquity

    NASA Astrophysics Data System (ADS)

    Kázmér, Miklós; Major, Balázs; Hariyadi, Agus; Pramumijoyo, Subagyo; Ditto Haryana, Yohanes

    2010-05-01

    Earthquakes are among the most frightening natural events because of their unexpected occurrence, against which no spiritual means offer protection. The only way of preserving life and property is to apply earthquake-resistant construction methods. Ancient Greek architects of public buildings applied steel clamps embedded in lead casing to hold together columns and masonry walls during frequent earthquakes in the Aegean region. Elastic steel provided strength, while the plastic lead casing absorbed minor shifts of blocks without fracturing the rigid stone. Romans invented concrete and built buildings of all sizes as single, inflexible units. The masonry surrounding and decorating the concrete core of a wall did not bear load. Concrete resisted minor shaking, yielding only to forces exceeding its fracture limit. Roman building traditions survived the Dark Ages, and 12th-century Crusader castles erected in earthquake-prone Syria survive until today in reasonably good condition. Concrete and steel clamping persisted side by side in the Roman Empire. Concrete was used for cheap construction compared to masonry. Applying lead-encased steel increased costs and was avoided whenever possible. Columns of the various forums in Italian Pompeii mostly lack steel fittings despite being situated in a well-known earthquake-prone area. Whether the frequent recurrence of earthquakes in the Naples region was known to the inhabitants of Pompeii may be a matter of debate. Seemingly the shock of the AD 62 earthquake was not enough to prompt application of well-known protective engineering methods throughout the reconstruction of the city before the AD 79 volcanic catastrophe. An independent engineering tradition developed on the island of Java (Indonesia). The mortar-less construction technique of 8th-9th century Hindu masonry shrines around Yogyakarta would allow scattering of blocks during earthquakes. To prevent dilapidation an intricate mortise-and-tenon system was carved into adjacent faces of blocks. Only the

  8. Ionospheric Method of Detecting Tsunami-Generating Earthquakes.

    ERIC Educational Resources Information Center

    Najita, Kazutoshi; Yuen, Paul C.

    1978-01-01

    Reviews the earthquake phenomenon and its possible relation to ionospheric disturbances. Discusses the basic physical principles involved and the methods upon which instrumentation is being developed for possible use in a tsunami disaster warning system. (GA)

  9. Predicted Attenuation Relation and Observed Ground Motion of Gorkha Nepal Earthquake of 25 April 2015

    NASA Astrophysics Data System (ADS)

    Singh, R. P.; Ahmad, R.

    2015-12-01

    A comparison of observed ground motion parameters of the recent Gorkha, Nepal earthquake of 25 April 2015 (Mw 7.8) with the ground motion parameters predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8000 lives and destroyed thousands of poor-quality buildings, and it was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground motion parameters is very important in developing seismic codes for seismically prone regions like the Himalaya and for the better design of buildings. The ground motion parameters recorded in the recent event and its aftershocks are compared with attenuation relations for the Himalayan region; the predicted ground motion parameters show good correlation with the observed ones. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions and for the evaluation of seismic hazards. The results clearly show that only attenuation relations developed for the Himalayan region should be used; relations based on other regions fail to provide good estimates of the observed ground motion parameters.
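
    As a schematic of how such comparisons are made, the sketch below evaluates log residuals of observed PGAs against a generic attenuation form ln(PGA) = a + b M - c ln(R + d); the coefficients and observations are hypothetical placeholders, not the Himalayan relations discussed in the abstract.

```python
import numpy as np

a, b, c, d = -3.5, 1.0, 1.5, 10.0   # hypothetical coefficients, for illustration only

def predicted_pga(mag, r_km):
    """Median PGA (g) from a generic form ln(PGA) = a + b*M - c*ln(R + d)."""
    return np.exp(a + b * mag - c * np.log(r_km + d))

# Hypothetical observed PGAs at several hypocentral distances for Mw 7.8.
r = np.array([20.0, 50.0, 100.0, 200.0])
pga_obs = np.array([0.35, 0.16, 0.07, 0.02])

residuals = np.log(pga_obs) - np.log(predicted_pga(7.8, r))
print("log residuals:", np.round(residuals, 2))  # residuals near zero indicate agreement
```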

  10. FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.

    USGS Publications Warehouse

    Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.

    1985-01-01

    The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.

  11. A landslide susceptibility prediction on a sample slope in Kathmandu, Nepal associated with the 2015 Gorkha Earthquake

    NASA Astrophysics Data System (ADS)

    Kubota, Tetsuya; Prasad Paudel, Prem

    2016-04-01

    In 2013, landslides induced by heavy rainfall occurred in the southern suburbs of Kathmandu, the capital of Nepal. These landslide slopes were then hit by the strong Gorkha earthquake in April 2015 and appeared to destabilize again. To clarify their landslide susceptibility under earthquake loading, the stability of one of these slopes was analyzed with CSSDP (Critical Slip Surface analysis by Dynamic Programming, a limit equilibrium approach based on the Janbu method) using the various seismic accelerations observed around Kathmandu during the Gorkha earthquake. The CSSDP automatically detects the slip surface with the minimum factor of safety (Fs) using dynamic programming theory. The geology in this area consists mainly of fragile schist and is prone to landsliding. A field survey was conducted to obtain topographic data such as the ground surface and slip surface cross sections, and soil parameters obtained from geotechnical tests on field samples were applied. Consequently, the slope shows the following distinctive characteristics in terms of stability: (1) With heavy rainfall, it collapsed, with a factor of safety Fs < 1.0 (0.654 or more). (2) With a seismic acceleration of 0.15 G (147 gal), as observed around Kathmandu, it has Fs = 1.34. (3) With a possible local seismic acceleration of 0.35 G (343 gal), as estimated at Kathmandu, it has Fs = 0.989; if it were a very shallow landslide on a slope covered with cedars, it could have Fs = 1.055 owing to the root reinforcement effect on soil strength. (4) Without seismic acceleration and under no-rainfall conditions, it has Fs = 1.75. These results can explain the actual landslide occurrence in this area given the maximum seismic acceleration of about 0.15 G estimated in the vicinity of Kathmandu during the Gorkha earthquake, and they indicate the landslide susceptibility of slopes in this area under strong earthquakes. In this situation, it is possible to predict
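
    The role of the seismic coefficient can be seen in a much simpler setting than CSSDP: the pseudo-static factor of safety of a planar sliding block, sketched below with hypothetical slope parameters. It reproduces the qualitative trend of the abstract (Fs falls as the seismic coefficient rises) but is not the Janbu-based analysis itself.

```python
import math

# Pseudo-static factor of safety for a planar sliding block: the horizontal
# seismic force kh*W adds to the driving force and reduces the normal force.
# All slope parameters below are hypothetical, chosen only for illustration.
def factor_of_safety(W, beta_deg, c, A, phi_deg, kh):
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c * A + (W * math.cos(beta) - kh * W * math.sin(beta)) * math.tan(phi)
    driving = W * math.sin(beta) + kh * W * math.cos(beta)
    return resisting / driving

# Hypothetical slope: 10 MN block, 30 deg slope, c = 15 kPa over 200 m^2, phi = 28 deg.
for kh in (0.0, 0.15, 0.35):   # includes the abstract's 0.15 G and 0.35 G cases
    print(kh, round(factor_of_safety(1.0e7, 30.0, 1.5e4, 200.0, 28.0, kh), 3))
```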

  12. Predicted liquefaction of East Bay fills during a repeat of the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Blair, J.L.; Noce, T.E.; Bennett, M.J.

    2006-01-01

    Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings using median (50th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M ≥ 7.8 earthquake on the northern San Andreas Fault. © 2006, Earthquake Engineering Research Institute.

  13. The smart cluster method. Adaptive earthquake cluster identification and analysis in strong seismic regions

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann

    2017-07-01

    Earthquake clustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation, and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest-neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods; its performance is comparable to established methodologies. The analysis of earthquake clustering statistics led to various new and updated correlation functions, e.g. for ratios between the mainshock and the strongest aftershock, and for general aftershock activity metrics.
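
    For contrast with the adaptive SCM, the sketch below implements the classic fixed-window declustering baseline (Gardner-Knopoff style); the window laws are simplified placeholders rather than the published tables.

```python
import numpy as np

# Fixed-window declustering baseline: larger events claim later, nearby,
# smaller events as aftershocks. Window laws below are placeholders.
def decluster(times_days, lons, lats, mags):
    order = np.argsort(-mags)                  # largest events claim aftershocks first
    is_aftershock = np.zeros(len(mags), dtype=bool)
    for i in order:
        if is_aftershock[i]:
            continue
        r_km = 10 ** (0.12 * mags[i] + 0.98)   # placeholder distance window
        t_day = 10 ** (0.39 * mags[i] - 0.52)  # placeholder time window
        dt = times_days - times_days[i]
        dist = np.hypot((lons - lons[i]) * 111.0 * np.cos(np.radians(lats[i])),
                        (lats - lats[i]) * 111.0)
        mask = (dt > 0) & (dt < t_day) & (dist < r_km) & (mags < mags[i])
        is_aftershock |= mask
    return ~is_aftershock                      # True = mainshock / background event

t = np.array([0.0, 1.0, 1.5, 40.0])
lo = np.array([0.0, 0.05, 0.1, 2.0]); la = np.array([0.0, 0.05, 0.0, 2.0])
m = np.array([6.0, 4.0, 3.5, 5.0])
print(decluster(t, lo, la, m))                 # -> [ True False False  True]
```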

  14. Relocalizing a historical earthquake using recent methods: The 10 November 1935 Earthquake near Montserrat, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Niemz, P.; Amorèse, D.

    2016-03-01

    This study investigates the hypothesis of Feuillet et al. (2011) that the hypocenter of the seismic event of November 10, 1935 near Montserrat, Lesser Antilles (MS 6 1/4) (Gutenberg and Richter, 1954) was mislocated by other authors and is actually located in the Montserrat-Havers fault zone. While this proposal was based both on a ground motion prediction equation and on the assumption that earthquakes in this region are bound to prominent fault systems, our study relies on earthquake localization methods using arrival times from the International Seismological Summary (ISS). Our methodology suggests that the hypocenter was actually located at 16.90° N, 62.53° W. This solution is about 25 km north-west of the location proposed by Feuillet et al. (2011), within the Redonda fault system, north of the Montserrat-Havers fault zone. As depth phases, which contribute valuable constraints on the focal depth, are not included in the ISS data set, and the reassociation of these phases is difficult, the error in depth is high. Taking into account tectonic constraints and the vertical extent of NonLinLoc's uncertainty region for the preferred solution, we assume that the focus most probably lies in the lower crust, between 20 km depth and the Moho. Our approach shows that the information in the ISS can lead to a reliable solution even without an exhaustive search for seismograms and station bulletins. This is encouraging for a better assessment of seismic and tsunami hazard in the Caribbean, Mexico, and South and Central America, where many moderate to large earthquakes occurred in the first half of the 20th century. The limitations of this early phase of seismology, which complicate such relocations, are described in detail in this study.

  15. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving the speed and accuracy of earthquake disaster loss estimation is one of the key factors in effective earthquake response and rescue. The presentation of exposure data through a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. The method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake losses associated with different seismic intensities and store them in a 30'' × 30'' grid format, in several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. In the co-earthquake phase, loss estimation proceeds in two stages: first, generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; second, using the seismic intensity field to extract loss statistics from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated against four actual earthquakes that occurred in China. It not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, the pre-calculated earthquake loss estimation data for China could serve disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
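
    The two-phase structure is easy to express in code: a pre-earthquake phase that tabulates losses per grid cell for each intensity level, and a co-earthquake phase that builds an intensity field and sums the pre-computed losses. The vulnerability ratios and the isoseismal model below are illustrative assumptions, not the operational Chinese model.

```python
import numpy as np

INTENSITIES = [6, 7, 8, 9, 10]

def precompute_losses(population, vulnerability):
    """Pre-earthquake phase: loss per cell per intensity = pop * P(loss | I)."""
    return {I: population * vulnerability[I] for I in INTENSITIES}

def intensity_field(dist_km, I0):
    """Co-earthquake phase: toy isoseismal model I = I0 - 3*log10(1 + d/10)."""
    return np.clip(np.round(I0 - 3.0 * np.log10(1.0 + dist_km / 10.0)), 0, I0)

def quick_loss(loss_tables, dist_km, I0):
    I = intensity_field(dist_km, I0)
    return sum(loss_tables[level][I == level].sum() for level in INTENSITIES)

pop = np.random.default_rng(1).uniform(0, 500, size=(100, 100))  # people per cell
vuln = {6: 1e-4, 7: 1e-3, 8: 5e-3, 9: 2e-2, 10: 8e-2}            # hypothetical ratios
tables = precompute_losses(pop, vuln)                            # done before the event
yy, xx = np.mgrid[0:100, 0:100]
dist = np.hypot(xx - 50, yy - 50) * 0.9                          # ~0.9 km cells
print("estimated losses:", round(float(quick_loss(tables, dist, I0=9)), 1))
```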

  16. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming and conventional beamforming methods in order to better understand their advantages and features for studying earthquake rupture processes. We use a Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular the artifacts that are caused by the sliding window. Based on our tests, we find that CS, which is built on the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when the data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake
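
    For readers unfamiliar with MUSIC, the sketch below computes its pseudospectrum for a synthetic uniform linear array at a single narrowband frequency bin; real back-projection studies add sliding windows, geographic mapping and calibration, so this is only the core of the method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sens, n_snap = 12, 200            # sensors in a uniform linear array, snapshots
spacing = 0.5                       # element spacing in wavelengths (lambda/2)
thetas_true = np.radians([20.0, 25.0])

def steering(theta):
    """ULA response for a plane wave arriving from angle theta."""
    return np.exp(1j * 2.0 * np.pi * spacing * np.sin(theta) * np.arange(n_sens))

# Two uncorrelated sources plus noise at one frequency bin.
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
A = np.stack([steering(t) for t in thetas_true], axis=1)
N = 0.1 * (rng.standard_normal((n_sens, n_snap)) + 1j * rng.standard_normal((n_sens, n_snap)))
X = A @ S + N

R = X @ X.conj().T / n_snap                # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
En = eigvecs[:, :n_sens - 2]               # noise subspace (2 sources assumed known)

grid = np.radians(np.linspace(0.0, 90.0, 901))
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# Report the two largest local maxima of the pseudospectrum.
loc = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
best = loc[np.argsort(pseudo[loc])[-2:]]
print("estimated DOAs (deg):", np.sort(np.round(np.degrees(grid[best]), 1)))
```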

  17. Comparing methods for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin

    2017-04-01

    There are plenty of methods available for locating small-magnitude, point-source earthquakes. However, it is known that these different approaches produce different results. For each approach, the results also depend on a number of parameters, which can be separated into two main branches: (1) parameters related to the observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location, etc.). Currently, the results obtained from most location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain fault geometry and seismotectonic processes and, in the end, to improve seismic hazard assessment. In this work, realized in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake location (e.g. type of phase, maximum hypocentral separation, etc.). We compare several available codes (Hypo71, HypoDD, NonLinLoc, etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).
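
    A toy example makes the parameter dependence concrete: the brute-force grid search below locates a synthetic event in a homogeneous half-space by minimizing demeaned P travel-time residuals. The constant velocity and grid limits are assumptions; the codes named above use far more sophisticated velocity models and solvers.

```python
import numpy as np

vp = 6.0   # assumed constant P velocity, km/s

def locate(stations, picks):
    """Grid search minimizing the spread of (pick - traveltime) residuals.
    Demeaning the residuals eliminates the unknown origin time."""
    best, best_rms = None, np.inf
    for x in np.linspace(-50, 50, 51):
        for y in np.linspace(-50, 50, 51):
            for z in np.linspace(0, 30, 16):
                tt = np.linalg.norm(stations - [x, y, z], axis=1) / vp
                rms = np.std(picks - tt)   # std = RMS of demeaned residuals
                if rms < best_rms:
                    best, best_rms = (x, y, z), rms
    return best, best_rms

rng = np.random.default_rng(2)
sta = np.column_stack([rng.uniform(-40, 40, 8), rng.uniform(-40, 40, 8), np.zeros(8)])
true = np.array([5.0, -10.0, 12.0])
picks = np.linalg.norm(sta - true, axis=1) / vp + rng.normal(0.0, 0.05, 8)
print(locate(sta, picks))   # recovers the true location to within the grid spacing
```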

  18. Investigation of the relationship between ionospheric foF2 and earthquakes

    NASA Astrophysics Data System (ADS)

    Karaboga, Tuba; Canyilmaz, Murat; Ozcan, Osman

    2018-04-01

    Variations of the ionospheric F2 region critical frequency (foF2) before earthquakes in the Japan area have been investigated statistically for the 1980-2008 period. Ionosonde data were taken from the Kokubunji station, which lies within the earthquake preparation zone of all the earthquakes considered. Standard deviation and interquartile range methods were applied to the foF2 data. Anomalous variations in foF2 are observed before earthquakes. These variations can be regarded as ionospheric precursors and may be used for earthquake prediction.
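
    The interquartile-range criterion named above can be sketched compactly: flag a value as anomalous when foF2 leaves a median +/- 1.5 x IQR band computed from a trailing window. The window length and the 1.5 factor are common choices assumed here, not necessarily those of the study.

```python
import numpy as np

# IQR-based anomaly detection on an foF2 time series (one sample per epoch).
# Window length (27 samples) and the 1.5*IQR band are assumed conventions.
def iqr_anomalies(fof2, window=27):
    fof2 = np.asarray(fof2, dtype=float)
    flags = np.zeros(fof2.size, dtype=bool)
    for i in range(window, fof2.size):
        q1, med, q3 = np.percentile(fof2[i - window:i], [25, 50, 75])
        band = 1.5 * (q3 - q1)
        flags[i] = abs(fof2[i] - med) > band
    return flags

rng = np.random.default_rng(5)
series = 7.0 + 0.3 * rng.standard_normal(200)
series[150] += 2.5                       # injected anomaly
print(np.where(iqr_anomalies(series))[0])  # should include index 150
```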

  19. Nitsche Extended Finite Element Methods for Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Coon, Ethan T.

    Modeling earthquakes and geologically short-time-scale events on fault networks is a difficult problem with important implications for human safety and design. These problems exhibit rich physical behavior, in which distributed loading localizes both spatially and temporally into earthquakes on fault systems. This localization is governed by two aspects: friction and fault geometry. Computationally, these problems provide a stern challenge for modelers: static and dynamic equations must be solved on domains with discontinuities along complex fault systems, and frictional boundary conditions must be applied on these discontinuities. The most difficult aspect of modeling physics on complicated domains is the mesh. Most numerical methods involve meshing the geometry; nodes are placed on the discontinuities, and edges are chosen to coincide with faults. The resulting mesh is highly unstructured, making the derivation of finite difference discretizations difficult. Therefore, most models use the finite element method. Standard finite element methods place requirements on the mesh for the sake of stability, accuracy, and efficiency. The formation of a mesh that both conforms to fault geometry and satisfies these requirements is an open problem, especially for three-dimensional, physically realistic fault geometries. In addition, if the fault system evolves over the course of a dynamic simulation (i.e., in the case of growing cracks or the breaking of new faults), the geometry must be re-meshed at each time step, which can be computationally expensive. The fault-conforming approach is undesirable when complicated meshes are required, and impossible to implement when the geometry is evolving. Therefore, meshless and hybrid finite element methods that handle discontinuities without placing them on element boundaries are a desirable and natural way to discretize these problems. Several such methods are being actively developed for use in engineering mechanics involving crack

  20. Predictive factors of depression symptoms among adolescents in the 18-month follow-up after Wenchuan earthquake in China.

    PubMed

    Chui, Cheryl H K; Ran, Mao-Sheng; Li, Rong-Hui; Fan, Mei; Zhang, Zhen; Li, Yuan-Hao; Ou, Guo Jing; Jiang, Zhe; Tong, Yu-Zhen; Fang, Ding-Zhi

    2017-02-01

    Little is known about the course and risk factors of depression among adolescent survivors after an earthquake. This study aimed to explore the change in depression, and to identify predictive factors of depression, among adolescent survivors after the 2008 Wenchuan earthquake in China. Depression among high school students at 6, 12 and 18 months after the Wenchuan earthquake was investigated. The Beck Depression Inventory (BDI) was used to assess the severity of depression. Subjects included 548 student survivors at an affected high school. The rates of depression among the adolescent survivors at 6, 12 and 18 months after the earthquake were 27.3%, 42.9% and 33.3%, respectively, for males, and 42.9%, 61.9% and 53.4%, respectively, for females. Depression symptoms, trauma-related self-injury, suicidal ideation and PTSD symptoms at the 6-month follow-up were significant predictive factors for depression at the 18-month time point following the earthquake. This study highlights the need to consider disaster-related psychological sequelae and risk factors for depression symptoms in the planning and implementation of mental health services. Long-term mental and psychological support for victims of natural disasters is imperative.

  1. High Attenuation Rate for Shallow, Small Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe

    2017-09-01

    We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the above differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, the apparent attenuation rate for a moderate or large earthquake is essentially the same as that of body waves propagating in a homogeneous medium due to the dominance of surface waves.

  2. Earthquakes: hydrogeochemical precursors

    USGS Publications Warehouse

    Ingebritsen, Steven E.; Manga, Michael

    2014-01-01

    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  3. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.
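
    The quantities n, tau and H are straightforward to compute once an alarm strategy and a rate measure lambda(dg) are fixed; the sketch below does exactly that for a toy discretization. The uncertainty of lambda(dg), which is the crux of the paper, is taken as given here.

```python
import numpy as np

def error_diagram(alarm, event_count, rate_measure):
    """n = fraction of target events missed; tau = alarm volume under the
    measure lambda(dg); H = 1 - (n + tau) is the prediction capability."""
    alarm = np.asarray(alarm, dtype=bool)
    events = np.asarray(event_count, dtype=float)
    lam = np.asarray(rate_measure, dtype=float)
    n = events[~alarm].sum() / events.sum()
    tau = lam[alarm].sum() / lam.sum()
    return n, tau, 1.0 - (n + tau)

# Toy discretization: 4 space-time cells, alarms declared in cells 0 and 2.
print(error_diagram([True, False, True, False], [3, 1, 2, 0], [0.4, 0.2, 0.3, 0.1]))
```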

  4. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  5. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
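
    A minimal stand-in for such a combination treats each algorithm's estimate of log shaking as Gaussian and forms the precision-weighted posterior, as sketched below; the paper's Bayesian framework is richer, and the numbers are illustrative.

```python
import numpy as np

def combine(means, sigmas):
    """Precision-weighted combination of independent Gaussian estimates."""
    means = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # precisions
    return float(np.sum(w * means) / np.sum(w)), float(np.sqrt(1.0 / np.sum(w)))

# Three hypothetical algorithm outputs for ln(PGA) at one site.
mu, sigma = combine([4.1, 4.6, 4.3], [0.5, 0.8, 0.4])
print(f"combined ln(PGA) = {mu:.2f} +/- {sigma:.2f}")
```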

  6. Frequency Spectrum Method-Based Stress Analysis for Oil Pipelines in Earthquake Disaster Areas

    PubMed Central

    Wu, Xiaonan; Lu, Hongfang; Huang, Kun; Wu, Shijuan; Qiao, Weibiao

    2015-01-01

    When a long distance oil pipeline crosses an earthquake disaster area, inertial force and strong ground motion can cause the pipeline stress to exceed the failure limit, resulting in bending and deformation failure. To date, researchers have performed limited safety analyses of oil pipelines in earthquake disaster areas that include stress analysis. Therefore, using the spectrum method and theory of one-dimensional beam units, CAESAR II is used to perform a dynamic earthquake analysis for an oil pipeline in the XX earthquake disaster area. This software is used to determine if the displacement and stress of the pipeline meet the standards when subjected to a strong earthquake. After performing the numerical analysis, the primary seismic action axial, longitudinal and horizontal displacement directions and the critical section of the pipeline can be located. Feasible project enhancement suggestions based on the analysis results are proposed. The designer is able to utilize this stress analysis method to perform an ultimate design for an oil pipeline in earthquake disaster areas; therefore, improving the safe operation of the pipeline. PMID:25692790

  7. Earthquake Hazards.

    ERIC Educational Resources Information Center

    Donovan, Neville

    1979-01-01

    Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)

  8. In-situ fluid-pressure measurements for earthquake prediction: An example from a deep well at Hi Vista, California

    USGS Publications Warehouse

    Healy, J.H.; Urban, T.C.

    1985-01-01

    Short-term earthquake prediction requires sensitive instruments for measuring the small anomalous changes in stress and strain that precede earthquakes. Instruments installed at or near the surface have proven too noisy for measuring anomalies of the size expected to occur, and it is now recognized that even to have the possibility of a reliable earthquake-prediction system will require instruments installed in drill holes at depths sufficient to reduce the background noise to a level below that of the expected premonitory signals. We are conducting experiments to determine the maximum signal-to-noise improvement that can be obtained in drill holes. In a 592 m well in the Mojave Desert near Hi Vista, California, we measured water-level changes with amplitudes greater than 10 cm, induced by earth tides. By removing the effects of barometric pressure and the stress related to earth tides, we have achieved a sensitivity to volumetric strain rates of 10^-9 to 10^-10 per day. Further improvement may be possible, and it appears that a successful earthquake-prediction capability may be achieved with an array of instruments installed in drill holes at depths of about 1 km, assuming that the premonitory strain signals are, in fact, present. © 1985 Birkhäuser Verlag.

  9. Real data assimilation for optimization of frictional parameters and prediction of afterslip in the 2003 Tokachi-oki earthquake inferred from slip velocity by an adjoint method

    NASA Astrophysics Data System (ADS)

    Kano, Masayuki; Miyazaki, Shin'ichi; Ishikawa, Yoichi; Hiyoshi, Yoshihisa; Ito, Kosuke; Hirahara, Kazuro

    2015-10-01

    Data assimilation is a technique that optimizes the parameters used in a numerical model, with the constraint of the model dynamics, to achieve a better fit to observations. The optimized parameters can be utilized for subsequent prediction with the numerical model, and the predicted physical variables are presumably closer to the observations that will become available in the future, at least compared to those obtained without optimization through data assimilation. In this work, an adjoint data assimilation system is developed for optimizing a relatively large number of spatially inhomogeneous frictional parameters during the afterslip period, in which the physical constraints are a quasi-dynamic equation of motion and a laboratory-derived rate- and state-dependent friction law that describe the temporal evolution of slip velocity at subduction zones. The observed variable is the estimated slip velocity on the plate interface. Before applying this method to real data assimilation for the afterslip of the 2003 Tokachi-oki earthquake, a synthetic data assimilation experiment is conducted to examine the feasibility of optimizing the frictional parameters in the afterslip area. It is confirmed that the current system is capable of optimizing the frictional parameters A-B, A and L by adopting the physical constraint based on a numerical model if observations capture the acceleration and decaying phases of slip on the plate interface. On the other hand, it is unlikely that the frictional parameters can be constrained in regions where the amplitude of afterslip is less than 1.0 cm d-1. Next, real data assimilation for the 2003 Tokachi-oki earthquake is conducted to incorporate slip velocity data inferred from time-dependent inversion of Global Navigation Satellite System time series. The optimized values of A-B, A and L are O(10 kPa), O(10^2 kPa) and O(10 mm), respectively. The optimized frictional parameters yield a better fit to the observations and a better prediction skill of slip

  10. Measuring the effectiveness of earthquake forecasting in insurance strategies

    NASA Astrophysics Data System (ADS)

    Mignan, A.; Muir-Wood, R.

    2009-04-01

    Given the difficulty of judging whether the skill of a particular methodology of earthquake forecasts is offset by the inevitable false alarms and missed predictions, it is important to find a means to weigh the successes and failures according to a common currency. Rather than judge subjectively the relative costs and benefits of predictions, we develop a simple method to determine if the use of earthquake forecasts can increase the profitability of active financial risk management strategies employed in standard insurance procedures. Three types of risk management transactions are employed: (1) insurance underwriting, (2) reinsurance purchasing and (3) investment in CAT bonds. For each case premiums are collected based on modelled technical risk costs and losses are modelled for the portfolio in force at the time of the earthquake. A set of predetermined actions follow from the announcement of any change in earthquake hazard, so that, for each earthquake forecaster, the financial performance of an active risk management strategy can be compared with the equivalent passive strategy in which no notice is taken of earthquake forecasts. Overall performance can be tracked through time to determine which strategy gives the best long term financial performance. This will be determined by whether the skill in forecasting the location and timing of a significant earthquake (where loss is avoided) is outweighed by false predictions (when no premium is collected). This methodology is to be tested in California, where catastrophe modeling is reasonably mature and where a number of researchers issue earthquake forecasts.

  11. Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization

    NASA Astrophysics Data System (ADS)

    Lee, Kyungbook; Song, Seok Goo

    2017-09-01

    Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.

  12. Predicting earthquake effects—Learning from Northridge and Loma Prieta

    USGS Publications Warehouse

    Holzer, Thomas L.

    1994-01-01

    The continental United States has been rocked by two particularly damaging earthquakes in the last 4.5 years, Loma Prieta in northern California in 1989 and Northridge in southern California in 1994. Combined losses from these two earthquakes approached $30 billion. Approximately half these losses were reimbursed by the federal government. Because large earthquakes typically overwhelm state resources and place unplanned burdens on the federal government, it is important to learn from these earthquakes how to reduce future losses. My purpose here is to explore a potential implication of the Northridge and Loma Prieta earthquakes for hazard-mitigation strategies: earth scientists should increase their efforts to map hazardous areas within urban regions. 

  13. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
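
    The final step, generating a random field with a prescribed power spectral density, is commonly done by filtering white noise in the Fourier domain. The sketch below uses that standard trick with an illustrative k^-3 spectrum, not the paper's derived formula.

```python
import numpy as np

# Spectral synthesis of a random 2-D stress perturbation: assign random
# phases and a power-law amplitude spectrum, then inverse-transform.
# The decay exponent is an illustrative choice, not the published PSD.
def random_stress(n=256, decay=3.0, seed=0):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                       # suppress the zero-wavenumber mode
    amp = k ** (-decay / 2.0)              # |F(k)|^2 ~ k^-decay
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.real(np.fft.ifft2(amp * phase))
    return (field - field.mean()) / field.std()   # normalized stress pattern

tau = random_stress()
print(tau.shape, round(float(tau.std()), 2))       # (256, 256) 1.0
```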

  14. Prediction of the area affected by earthquake-induced landsliding based on seismological parameters

    NASA Astrophysics Data System (ADS)

    Marc, Odin; Meunier, Patrick; Hovius, Niels

    2017-07-01

    We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.

  15. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  16. GIS-Based System for Post-Earthquake Crisis Management Using Cellular Network

    NASA Astrophysics Data System (ADS)

    Raeesi, M.; Sadeghi-Niaraki, A.

    2013-09-01

    Earthquakes are among the most destructive natural disasters. They happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after disasters like earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed an area, several teams are sent to locate the destroyed zones. The search and rescue phase is usually maintained for many days, so reducing this time is very important for survivors. A Geographical Information System (GIS) can be used to decrease response time and to manage critical situations, and position estimation within a short period of time is essential. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and identifying the critical points, the points are sent to a central site for managing the quick-response rescue procedure. This solution establishes a quick way to manage the post-earthquake crisis.

  17. Earthquake early warning using P-waves that appear after initial S-waves

    NASA Astrophysics Data System (ADS)

    Kodera, Y.

    2017-12-01

    As countermeasures against underprediction for large earthquakes with finite faults and overprediction for multiple simultaneous earthquakes, Hoshiba (2013), Hoshiba and Aoki (2015), and Kodera et al. (2016) proposed earthquake early warning (EEW) methods that directly predict ground motion by computing the wave propagation of observed ground motion. These methods are expected to predict ground motion with high accuracy even for complicated scenarios, because they do not need source parameter estimation. On the other hand, there is room for improvement in their rapidity, because they predict strong motion mainly from observations of S-waves and do not explicitly use the P-wave information available before the S-waves arrive. In this research, we propose a real-time P-wave detector to incorporate P-wave information into these wavefield-estimation approaches. P-waves within a few seconds of the P-onsets are commonly used in many existing EEW methods. In addition, we focus on P-waves that may appear in the later part of seismic waveforms. Kurahashi and Irikura (2013) noted that P-waves radiated from strong motion generation areas (SMGAs) were recognizable after the S-waves of the initial rupture point in the 2011 off the Pacific coast of Tohoku earthquake (Mw 9.0) (the Tohoku-oki earthquake). Detecting these P-waves would enhance the rapidity of prediction of the peak ground motion generated by SMGAs. We constructed a real-time P-wave detector that uses polarization analysis. Using acceleration records in boreholes of KiK-net (band-pass filtered around 0.5-10 Hz with site amplification correction), the P-wave detector performed principal component analysis with a sliding window of 4 s and calculated P-filter values (e.g. Ross and Ben-Zion, 2014). The application to the Tohoku-oki earthquake (Mw 9.0) showed that (1) peaks of the P-filter corresponding to SMGAs appeared at several stations located near SMGAs and (2) real-time seismic intensities (Kunugi et al
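
    A simplified polarization detector of this kind can be sketched with sliding-window principal component analysis on three-component records: the product of rectilinearity and the verticality of the dominant eigenvector rises on impulsive, vertically polarized P arrivals. The definition below is a stand-in for the actual Ross and Ben-Zion (2014) P-filter, not a reproduction of it.

```python
import numpy as np

def p_filter(z, n, e, fs, win_s=4.0):
    """Sliding-window PCA polarization: rectilinearity times verticality of
    the dominant eigenvector of the 3-component covariance matrix."""
    w = int(win_s * fs)
    out = np.zeros(len(z))
    for i in range(w, len(z)):
        cov = np.cov(np.vstack([z[i - w:i], n[i - w:i], e[i - w:i]]))
        vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
        rect = 1.0 - (vals[0] + vals[1]) / (2.0 * vals[2])
        vert = abs(vecs[0, 2])                       # Z component of dominant axis
        out[i] = rect * vert
    return out

fs = 100.0
rng = np.random.default_rng(4)
z, n, e = (0.1 * rng.standard_normal(1000) for _ in range(3))
z[600:700] += 1.0                                    # synthetic vertical P pulse
pf = p_filter(z, n, e, fs)
print("background ~", round(float(pf[:500].mean()), 2), "peak ~", round(float(pf.max()), 2))
```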

  18. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.

  19. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M = 6.65 (A = 537 km²).
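
    The bilinear form is easy to reconstruct from the two numbers quoted above (slope change at M = 6.65, A = 537 km²), assuming slope 1 in log10(A) below the crossover and 4/3 above, as in Hanks and Bakun-type relations; the intercepts below are derived from those two numbers and are therefore illustrative, not the published coefficients.

```python
import math

# Bilinear magnitude-area relation pinned to the crossover quoted above.
A_STAR, M_STAR = 537.0, 6.65    # km^2, magnitude at the slope change

def magnitude_from_area(area_km2):
    la, lstar = math.log10(area_km2), math.log10(A_STAR)
    if area_km2 <= A_STAR:
        return M_STAR + (la - lstar)              # slope 1 in log10(A)
    return M_STAR + (4.0 / 3.0) * (la - lstar)    # slope 4/3 above the crossover

for A in (100.0, 537.0, 2000.0):
    print(A, round(magnitude_from_area(A), 2))
```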

  1. Application of Geostatistical Methods and Machine Learning for spatio-temporal Earthquake Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Schaefer, A. M.; Daniell, J. E.; Wenzel, F.

    2014-12-01

    Earthquake clustering is an increasingly important part of general earthquake research, especially for seismic hazard assessment and earthquake forecasting and prediction approaches. The distinct identification and definition of foreshocks, aftershocks, mainshocks and secondary mainshocks is handled using a point-based spatio-temporal clustering algorithm originating in classic machine learning. This can further be applied for declustering purposes, to separate background seismicity from triggered seismicity. The results are interpreted and processed to assemble 3D (x, y, t) earthquake clustering maps based on smoothed seismicity records in space and time. In addition, multi-dimensional Gaussian functions are used to capture clustering parameters for spatial distribution and dominant orientations. Clusters are further processed using methodologies originating in geostatistics, which have mostly been developed and applied in mining projects during recent decades. A 2.5-D variogram analysis is applied to identify spatio-temporal homogeneity in terms of earthquake density and energy output. The results are interpolated using Kriging to provide an accurate mapping solution for clustering features. As a case study, seismic data from New Zealand and the United States are used, covering events since the 1950s, from which an earthquake cluster catalogue is assembled for most of the major events, including a detailed analysis of the Landers and Christchurch sequences.
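
    The variogram step can be illustrated with a plain empirical semivariogram over point data, as sketched below; the study's 2.5-D space-time version and the subsequent Kriging add structure on top of this basic estimator.

```python
import numpy as np

# Empirical semivariogram: gamma(h) = mean of 0.5*(v_i - v_j)^2 over point
# pairs whose separation falls in each distance bin.
def semivariogram(coords, values, bins):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # unique pairs only
    dist, gamma = d[iu], g[iu]
    idx = np.digitize(dist, bins)
    return np.array([gamma[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(1, len(bins))])

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(200, 2))                      # synthetic point locations
vals = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.standard_normal(200)
print(np.round(semivariogram(xy, vals, bins=np.linspace(0, 50, 11)), 3))
```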

  2. Middle school students' earthquake content and preparedness knowledge - A mixed method study

    NASA Astrophysics Data System (ADS)

    Henson, Harvey, Jr.

    The purpose of this study was to assess the effect of earthquake instruction on students' earthquake content knowledge and preparedness for earthquakes. This study used an innovative direct instruction on earthquake science content and concepts with an inquiry-based group activity on earthquake safety, followed by an earthquake simulation and preparedness video, to help middle school students understand and prepare for the regional seismic threat. A convenience sample of 384 sixth- and seventh-grade students at two small middle schools in southern Illinois was used in this study. Qualitative information was gathered using open-ended survey questions, classroom observations, and semi-structured interviews. Quantitative data were collected using a 21-item content questionnaire administered to test students' General Earthquake Knowledge, Local Earthquake Knowledge, and Earthquake Preparedness Knowledge before and after instruction. A 21-item Likert-scale survey administered before and after instruction was used to collect students' perceptions and attitudes. Qualitative data analysis included quantification of student responses to the open-ended questions and thematic analysis of observation notes and interview transcripts. Quantitative datasets were analyzed using descriptive and inferential statistical methods, including t-tests to evaluate the differences in mean scores between paired groups before and after interventions and one-way analysis of variance (ANOVA) to test for differences between mean scores of the comparison groups. Significant mean differences between groups were further examined using Dunnett's C post hoc statistical analysis. Integration and interpretation of the qualitative and quantitative results of the study revealed a significant increase in general, local and preparedness earthquake knowledge among middle school students after the interventions. The findings specifically indicated that these students felt most aware and prepared for an earthquake after an

  3. Earthquake precursory events around epicenters and local active faults; the cases of two inland earthquakes in Iran

    NASA Astrophysics Data System (ADS)

    Valizadeh Alvan, H.; Mansor, S.; Haydari Azad, F.

    2012-12-01

    The possibility of earthquake prediction in the time frame of several days to a few minutes before occurrence has recently stirred interest among researchers. Scientists believe that new theories and explanations of the mechanism of this natural phenomenon are trustworthy and can be the basis of future prediction efforts. During the last thirty years, experimental research has identified some pre-earthquake events which are now recognized as confirmed warning signs (precursors) of past known earthquakes. With the advances in in-situ measurement devices and data analysis capabilities and the emergence of satellite-based data collectors, monitoring the earth's surface is now routine work. Data providers are supplying researchers from all over the world with high-quality, validated imagery and non-imagery data. Surface Latent Heat Flux (SLHF), the amount of energy exchanged in the form of water vapor between the earth's surface and the atmosphere, has frequently been reported as an earthquake precursor in past years. The stress accumulated in the earth's crust during the preparation phase of earthquakes is said to be the main cause of temperature anomalies weeks to days before the main event and subsequent shakes. Chemical and physical interactions in the presence of underground water lead to higher water evaporation prior to inland earthquakes. On the other hand, the leak of radon gas that occurs as rocks break during earthquake preparation causes the formation of airborne ions and higher Air Temperature (AT) prior to the main event. Although co-analysis of direct and indirect observations of precursory events is considered a promising method for future successful earthquake prediction, without proper and thorough knowledge of the geological setting, atmospheric factors and geodynamics of earthquake-prone regions we will not be able to identify anomalies due to seismic activity in the earth's crust. Active faulting is a key factor in identification of the

  4. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events and the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

  5. Microearthquake networks and earthquake prediction

    USGS Publications Warehouse

    Lee, W.H.K.; Steward, S. W.

    1979-01-01

    A microearthquake network is a group of highly sensitive seismographic stations designed primarily to record local earthquakes of magnitudes less than 3. Depending on the application, a microearthquake network will consist of several stations or as many as a few hundred. They are usually classified as either permanent or temporary. In a permanent network, the seismic signal from each station is telemetered to a central recording site to cut down on operating costs and to allow more efficient and up-to-date processing of the data. However, telemetering can restrict the choice of station sites because of the line-of-sight requirement for radio transmission or the need for telephone lines. Temporary networks are designed to be extremely portable and completely self-contained so that they can be deployed very quickly. They are most valuable for recording aftershocks of a major earthquake or for studies in remote areas.

  6. Robust method to detect and locate local earthquakes by means of amplitude measurements.

    NASA Astrophysics Data System (ADS)

    del Puy Papí Isaba, María; Brückl, Ewald

    2016-04-01

    In this study we present a robust new method to detect and locate medium- and low-magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude-distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. This is possible because a back-projection matrix, independent of the registered amplitude, is obtained and stored for each seismic
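
    A minimal numerical sketch of the back-projection step follows. The ML-style amplitude-distance relation and its coefficients are invented placeholders standing in for the paper's empirical model; the point is the structure: back-project each station's maximum resultant velocity to every grid point, then keep the minimum pseudo-magnitude per point.

        import numpy as np

        # Assumed empirical amplitude-distance relation (coefficients hypothetical):
        # pseudo_mag = log10(v_max) + A1 * log10(r_km) + A2 * r_km + A3
        A1, A2, A3 = 1.1, 0.0019, 0.7

        def pseudo_magnitude_grid(v_max, stations, grid):
            """Back-project every station's max ground velocity onto every
            grid point, then keep the per-point minimum across stations.

            v_max: (n_sta,) maximum resultant ground velocities in one window
            stations: (n_sta, 2) station coordinates in km
            grid: (n_pts, 2) grid-point coordinates in km
            """
            # distances from every grid point to every station, shape (n_pts, n_sta)
            r = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
            r = np.maximum(r, 1.0)  # avoid log10(0) at co-located points
            pm = np.log10(v_max)[None, :] + A1 * np.log10(r) + A2 * r + A3
            return pm.min(axis=1)   # minimum pseudo-magnitude at each grid point

        stations = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])
        grid = np.array([[5.0, 5.0], [15.0, 5.0], [10.0, 10.0]])
        v_max = np.array([2.0e-4, 1.5e-4, 3.0e-4])  # m/s, one time window
        print(pseudo_magnitude_grid(v_max, stations, grid))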

  7. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    USGS Publications Warehouse

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yielded a substantial change in b-values.
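
    The b-values underpinning such forecasts are commonly estimated with the Aki (1965) maximum-likelihood formula, b = log10(e) / (mean(M) - Mc), optionally with Utsu's correction for magnitude binning. A short sketch (not the authors' code) on a synthetic Gutenberg-Richter catalog:

        import numpy as np

        def b_value_mle(mags, m_c, dm=0.1):
            """Aki (1965) maximum-likelihood b-value with Utsu's binning
            correction. mags: catalog magnitudes; m_c: completeness
            magnitude; dm: magnitude bin width."""
            m = np.asarray(mags)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

        # Synthetic Gutenberg-Richter sample with true b = 1.0 above Mc = 1.0
        rng = np.random.default_rng(0)
        mags = 1.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
        print(b_value_mle(mags, m_c=1.0, dm=0.0))  # ~1.0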

  8. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 min after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  9. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in a format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real-time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, used more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.
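
    ETES belongs to the ETAS family of stochastic triggering models, in which the seismicity rate is a background term plus an Omori-Utsu aftershock contribution from every past event, scaled exponentially with magnitude. A minimal sketch of the conditional intensity is below; the parameter values are illustrative assumptions, not those calibrated in the forecast.

        import numpy as np

        # Hypothetical ETAS-style parameters (mu, K, alpha, c, p); illustrative only.
        MU, K, ALPHA, C, P, M_C = 0.2, 0.05, 1.0, 0.01, 1.1, 3.0

        def etas_rate(t, event_times, event_mags):
            """Conditional intensity lambda(t): background rate plus the
            Omori-Utsu contribution of every earlier event, scaled by its
            magnitude."""
            past = event_times < t
            trig = K * 10.0 ** (ALPHA * (event_mags[past] - M_C)) \
                     * (t - event_times[past] + C) ** (-P)
            return MU + trig.sum()

        times = np.array([0.0, 0.5, 0.6])   # days
        mags = np.array([5.0, 3.5, 3.2])
        print(etas_rate(1.0, times, mags))  # events/day at t = 1.0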

  10. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models are used: the Epidemic-Type Aftershock Sequences (ETAS) and the Short-Term Earthquake Probabilities (STEP). Here, we report the results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).

  11. Dynamic stress changes during earthquake rupture

    USGS Publications Warehouse

    Day, S.M.; Yu, G.; Wald, D.J.

    1998-01-01

    We assess two competing dynamic interpretations that have been proposed for the short slip durations characteristic of kinematic earthquake models derived by inversion of earthquake waveform and geodetic data. The first interpretation would require a fault constitutive relationship in which rapid dynamic restrengthening of the fault surface occurs after passage of the rupture front, a hypothesized mechanical behavior that has been referred to as "self-healing." The second interpretation would require sufficient spatial heterogeneity of stress drop to permit rapid equilibration of elastic stresses with the residual dynamic friction level, a condition we refer to as "geometrical constraint." These interpretations imply contrasting predictions for the time dependence of the fault-plane shear stresses. We compare these predictions with dynamic shear stress changes for the 1992 Landers (M 7.3), 1994 Northridge (M 6.7), and 1995 Kobe (M 6.9) earthquakes. Stress changes are computed from kinematic slip models of these earthquakes, using a finite-difference method. For each event, static stress drop is highly variable spatially, with high stress-drop patches embedded in a background of low, and largely negative, stress drop. The time histories of stress change show predominantly monotonic stress change after passage of the rupture front, settling to a residual level, without significant evidence for dynamic restrengthening. The stress change at the rupture front is usually gradual rather than abrupt, probably reflecting the limited resolution inherent in the underlying kinematic inversions. On the basis of this analysis, as well as recent similar results obtained independently for the Kobe and Morgan Hill earthquakes, we conclude that, at the present time, the self-healing hypothesis is unnecessary to explain earthquake kinematics.

  12. Understanding Earthquake Hazard & Disaster in Himalaya - A Perspective on Earthquake Forecast in Himalayan Region of South Central Tibet

    NASA Astrophysics Data System (ADS)

    Shanker, D.; Paudyal, ,; Singh, H.

    2010-12-01

    Not only are a basic understanding of the earthquake phenomenon and the resistance offered by designed structures important; an understanding of socio-economic factors, the engineering properties of indigenous materials, local skills and technology-transfer models is also of vital importance. It is important that the engineering aspects of mitigation be made part of public policy documents. Earthquakes, therefore, are and have long been thought of as one of the worst enemies of mankind. Due to the very nature of the release of energy, damage is evident, which, however, will not culminate in a disaster unless it strikes a populated area. The word mitigation may be defined as the reduction in severity of something. Earthquake disaster mitigation, therefore, implies that measures may be taken which help reduce the severity of damage caused by earthquakes to life, property and environment. While “earthquake disaster mitigation” usually refers primarily to interventions to strengthen the built environment, “earthquake protection” is now considered to include human, social and administrative aspects of reducing earthquake effects. It should, however, be noted that reduction of earthquake hazards through prediction is considered to be one of the effective measures, and much effort is spent on prediction strategies. Earthquake prediction, however, does not guarantee safety; even if an earthquake is predicted correctly, damage to life and property on such a large scale warrants the use of other aspects of mitigation. While earthquake prediction may be of some help, mitigation remains the main focus of attention of civil society. The present study suggests that anomalous seismic activity/earthquake swarms existed prior to medium-size earthquakes in the Nepal Himalaya. The mainshocks were preceded by quiescence periods, which are an indication of the occurrence of future seismic activity. In all the cases, the identified episodes of anomalous seismic activity were

  13. Earthquake watch

    USGS Publications Warehouse

    Hill, M.

    1976-01-01

     When the time comes that earthquakes can be predicted accurately, what shall we do with the knowledge? This was the theme of a November 1975 conference on earthquake warning and response held in San Francisco, called by Assistant Secretary of the Interior Jack W. Carlson. Invited were officials of State and local governments from Alaska, California, Hawaii, Idaho, Montana, Nevada, Utah, Washington, and Wyoming, and representatives of the news media.

  14. Seismo-induced effects in the near-earth space: Combined ground and space investigations as a contribution to earthquake prediction

    NASA Astrophysics Data System (ADS)

    Sgrigna, V.; Buzzi, A.; Conti, L.; Picozza, P.; Stagni, C.; Zilpimiani, D.

    2007-02-01

    The paper aims at giving a few methodological suggestions for deterministic earthquake prediction studies based on combined ground-based and space observations of earthquake precursors. What has been lacking up to now is the demonstration of a causal relationship, with explained physical processes, between data gathered simultaneously and continuously by space observations and ground-based measurements. Coordinated space and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board LEO satellites. For this purpose, a new result reported in the paper is an original and specific space mission project (ESPERIA) and two instruments of its payload. The ESPERIA space project has been performed for the Italian Space Agency, and three ESPERIA instruments (the ARINA and LAZIO particle detectors, and the EGLE search-coil magnetometer) have been built and tested in space. The EGLE experiment started on April 15, 2005 on board the ISS, within the ENEIDE mission. The launch of ARINA occurred on June 15, 2006, on board the RESURS DK-1 Russian LEO satellite. As an introduction and justification of these experiments, the paper clarifies some basic concepts and critical methodological aspects concerning deterministic and statistical approaches and their use in earthquake prediction. We also take the liberty of giving the scientific community a few critical hints based on our personal experience in the field and propose a joint study devoted to earthquake prediction and warning.

  15. Safety and survival in an earthquake

    USGS Publications Warehouse

    ,

    1969-01-01

    Many earth scientists in this country and abroad are focusing their studies on the search for means of predicting impending earthquakes, but, as yet, an accurate prediction of the time and place of such an event cannot be made. From past experience, however, one can assume that earthquakes will continue to harass mankind and that they will occur most frequently in the areas where they have been relatively common in the past. In the United States, earthquakes can be expected to occur most frequently in the western states, particularly in Alaska, California, Washington, Oregon, Nevada, Utah, and Montana. The danger, however, is not confined to any one part of the country; major earthquakes have occurred at widely scattered locations.

  16. Issues on the Japanese Earthquake Hazard Evaluation

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Fukushima, Y.; Sagiya, T.

    2013-12-01

    The 2011 Great East Japan Earthquake forced Japan to change its policy of countermeasures against earthquake disasters, including earthquake hazard evaluation. Before March 11, Japanese earthquake hazard evaluation was based on the history of repeatedly occurring earthquakes and the characteristic earthquake model. The source region of an earthquake was identified and its occurrence history was revealed. Then the conditional probability was estimated using the renewal model. However, the Japanese authorities changed the policy after the megathrust earthquake in 2011 such that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. According to this policy, three important reports were issued during the following two years. First, the Central Disaster Management Council issued a new estimate of damages by a hypothetical Mw9 earthquake along the Nankai trough during 2011 and 2012. The model predicts a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most areas of Southwest Japan as the maximum. Next, the Earthquake Research Council revised the long-term earthquake hazard evaluation of earthquakes along the Nankai trough in May 2013, which discarded the characteristic earthquake model and put much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, with current knowledge, it is hard to predict the occurrence of large earthquakes along the Nankai trough using present techniques, given the diversity of earthquake phenomena. These reports created sensations throughout the country, and local governments are struggling to prepare countermeasures. Near their ends, these reports commented on the large uncertainty in their evaluations, but are these messages transmitted properly to the public? Earthquake scientists, including the authors, are involved in

  17. Reply to “Statistical evaluation of the VAN Method using the historic earthquake catalog in Greece,” by Richard L. Aceves, Stephen K. Park and David J. Strauss

    NASA Astrophysics Data System (ADS)

    Varotsos, P.; Lazaridou, M.

    The pioneering calculation by Aceves et al. [1996] shed light on the main question of this debate, i.e., on whether “VAN predictions can be ascribed to chance.” Aceves et al. [1996] conclude that “the VAN method has resulted in a significantly higher prediction rate than randomly sampling a PDF (probability density function) map generated from a 25 year history of earthquakes.” After investigating the totality of VAN predictions issued during the period 1987-1989, Aceves et al. [1996] found: “The prediction rate for the VAN method clearly exceeds that from the random model at all time lags between 5-22 days. At a 5 day time lag, the VAN prediction rate of 35.7% has a P-value of less than 0.06%. This means that a random model does as well as does the VAN method less than 0.06% of the time. At 22 days, the prediction rate of 67.9% has a P-value of less than 0.07%.” These conclusions basically coincide with those of Hamada [1993] although Aceves et al. [1996] followed different procedures. They are also in fundamental agreement with the results of Honkura and Tanaka [1996]. Another important conclusion of Aceves et al. [1996] is that, after declustering the earthquake catalog and prediction list from aftershocks, “VAN method is still formally significant.”

  18. Echo-sounding method aids earthquake hazard studies

    USGS Publications Warehouse

    ,

    1995-01-01

    Dramatic examples of catastrophic damage from an earthquake occurred in 1989, when the M 7.1 Loma Prieta earthquake rocked the San Francisco Bay area, and in 1994, when the M 6.6 Northridge earthquake jolted southern California. The surprising amount and distribution of damage to private property and infrastructure emphasizes the importance of seismic-hazard research in urbanized areas, where the potential for damage and loss of life is greatest. During April 1995, a group of scientists from the U.S. Geological Survey and the University of Tennessee, using an echo-sounding method described below, is collecting data in San Antonio Park, California, to examine the Monte Vista fault, which runs through this park. The Monte Vista fault in this vicinity shows evidence of movement within the last 10,000 years or so. The data will give them a "picture" of the subsurface rock deformation near this fault. The data will also be used to help locate a trench that will be dug across the fault by scientists from William Lettis & Associates.

  19. Prospective Validation of Pre-earthquake Atmospheric Signals and Their Potential for Short–term Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Lee, Lou; Liu, Tiger; Kafatos, Menas

    2015-04-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. Our challenge question is: are such pre-earthquake atmospheric/ionospheric signals significant, and could they be useful for early warning of large earthquakes? To check the predictive potential of atmospheric pre-earthquake signals, we have started to validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (STIR), electron concentration in the ionosphere (GPS/TEC), radon/ion activities, air temperature and seismicity patterns) that were found to be associated with earthquakes. The scientific rationale for multidisciplinary analysis is based on the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually named short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009; (2) prospective testing of STIR anomalies with potential for M5.5+ events. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show STIR anomalous behavior before all of these events, with false negatives close to zero. The false alarm ratio for false positives is less than 25%. The initial prospective testing for STIR shows the systematic appearance of anomalies in advance (1-30 days) of the M5.5+ events for Taiwan, Kamchatka-Sakhalin (Russia) and Japan. Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies, one to several days prior to the largest earthquakes. That feature could be

  20. Moment Magnitudes and Local Magnitudes for Small Earthquakes: Implications for Ground-Motion Prediction and b-values

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Hanks, T. C.; Vernon, F.

    2016-12-01

    We illustrate two essential consequences of the systematic difference between moment magnitude and local magnitude for small earthquakes, illuminating the underlying earthquake physics. Moment magnitude, M ~ (2/3) log M0, is uniformly valid for all earthquake sizes [Hanks and Kanamori, 1979]. However, the relationship between local magnitude ML and moment is itself magnitude dependent. For moderate events, 3 < M < 7, M and ML are coincident; for earthquakes smaller than M3, ML ~ log M0 [Hanks and Boore, 1984]. This is a consequence of the saturation of the apparent corner frequency fc as it becomes greater than the largest observable frequency, fmax; in this regime, stress drop no longer controls ground motion. This implies that ML and M differ by a factor of 1.5 for these small events. While this idea is not new, its implications are important as more small-magnitude data are incorporated into earthquake hazard research. With a large dataset of M<3 earthquakes recorded on the ANZA network, we demonstrate striking consequences of the difference between M and ML. ML scales as the log of the peak ground motions (e.g., PGA or PGV) for these small earthquakes, which yields log PGA ~ log M0 [Boore, 1986]. We plot nearly 15,000 records of PGA and PGV at close stations, adjusted for site conditions and for geometrical spreading to 10 km. The slope of the log of ground motion is 1.0*ML, or 1.5*M, confirming the relationship, and that fc >> fmax. Just as importantly, if this relation is overlooked, prediction of large-magnitude ground motion from small earthquakes will be misguided. We also consider the effect of this magnitude scale difference on the b-value. The oft-cited b-value of 1 should hold for small magnitudes, given M. Use of ML necessitates b = 2/3 for the same data set; use of mixed, or unknown, magnitudes complicates the matter further. This is of particular import when estimating the rate of large earthquakes when one has limited data on their recurrence, as is the case for
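
    The b-value consequence can be checked numerically: if small-event magnitudes follow Gutenberg-Richter with b = 1 in M, converting them to ML (which grows 1.5 times faster with log M0 in this regime) rescales the estimated b-value to 2/3. A sketch, with an assumed constant offset in the M-to-ML conversion:

        import numpy as np

        rng = np.random.default_rng(1)

        def b_mle(mags, m_min):
            """Aki maximum-likelihood b-value (no binning correction)."""
            m = mags[mags >= m_min]
            return np.log10(np.e) / (m.mean() - m_min)

        # Synthetic small-event catalog: b = 1 in moment magnitude M.
        m = 1.0 + rng.exponential(scale=1.0 / np.log(10), size=20000)

        # For M < 3, ML grows 1.5x faster with log M0 than M does, so (up to
        # an assumed constant offset) ML = 1.5 * (M - 1.0) + 1.0 here.
        ml = 1.5 * (m - 1.0) + 1.0

        print(b_mle(m, 1.0))   # ~1.0 when magnitudes are M
        print(b_mle(ml, 1.0))  # ~2/3 when the same events carry ML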

  1. Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites

    NASA Astrophysics Data System (ADS)

    Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.

    2017-12-01

    How a site responds to earthquake shaking, and the corresponding damage, is largely influenced by the underlying ground conditions through which the seismic waves propagate. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear-wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently, the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, the Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA) methods, are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test and downhole Vs data. Correlations between SPT blowcount and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from available combinations of invasive and non-invasive site characterization methods.
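
    Once a layered Vs profile is inverted from the dispersion data, Vs30 is the travel-time average over the top 30 m, Vs30 = 30 / sum(h_i / Vs_i). A small sketch (profile values hypothetical):

        def vs30(thicknesses_m, vs_m_per_s):
            """Time-averaged shear-wave velocity of the upper 30 m:
            Vs30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m."""
            depth, travel_time = 0.0, 0.0
            for h, vs in zip(thicknesses_m, vs_m_per_s):
                use = min(h, 30.0 - depth)     # only the part above 30 m counts
                if use <= 0.0:
                    break
                travel_time += use / vs
                depth += use
            if depth < 30.0:                   # extend last layer if profile is short
                travel_time += (30.0 - depth) / vs_m_per_s[-1]
            return 30.0 / travel_time

        # Hypothetical inverted profile: 5 m of soft soil over stiffer layers.
        print(vs30([5.0, 10.0, 20.0], [180.0, 350.0, 600.0]))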

  2. Earthquake and tsunami forecasts: Relation of slow slip events to subsequent earthquake rupture

    PubMed Central

    Dixon, Timothy H.; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-01-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr–Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327

  3. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-02

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.

  4. Statistical validation of earthquake related observations

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2011-12-01

    The confirmed fractal nature of earthquakes and their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term ones) are usually based on erroneous assumptions of easily tractable or, conversely, delicately designed models. The widespread practice of deceptive modeling, considered a "reasonable proxy" of the natural seismic process, leads to seismic hazard assessment of unknown quality, whose errors propagate non-linearly into the resulting estimates of risk and, eventually, into unexpected societal losses of unacceptable level. Studies aimed at forecast/prediction of earthquakes must include validation in retrospective (at least) and, eventually, prospective tests. In the absence of such control, a suggested "precursor/signal" remains a "candidate", whose link to the target seismic event is a model assumption. Predicting in advance is the only decisive test of forecasts/predictions; therefore, the score-card of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must prove statistical significance by rejecting the null-hypothesis of random coincidental occurrence in advance of target earthquakes. We reiterate the so-called "Seismic Roulette" null-hypothesis as the most adequate undisturbed random alternative accounting for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as the number of earthquake locations from a sample catalog representing the seismic locus, one sector per location; (ii) make your bet according to the prediction (i.e., determine which locations are inside the area of alarm, and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate statistics of wins and losses along with the number of chips spent. If a precursor in charge of prediction exposes an imperfection of Seismic Roulette then, having in mind
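
    A Monte Carlo version of the Seismic Roulette test can be sketched in a few lines: the chance of a random epicenter landing in the alarmed area equals the alerted fraction of catalog locations, so the null distribution of hits is binomial. All inputs below are toy values for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        def seismic_roulette_pvalue(in_alarm, n_hits, n_targets, n_trials=100_000):
            """Monte Carlo Seismic Roulette test (a sketch of the null hypothesis).

            in_alarm: boolean array over catalog locations, True if a location
                      falls inside the alarm area (one 'sector' per location).
            n_hits:   number of target earthquakes the real prediction caught.
            n_targets: total number of target earthquakes.
            Returns the chance probability of doing at least as well by
            betting on the alarmed sectors at random epicenter draws.
            """
            p_sector = in_alarm.mean()  # chance a random epicenter lands in alarm
            random_hits = rng.binomial(n_targets, p_sector, size=n_trials)
            return np.mean(random_hits >= n_hits)

        locations_in_alarm = np.array([True] * 120 + [False] * 880)  # 12% alerted
        print(seismic_roulette_pvalue(locations_in_alarm, n_hits=7, n_targets=10))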

  5. Earthquake Predictability: Results From Aggregating Seismicity Data And Assessment Of Theoretical Individual Cases Via Synthetic Data

    NASA Astrophysics Data System (ADS)

    Adamaki, A.; Roberts, R.

    2016-12-01

    For many years an important aim in seismological studies has been forecasting the occurrence of large earthquakes. Despite some well-established statistical behavior of earthquake sequences, expressed by e.g. the Omori law for aftershock sequences and the Gutenberg-Richter distribution of event magnitudes, purely statistical approaches to short-term earthquake prediction have in general not been successful. It seems that better understanding of the processes leading to critical stress build-up prior to larger events is necessary to identify useful precursory activity, if this exists, and statistical analyses are an important tool in this context. There has been considerable debate on the usefulness or otherwise of foreshock studies for short-term earthquake prediction. We investigate generic patterns of foreshock activity using aggregated data and by studying not only strong but also moderate-magnitude events. Aggregating empirical local seismicity time series prior to larger events observed in and around Greece reveals a statistically significant increasing rate of seismicity over 20 days prior to M>3.5 earthquakes. This increase cannot be explained by spatio-temporal clustering models such as ETAS, implying genuine changes in the mechanical situation just prior to larger events and thus the possible existence of useful precursory information. Because of spatio-temporal clustering, including the aftershocks of foreshocks, even if such generic behavior exists it does not necessarily follow that foreshocks have the potential to provide useful precursory information for individual larger events. Using synthetic catalogs produced based on different clustering models and different presumed system sensitivities, we are now investigating to what extent the apparently established generic foreshock rate acceleration may or may not imply that foreshocks have potential in the context of routine forecasting of larger events. Preliminary results suggest that this is the case, but
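
    The aggregation step itself is a superposed-epoch analysis: daily event counts in a fixed window before each mainshock are stacked over all mainshocks. A toy sketch (catalog values invented):

        import numpy as np

        def stacked_foreshock_rate(catalog_times, mainshock_times, days=20):
            """Aggregate daily event counts in the `days` days before each
            mainshock, summed over all mainshocks (superposed-epoch stacking)."""
            bins = np.arange(-days, 1)          # day -days ... day 0
            stack = np.zeros(len(bins) - 1)
            for t0 in mainshock_times:
                dt = catalog_times - t0
                sel = dt[(dt >= -days) & (dt < 0)]
                stack += np.histogram(sel, bins=bins)[0]
            return bins[:-1], stack

        # Toy catalog (days): background events plus a few just before mainshocks.
        cat = np.array([10.0, 95.0, 98.5, 99.2, 99.8, 150.0, 196.0, 199.1, 199.6])
        mains = np.array([100.0, 200.0])
        print(stacked_foreshock_rate(cat, mains, days=5))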

  6. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    PubMed

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 ± 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.

  7. Ground Motion Prediction for M7+ scenarios on the San Andreas Fault using the Virtual Earthquake Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.

    2013-05-01

    There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
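
    The core of the ambient-field step is cross-correlating noise records between station pairs, stacking over windows, and symmetrizing the causal and acausal lags to approximate the inter-station impulse response. The sketch below uses a crude amplitude normalization and synthetic data; it illustrates the idea, not the authors' processing.

        import numpy as np

        def noise_impulse_response(u1, u2, win_len, dt):
            """Estimate an inter-station impulse response (Green's function
            proxy) by cross-correlating ambient noise in windows, stacking,
            and symmetrizing the causal and acausal sides."""
            n_win = min(len(u1), len(u2)) // win_len
            stack = np.zeros(2 * win_len - 1)
            for i in range(n_win):
                a = u1[i * win_len:(i + 1) * win_len]
                b = u2[i * win_len:(i + 1) * win_len]
                a = (a - a.mean()) / (a.std() + 1e-12)  # crude amplitude balance
                b = (b - b.mean()) / (b.std() + 1e-12)
                stack += np.correlate(a, b, mode="full")
            mid = win_len - 1
            sym = 0.5 * (stack[mid:] + stack[mid::-1])  # enforce time symmetry
            lags = np.arange(len(sym)) * dt
            return lags, sym / n_win

        # Toy example: shared random "noise" delayed by 50 samples at station 2.
        rng = np.random.default_rng(3)
        src = rng.standard_normal(20_000)
        u1, u2 = src[:-50], src[50:]
        lags, irf = noise_impulse_response(u1, u2, win_len=2000, dt=0.01)
        print(lags[np.argmax(irf)])  # ~0.5 s travel-time peak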

  8. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  9. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes

    PubMed Central

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. PMID:26959647
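
    The integration mechanism described in Methods can be sketched as an adjustment on the logit scale: each human-related factor contributes its log odds ratio to the baseline casualty probability from the structural model. The odds-ratio values below are hypothetical, not the study's pooled estimates.

        import math

        def adjusted_casualty_probability(p_baseline, odds_ratios):
            """Adjust a HAZUS-style baseline casualty probability with
            human-related risk factors expressed as odds ratios (a sketch of
            the logistic-regression integration described in the abstract).

            p_baseline: probability from the structural loss model
            odds_ratios: e.g. {"age_over_65": 1.8, "low_ses": 1.4} -- values
                         here are hypothetical, not the study's estimates.
            """
            logit = math.log(p_baseline / (1.0 - p_baseline))
            for name, or_value in odds_ratios.items():
                logit += math.log(or_value)   # add each factor's log-odds effect
            return 1.0 / (1.0 + math.exp(-logit))

        print(adjusted_casualty_probability(0.02, {"age_over_65": 1.8, "low_ses": 1.4}))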

  10. Uncertainty, variability, and earthquake physics in ground‐motion prediction equations

    USGS Publications Warehouse

    Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.

    2017-01-01

    Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 earthquakes (1.15 ≤ M ≤ 3) and their peak ground accelerations (PGAs), recorded at close distances (R ≤ 20 km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
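
    A simple means-based sketch of the partitioning (a stand-in for a proper mixed-effects fit) shows how removing repeatable event and site terms shrinks the remaining aleatory standard deviation; all data below are synthetic.

        import numpy as np

        def partition_residuals(residuals, event_ids, station_ids):
            """Split total GMPE residuals into repeatable event terms,
            repeatable site terms, and a remaining aleatory part.

            residuals: ln-unit total residuals, one per record
            event_ids, station_ids: integer labels per record
            """
            res = np.asarray(residuals, dtype=float)
            ev = np.zeros(event_ids.max() + 1)
            for e in np.unique(event_ids):            # between-event term
                ev[e] = res[event_ids == e].mean()
            within = res - ev[event_ids]
            st = np.zeros(station_ids.max() + 1)
            for s in np.unique(station_ids):          # site term from within-event
                st[s] = within[station_ids == s].mean()
            remaining = within - st[station_ids]
            return ev, st, remaining.std()

        rng = np.random.default_rng(7)
        n = 2000
        eids = rng.integers(0, 100, n)
        sids = rng.integers(0, 20, n)
        true_ev, true_st = rng.normal(0, 0.5, 100), rng.normal(0, 0.3, 20)
        res = true_ev[eids] + true_st[sids] + rng.normal(0, 0.4, n)
        ev, st, sigma = partition_residuals(res, eids, sids)
        print(sigma)  # ~0.4: sigma drops once repeatable terms are removed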

  11. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of nearest-neighbor searches for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Application of the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the method feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
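
    With SciPy, the database organization the abstract describes can be sketched directly: build a cKDTree over the feature vectors once, then answer each real-time query with a k-nearest-neighbor lookup instead of an exhaustive scan. Feature dimensions and targets below are invented stand-ins for the Gutenberg Algorithm's filter-bank database.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical database: rows of filter-bank amplitude features, with
        # a known peak ground acceleration (PGA) attached to each past record.
        rng = np.random.default_rng(0)
        features = rng.standard_normal((100_000, 9))   # 9 frequency bands
        pga = np.abs(rng.standard_normal(100_000))     # stand-in targets

        tree = cKDTree(features)   # built once, offline

        def predict_pga(query_features, k=30):
            """Nearest-neighbor PGA estimate: average the targets of the k
            most similar past records. The tree query replaces an exhaustive
            scan of the database."""
            _, idx = tree.query(query_features, k=k)
            return pga[idx].mean()

        print(predict_pga(rng.standard_normal(9)))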

  12. Earthquake Hazard Assessment: Basics of Evaluation

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2016-04-01

    Seismic hazard assessment (SHA) is not an easy task: it implies a delicate application of statistics to data of limited size and varying accuracy. Earthquakes follow the Unified Scaling Law, which generalizes the Gutenberg-Richter relationship by taking into account the naturally fractal distribution of their sources. Moreover, earthquakes, including great and mega events, are clustered in time, and their sequences have irregular recurrence intervals. Furthermore, earthquake-related observations are limited to the most recent decades (or centuries in just a few rare cases). Evidently, all this complicates reliable assessment of seismic hazard and associated risks. Making SHA claims, either termless or time-dependent (so-called t-DASH), quantitatively probabilistic in the frame of the most popular objectivist viewpoint on probability requires a long series of "yes/no" trials, which cannot be obtained without extended rigorous testing of the method's predictions against real observations. Therefore, we reiterate the necessity and possibility of applying the modified tools of Earthquake Prediction Strategies, in particular the Error Diagram, introduced by G.M. Molchan in the early 1990s for the evaluation of SHA, and the Seismic Roulette null-hypothesis as a measure of the alerted space. The set of errors, i.e., the rates of failure and of the alerted space-time volume, compared to those obtained in the same number of random-guess trials, permits evaluating the effectiveness of the SHA method and determining the optimal choice of parameters with regard to specified cost-benefit functions. This and other information obtained in such testing supplies a realistic estimate of confidence in SHA results and related recommendations on the level of risk for decision making with regard to engineering design, insurance, and emergency management. These basics of SHA evaluation are exemplified in brief with a few examples, which are analysed in more detail in a poster of
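
    The Error Diagram (Molchan diagram) plots the miss rate against the fraction of space-time alerted as the alarm threshold varies; random guessing falls on the diagonal tau + nu = 1. A toy sketch, with an invented alarm function:

        import numpy as np

        def molchan_curve(alarm_scores, target_cells, thresholds):
            """Molchan error-diagram points for an alarm function on a grid.

            alarm_scores: score per cell (higher = more alarming), shape (n_cells,)
            target_cells: indices of cells where target earthquakes occurred
            thresholds: descending score cut-offs defining alarm areas
            Returns (fraction of space alerted, miss rate) per threshold.
            """
            taus, nus = [], []
            for thr in thresholds:
                alarmed = alarm_scores >= thr
                taus.append(alarmed.mean())                  # alerted fraction
                hits = alarmed[target_cells].sum()
                nus.append(1.0 - hits / len(target_cells))   # miss rate
            return np.array(taus), np.array(nus)

        rng = np.random.default_rng(5)
        scores = rng.random(1000)               # toy alarm function on 1000 cells
        targets = rng.integers(0, 1000, 15)     # 15 target events
        tau, nu = molchan_curve(scores, targets, thresholds=np.linspace(1, 0, 11))
        print(np.c_[tau, nu])  # the diagonal tau + nu = 1 is random guessing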

  13. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  14. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compress sensing methods

    NASA Astrophysics Data System (ADS)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On May 24th 2013 an Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia. It is one of the rare mega earthquakes to have occurred at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improvement of the traditional backprojection method that more accurately locates subevents (energy bursts) during earthquake rupture and determines rupture speeds. The total rupture duration of this earthquake is about 35 s, with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds, with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20 and 35 s is approximately 5 km/s, slightly lower than the shear-wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and the rupture process. The results from the two methods are generally similar. In the next step, we will use data from dense arrays in southwest China as well as global stations for further analysis, in order to study the rupture process of this deep mega earthquake more comprehensively. Reference [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for
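
    A minimal sketch of time-domain backprojection stacking, the principle behind IBP: traces are shifted by predicted travel times to each candidate grid node and stacked, and the stack energy peaks near the radiating subevent. The homogeneous velocity model, impulsive synthetic traces and all parameter values are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def backproject(traces, dt, station_xy, grid_xy, v):
            """Return stack energy at each candidate grid node (homogeneous model)."""
            n_sta, n_t = traces.shape
            energy = np.zeros(len(grid_xy))
            for g, (gx, gy) in enumerate(grid_xy):
                # predicted P travel time from this node to every station
                tt = np.hypot(station_xy[:, 0] - gx, station_xy[:, 1] - gy) / v
                shifts = np.round(tt / dt).astype(int)
                stack = np.zeros(n_t)
                for s in range(n_sta):
                    stack[: n_t - shifts[s]] += traces[s, shifts[s]:]   # align and sum
                energy[g] = np.sum(stack ** 2)
            return energy

        rng = np.random.default_rng(1)
        dt, v, n_t = 0.05, 6.0, 3000                    # s, km/s, samples (placeholders)
        station_xy = rng.uniform(-500, 500, (20, 2))    # station coordinates (km)
        src = np.array([50.0, 0.0])                     # true subevent location (km)
        traces = np.zeros((20, n_t))
        for s, (sx, sy) in enumerate(station_xy):
            traces[s, int(round(np.hypot(sx - src[0], sy - src[1]) / v / dt))] = 1.0

        grid = [(x, y) for x in range(-100, 101, 10) for y in range(-100, 101, 10)]
        energy = backproject(traces, dt, station_xy, np.array(grid), v)
        print("best node:", grid[int(np.argmax(energy))])   # should recover (50, 0)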

  15. Adaptive vibration control of structures under earthquakes

    NASA Astrophysics Data System (ADS)

    Lew, Jiann-Shiun; Juang, Jer-Nan; Loh, Chin-Hsiung

    2017-04-01

    techniques, for structural vibration suppression under earthquakes. Various control strategies have been developed to protect structures from natural hazards and improve the comfort of occupants in buildings. However, there has been little development of adaptive building control with the integration of real-time system identification and control design. Generalized predictive control, which combines the process of real-time system identification and the process of predictive control design, has received widespread acceptance and has been successfully applied to various test-beds. This paper presents a formulation of the predictive control scheme for adaptive vibration control of structures under earthquakes. Comprehensive simulations are performed to demonstrate and validate the proposed adaptive control technique for earthquake-induced vibration of a building.
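
    A minimal sketch of the real-time identification half of such a scheme: recursive least-squares (RLS) estimation of a low-order ARX model, which a generalized predictive controller could use to update its design online. The model structure, forgetting factor and test system are illustrative assumptions, not the paper's formulation.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            """One recursive least-squares step with forgetting factor lam."""
            k = P @ phi / (lam + phi @ P @ phi)      # gain vector
            theta = theta + k * (y - phi @ theta)    # parameter update
            P = (P - np.outer(k, phi @ P)) / lam     # covariance update
            return theta, P

        # Identify the illustrative system y_k = 0.8*y_{k-1} + 0.5*u_{k-1} + noise.
        rng = np.random.default_rng(2)
        theta, P = np.zeros(2), 100.0 * np.eye(2)
        y_prev, u_prev = 0.0, 0.0
        for _ in range(500):
            u = rng.standard_normal()                # excitation input
            y = 0.8 * y_prev + 0.5 * u_prev + 0.01 * rng.standard_normal()
            theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
            y_prev, u_prev = y, u
        print(theta)   # converges towards [0.8, 0.5]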

  16. Relocation of the 2010-2013 near the north coast of Papua earthquake sequence using Modified Joint Hypocenter Determination (MJHD) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salomo, Dimas, E-mail: dimas.salomo@gmail.com; Daryono,; Subakti, Hendri

    The accuracy of earthquake hypocenter positions is necessary for analyzing tectonic conditions. This study aims to: (1) relocate the mainshocks and aftershocks of the large earthquakes in the Papua region of June 16, 2010, April 21, 2012 and April 6, 2013; (2) determine the true fault planes; (3) estimate the fracture areas; and (4) analyze the advantages and disadvantages of relocation with the MJHD method for tectonic studies. This study used the Modified Joint Hypocenter Determination (MJHD) method. Using P arrival-phase data reported by the BMKG and openly available from the website repogempa.bmkg.go.id, we relocated the mainshocks of these large, significant earthquakes and their aftershocks. We then identified the preferred fault planes from the candidate fault planes provided by the global CMT catalogue. The earthquakes were successfully relocated, and most clustered around the mainshocks; earthquakes not clustered around a mainshock are considered to have a mechanism different from the mainshock. The relocation results indicate that the mainshock fault plane of the June 16, 2010 earthquake has strike 332°, dip 80° and slip −172°; that of the April 21, 2012 earthquake has strike 82°, dip 84° and slip 2°; and that of the April 6, 2013 earthquake has strike 339°, dip 56° and slip −137°. The fault plane areas estimated by the cross-section graphical method are 2816.0 km² (June 16, 2010), 906.2 km² (April 21, 2012) and 1984.3 km² (April 6, 2013). The MJHD method has the advantage that it can process many earthquakes simultaneously and includes a station correction to account for the lateral heterogeneity of the earth. The method successfully provides significant improvements to the depths of earthquakes, most of which had hypocenter depths manually specified as a fixed depth (± 10 km). However, this method cannot guarantee that the hypocenters derived

  17. Earthquake Shaking - Finding the "Hot Spots"

    USGS Publications Warehouse

    Field, Edward; Jones, Lucile; Jordan, Tom; Benthien, Mark; Wald, Lisa

    2001-01-01

    A new Southern California Earthquake Center study has quantified how local geologic conditions affect the shaking experienced in an earthquake. The important geologic factors at a site are softness of the rock or soil near the surface and thickness of the sediments above hard bedrock. Even when these 'site effects' are taken into account, however, each earthquake exhibits unique 'hotspots' of anomalously strong shaking. Better predictions of strong ground shaking will therefore require additional geologic data and more comprehensive computer simulations of individual earthquakes.

  18. Nowcasting Earthquakes: A Comparison of Induced Earthquakes in Oklahoma and at the Geysers, California

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.

    2018-01-01

    Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and to the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say Mλ ≥ 4, and one small, say Mσ ≥ 2. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes, and the cumulative probability distribution of these interevent counts is obtained. The earthquake potential score (EPS) is then defined by the number of small earthquakes that have occurred since the last large earthquake: the point where this count falls on the cumulative probability distribution of interevent counts gives the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus it is not necessary to decluster aftershocks, and the results remain applicable if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling: the increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of Mσ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of Mλ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injection. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
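
    A minimal sketch of the EPS computation described above, assuming the catalog is available as a time-ordered magnitude sequence; the thresholds and the synthetic Gutenberg-Richter catalog are illustrative placeholders.

        import numpy as np

        def eps(magnitudes, m_small=2.0, m_large=4.0):
            """EPS: empirical CDF of small-event interevent counts, evaluated at
            the count accumulated since the most recent large earthquake."""
            counts, n = [], 0
            for m in magnitudes:                 # catalog in time order
                if m >= m_large:
                    counts.append(n)             # interevent count between large events
                    n = 0
                elif m >= m_small:
                    n += 1
            counts = np.sort(counts)
            return np.searchsorted(counts, n, side="right") / len(counts)

        # Synthetic Gutenberg-Richter catalog with b ~ 1 (placeholder for a real one).
        rng = np.random.default_rng(3)
        mags = 2.0 + rng.exponential(1 / np.log(10), 20000)
        print(f"current EPS = {eps(mags):.2f}")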

  19. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated using field methods and a database of coseismic effects created as a GIS MapInfo application, with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effect from the epicenter and from the causative fault (Lunina et al., 2014). For the largest event (Ms = 8.1), the estimated limiting distance from the fault is 130 km, about 3.5 times shorter than that from the epicenter, which is 450 km. Moreover, the farther from the causative fault, the fewer liquefaction cases occur: 93% of them lie within 40 km of the fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows the distances to be within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for the locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested relating the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height and intensity index of clastic dikes) to Ms and to the local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The obtained results provide a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author is grateful to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing the laboratory to carry out this research, and to the Russian Science Foundation for financial support (Grant 14-17-00007).

  20. Lessons of L'Aquila for Operational Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2012-12-01

    The L'Aquila earthquake of 6 Apr 2009 (magnitude 6.3) killed 309 people and left tens of thousands homeless. The mainshock was preceded by a vigorous seismic sequence that prompted informal earthquake predictions and evacuations. In an attempt to calm the population, the Italian Department of Civil Protection (DPC) convened its Commission on the Forecasting and Prevention of Major Risk (MRC) in L'Aquila on 31 March 2009 and issued statements about the hazard that were widely received as an "anti-alarm"; i.e., a deterministic prediction that there would not be a major earthquake. On October 23, 2012, a court in L'Aquila convicted the vice-director of DPC and six scientists and engineers who attended the MRC meeting on charges of criminal manslaughter, and it sentenced each to six years in prison. A few weeks after the L'Aquila disaster, the Italian government convened an International Commission on Earthquake Forecasting for Civil Protection (ICEF) with the mandate to assess the status of short-term forecasting methods and to recommend how they should be used in civil protection. The ICEF, which I chaired, issued its findings and recommendations on 2 Oct 2009 and published its final report, "Operational Earthquake Forecasting: Status of Knowledge and Guidelines for Implementation," in Aug 2011 (www.annalsofgeophysics.eu/index.php/annals/article/view/5350). As defined by the Commission, operational earthquake forecasting (OEF) involves two key activities: the continual updating of authoritative information about the future occurrence of potentially damaging earthquakes, and the officially sanctioned dissemination of this information to enhance earthquake preparedness in threatened communities. Among the main lessons of L'Aquila is the need to separate the role of science advisors, whose job is to provide objective information about natural hazards, from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms

  1. Retrospective Evaluation of the Long-Term CSEP-Italy Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Zechar, J. D.; Marzocchi, W.; Wiemer, S.

    2010-12-01

    On 1 August 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of the CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented eighteen five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We considered the twelve time-independent earthquake forecasts among this set and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. Here, we present the results of tests that measure the consistency of the forecasts with the past observations. Besides being an evaluation of the submitted time-independent forecasts, this exercise provided insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between the robustness of results and experiment duration.
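
    A minimal sketch of one classic consistency check of this kind, a CSEP-style N-test comparing the observed event count with the forecast total under a Poisson assumption; the numbers are placeholders, and this illustrates the test family rather than the experiment's actual code.

        from scipy.stats import poisson

        def n_test(n_forecast, n_observed):
            """Two one-sided quantile scores; small values flag inconsistency."""
            delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)  # P(N >= n_obs)
            delta2 = poisson.cdf(n_observed, n_forecast)            # P(N <= n_obs)
            return delta1, delta2

        # Placeholder numbers: a forecast of 85 events versus 70 observed.
        d1, d2 = n_test(n_forecast=85.0, n_observed=70)
        print(f"delta1={d1:.3f}, delta2={d2:.3f}")  # e.g. reject if either < 0.025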

  2. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into four testing classes (1 day, 3 months, 1 year and 3 years) and three testing regions: an area of Japan including the sea area, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented. These results provide new knowledge concerning statistical forecasting models. We have started a study to construct a 3-dimensional earthquake forecasting model for the Kanto district in Japan, based on the CSEP experiments, under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity of the area ranges from the shallower part down to a depth of 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with testing classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. As a first step, we will install the RI10K model (Nanjo, 2011) and the HIST-ETAS models (Ogata, 2011) to see whether these models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP

  3. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    NASA Astrophysics Data System (ADS)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
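
    A minimal sketch of the first strategy: a Gaussian radial basis function network fitted by least squares as a cheap surrogate for repeated time-history analyses. The design variables, the stand-in response function and the center selection are synthetic placeholders, not the structural model of the paper.

        import numpy as np

        def rbf_fit(X, y, centers, eps=1.0):
            """Least-squares fit of Gaussian RBF weights."""
            G = np.exp(-eps * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
            w, *_ = np.linalg.lstsq(G, y, rcond=None)
            return w

        def rbf_predict(X, centers, w, eps=1.0):
            G = np.exp(-eps * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
            return G @ w

        rng = np.random.default_rng(4)
        X = rng.uniform(0, 1, (200, 2))            # normalized design variables
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # stand-in for a THA peak response
        centers = X[::10]                          # 20 centers picked from the data
        w = rbf_fit(X, y, centers)
        print(rbf_predict(np.array([[0.4, 0.6]]), centers, w))  # fast surrogate call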

  4. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field, derived from the inherent complexity of subduction zone earthquakes, than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.

  5. Prediction of maximum earthquake intensities for the San Francisco Bay region

    USGS Publications Warehouse

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

    The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and on the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 − 1.90 log(Distance in km). For sites on other geologic units, intensity increments derived with respect to this empirical relation correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log(AHSA), and the average intensity increments for the various geologic units are -0.29 for granite, 0.19 for the Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for the Santa Clara Formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
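
    A worked numeric sketch combining the two empirical relations quoted above, treating the quoted average intensity increments as a lookup table; the function name and the evaluation point are ours, and composing the base relation with a unit's increment follows the description in the abstract.

        import math

        # Average intensity increments per geologic unit, as quoted above.
        INCREMENT = {"granite": -0.29, "Franciscan": 0.19, "Great Valley": 0.64,
                     "Santa Clara": 0.82, "alluvium": 1.34, "bay mud": 2.43}

        def predicted_intensity(distance_km, unit):
            base = 2.69 - 1.90 * math.log10(distance_km)  # 1906 relation (Franciscan sites)
            return base + INCREMENT[unit]                 # add the unit's average increment

        print(predicted_intensity(10.0, "bay mud"))       # illustrative evaluation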

  6. Earthquake source tensor inversion with the gCAP method and 3D Green's functions

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.

    2013-12-01

    We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3-D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCap) scheme in which the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion methods of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3-D Green's functions are calculated on a grid of 1 km³ cells using the 3-D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish the robustness of the gCap inversion results (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate the source properties of the March 11, 2013, Mw = 4.7 earthquake on the San Jacinto fault, using recordings of ~45 stations up to ~0.2 Hz. Both the best-fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher-frequency data for this and other earthquakes is in progress.
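
    A minimal sketch of splitting a moment tensor into ISO, DC and CLVD fractions, using one common convention (after Jost & Herrmann, 1989) for the deviatoric split; the example tensor is made up, and this illustrates only the representation in (i), not the gCap inversion itself.

        import numpy as np

        def decompose(M):
            """ISO fraction plus DC/CLVD split of the deviatoric part."""
            iso = np.trace(M) / 3.0
            d = np.linalg.eigvalsh(M - iso * np.eye(3))     # deviatoric eigenvalues
            m_small = d[np.argmin(np.abs(d))]               # smallest-magnitude eigenvalue
            m_big = d[np.argmax(np.abs(d))]
            eps = -m_small / abs(m_big)                     # 0 = pure DC, +/-0.5 = pure CLVD
            clvd = 2.0 * abs(eps)
            iso_frac = abs(iso) / (abs(iso) + abs(m_big))   # one simple ISO measure
            return iso_frac, 1.0 - clvd, clvd

        # Made-up tensor: nearly pure double couple with a small isotropic part.
        M = np.array([[1.01, 0.0, 0.0],
                      [0.0, -0.99, 0.0],
                      [0.0,  0.0,  0.0]])
        iso_f, dc_f, clvd_f = decompose(M)
        print(f"ISO {iso_f:.1%}; deviatoric part: DC {dc_f:.1%}, CLVD {clvd_f:.1%}")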

  7. ShakeMap-based prediction of earthquake-induced mass movements in Switzerland calibrated on historical observations

    USGS Publications Warehouse

    Cauzzi, Carlo; Fah, Donat; Wald, David J.; Clinton, John; Losey, Stephane; Wiemer, Stefan

    2018-01-01

    In Switzerland, nearly all historical Mw ~ 6 earthquakes have induced damaging landslides, rockslides and snow avalanches that, in some cases, also resulted in damage to infrastructure and loss of life. We describe the customisation to Swiss conditions of a globally calibrated statistical approach originally developed to rapidly assess earthquake-induced landslide likelihoods worldwide. The probability of occurrence of such earthquake-induced effects is modelled through a set of geospatial susceptibility proxies and peak ground acceleration. The predictive model is tuned to capture the observations from past events and optimised for near-real-time estimates based on USGS-style ShakeMaps routinely produced by the Swiss Seismological Service. Our emphasis is on the use of high-resolution geospatial datasets along with additional local information on ground-failure susceptibility. Although calibrated on historical events of moderate magnitude, the methodology presented in this paper yields sensible results even for recent low-magnitude events. The model is integrated in the Swiss ShakeMap framework. This study has high practical relevance to many Swiss ShakeMap stakeholders, especially those managing lifeline systems, and to other global users interested in conducting a similar customisation for their region of interest.

  8. The 26 January 2001 M 7.6 Bhuj, India, earthquake: Observed and predicted ground motions

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.; Bilham, R.; Atkinson, G.M.

    2002-01-01

    Although local and regional instrumental recordings of the devastating 26 January 2001 Bhuj earthquake are sparse, the distribution of macroseismic effects can provide important constraints on the mainshock ground motions. We compiled available news accounts describing damage and other effects and interpreted them to obtain modified Mercalli intensities (MMIs) at >200 locations throughout the Indian subcontinent. These values were then used to map the intensity distribution throughout the subcontinent using a simple mathematical interpolation method. Although preliminary, the maps reveal several interesting features. Within the Kachchh region, the most heavily damaged villages are concentrated toward the western edge of the inferred fault, consistent with western directivity. Significant sediment-induced amplification is also suggested at a number of locations around the Gulf of Kachchh, to the south of the epicenter. Away from the Kachchh region, intensities were clearly amplified significantly in areas along rivers, within deltas, or on coastal alluvium, such as mudflats and salt pans. In addition, we use fault-rupture parameters inferred from teleseismic data to predict shaking intensity at distances of 0-1000 km. We then convert the predicted hard-rock ground-motion parameters to MMI by using a relationship (derived from Internet-based intensity surveys) that assigns MMI based on the average effects in a region. The predicted MMIs are typically lower by 1-3 units than those estimated from news accounts, although they do predict near-field ground motions of approximately 80% g and potentially damaging ground motions on hard-rock sites to distances of approximately 300 km. For the most part, this discrepancy is consistent with the expected effect of sediment response, but it could also reflect other factors, such as unusually high building vulnerability in the Bhuj region and a tendency for media accounts to focus on the most dramatic damage, rather than

  9. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    NASA Astrophysics Data System (ADS)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, such algorithms are rarely exercised and remain insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare peak ground acceleration (PGA) predicted from first-alert solutions with that recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.

  10. A prospective earthquake forecast experiment in the western Pacific

    NASA Astrophysics Data System (ADS)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
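
    A minimal sketch of the forecast-comparison step named above, applying Student's paired t-test and the Wilcoxon signed-rank test to per-period log-likelihood differences between two models; the log-likelihood values are synthetic placeholders.

        import numpy as np
        from scipy.stats import ttest_rel, wilcoxon

        rng = np.random.default_rng(5)
        ll_a = rng.normal(-100.0, 3.0, 24)        # e.g. monthly joint log-likelihoods
        ll_b = ll_a - rng.normal(0.5, 1.0, 24)    # model B slightly worse on average

        t_stat, t_p = ttest_rel(ll_a, ll_b)
        w_stat, w_p = wilcoxon(ll_a - ll_b)
        print(f"paired t-test p={t_p:.3f}, Wilcoxon signed-rank p={w_p:.3f}")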

  11. Comparisons of ground motions from five aftershocks of the 1999 Chi-Chi, Taiwan, earthquake with empirical predictions largely based on data from California

    USGS Publications Warehouse

    Wang, G.-Q.; Boore, D.M.; Igel, H.; Zhou, X.-Y.

    2004-01-01

    The observed ground motions from five large aftershocks of the 1999 Chi-Chi, Taiwan, earthquake are compared with predictions from four equations based primarily on data from California. The four equations for active tectonic regions are those developed by Abrahamson and Silva (1997), Boore et al. (1997), Campbell (1997, 2001), and Sadigh et al. (1997). Comparisons are made for horizontal-component peak ground accelerations and 5%-damped pseudoacceleration response spectra at periods between 0.02 sec and 5 sec. The observed motions are in reasonable agreement with the predictions, particularly at distances of 10 to 30 km. This is in marked contrast to the motions from the Chi-Chi mainshock, which are much lower than the predicted motions for periods less than about 1 sec. The results indicate that the low motions in the mainshock are not due to unusual, localized absorption of seismic energy, because waves from the mainshock and the aftershocks generally traverse the same section of the crust and are recorded at the same stations. The aftershock motions at distances of 30-60 km are somewhat lower than the predictions (though by a much smaller factor than for the mainshock), suggesting that ground motion attenuates more rapidly in this region of Taiwan than in the areas underlying the predictive equations. We provide equations for the regional attenuation of response spectra, which show increasing decay of motion with distance for decreasing oscillator periods. This observational study also demonstrates that ground motions have large earthquake-location-dependent variability for a specific site; this variability limits the accuracy with which site response can be predicted for any specific earthquake. Online Material: PGAs and PSAs from the 1999 Chi-Chi earthquake and five aftershocks.

  12. Distant, delayed and ancient earthquake-induced landslides

    NASA Astrophysics Data System (ADS)

    Havenith, Hans-Balder; Torgoev, Almaz; Braun, Anika; Schlögel, Romy; Micu, Mihai

    2016-04-01

    On the basis of a new classification of seismically induced landslides, we outline particular effects related to the delayed and distant triggering of landslides, effects that cannot be predicted by state-of-the-art methods. First, for about a dozen events the 'predicted' extent of the affected area is clearly underestimated. The most problematic cases are those for which far-distant triggering of landslides has been reported, such as the 1988 Saguenay earthquake. In Central Asia, such cases have been reported for areas marked by a thick cover of loess. One possible contributing effect could be low-frequency resonance of the thick soils induced by distant earthquakes, especially those in the Pamir - Hindu Kush seismic region. Such deep-focus, high-magnitude (>>7) earthquakes are also found in Europe, first of all in the Vrancea region (Romania). For this area and others in Central Asia we computed landslide event sizes related to scenario earthquakes with M>7.5. The second particular and challenging type of triggering is the one delayed with respect to the main earthquake event: case histories have been reported for the Racha earthquake in 1991, when several larger landslides only started moving 2 or 3 days after the main shock. Similar observations were also made after other earthquake events in the U.S., such as the 1906 San Francisco, the 1949 Tacoma, the 1959 Hebgen Lake and the 1983 Borah Peak earthquakes. Here, we present a series of detailed examples of (partly monitored) mass movements in Central Asia that mainly developed after earthquakes, some even several weeks after the main shock: e.g. the Tektonik and Kainama landslides triggered in 1992 and 2004, respectively. We believe that the development of these massive failures is a consequence of the opening of tension cracks during the seismic shaking and their filling with water during precipitation that followed the earthquakes. The third particular aspect analysed here is the use of large

  13. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10^19 Pa s and ηM > 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
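
    A minimal sketch of the kind of Kolmogorov-Smirnov comparison described above, rejecting a candidate model when the distribution it predicts differs from the observed one at α = 0.05; both samples are synthetic placeholders, not the 15 real fault observations.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(6)
        observed = rng.normal(1.0, 0.2, 15)    # stand-in for the 15 fault observations
        predicted = rng.normal(1.4, 0.2, 15)   # one candidate viscoelastic model

        stat, p = ks_2samp(observed, predicted)
        print(f"KS statistic={stat:.2f}, p={p:.3f};",
              "reject model" if p < 0.05 else "cannot reject model")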

  14. A stress-constrained geodetic inversion method for spatiotemporal slip of a slow slip event with earthquake swarm

    NASA Astrophysics Data System (ADS)

    Hirose, H.; Tanaka, T.

    2017-12-01

    Geodetic inversions have been performed using GNSS and/or tiltmeter data to estimate spatio-temporal fault slip distributions. They have been applied to slow slip events (SSEs), which are episodes of fault slip lasting days to years (e.g., Ozawa et al., 2001; Hirose et al., 2014). Although their slip distributions are important for inferring the strain budget and frictional characteristics on a subducting plate interface, inhomogeneous station coverage generally yields spatially non-uniform slip resolution, and in the worst case a slip distribution cannot be recovered. Some SSEs are known to be accompanied by an earthquake swarm around the slip area, such as the Boso Peninsula SSEs (e.g., Hirose et al., 2014), and some researchers hypothesize that these earthquakes are triggered by the stress change caused by the accompanying SSE (e.g., Segall et al., 2006). Based on this assumption, a conventional geodetic inversion that imposes a constraint requiring the stress change to promote the observed earthquake activity may improve the resolution of the slip distribution. Here we develop an inversion method based on the Network Inversion Filter technique (Segall and Matthews, 1997), incorporating a constraint of a positive change in Coulomb failure stress (Delta-CFS) at the accompanying earthquakes. We apply this new method to synthetic data to check its effectiveness and the characteristics of the inverted slip distributions. The results show cases in which a slip distribution is reproduced better with the earthquake information than without it. That is, when the available geodetic data are insufficient, this new inversion method can improve the reproducibility of an SSE slip distribution, provided a catalog of the accompanying earthquake activity is available.
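
    A minimal sketch of the Coulomb failure stress change entering the constraint, Delta-CFS = Δτ + μ′Δσn (shear stress change plus effective friction times the normal stress change, unclamping positive); the numbers are placeholders, and deriving the stress changes from slip requires an elastic dislocation model not shown here.

        def delta_cfs(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
            """Delta-CFS in MPa; positive values promote failure on the receiver fault."""
            return d_shear_mpa + mu_eff * d_normal_mpa

        # Constraint idea: at swarm hypocenters the SSE slip should give Delta-CFS > 0.
        print(delta_cfs(0.05, 0.02))   # 0.058 MPa -> promotes the swarm earthquakes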

  15. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    USGS Publications Warehouse

    Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.

    2014-01-01

    Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) and broad-band (0–10 Hz) data sets. CyberShake encompasses 3-D wave-propagation simulations of >415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a ‘proof of concept’, being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least ‘moderate’, ‘strong’ or ‘very strong’ shaking
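
    A minimal sketch of support vector regression used as a ground-motion predictor in the spirit described above, via scikit-learn; the feature set, training data and hyperparameters are synthetic placeholders, not the CyberShake-derived models.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(7)
        n = 500
        X = np.column_stack([rng.uniform(0, 100, n),    # distance proxy (km)
                             rng.uniform(6.5, 8.5, n),  # magnitude
                             rng.uniform(0, 1, n)])     # rupture ratio
        mmi = 1.5 * X[:, 1] - 0.03 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)

        model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, mmi)
        print(model.predict([[20.0, 7.5, 0.5]]))        # MMI estimate for one scenario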

  16. Space technologies for short-term earthquake warning

    NASA Astrophysics Data System (ADS)

    Pulinets, S.

    Recent theoretical and experimental studies have explicitly demonstrated the ability of space technologies to identify and monitor the specific variations in near-Earth space plasma, the atmosphere and the ground surface associated with approaching severe earthquakes (known as earthquake precursors), which appear several days (from 1 to 5) before the seismic shock over seismically active areas. Several countries and private companies are preparing (or have already launched) dedicated spacecraft for monitoring earthquake precursors from space and for short-term earthquake prediction. The present paper outlines an optimal algorithm for the creation of a space-borne system for earthquake precursor monitoring and short-term earthquake prediction. It takes into account the following considerations: selection of the precursors in terms of priority, taking into account their statistical and physical parameters; configuration of the spacecraft payload; configuration of the satellite constellation (orbit selection, satellite distribution, operation schedule); and a proposal of different options (a cheap microsatellite or a comprehensive multisatellite constellation). Taking into account that the most promising precursors are the ionospheric ones, special attention will be devoted to radiophysical techniques of ionosphere monitoring. The advantages and disadvantages of technologies such as vertical sounding, in-situ probes, ionosphere tomography, GPS TEC and GPS MET will be considered.

  17. Space technologies for short-term earthquake warning

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.

    Recent theoretical and experimental studies have explicitly demonstrated the ability of space technologies to identify and monitor the specific variations in near-Earth space plasma, the atmosphere and the ground surface associated with approaching severe earthquakes (known as earthquake precursors), which appear several days (from 1 to 5) before the seismic shock over seismically active areas. Several countries and private companies are preparing (or have already launched) dedicated spacecraft for monitoring earthquake precursors from space and for short-term earthquake prediction. The present paper outlines an optimal algorithm for the creation of a space-borne system for earthquake precursor monitoring and short-term earthquake prediction. It takes into account the following: selection of the precursors in terms of priority, considering their statistical and physical parameters; configuration of the spacecraft payload; configuration of the satellite constellation (orbit selection, satellite distribution, operation schedule); and different options for the satellite systems (a cheap microsatellite or a comprehensive multisatellite constellation). Taking into account that the most promising precursors are the ionospheric ones, special attention is devoted to radiophysical techniques of ionosphere monitoring. The advantages and disadvantages of technologies such as vertical sounding, in-situ probes, ionosphere tomography, GPS TEC and GPS MET are considered.

  18. Modified Mercalli Intensity for scenario earthquakes in Evansville, Indiana

    USGS Publications Warehouse

    Cramer, Chris; Haase, Jennifer; Boyd, Oliver

    2012-01-01

    Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the fact that Evansville is close to the Wabash Valley and New Madrid seismic zones, there is concern about the hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake. Earthquake-hazard maps provide one way of conveying such estimates of strong ground shaking and will help the region prepare for future earthquakes and reduce earthquake-caused losses.

  19. Ground Motion Simulation for a Large Active Fault System using Empirical Green's Function Method and the Strong Motion Prediction Recipe - a Case Study of the Noubi Fault Zone -

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Kumamoto, T.; Fujita, M.

    2005-12-01

    The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and to examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the strong motion prediction recipe (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function; its source spectrum was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, it started on the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-Net. This difference is considered to relate to the directivity effect associated with the direction of rupture
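
    A minimal sketch of the empirical Green's function summation idea: small-event records are delayed and summed over subfaults, with the subfault number set by the moment ratio (N³ ≈ M0,large/M0,small). This omits the slip-rate correction function of the full recipe, and all numbers are placeholders.

        import numpy as np

        def egf_synthesis(small_rec, dt, n, v_r=2.5, dx=2.0):
            """Delay-and-sum an n x n grid of subfaults (rupture from one corner)."""
            delays = [np.hypot(i, j) * dx / v_r for i in range(n) for j in range(n)]
            out = np.zeros(len(small_rec) + int(max(delays) / dt) + 1)
            for d in delays:
                k = int(round(d / dt))
                out[k:k + len(small_rec)] += small_rec
            return out

        dt = 0.01
        t = np.arange(0, 2, dt)
        small_rec = np.exp(-t) * np.sin(2 * np.pi * 5 * t)   # toy small-event record
        n = round((10 ** 1.5) ** (1 / 3))                    # moment ratio ~ 10^1.5 -> N ~ 3
        big_rec = egf_synthesis(small_rec, dt, n)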

  20. A New Network-Based Approach for the Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.

    2017-12-01

    Here we propose a new method which allows an early warning to be issued based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no prior assumption about the earthquake rupture extent or the spatial variability of ground motion. The system includes techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high-quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanding P-wave time window. The evolutionary estimates of these parameters at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM) and by interpolating the measured and predicted P-wave amplitudes on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and the spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested this system by a retrospective analysis of three earthquakes: the 2016 Mw 6.5 central Italy, 2008 Mw 6.9 Iwate-Miyagi and 2011 Mw 9.0 Tohoku events. Source parameter characterization is stable and reliable, and the intensity maps show extended-source effects consistent with kinematic fracture models of
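
    A minimal sketch of measuring two of the P-wave parameters named above, τc and Pd, on an initial P-wave displacement window, following the common definition τc = 2π/√(∫v²dt / ∫u²dt) (after Kanamori, 2005); the input trace is synthetic, and pre-processing (detrending, filtering) is omitted.

        import numpy as np

        def tau_c_and_pd(u, dt):
            """tau_c (s) and Pd from a P-window displacement trace u."""
            v = np.gradient(u, dt)                        # velocity from displacement
            r = np.trapz(v ** 2, dx=dt) / np.trapz(u ** 2, dx=dt)
            return 2.0 * np.pi / np.sqrt(r), np.max(np.abs(u))

        dt = 0.01
        t = np.arange(0, 3, dt)                           # 3 s initial P-wave window
        u = 1e-4 * t * np.sin(2 * np.pi * 0.8 * t)        # toy displacement pulse (m)
        tau_c, pd = tau_c_and_pd(u, dt)
        print(f"tau_c = {tau_c:.2f} s, Pd = {pd:.2e} m")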

  1. Long aftershock sequences within continents and implications for earthquake hazard assessment.

    PubMed

    Stein, Seth; Liu, Mian

    2009-11-05

    One of the most powerful features of plate tectonics is that the known plate motions give insight into both the locations and average recurrence interval of future large earthquakes on plate boundaries. Plate tectonics gives no insight, however, into where and when earthquakes will occur within plates, because the interiors of ideal plates should not deform. As a result, within plate interiors, assessments of earthquake hazards rely heavily on the assumption that the locations of small earthquakes shown by the short historical record reflect continuing deformation that will cause future large earthquakes. Here, however, we show that many of these recent earthquakes are probably aftershocks of large earthquakes that occurred hundreds of years ago. We present a simple model predicting that the length of aftershock sequences varies inversely with the rate at which faults are loaded. Aftershock sequences within the slowly deforming continents are predicted to be significantly longer than the decade typically observed at rapidly loaded plate boundaries. These predictions are in accord with observations. So the common practice of treating continental earthquakes as steady-state seismicity overestimates the hazard in presently active areas and underestimates it elsewhere.

  2. Earthquake warning system for Japan Railways’ bullet train; implications for disaster prevention in California

    USGS Publications Warehouse

    Nakamura, Y.; Tucker, B. E.

    1988-01-01

    Today, Japanese society is well aware of the prediction of the Tokai earthquake. The Tokyo municipal government estimates that this predicted earthquake could kill 30,000 people. (This estimate is viewed by many as conservative; other Japanese government agencies have made estimates, but they have not been published.) The reduction in the number of deaths from 120,000 in the Kanto earthquake to 30,000 in the predicted Tokai earthquake is due in large part to the reduction in the proportion of wooden construction (houses).

  3. Weather Satellite Thermal IR Responses Prior to Earthquakes

    NASA Technical Reports Server (NTRS)

    OConnor, Daniel P.

    2005-01-01

    A number of observers claim to have seen thermal anomalies prior to earthquakes, but subsequent analysis by others has failed to produce similar findings. What exactly are these anomalies? Might they be useful for earthquake prediction? The purpose of this study is to determine whether thermal anomalies can be found in association with known earthquakes by systematically co-registering weather satellite images at the sub-pixel level and then determining whether statistically significant responses occurred prior to the earthquake event. A new set of automatic co-registration procedures was developed for this task to accommodate the properties particular to weather satellite observations taken at night; it relies on the general condition that the ground cools after sunset. Using these procedures, we can produce a set of temperature-sensitive satellite images for each of five selected earthquakes (Algeria 2003; Bhuj, India 2001; Izmit, Turkey 1999; Kunlun Shan, Tibet 2001; Turkmenistan 2000) and thus more effectively investigate heating trends close to the epicenters a few hours prior to the earthquake events. This study lays the groundwork for further work in earthquake prediction and raises the question of the exact nature of the thermal anomalies.

  4. VLF/LF Radio Sounding of Ionospheric Perturbations Associated with Earthquakes

    PubMed Central

    Hayakawa, Masashi

    2007-01-01

    It has recently been recognized that the ionosphere is very sensitive to seismic effects, and the detection of ionospheric perturbations associated with earthquakes seems very promising for short-term earthquake prediction. We have proposed a possible use of VLF/LF (very low frequency (3-30 kHz) / low frequency (30-300 kHz)) radio sounding of the seismo-ionospheric perturbations. A brief history of the use of subionospheric VLF/LF propagation for short-term earthquake prediction is given, followed by a significant finding of ionospheric perturbation for the Kobe earthquake in 1995. After reviewing previous VLF/LF results, we present the latest VLF/LF findings: one is the statistical correlation of ionospheric perturbations with earthquakes, and the second is a case study of the Sumatra earthquake of December 2004, indicating the spatial scale and dynamics of the ionospheric perturbation for this earthquake.

  5. Predicted Surface Displacements for Scenario Earthquakes in the San Francisco Bay Region

    USGS Publications Warehouse

    Murray-Moraleda, Jessica R.

    2008-01-01

    In the immediate aftermath of a major earthquake, the U.S. Geological Survey (USGS) will be called upon to provide information on the characteristics of the event to emergency responders and the media. One such piece of information is the expected surface displacement due to the earthquake. In conducting probabilistic hazard analyses for the San Francisco Bay Region, the Working Group on California Earthquake Probabilities (WGCEP) identified a series of scenario earthquakes involving the major faults of the region, and these were used in their 2003 report (hereafter referred to as WG03) and the recently released 2008 Uniform California Earthquake Rupture Forecast (UCERF). Here I present a collection of maps depicting the expected surface displacement resulting from those scenario earthquakes. The USGS has conducted frequent Global Positioning System (GPS) surveys throughout northern California for nearly two decades, generating a solid baseline of interseismic measurements. Following an earthquake, temporary GPS deployments at these sites will be important to augment the spatial coverage provided by continuous GPS sites for recording postseismic deformation, as will the acquisition of Interferometric Synthetic Aperture Radar (InSAR) scenes. The information provided in this report allows one to anticipate, for a given event, where the largest displacements are likely to occur. This information is valuable both for assessing the need for further spatial densification of GPS coverage before an event and prioritizing sites to resurvey and InSAR data to acquire in the immediate aftermath of the earthquake. In addition, these maps are envisioned to be a resource for scientists in communicating with emergency responders and members of the press, particularly during the time immediately after a major earthquake before displacements recorded by continuous GPS stations are available.
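
    A minimal sketch of how coseismic surface displacement can be estimated for a long strike-slip rupture, using the classic two-dimensional screw-dislocation solution u(x) = (s/π)·arctan(D/x); the report's maps rely on full finite-fault elastic dislocation modelling (e.g. Okada-type solutions), so this is only the simplest analytic limit, with made-up slip and depth values.

        import numpy as np

        def surface_displacement(x_km, slip_m=4.0, depth_km=12.0):
            """Fault-parallel coseismic displacement at distance x from the trace."""
            return (slip_m / np.pi) * np.arctan(depth_km / x_km)   # odd in x

        x = np.array([-50.0, -10.0, -1.0, 1.0, 10.0, 50.0])
        print(np.round(surface_displacement(x), 2))   # antisymmetric across the fault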

  6. Geoelectric precursors to strong earthquakes in China

    NASA Astrophysics Data System (ADS)

    Yulin, Zhao; Fuye, Qian

    1994-05-01

    The main results of searching for electrical precursors to strong earthquakes in China over the last 25 yr are presented. These comprise: the continuous twenty-year resistivity record before and after the great Tangshan earthquake of 1976; spatial and temporal variations in resistivity anomalies observed at more than 6 stations within 150 km of the Tangshan earthquake epicenter; the travel-time curve for the front of the resistivity precursor; and a method of intersection for predicting the epicenter location. These results reveal a number of interesting facts: (1) Resistivity measurements with accuracies of 0.5% or better over 20 yr show that resistivity decreases of several percent, which began approximately 3 yr prior to the Tangshan earthquake, were larger than the background fluctuations and hence statistically significant. An outstanding example of an intermediate-term resistivity precursor is given. (2) The intermediate-term resistivity decrease before the Tangshan earthquake is such a pervasive phenomenon that the mean decrease, in percent, can be contoured on a map of the Beijing-Tianjin-Tangshan region. This shows the maximum decrease centered over the epicenter. (3) The anomalies in resistivity and self-potential, which began 0.5-2 months before the Tangshan main shock, had periods equal to those of the tidal constituents M2 and MSf, respectively, so that the associated anomalies can be identified as impending-earthquake precursors; a model related to stress-displacement weakening is proposed.

  7. Seismic activity preceding the 2016 Kumamoto earthquakes: Multiple approaches to recognizing possible precursors

    NASA Astrophysics Data System (ADS)

    Nanjo, K.; Izutsu, J.; Orihara, Y.; Furuse, N.; Togo, S.; Nitta, H.; Okada, T.; Tanaka, R.; Kamogawa, M.; Nagao, T.

    2016-12-01

    We show the first results of recognizing seismic patterns as possible precursory episodes to the 2016 Kumamoto earthquakes, using four existing methods: the b-value method (e.g., Schorlemmer and Wiemer, 2005; Nanjo et al., 2012), two kinds of seismic quiescence evaluation methods (the RTM algorithm, Nagao et al., 2011, and the Z-value method, Wiemer and Wyss, 1994), and foreshock seismic density analysis based on Lippiello et al. (2012). We used the earthquake catalog maintained by the Japan Meteorological Agency (JMA). To ensure data quality, we performed a catalog completeness check as a pre-processing step for the individual analyses. Our findings indicate that the methods we adopted do not allow the Kumamoto earthquakes to be predicted exactly. However, we found that the spatial extent of the possible precursory patterns differs from one method to another, ranging from local scales (typically asperity size) to regional scales (e.g., 2° × 3° around the source zone). The earthquakes were preceded by periods of pronounced anomalies lasting from decadal scales (e.g., 20 years or longer) down to yearly scales (e.g., 1-2 years). Our results demonstrate that a combination of multiple methods detects different signals prior to the Kumamoto earthquakes with considerably more reliability than any single method. This strongly suggests great potential for narrowing down the possible sites of future earthquakes relative to long-term seismic hazard assessment. This study was partly supported by MEXT under its Earthquake and Volcano Hazards Observation and Research Program and Grant-in-Aid for Scientific Research (C), No. 26350483, 2014-2017, by Chubu University under the Collaboration Research Program of IDEAS, IDEAS201614, and by Tokai University under the Project Research of IORD. A part of this presentation is given in Nanjo et al. (2016, submitted).
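
    A minimal sketch of the maximum-likelihood b-value estimate (Aki, 1965, with the Utsu binning correction) that underlies the b-value method cited above; the completeness magnitude is assumed known, and the catalog is a synthetic placeholder.

        import numpy as np

        def b_value(mags, mc, dm=0.1):
            """Aki-Utsu maximum-likelihood b-value for events with M >= mc."""
            m = np.asarray(mags)
            m = m[m >= mc]
            return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

        # Synthetic b ~ 1 catalog, binned to 0.1 magnitude units (placeholder).
        rng = np.random.default_rng(8)
        mags = np.round(2.0 + rng.exponential(1 / np.log(10), 5000), 1)
        print(f"b = {b_value(mags, mc=2.0):.2f}")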

  8. Statistical tests of simple earthquake cycle models

    USGS Publications Warehouse

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM <~ 4.0 × 10^19 Pa s or ηM >~ 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
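
    For readers unfamiliar with the test, a two-sample Kolmogorov-Smirnov comparison of observed and model-predicted distributions takes only a few lines; the samples and the α = 0.05 threshold below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import ks_2samp

# Two-sample Kolmogorov-Smirnov test of the kind used to compare
# observed and model-predicted distributions (a sketch with synthetic
# samples; the rejection threshold alpha = 0.05 follows the abstract).
rng = np.random.default_rng(0)
observed = rng.normal(1.0, 0.3, 15)      # e.g., 15 fault-slip-rate ratios
predicted = rng.normal(1.4, 0.3, 500)    # samples drawn from a candidate model

stat, pvalue = ks_2samp(observed, predicted)
print(f"D = {stat:.2f}, p = {pvalue:.3f}")
if pvalue < 0.05:
    print("reject this earthquake-cycle model at the 5% level")
```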

  9. Ground-motion parameters of the southwestern Indiana earthquake of 18 June 2002 and the disparity between the observed and predicted values

    USGS Publications Warehouse

    Street, R.; Wiegand, J.; Woolery, E.W.; Hart, P.

    2005-01-01

    The M 4.5 southwestern Indiana earthquake of 18 June 2002 triggered 46 blast monitors in Indiana, Illinois, and Kentucky. The resulting free-field particle velocity records, along with similar data from previous earthquakes in the study area, provide a clear standard for judging the reliability of current maps for predicting ground motions at frequencies greater than 2 Hz in southwestern Indiana and southeastern Illinois. Peak horizontal accelerations and velocities, and 5% damped pseudo-accelerations for the earthquake, generally exceeded ground motions predicted for the top of the bedrock by factors of 2 or more, even after soil amplifications were taken into consideration. It is suggested, but not proven, that the low shear-wave velocity and weathered bedrock in the area are also amplifying the higher-frequency ground motions that have been repeatedly recorded by the blast monitors in the study area. It is also shown that there is a good correlation between the peak ground motions and 5% pseudo-accelerations recorded for the event, and the Modified Mercalli intensities interpreted for the event by the U.S. Geological Survey.

  10. Statistical earthquake focal mechanism forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2014-04-01

    Forecasts of the focal mechanisms of future shallow (depth 0-70 km) earthquakes are important for seismic hazard estimates, Coulomb stress calculations, and other models of earthquake occurrence. Here we report on a high-resolution global forecast of earthquake rate density as a function of location, magnitude and focal mechanism. In previous publications we reported forecasts of 0.5° spatial resolution, covering the latitude range from -75° to +75°, based on the Global Central Moment Tensor earthquake catalogue. In the new forecasts we have improved the spatial resolution to 0.1° and extended the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each grid point. Simultaneously, we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms, using the method proposed by Kagan & Jackson in 1994. This average angle reveals the level of tectonic complexity of a region and indicates the accuracy of the prediction. The procedure becomes problematic where longitude lines are not approximately parallel, and where shallow earthquakes are so sparse that an adequate sample spans very large distances. North or south of 75°, the azimuths of points 1000 km away may vary by about 35°. We solved this problem by calculating focal mechanisms on a plane tangent to the Earth's surface at each forecast point, correcting for the rotation of the longitude lines at the locations of earthquakes included in the averaging. The corrections are negligible between -30° and +30° latitude, but outside that band uncorrected rotations can be significantly off. Improved forecasts at 0.5° and 0.1° resolution are posted at http://eq.ess.ucla.edu/kagan/glob_gcmt_index.html.
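
    The distance-weighted combination step can be illustrated with a toy example. The sketch below averages only fault strikes (axial data, so angles are doubled) under an assumed distance taper and the 1000-km cutoff mentioned above; the published method combines full focal mechanisms via rotation angles, which is considerably more involved.

```python
import numpy as np

# Toy stand-in for distance-weighted mechanism averaging: a weighted
# circular mean of fault strikes. The taper scale d0, the cutoff, and the
# sample strikes/distances are all illustrative assumptions.
def weighted_mean_strike(strikes_deg, dists_km, d0=200.0, cutoff=1000.0):
    s = np.radians(np.asarray(strikes_deg)) * 2.0       # double axial angles
    w = np.exp(-np.asarray(dists_km) / d0)              # distance taper
    w[np.asarray(dists_km) > cutoff] = 0.0              # 1000-km cutoff
    mean2 = np.arctan2(np.sum(w * np.sin(s)), np.sum(w * np.cos(s)))
    return (np.degrees(mean2) / 2.0) % 180.0

# Nearby mechanisms dominate; the distant outlier barely contributes.
print(weighted_mean_strike([10, 170, 20], [50, 300, 900]))  # ~6 degrees
```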

  11. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed Central

    Kanamori, H

    1996-01-01

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding. PMID:11607657

  12. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed

    Kanamori, H

    1996-04-30

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

  13. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (in space and time) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  14. Earthquake mechanism and predictability shown by a laboratory fault

    USGS Publications Warehouse

    King, C.-Y.

    1994-01-01

    Slip events generated in a laboratory fault model consisting of a circular chain of eight spring-connected blocks of approximately equal weight, elastically driven to slide on a frictional surface, are studied. It is found that most of the input strain energy is released by a relatively few large events, which are approximately time predictable. A large event tends to roughen the stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.
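
    The notion of time predictability can be made concrete with a toy calculation: if a fixed loading rate must restore the stress released by the previous event, the interevent time following each event should correlate with that event's size. The numbers below are synthetic illustrations, not the paper's data.

```python
import numpy as np

# Toy check of "time predictability": the waiting time until the next event
# scales with the size of the previous one, because a constant loading rate
# must restore the stress just released. All values are synthetic.
rng = np.random.default_rng(3)
sizes = rng.lognormal(0.0, 0.5, 200)               # stress drop of each event
loading_rate = 1.0                                  # stress recharge per year
waits = sizes / loading_rate + rng.normal(0, 0.05, 200)  # interval after event

r = np.corrcoef(sizes, waits)[0, 1]
print(f"size vs. following-interval correlation: {r:.2f}")  # ~1 => time predictable
```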

  15. Earthquake hypocenter relocation using double difference method in East Java and surrounding areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C, Aprilia Puspita; Meteorological, Climatological, and Geophysical Agency; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    Determination of precise hypocenter locations is very important in order to provide information about subsurface fault planes and for seismic hazard analysis. In this study, we have relocated earthquake hypocenters in the eastern part of Java and surrounding areas from the local earthquake data catalog compiled by the Meteorological, Climatological, and Geophysical Agency of Indonesia (MCGA) for the time period 2009-2012 by using the double-difference method. The results show that after the relocation process there are significant changes in the positions and orientations of earthquake hypocenters, which correlate with the geological setting in this region. We observed an indication of a double seismic zone at depths of 70-120 km within the subducting slab in the south of the eastern part of the Java region. Our results will provide useful information for advanced seismological studies and seismic hazard analysis in this study area.
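
    For reference, the double-difference method (Waldhauser and Ellsworth, 2000) inverts residuals between observed and calculated differential travel times of event pairs recorded at common stations; in the usual notation,

```latex
% Double-difference residual for events i, j observed at station k
dr_k^{ij} = \left( t_k^i - t_k^j \right)^{\mathrm{obs}}
          - \left( t_k^i - t_k^j \right)^{\mathrm{cal}}
```

    so that errors from unmodeled velocity structure along the nearly common ray paths largely cancel, sharpening the relative positions of nearby hypocenters.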

  16. Scaling Relations of Earthquakes on Inland Active Mega-Fault Systems

    NASA Astrophysics Data System (ADS)

    Murotani, S.; Matsushima, S.; Azuma, T.; Irikura, K.; Kitagawa, S.

    2010-12-01

    Since 2005, the Headquarters for Earthquake Research Promotion (HERP) has been publishing 'National Seismic Hazard Maps for Japan' to provide useful information for disaster prevention countermeasures for the country and local public agencies, as well as to promote public awareness of earthquake disaster prevention. In the course of making the 2009 version of the map, which commemorates the tenth anniversary of the establishment of the Comprehensive Basic Policy, the methods used to evaluate the magnitude of earthquakes, to predict strong ground motion, and to construct underground structure models were investigated by the Earthquake Research Committee and its subcommittees. In order to predict the magnitude of earthquakes occurring on mega-fault systems, we examined the scaling relations for mega-fault systems using 11 earthquakes whose source processes were analyzed by waveform inversion and whose surface ruptures were investigated. As a result, we found that the data fall between the scaling relations of seismic moment and rupture area by Somerville et al. (1999) and Irikura and Miyake (2001). We also found that the maximum displacement of the surface rupture is two to three times larger than the average slip on the seismic fault, and that the surface fault length is equal to the length of the source fault. Furthermore, the compiled source-fault data show that displacement saturates at 10 m when the fault length (L) exceeds 100 km. By assuming an average fault width (W) of 18 km for inland earthquakes in Japan, and displacement saturation at 10 m for lengths of more than 100 km, we derived a new scaling relation between rupture area and seismic moment, S [km^2] = 1.0 × 10^-17 M0 [Nm], for mega-fault systems whose seismic moment (M0) exceeds 1.8 × 10^20 Nm.
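
    As a quick plausibility check of the quoted relation at its stated lower bound, the arithmetic can be carried through directly; the moment-to-magnitude conversion below assumes the standard Hanks-Kanamori relation, which the abstract itself does not specify.

```python
import math

# Worked check of the reported scaling relation S = 1.0e-17 * M0 at its
# stated lower bound. The Mw conversion uses the standard Hanks-Kanamori
# relation, an assumption on our part rather than the paper's.
M0 = 1.8e20                        # seismic moment [N m]
S = 1.0e-17 * M0                   # rupture area [km^2] -> 1800 km^2
W = 18.0                           # assumed average fault width [km]
L = S / W                          # implied fault length -> 100 km (the saturation length)
Mw = (math.log10(M0) - 9.1) / 1.5  # moment magnitude -> ~7.4
print(f"S = {S:.0f} km^2, L = {L:.0f} km, Mw = {Mw:.2f}")
```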

  17. Prospective testing of Coulomb short-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily, and allows daily updates of the models. However, much can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be computed without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of

  18. ELER software - a new tool for urban earthquake loss assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.

    2010-12-01

    Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. SELENA v4.0 (Molina et al., 2008) and

  19. Global earthquake fatalities and population

    USGS Publications Warehouse

    Holzer, Thomas L.; Savage, James C.

    2013-01-01

    Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
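
    The headline projection can be roughly reproduced with a nonstationary Poisson rate proportional to population; the population anchors and the exponential interpolation below are our illustrative assumptions, not the paper's data.

```python
import numpy as np

# Sketch: nonstationary Poisson rate proportional to world population,
# calibrated on the 20th century (4 events with >100,000 deaths).
# Population anchors (1.65, 6.1, 10.1 billion) are rough illustrative values.
years_20 = np.linspace(1900, 2000, 101)
pop_20 = 1.65 * (6.1 / 1.65) ** ((years_20 - 1900) / 100.0)   # billions
years_21 = np.linspace(2000, 2100, 101)
pop_21 = 6.1 * (10.1 / 6.1) ** ((years_21 - 2000) / 100.0)

exposure_20 = np.trapz(pop_20, years_20)   # billion person-years of exposure
exposure_21 = np.trapz(pop_21, years_21)
c = 4.0 / exposure_20                      # events per billion person-year
print(f"expected 21st-century events: {c * exposure_21:.1f}")  # ~9, near the paper's 8.7±3.3
```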

  20. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are required, in particular, to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study higher-order statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for smaller magnitude thresholds and increase even more strongly for small temporal subdivisions of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
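
    The moment comparison at the heart of the argument is easy to reproduce for the two laws; the parameter values below are illustrative only.

```python
from scipy import stats

# Compare skewness and excess kurtosis of Poisson and negative-binomial
# laws with the same mean (a sketch; parameter values are illustrative).
mu = 10.0                      # common mean event count per bin
skew_p, kurt_p = stats.poisson.stats(mu, moments="sk")

# NBD parameterized by (n, p): mean = n(1-p)/p; pick n to set overdispersion.
n = 5.0
p = n / (n + mu)               # yields the same mean mu
skew_nb, kurt_nb = stats.nbinom.stats(n, p, moments="sk")

print(f"Poisson: skew={skew_p:.2f}, ex.kurt={kurt_p:.2f}")
print(f"NBD:     skew={skew_nb:.2f}, ex.kurt={kurt_nb:.2f}")  # heavier tail
```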

  1. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014 the Earthworm-Based Earthquake Alarm Reporting (eBEAR) system has been in operation and used to issue warnings to schools. In 2015 the system started to provide warnings to the public in Taiwan via television and cell phones. Online performance of the eBEAR system indicates that the average reporting times afforded by the system are approximately 15 and 28 s for inland and offshore earthquakes, respectively. On average, the eBEAR system provides more warning time than the current EEW system (by 3.2 s and 5.5 s for inland and offshore earthquakes, respectively). However, offshore earthquakes were usually located poorly because only P-wave arrivals are used in the eBEAR system. Additionally, in the early stage of an event, only a few stations are available to the early warning system. The poor station coverage may explain why offshore earthquakes are difficult to locate accurately. In Geiger's inversion procedure for earthquake location, an initial hypocenter and origin time must be supplied to the location program. For the initial hypocenter, we defined a set of test locations in the offshore area instead of using the average of the locations of triggered stations. We ran 20 instances of Geiger's method concurrently, each with a different pre-defined initial position, to locate earthquakes. We assume that if a program's pre-defined initial position is close to the true earthquake location, its processing time during the Geiger iteration should be shorter than the others'. The results show that using pre-defined trial hypocenters in the inversion procedure is able to improve the accuracy of offshore earthquake locations. This is especially relevant for an EEW system, where in the initial stage using only 3 or 5 stations to locate earthquakes may lead to poor results because of poor station coverage. In this study, the pre-defined trial locations provide a feasible way to improve the estimations of
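
    The multi-start idea can be sketched in a few lines: run a Geiger-style least-squares location from several pre-defined offshore trial epicenters and keep the best-fitting solution. The homogeneous velocity model, station geometry and trial grid below are illustrative assumptions, not the eBEAR configuration.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy multi-start Geiger-style location: homogeneous half-space, P picks only.
# Velocity, station geometry and the trial grid are illustrative assumptions.
V = 6.0  # P-wave velocity [km/s]
stations = np.array([[0, 0], [50, 10], [20, 60], [80, 40], [60, -20]])  # x, y [km]
true_src, true_t0 = np.array([120.0, 30.0]), 5.0  # offshore epicenter, origin time

def arrivals(src, t0):
    return t0 + np.linalg.norm(stations - src, axis=1) / V

picks = arrivals(true_src, true_t0)

def residuals(m):  # m = (x, y, t0)
    return arrivals(m[:2], m[2]) - picks

# Multi-start: several pre-defined offshore trial epicenters instead of the
# average of triggered stations; keep the solution with the smallest misfit.
trials = [(x, y, 0.0) for x in (100, 150, 200) for y in (-50, 0, 50)]
best = min((least_squares(residuals, t) for t in trials), key=lambda r: r.cost)
print("epicenter:", best.x[:2].round(1), "origin time:", round(best.x[2], 2))
```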

  2. a Collaborative Cyberinfrastructure for Earthquake Seismology

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R.

    2013-12-01

    One of the challenges in real-time seismology is the prediction of an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth, or magnitude estimates, and/or misevaluation of propagation characteristics, site effects and building vulnerability, can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure takes advantage of the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second most visited global earthquake information website, within tens of seconds of the occurrence of a felt event. It includes classical crowdsourcing tools such as online questionnaires available in 39 languages, and tools to collect geolocated pictures. It also comprises information derived from real-time analysis of the traffic on the EMSC website, a method named flashsourcing. In the case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just experienced. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and in some cases map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server in collaboration with Stanford University and the USGS to collect ground-motion records made by volunteers, and we are also involved in a project to detect earthquakes with ground-motion sensors in smartphones. Strategies have been developed for several social media (Facebook, Twitter...) not only to distribute earthquake information, but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this
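
    At its core, flashsourcing is a traffic-surge detector over geolocated visits; a minimal sketch, with synthetic per-minute visit counts and an arbitrary threshold, follows.

```python
import numpy as np

# Minimal sketch of the flashsourcing idea: flag a felt event when the rate
# of new visitors (grouped by geolocated IP region) jumps far above its
# recent baseline. Counts and the threshold factor k are illustrative.
def surge(counts, window=30, k=5.0):
    c = np.asarray(counts, dtype=float)
    base = c[:window].mean()            # baseline visit rate
    sigma = c[:window].std() + 1e-9
    return np.flatnonzero(c > base + k * sigma)  # anomalous minutes

visits = np.r_[np.random.default_rng(2).poisson(20, 60), [150, 400, 380]]
print(surge(visits))  # -> indices of the three post-earthquake minutes
```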

  3. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of the fault systems on which earthquakes occur. In addition, the physics of friction and of elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification efforts, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed M>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
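
    The power-law smoothing can be sketched as a kernel summed over simulated events; the grid, kernel scale and exponent below are illustrative assumptions rather than the paper's calibrated values.

```python
import numpy as np

# Sketch of power-law (ETAS-like) smoothing: distribute each simulated
# event's rate over a grid with a kernel that decays with epicentral
# distance. Grid size, scale d and exponent q are illustrative assumptions.
nx = ny = 100                     # cells of a test-region grid
d, q = 5.0, 1.5                   # kernel scale [km] and decay exponent
xs, ys = np.meshgrid(np.arange(nx) * 2.0, np.arange(ny) * 2.0)  # 2-km cells

def smoothed_rate(epicenters):
    """Sum a normalized power-law kernel (r^2 + d^2)^(-q) over events."""
    rate = np.zeros((ny, nx))
    for ex, ey in epicenters:
        r2 = (xs - ex) ** 2 + (ys - ey) ** 2
        k = (r2 + d ** 2) ** (-q)
        rate += k / k.sum()       # each event contributes unit total rate
    return rate

rate_map = smoothed_rate([(60.0, 80.0), (120.0, 40.0)])  # simulated events
print(rate_map.sum())  # ~2.0: total rate equals the number of events
```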

  4. Two grave issues concerning the expected Tokai Earthquake

    NASA Astrophysics Data System (ADS)

    Mogi, K.

    2004-08-01

    The possibility of a great shallow earthquake (M 8) in the Tokai region, central Honshu, in the near future was pointed out by Mogi in 1969 and by the Coordinating Committee for Earthquake Prediction (CCEP), Japan (1970). In 1978, the government enacted the Large-Scale Earthquake Countermeasures Law and began to set up intensified observations in this region for short-term prediction of the expected Tokai earthquake. In this paper, two serious issues are pointed out, which may contribute to catastrophic effects in connection with the Tokai earthquake: 1. The danger of black-and-white predictions: According to the scenario based on the Large-Scale Earthquake Countermeasures Law, if abnormal crustal changes are observed, the Earthquake Assessment Committee (EAC) will determine whether or not there is an imminent danger. The findings are reported to the Prime Minister who decides whether to issue an official warning statement. Administrative policy clearly stipulates the measures to be taken in response to such a warning, and because the law presupposes the ability to predict a large earthquake accurately, there are drastic measures appropriate to the situation. The Tokai region is a densely populated region with high social and economic activity, and it is traversed by several vital transportation arteries. When a warning statement is issued, all transportation is to be halted. The Tokyo capital region would be cut off from the Nagoya and Osaka regions, and there would be a great impact on all of Japan. I (the former chairman of EAC) maintained that in view of the variety and complexity of precursory phenomena, it was inadvisable to attempt a black-and-white judgment as the basis for a "warning statement". I urged that the government adopt a "soft warning" system that acknowledges the uncertainty factor and that countermeasures be designed with that uncertainty in mind. 2. The danger of nuclear power plants in the focal region: Although the possibility of the

  5. Are seismic hazard assessment errors and earthquake surprises unavoidable?

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2013-04-01

    demonstrated and sufficient justification of hazard assessment protocols; (b) a more complete learning of the actual range of earthquake hazards to local communities and populations, and (c) a more ethically responsible control over how seismic hazard and seismic risk assessments are implemented to protect public safety. It follows that the international project GEM is on the wrong track if it continues to base seismic risk estimates on the standard method of assessing seismic hazard. The situation is not hopeless and could be improved dramatically due to available geological, geomorphologic, seismic, and tectonic evidence and data combined with deterministic pattern recognition methodologies, specifically, when intending to PREDICT the PREDICTABLE, but not the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized systems of blocks-and-faults has already led to methodologies of neo-deterministic seismic hazard analysis and intermediate-term middle- to narrow-range earthquake prediction algorithms tested in real-time applications over the last decades. It proves that Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such information in advance of extreme catastrophes, which are LOW PROBABILITY EVENTS THAT HAPPEN WITH CERTAINTY. Geoscientists must initiate a shift in the mind of the community from pessimistic disbelief to the optimistic challenge of neo-deterministic hazard predictability.

  6. Time-dependent earthquake forecasting: Method and application to the Italian region

    NASA Astrophysics Data System (ADS)

    Chan, C.; Sorensen, M. B.; Grünthal, G.; Hakimhashemi, A.; Heidbach, O.; Stromeyer, D.; Bosse, C.

    2009-12-01

    We develop a new approach for time-dependent earthquake forecasting and apply it to the Italian region. In our approach, the seismicity density is represented by a bandwidth function as a smoothing kernel in the neighborhood of earthquakes. To incorporate fault-interaction-based forecasting, we calculate the Coulomb stress change imparted by each earthquake in the study area. From this, the change of seismicity rate as a function of time can be estimated through the concept of rate-and-state stress transfer. We apply our approach to the region of Italy, using earthquakes that occurred before 2003 to generate the seismicity density. To validate our approach, we compare our estimated seismicity density with the distribution of earthquakes with M≥3.8 after 2004. A positive correlation is found, and all of the examined earthquakes are located within the area of the highest 66th percentile of seismicity density in the study region. Furthermore, the seismicity density corresponding to the epicenter of the 2009 April 6, Mw = 6.3, L’Aquila earthquake is within the area of the highest 5th percentile. For the time-dependent seismicity rate change, we estimate the rate-and-state stress transfer imparted by the M≥5.0 earthquakes that occurred in the past 50 years. The results suggest that the seismicity rate has increased at the locations of 65% of the examined earthquakes. Applying this approach to the L’Aquila sequence, considering seven M≥5.0 aftershocks as well as the main shock, yields significant spatial as well as temporal forecasting of the aftershock distribution.
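
    Rate-and-state stress transfer commonly uses the closed-form seismicity-rate response of Dieterich (1994) to a Coulomb stress step; a minimal sketch with illustrative parameter values:

```python
import numpy as np

# Seismicity-rate response to a Coulomb stress step, following the
# closed-form solution of Dieterich (1994) as commonly used in
# rate-and-state stress transfer; parameter values are illustrative.
def rate_change(dcfs, t, r0=1.0, a_sigma=0.04, t_a=10.0):
    """
    dcfs:    Coulomb stress change [MPa]
    t:       time since the stress step [yr]
    r0:      background seismicity rate
    a_sigma: constitutive parameter A*sigma [MPa]
    t_a:     aftershock relaxation time [yr]
    """
    return r0 / (1.0 + (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a))

t = np.array([0.01, 0.1, 1.0, 10.0])
print(rate_change(0.1, t))   # positive step: elevated rate decaying to r0
print(rate_change(-0.1, t))  # stress shadow: suppressed rate recovering to r0
```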

  7. The ADER-DG method for seismic wave propagation and earthquake rupture dynamics

    NASA Astrophysics Data System (ADS)

    Pelties, Christian; Gabriel, Alice; Ampuero, Jean-Paul; de la Puente, Josep; Käser, Martin

    2013-04-01

    We will present the Arbitrary high-order DERivatives Discontinuous Galerkin (ADER-DG) method for solving the combined elastodynamic wave propagation and dynamic rupture problem. The ADER-DG method enables high-order accuracy in space and time while being implemented on unstructured tetrahedral meshes. A tetrahedral element discretization provides rapid and automated mesh generation as well as geometrical flexibility. Features such as mesh coarsening and local time stepping schemes can be applied to reduce computational effort without introducing numerical artifacts. The method is well suited for parallelization and large-scale high-performance computing, since only directly neighboring elements exchange information via numerical fluxes. The concept of fluxes is a key ingredient of the numerical scheme, as it governs the numerical dispersion and diffusion properties and allows one to accommodate boundary conditions, empirical friction laws of dynamic rupture processes, or the combination of different element types and non-conforming mesh transitions. After introducing fault dynamics into the ADER-DG framework, we will demonstrate its specific advantages in benchmark test scenarios provided by the SCEC/USGS Spontaneous Rupture Code Verification Exercise. An important result of the benchmark is that the ADER-DG method avoids spurious high-frequency contributions in the slip rate spectra and therefore does not require artificial Kelvin-Voigt damping, filtering or other modifications of the produced synthetic seismograms. To demonstrate the capabilities of the proposed scheme we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes branching and curved fault segments. Furthermore, topography is respected in the discretized model to capture the surface waves correctly. The advanced geometrical flexibility combined with enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in

  8. Earthquakes triggered by fluid extraction

    USGS Publications Warehouse

    Segall, P.

    1989-01-01

    Seismicity is correlated in space and time with production from some oil and gas fields where pore pressures have declined by several tens of megapascals. Reverse faulting has occurred both above and below petroleum reservoirs, and normal faulting has occurred on the flanks of at least one reservoir. The theory of poroelasticity requires that fluid extraction locally alter the state of stress. Calculations with simple geometries predict stress perturbations that are consistent with observed earthquake locations and focal mechanisms. Measurements of surface displacement and strain, pore pressure, stress, and poroelastic rock properties in such areas could be used to test theoretical predictions and improve our understanding of earthquake mechanics. -Author

  9. Earthquake precursory events around epicenters and local active faults

    NASA Astrophysics Data System (ADS)

    Valizadeh Alvan, H.; Mansor, S. B.; Haydari Azad, F.

    2013-05-01

    shakes, mapping foreshocks and aftershocks, and following changes in the above-mentioned precursors prior to past earthquake instances all over the globe. Our analyses also encompass the geographical location and extents of local and regional faults, which are considered important factors during earthquakes. The co-analysis of direct and indirect observations of precursory events is considered a promising approach toward possible future successful earthquake predictions. With proper and thorough knowledge of the geological setting, atmospheric factors and geodynamics of earthquake-prone regions, we will be able to identify anomalies due to seismic activity in the earth's crust.

  10. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
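
    The contingency-table bookkeeping behind such ROC diagrams is compact enough to show in full; the sketch below uses synthetic forecast and observation grids rather than the PI/RI maps.

```python
import numpy as np

# Sketch of ROC verification for a gridded binary forecast: threshold the
# forecast rate map, then tabulate hits and false alarms against observed
# event cells (synthetic toy data; a real test would use the PI/RI maps).
rng = np.random.default_rng(0)
forecast = rng.random(1000)                  # forecast score per cell
events = rng.random(1000) < 0.05 * forecast  # toy "observed" occurrences

hit_rate, false_rate = [], []
for th in np.sort(np.unique(forecast))[::-1]:
    alarm = forecast >= th
    hits = np.sum(alarm & events)            # a: alarm cell with event
    misses = np.sum(~alarm & events)         # c: event outside alarm area
    false_alarms = np.sum(alarm & ~events)   # b: alarm cell without event
    corr_neg = np.sum(~alarm & ~events)      # d: quiet cell, no alarm
    hit_rate.append(hits / (hits + misses))
    false_rate.append(false_alarms / (false_alarms + corr_neg))

auc = np.trapz(hit_rate, false_rate)         # area under the ROC curve
print(f"AUC = {auc:.2f}")                    # >0.5 indicates forecast skill
```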

  11. Earthquake chemical precursors in groundwater: a review

    NASA Astrophysics Data System (ADS)

    Paudel, Shukra Raj; Banjara, Sushant Prasad; Wagle, Amrita; Freund, Friedemann T.

    2018-03-01

    We review changes in groundwater chemistry as precursory signs of earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity. However, the same parameters also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic activity has impeded progress in earthquake science. Short-term earthquake prediction is unlikely to be achieved by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra. It addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, and on electrochemical processes at the rock-water interface.

  12. People's perspectives and expectations on preparedness against earthquakes: Tehran case study.

    PubMed

    Jahangiri, Katayoun; Izadkhah, Yasamin Ostovar; Montazeri, Ali; Hosseini, Mahmood

    2010-06-01

    Public education is one of the most important elements of earthquake preparedness. The present study identifies methods and appropriate strategies for public awareness and education on preparedness for earthquakes based on people's opinions in the city of Tehran. This was a cross-sectional study and a door-to-door survey of residents from 22 municipal districts in Tehran, the capital city of Iran. It involved a total of 1 211 individuals aged 15 and above. People were asked about different methods of public information and education, as well as the type of information needed for earthquake preparedness. "Enforcing the building contractors' compliance with the construction codes and regulations" was ranked as the first priority by 33.4% of the respondents. Over 70% of the participants (71.7%) regarded TV as the most appropriate means of media communication to prepare people for an earthquake. This was followed by "radio", which was selected by 51.6% of respondents. Slightly over 95% of the respondents believed that there would soon be an earthquake in the country, and 80% reported that they obtained this information from "the general public". Seventy percent of the study population felt that news of an earthquake should be communicated through the media. However, over half (58%) of the participants believed that governmental officials and agencies are best qualified to disseminate information about the risk of an imminent earthquake. Just over half (50.8%) of the respondents argued that the authorities do not usually provide enough information to people about earthquakes and the probability of their occurrence. Besides seismologists, respondents thought astronauts (32%), fortunetellers (32.3%), religious figures (34%), meteorologists (23%), and paleontologists (2%) could correctly predict the occurrence of an earthquake. Furthermore, 88.6% listed aid centers, mosques, newspapers and TV as the most important sources of information during the aftermath of an earthquake.

  13. Laboratory investigations of earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Xia, Kaiwen

    In this thesis, this is attempted through controlled laboratory experiments that are designed to mimic natural earthquake scenarios. The earthquake dynamic rupturing process itself is a complicated phenomenon, involving dynamic friction, wave propagation, and heat production. Because controlled experiments can produce results without the assumptions needed in theoretical and numerical analysis, the experimental method is advantageous over theoretical and numerical methods. Our laboratory fault is composed of carefully cut photoelastic polymer plates (Homalite-100, Polycarbonate) held together by uniaxial compression. As a unique unit of the experimental design, a controlled exploding-wire technique provides the triggering mechanism of laboratory earthquakes. Three important components of real earthquakes (i.e., pre-existing fault, tectonic loading, and triggering mechanism) correspond to and are simulated by frictional contact, uniaxial compression, and the exploding-wire technique. Dynamic rupturing processes are visualized using the photoelastic method and are recorded via a high-speed camera. Our experimental methodology, which is full-field, in situ, and non-intrusive, has better control and diagnostic capacity compared to other existing experimental methods. Using this experimental approach, we have investigated several problems: the dynamics of earthquake faulting occurring along homogeneous faults separating identical materials, earthquake faulting along inhomogeneous faults separating materials with different wave speeds, and earthquake faulting along faults with a finite low-wave-speed fault core. We have observed supershear ruptures, sub-Rayleigh to supershear rupture transition, crack-like to pulse-like rupture transition, the self-healing (Heaton) pulse, and rupture directionality.

  14. A Comparison of Geodetic and Geologic Rates Prior to Large Strike-Slip Earthquakes: A Diversity of Earthquake-Cycle Behaviors?

    NASA Astrophysics Data System (ADS)

    Dolan, James F.; Meade, Brendan J.

    2017-12-01

    Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 MW = 7.6 Izmit rupture are ˜40% faster than Holocene geologic rates. In contrast, geodetic rates of ˜6-8 mm/yr along the Denali fault prior to the 2002 MW = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ˜12 mm/yr. In the third example where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 MW = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ˜11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.

  15. People’s perspectives and expectations on preparedness against earthquakes: Tehran case study

    PubMed Central

    Jahangiri, Katayoun; Izadkhah, Yasamin O; Montazeri, Ali; Hosseini, Mahmood

    2010-01-01

    Abstract: Background: Public education is one of the most important elements of earthquake preparedness. The present study identifies methods and appropriate strategies for public awareness and education on preparedness for earthquakes based on people's opinions in the city of Tehran. Methods: This was a cross-sectional study and a door-to-door survey of residents from 22 municipal districts in Tehran, the capital city of Iran. It involved a total of 1 211 individuals aged 15 and above. People were asked about different methods of public information and education, as well as the type of information needed for earthquake preparedness. Results: "Enforcing the building contractors' compliance with the construction codes and regulations" was ranked as the first priority by 33.4% of the respondents. Over 70% of the participants (71.7%) regarded TV as the most appropriate means of media communication to prepare people for an earthquake. This was followed by "radio", which was selected by 51.6% of respondents. Slightly over 95% of the respondents believed that there would soon be an earthquake in the country, and 80% reported that they obtained this information from "the general public". Seventy percent of the study population felt that news of an earthquake should be communicated through the media. However, over half (58%) of the participants believed that governmental officials and agencies are best qualified to disseminate information about the risk of an imminent earthquake. Just over half (50.8%) of the respondents argued that the authorities do not usually provide enough information to people about earthquakes and the probability of their occurrence. Besides seismologists, respondents thought astronauts (32%), fortunetellers (32.3%), religious figures (34%), meteorologists (23%), and paleontologists (2%) could correctly predict the occurrence of an earthquake. Furthermore, 88.6% listed aid centers, mosques, newspapers and TV as the most important sources of

  16. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  17. Statistical aspects and risks of human-caused earthquakes

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2013-12-01

    The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth, environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in the cantons of Basel and St. Gallen. The public and the private non-extractive industry need access to information about earthquake risks in connection with sub/urban geoengineering activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (Klose, 2013). Findings discussed include the odds of dying during a medium-size earthquake that is set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.

  18. The physics of an earthquake

    NASA Astrophysics Data System (ADS)

    McCloskey, John

    2008-03-01

    The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.

  19. Possible seasonality in large deep-focus earthquakes

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Shearer, Peter M.

    2015-09-01

    Large deep-focus earthquakes (magnitude > 7.0, depth > 500 km) have exhibited strong seasonality in their occurrence times since the beginning of global earthquake catalogs. Of 60 such events from 1900 to the present, 42 have occurred in the middle half of each year. The seasonality appears strongest in the northwest Pacific subduction zones and weakest in the Tonga region. Taken at face value, the surplus of northern hemisphere summer events is statistically significant, but due to the ex post facto hypothesis testing, the absence of seasonality in smaller deep earthquakes, and the lack of a known physical triggering mechanism, we cannot rule out that the observed seasonality is just random chance. However, we can make a testable prediction of seasonality in future large deep-focus earthquakes, which, given likely earthquake occurrence rates, should be verified or falsified within a few decades. If confirmed, deep earthquake seasonality would challenge our current understanding of deep earthquakes.
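
    The face-value significance claim is a one-line binomial test; the sketch below assumes a null of p = 0.5 for the middle half of the year and, as the authors caution, ignores the ex post facto selection of the hypothesis.

```python
from scipy.stats import binomtest

# Quick significance check of the reported seasonality: 42 of 60 large
# deep-focus events in the middle half of the year, against a null of
# p = 0.5 (a sketch; the selection-bias caveat in the abstract stands).
result = binomtest(42, n=60, p=0.5, alternative="greater")
print(f"one-sided p-value: {result.pvalue:.4f}")  # ~1e-3, nominally significant
```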

  20. The Mw 7.7 Bhuj earthquake: Global lessons for earthquake hazard in intra-plate regions

    USGS Publications Warehouse

    Schweig, E.; Gomberg, J.; Petersen, M.; Ellis, M.; Bodin, P.; Mayrose, L.; Rastogi, B.K.

    2003-01-01

    The Mw 7.7 Bhuj earthquake occurred in the Kachchh District of the State of Gujarat, India on 26 January 2001, and was one of the most damaging intraplate earthquakes ever recorded. This earthquake is in many ways similar to the three great New Madrid earthquakes that occurred in the central United States in 1811-1812. An Indo-US team is studying the similarities and differences of these sequences in order to learn lessons for earthquake hazard in intraplate regions. Herein we present some preliminary conclusions from that study. Both the Kutch and New Madrid regions have rift-type geotectonic settings. In both regions the strain rates are of the order of 10^-9/yr, and the attenuation of seismic waves, as inferred from observations of intensity and liquefaction, is low. These strain rates predict recurrence intervals for Bhuj- or New Madrid-sized earthquakes of several thousand years or more. In contrast, intervals estimated from paleoseismic studies and from other independent data are significantly shorter, probably hundreds of years. All these observations together may suggest that earthquakes relax high ambient stresses that are locally concentrated by rheologic heterogeneities, rather than loading by plate-tectonic forces. The latter model generally underlies basic assumptions made in earthquake hazard assessment: that the long-term average rate of energy released by earthquakes is determined by the tectonic loading rate, which thus implies an inherent average periodicity of earthquake occurrence. Interpreting the observations in terms of the former model may therefore require re-examining the basic assumptions of hazard assessment.

  1. The influence of one earthquake on another

    NASA Astrophysics Data System (ADS)

    Kilb, Deborah Lyman

    1999-12-01

    Part one of my dissertation examines the initiation of earthquake rupture. We study the initial subevent (ISE) of the Mw 6.7 1994 Northridge, California earthquake to distinguish between two end-member hypotheses: an organized and predictable earthquake rupture initiation process or, alternatively, a random process. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both end-member models and do not allow us to distinguish between them. However, further tests show the ISE's waveform characteristics are similar to those of typical nearby small earthquakes (i.e., dynamic ruptures). The second part of my dissertation examines aftershocks of the M 7.1 1989 Loma Prieta, California earthquake to determine whether theoretical models of static Coulomb stress changes correctly predict the fault-plane geometries and slip directions of Loma Prieta aftershocks. Our work shows individual aftershock mechanisms cannot be successfully predicted, because a similar degree of predictability can be obtained using a randomized catalogue. This result is probably a function of combined errors in the models of mainshock slip distribution, background stress field, and aftershock locations. In the final part of my dissertation, we test the idea that earthquake triggering occurs when properties of a fault and/or its loading are modified by Coulomb failure stress changes that may be transient and oscillatory (i.e., dynamic) or permanent (i.e., static). We propose that a triggering threshold failure stress change exists, above which the earthquake nucleation process begins, although failure need not occur instantaneously. We test these ideas using data from the 1992 M 7.4 Landers earthquake and its aftershocks. Stress changes can be categorized as either dynamic (generated during the passage of seismic waves), static (associated with permanent fault offsets

  2. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  3. Impact of earthquakes on sex ratio at birth: Eastern Marmara earthquakes

    PubMed Central

    Doğer, Emek; Çakıroğlu, Yiğit; Köpük, Şule Yıldırım; Ceylan, Yasin; Şimşek, Hayal Uzelli; Çalışkan, Eray

    2013-01-01

    Objective: Previous reports suggest that maternal exposure to acute stress related to earthquakes affects the sex ratio at birth. Our aim was to examine the change in sex ratio at birth after the Eastern Marmara earthquake disasters. Material and Methods: This study was performed using the official birth statistics from January 1997 to December 2002 – before and after 17 August 1999, the date of the Golcuk Earthquake – supplied by the Turkey Statistics Institute. The secondary sex ratio was expressed as the male proportion at birth, and the ratios for the affected and unaffected areas were calculated and compared on a monthly basis using the chi-square test. Results: We observed significant decreases in the secondary sex ratio in the 4th and 8th months following the earthquake in the affected region compared to the unaffected region (p= 0.001 and p= 0.024). In the earthquake region, the decrease observed in the secondary sex ratio during the 8th month after the earthquake was specific to the period after the earthquake. Conclusion: Our study indicated a significant reduction in the secondary sex ratio after an earthquake. These findings suggest that events causing sudden intense stress, such as earthquakes, can have an effect on the sex ratio at birth. PMID:24592082

  4. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
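
    The abstract describes the empirical model only at a high level; the published PAGER empirical model expresses the fatality rate at a given shaking intensity as a two-parameter lognormal function of intensity. The sketch below illustrates that functional form; the parameter values and exposure figures are purely illustrative, not calibrated PAGER values:

    ```python
    from math import erf, log, sqrt

    def fatality_rate(intensity, theta, beta):
        """Empirical-model fatality rate: lognormal CDF of shaking intensity S,
        nu(S) = Phi(ln(S/theta)/beta), with country-specific theta and beta."""
        return 0.5 * (1.0 + erf(log(intensity / theta) / (beta * sqrt(2.0))))

    def expected_fatalities(exposure_by_intensity, theta, beta):
        """Sum over intensity bins of exposed population times fatality rate."""
        return sum(pop * fatality_rate(s, theta, beta)
                   for s, pop in exposure_by_intensity.items())

    # Hypothetical exposure (population per MMI bin) and illustrative parameters
    exposure = {6.0: 500_000, 7.0: 120_000, 8.0: 30_000, 9.0: 5_000}
    print(round(expected_fatalities(exposure, theta=13.2, beta=0.25)))
    ```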

  5. The next new Madrid earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atkinson, W.

    1988-01-01

    Scientists who specialize in the study of Mississippi Valley earthquakes say that the region is overdue for a powerful tremor that will cause major damage and undoubtedly some casualties. The inevitability of a future quake and the lack of preparation by both individuals and communities provided the impetus for this book. It brings together applicable information from many disciplines: history, geology and seismology, engineering, zoology, politics and community planning, economics, environmental science, sociology, and psychology and mental health to provide a perspective of the myriad impacts of a major earthquake on the Mississippi Valley. The author addresses such basic questions as: What, actually, are earthquakes? How do they occur? Can they be predicted, perhaps even prevented? He also addresses those steps that individuals can take to improve their chances for survival both during and after an earthquake.

  6. Earthquake Hazard Assessment: an Independent Review

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2016-04-01

    Seismic hazard assessment (SHA), from term-less (probabilistic PSHA or deterministic DSHA) to time-dependent (t-DASH), including short-term earthquake forecast/prediction (StEF), is not an easy task; it implies a delicate application of statistics to data of limited size and differing accuracy. Regretfully, in many cases of SHA, t-DASH, and StEF, the claims of a high potential and efficiency of the methodology are based on a flawed application of statistics and are hardly suitable for communication to decision makers. The necessity and possibility of applying the modified tools of Earthquake Prediction Strategies, in particular the Error Diagram, introduced by G.M. Molchan in the early 1990s for the evaluation of SHA, and the Seismic Roulette null hypothesis as a measure of the alerted space, is evident, and such testing must be done in advance of claiming hazardous areas and/or times. The set of errors, i.e., the rates of failure and of the alerted space-time volume, compared to those obtained in the same number of random-guess trials, permits evaluating the effectiveness of an SHA method and determining the optimal choice of its parameters with regard to specified cost-benefit functions. This and other information obtained in such testing may supply us with a realistic estimate of confidence in SHA results and related recommendations on the level of risk for decision making in regard to engineering design, insurance, and emergency management. These basics of SHA evaluation are exemplified with a few cases of misleading "seismic hazard maps", "precursors", and "forecast/prediction methods".
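
    A minimal sketch of the Error Diagram logic described here, assuming a gridded alarm declaration (the function name and synthetic data are mine, for illustration only): each alarm strategy maps to a point (alerted space-time fraction tau, rate of failures-to-predict nu), to be compared with random guessing, for which nu is approximately 1 - tau:

    ```python
    import numpy as np

    def molchan_point(alarm_mask, quake_cells, cell_weights):
        """One point on the Molchan error diagram.
        alarm_mask: boolean array, True where an alarm is declared.
        quake_cells: indices of cells containing target earthquakes.
        cell_weights: space(-time) measure of each cell."""
        tau = cell_weights[alarm_mask].sum() / cell_weights.sum()  # alerted fraction
        nu = np.mean(~alarm_mask[quake_cells])                     # failure-to-predict rate
        return tau, nu

    rng = np.random.default_rng(0)
    weights = np.ones(1000)
    alarms = rng.random(1000) < 0.2          # alert 20% of space-time at random
    quakes = rng.integers(0, 1000, size=25)  # synthetic target events
    tau, nu = molchan_point(alarms, quakes, weights)
    print(f"tau={tau:.2f}, miss rate={nu:.2f}")  # random guessing: nu ~= 1 - tau
    ```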

  7. If pandas scream. an earthquake is coming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magida, P.

    Feature article: The use of animal behavior to predict weather has spanned several ages and dozens of countries. While animals may behave in diverse ways to indicate weather changes, they all tend to behave in more or less the same way before earthquakes. The geophysical community in the U.S. has begun testing animal behavior before earthquakes. It has been determined that animals have the potential of acting as accurate geosensors to detect earthquakes before they occur. (5 drawings)

  8. Earthquake prognosis:cause for failure and ways for the problem solution

    NASA Astrophysics Data System (ADS)

    Kondratiev, O.

    2003-04-01

    Despite more than 50 years of development of earthquake prognosis methods, the problem has yet to be resolved. This casts doubt on the rightness of the chosen approaches: the retrospective search for diverse earthquake precursors. It is customary to speak of long-term, middle-term and short-term earthquake prognosis. All of these have a probabilistic character, and it would be more correct to consider them as relating to seismic hazard prognosis. In distinction to them, the problem of operative prognosis is discussed in this report. Operative prognosis should provide a timely seismic alarm signal for the place, time and power of an earthquake, so that the necessary measures can be taken for maximal mitigation of the catastrophic consequences of the event. To do this it is necessary to predict the earthquake location to within a few tens of kilometres, the time of its occurrence to within a few days, and its power to within a magnitude unit. If the problem is formulated in this way, it cannot in principle be resolved within the framework of the concept of indirect earthquake precursors. It is necessary to pass from the concept of a passive observatory network to the concept of an object-oriented search for the potential source zones and the direct acquisition of information on changes of medium parameters within these zones during earthquake preparation and development. Formulated in this way, the problem becomes an integrated task for planetary and prospecting geophysics. To detect the source zones it is possible to use the method of converted waves of earthquakes; for monitoring, seismic reflection and the common-midpoint method. Deployment of these and possibly other geophysical methods should be provided by organising a special integrated geophysical expedition for rapid response to strong earthquakes and conducting purposeful investigation

  9. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  10. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  11. Predicting earthquakes by analyzing accelerating precursory seismic activity

    USGS Publications Warehouse

    Varnes, D.J.

    1989-01-01

    During 11 sequences of earthquakes that in retrospect can be classed as foreshocks, the accelerating rate at which seismic moment is released follows, at least in part, a simple equation. This equation (1) is dΣ/dt = C/(t_f − t)^n, where Σ is the cumulative sum, until time t, of the square roots of the seismic moments of individual foreshocks computed from reported magnitudes; C and n are constants; and t_f is a limiting time at which the rate of seismic moment accumulation becomes infinite. The possible time of a major foreshock or main shock, t_f, is found by the best fit of equation (1), or its integral, to step-like plots of Σ versus time, using successive estimates of t_f in linearized regressions until the maximum coefficient of determination, r^2, is obtained. Analyzed examples include sequences preceding earthquakes at Cremasta, Greece, 2/5/66; Haicheng, China, 2/4/75; Oaxaca, Mexico, 11/29/78; Petatlan, Mexico, 3/14/79; and Central Chile, 3/3/85. In 29 estimates of main-shock time, made as the sequences developed, the errors in 20 were less than one-half, and in 9 less than one-tenth, the time remaining between the time of the last data used and the main shock. Some precursory sequences, or parts of them, yield no solution. Two sequences appear to include in their first parts the aftershocks of a previous event; plots using the integral of equation (1) show that the sequences are easily separable into aftershock and foreshock segments. Synthetic seismic sequences of shocks at equal time intervals were constructed to follow equation (1), using four values of n. In each series the resulting distributions of magnitudes closely follow the linear Gutenberg-Richter relation log N = a − bM, and the product of n times b for each series is the same constant. In various forms and for decades, equation (1) has been used successfully to predict failure times of stressed metals and ceramics, landslides in soil and rock slopes, and volcanic
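
    A hedged numerical sketch of the successive-estimates procedure described above, assuming the rate form dΣ/dt = C/(t_f − t)^n reconstructed from the abstract's definitions: trial values of t_f are scanned, and the one maximizing r^2 of the linearized log-log regression is kept. The function name, synthetic sequence, and moment conversion (log10 M0 = 1.5 Mw + 9.1) are illustrative only:

    ```python
    import numpy as np

    def estimate_tf(times, mags, tf_grid):
        """Grid-search the failure time tf maximizing r^2 of the linearized
        fit log(dSigma/dt) = log(C) - n*log(tf - t), where Sigma is the
        cumulative sum of sqrt(seismic moment) of the foreshocks."""
        moment = 10.0 ** (1.5 * np.asarray(mags) + 9.1)   # Mw -> M0 (N m)
        sigma = np.cumsum(np.sqrt(moment))
        t_mid = 0.5 * (times[1:] + times[:-1])
        rate = np.diff(sigma) / np.diff(times)
        best = (-np.inf, None)
        for tf in tf_grid:
            if tf <= times[-1]:
                continue  # tf must lie beyond the last observed foreshock
            x, y = np.log(tf - t_mid), np.log(rate)
            r = np.corrcoef(x, y)[0, 1]
            if r ** 2 > best[0]:
                best = (r ** 2, tf)
        return best[1], best[0]

    # Synthetic sequence accelerating toward tf = 100 (illustrative only)
    t = np.array([10.0, 40.0, 60.0, 75.0, 85.0, 91.0, 95.0, 97.5, 99.0])
    m = np.full(len(t), 4.0)
    tf_hat, r2 = estimate_tf(t, m, np.linspace(99.5, 110, 200))
    print(f"tf ~ {tf_hat:.1f} (r^2 = {r2:.3f})")
    ```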

  12. Report of the International Commission on Earthquake Forecasting for Civil Protection (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    The destructive L’Aquila earthquake of 6 April 2009 (Mw 6.3) illustrates the challenges of operational earthquake forecasting. The earthquake ruptured a mapped normal fault in a region identified by long-term forecasting models as one of the most seismically dangerous in Italy; it was the strongest of a rich sequence that started several months earlier and included a M3.9 foreshock less than five hours prior to the mainshock. According to widely circulated news reports, the earthquake had been predicted by a local resident using unpublished radon-based techniques, provoking a public controversy prior to the event that intensified in its wake. Several weeks after the earthquake, the Italian Department of Civil Protection appointed an international commission with the mandate to report on the current state of knowledge of prediction and forecasting and guidelines for operational utilization. The commission included geoscientists from China, France, Germany, Greece, Italy, Japan, Russia, United Kingdom, and United States with experience in earthquake forecasting and prediction. This presentation by the chair of the commission will report on its findings and recommendations.

  13. Methods for assessing the stability of slopes during earthquakes-A retrospective

    USGS Publications Warehouse

    Jibson, R.W.

    2011-01-01

    During the twentieth century, several methods to assess the stability of slopes during earthquakes were developed. Pseudostatic analysis was the earliest method; it involved simply adding a permanent body force representing the earthquake shaking to a static limit-equilibrium analysis. Stress-deformation analysis, a later development, involved much more complex modeling of slopes using a mesh in which the internal stresses and strains within elements are computed based on the applied external loads, including gravity and seismic loads. Stress-deformation analysis provided the most realistic model of slope behavior, but it is very complex and requires a high density of high-quality soil-property data as well as an accurate model of soil behavior. In 1965, Newmark developed a method that effectively bridges the gap between these two types of analysis. His sliding-block model is easy to apply and provides a useful index of co-seismic slope performance. Subsequent modifications to sliding-block analysis have made it applicable to a wider range of landslide types. Sliding-block analysis provides perhaps the greatest utility of all the types of analysis. It is far easier to apply than stress-deformation analysis, and it yields much more useful information than does pseudostatic analysis. © 2010.

  14. Increasing critical sensitivity of the Load/Unload Response Ratio before large earthquakes with identified stress accumulation pattern

    NASA Astrophysics Data System (ADS)

    Yu, Huai-zhong; Shen, Zheng-kang; Wan, Yong-ge; Zhu, Qing-yong; Yin, Xiang-chu

    2006-12-01

    The Load/Unload Response Ratio (LURR) method is proposed for short-to-intermediate-term earthquake prediction [Yin, X.C., Chen, X.Z., Song, Z.P., Yin, C., 1995. A New Approach to Earthquake Prediction — The Load/Unload Response Ratio (LURR) Theory, Pure Appl. Geophys., 145, 701-715]. This method is based on measuring the ratio between Benioff strains released during the time periods of loading and unloading, corresponding to the Coulomb failure stress change induced by Earth tides on optimally oriented faults. According to the method, the LURR time series usually climbs to an anomalously high peak prior to the occurrence of a large earthquake. Previous studies have indicated that the size of the critical seismogenic region selected for LURR measurements has great influence on the evaluation of LURR. In this study, we replace the circular region usually adopted in LURR practice with an area within which the tectonic stress change would mostly affect the Coulomb stress on the potential seismogenic fault of a future event. The Coulomb stress change before a hypothetical earthquake is calculated based on a simple back-slip dislocation model of the event. This new algorithm, combining the LURR method with our choice of an identified area of increased Coulomb stress, is devised to improve the sensitivity of LURR as a measure of the criticality of stress accumulation before a large earthquake. Retrospective tests of this algorithm on four large earthquakes that occurred in California over the last two decades show remarkable enhancement of the LURR precursory anomalies. For some strong events of lesser magnitude that occurred in the same neighborhoods and during the same time periods, significant anomalies are found if circular areas are used, and are not found if increased Coulomb stress areas are used for LURR data selection. This unique feature of the algorithm may provide stronger constraints on forecasts of the size and location of future large events.
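
    The core LURR computation lends itself to a short sketch. Assuming events have already been labeled as occurring in tidal loading or unloading phases (the hard part in practice, omitted here), the ratio of Benioff strain released in the two phases can be written as follows; the function name, energy relation (log10 E = 1.5M + 4.8), and catalog are illustrative only:

    ```python
    import numpy as np

    def lurr(mags, loading_flags, m=0.5):
        """Load/Unload Response Ratio: summed Benioff strain (E^m, m = 1/2)
        released during loading periods divided by that released during
        unloading periods."""
        energy = 10.0 ** (1.5 * np.asarray(mags) + 4.8)   # radiated energy (J)
        benioff = energy ** m
        loading = np.asarray(loading_flags, dtype=bool)   # True: tidal Coulomb stress increasing
        return benioff[loading].sum() / benioff[~loading].sum()

    # Illustrative catalog: magnitudes and whether each event fell in a loading phase
    mags = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.6]
    in_loading = [True, True, True, False, True, False, True, True]
    print(f"LURR Y = {lurr(mags, in_loading):.2f}")  # Y >> 1 suggests criticality
    ```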

  15. The California Earthquake Advisory Plan: A history

    USGS Publications Warehouse

    Roeloffs, Evelyn A.; Goltz, James D.

    2017-01-01

    Since 1985, the California Office of Emergency Services (Cal OES) has issued advisory statements to local jurisdictions and the public following seismic activity that scientists on the California Earthquake Prediction Evaluation Council view as indicating elevated probability of a larger earthquake in the same area during the next several days. These advisory statements are motivated by statistical studies showing that about 5% of moderate earthquakes in California are followed by larger events within a 10-km, five-day space-time window (Jones, 1985; Agnew and Jones, 1991; Reasenberg and Jones, 1994). Cal OES issued four earthquake advisories from 1985 to 1989. In October, 1990, the California Earthquake Advisory Plan formalized this practice, and six Cal OES Advisories have been issued since then. This article describes that protocol’s scientific basis and evolution.

  16. Implications of next generation attenuation ground motion prediction equations for site coefficients used in earthquake resistant design

    USGS Publications Warehouse

    Borcherdt, Roger D.

    2014-01-01

    Proposals are developed to update Tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures, published as American Society of Civil Engineers Structural Engineering Institute standard 7-10 (ASCE/SEI 7-10). The updates are mean next generation attenuation (NGA) site coefficients inferred directly from the four NGA ground motion prediction equations used to derive the maximum considered earthquake response maps adopted in ASCE/SEI 7-10. Proposals include the recommendation to use straight-line interpolation to infer site coefficients at intermediate values of VS30 (average shear velocity to 30-m depth). The NGA coefficients are shown to agree well with adopted site coefficients at low levels of input motion (0.1 g) and those observed from the Loma Prieta earthquake. For higher levels of input motion, the majority of the adopted values are within the 95% epistemic-uncertainty limits implied by the NGA estimates, with the exceptions being the mid-period site coefficient, Fv, for site class D and the short-period coefficient, Fa, for site class C, both of which are slightly less than the corresponding 95% limit. The NGA database shows that a median VS30 value of 913 m/s for site class B is more typical than 760 m/s as a value to characterize firm-to-hard rock sites as the uniform ground condition for future maximum considered earthquake response ground motion estimates. Future updates of NGA ground motion prediction equations can be incorporated easily into future adjustments of adopted site coefficients using the procedures presented herein.

  17. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of Mλ ≥ 7.0 and Mλ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be Mσ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with Mλ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with Mλ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 Mλ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 Mλ ≥ 7.0 in each catalog and
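
    A minimal sketch of the β-estimation step under stated assumptions: for a random Gutenberg-Richter catalog with b = 1, natural-time interevent counts are geometric (the discrete analog of exponential, so β ≈ 1), and the shape parameter of a Weibull fit should recover this. The catalog below is synthetic and illustrative only:

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)

    # Synthetic "natural time" interevent counts: number of small (M >= 5.1)
    # events between successive large (M >= 7.0) events in a random catalog.
    # With Gutenberg-Richter b = 1, each event is "large" with probability
    # 10**(-(7.0 - 5.1)), so interevent counts are geometrically distributed.
    p_large = 10.0 ** (-(7.0 - 5.1))
    counts = rng.geometric(p_large, size=197)

    # Fit a two-parameter Weibull (location fixed at 0); c is the shape beta.
    beta, loc, scale = weibull_min.fit(counts, floc=0)
    print(f"beta = {beta:.2f}")  # ~1 for a random catalog; < 1 indicates clustering
    ```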

  18. Earthquake and Tsunami booklet based on two Indonesia earthquakes

    NASA Astrophysics Data System (ADS)

    Hayashi, Y.; Aci, M.

    2014-12-01

    Many destructive earthquakes occurred during the last decade in Indonesia. These experiences are very important precepts for people around the world who live in earthquake and tsunami countries. We are collecting the testimonies of tsunami survivors to clarify successful evacuation processes and to characterize the physical behavior of tsunamis near the coast. We studied two tsunami events: the 2004 Indian Ocean tsunami and the 2010 Mentawai slow-earthquake tsunami. Many videos and photographs were taken by people at some locations during the 2004 Indian Ocean tsunami disaster; nevertheless, these covered only a few restricted points, and tsunami behavior at other places remained unknown. In this study, we tried to collect extensive information about tsunami behavior, not only at many places but also over a wide time range after the strong shaking. In the Mentawai case, the earthquake occurred at night, so there are no impressive photographs. To collect detailed information about the evacuation process from tsunamis, we devised an interview method. This method involves making pictures of the tsunami experience from the scenes of the victims' stories. In the 2004 Aceh case, none of the survivors knew about tsunami phenomena: because there had been no big earthquakes with tsunamis for one hundred years in the Sumatra region, the public had no knowledge of tsunamis. This situation was greatly improved by the time of the 2010 Mentawai event. TV programs and NGO or governmental public education programs about tsunami evacuation are now widespread in Indonesia, and many people have fundamental knowledge of earthquake and tsunami disasters. We made a drill book based on the victims' stories and painted impressive scenes of the two events. We used the drill book in a disaster education event for a school committee in west Java. About 80% of the students and teachers evaluated the contents of the drill book as useful for correct understanding.

  19. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are an integral part of better understanding earthquake source physics and improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and

  20. An interdisciplinary approach to study Pre-Earthquake processes

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.

    2017-12-01

    We will summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data, we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these various studies in the search for earthquake precursors, whether for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and analyses of seismic records (foreshocks/aftershocks), geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could make an impact on our further understanding of the physics of earthquakes and the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will subsequently be published in a new AGU/Wiley volume. This book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical and historical) involved in this important field of research, and will bring this knowledge and awareness to a broader geosciences community.

  1. Real-time neural network earthquake profile predictor

    DOEpatents

    Leach, R.R.; Dowla, F.U.

    1996-02-06

    A neural network has been developed that uses first-arrival energy to predict the characteristics of impending earthquake seismograph signals. The propagation of ground motion energy through the earth is a highly nonlinear function. This is due to different forms of ground motion as well as to changes in the elastic properties of the media throughout the propagation path. The neural network is trained using seismogram data from earthquakes. Presented with a previously unseen earthquake, the neural network produces a profile of the complete earthquake signal using data from the first seconds of the signal. This offers a significant advance in the real-time monitoring, warning, and subsequent hazard minimization of catastrophic ground motion. 17 figs.

  2. Real-time neural network earthquake profile predictor

    DOEpatents

    Leach, Richard R.; Dowla, Farid U.

    1996-01-01

    A neural network has been developed that uses first-arrival energy to predict the characteristics of impending earthquake seismograph signals. The propagation of ground motion energy through the earth is a highly nonlinear function. This is due to different forms of ground motion as well as to changes in the elastic properties of the media throughout the propagation path. The neural network is trained using seismogram data from earthquakes. Presented with a previously unseen earthquake, the neural network produces a profile of the complete earthquake signal using data from the first seconds of the signal. This offers a significant advance in the real-time monitoring, warning, and subsequent hazard minimization of catastrophic ground motion.

  3. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different

  4. Comparison of Different Approach of Back Projection Method in Retrieving the Rupture Process of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Tan, F.; Wang, G.; Chen, C.; Ge, Z.

    2016-12-01

    Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides the conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve the resolution. Each method has its advantages and disadvantages and should be properly used in different cases. Therefore, a thorough study to compare and test these methods is needed. We wrote a GUI program which puts the three methods together so that people can conveniently use different methods to process the same data and compare the results. We then used all the methods to process several earthquake datasets, including the 2008 Wenchuan Mw7.9 earthquake and the 2011 Tohoku-Oki Mw9.0 earthquake, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy and stability among the methods. Quantitative and qualitative analyses are applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, calculation window length and so on. In general, back projection makes it possible to get a good result in a very short time using fewer than 20 lines of high-quality data with a proper station distribution, but the swimming artifact can be significant. Some approaches, for instance combining global seismic data, could help ameliorate this method. MUSIC back projection needs relatively more data to obtain a better and more stable result, which means it needs much more time, since its runtime grows noticeably faster than that of back projection as the number of stations increases. Compressive sensing deals more effectively with multiple sources in the same time window but costs the longest time, due to repeatedly solving matrix problems. The resolution of all the methods is complicated and depends on many factors. An important one is the grid size, which in turn influences
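
    For reference, conventional time-domain back-projection reduces to shift-and-stack beamforming over a source grid. The toy sketch below (function name and data are mine; real applications align traces by cross-correlation, use envelopes, and weight stations) shows the core operation:

    ```python
    import numpy as np

    def back_project(waveforms, dt, travel_times):
        """Conventional time-domain beamforming: for each candidate grid node,
        shift each station's record by its predicted travel time and stack.
        waveforms: (n_sta, n_samp); travel_times: (n_grid, n_sta) in seconds."""
        n_grid, n_sta = travel_times.shape
        n_samp = waveforms.shape[1]
        power = np.zeros(n_grid)
        for g in range(n_grid):
            shifts = np.round(travel_times[g] / dt).astype(int)
            stack = np.zeros(n_samp)
            for s in range(n_sta):
                stack[:n_samp - shifts[s]] += waveforms[s, shifts[s]:]
            power[g] = np.max(stack ** 2)  # peak stacked energy at this node
        return power

    # Toy example: 3 stations, impulses arriving per the travel times of node 1
    dt, n_samp = 0.1, 200
    tt = np.array([[2.0, 3.0, 4.0], [2.5, 2.5, 3.5]])  # two candidate nodes
    wf = np.zeros((3, n_samp))
    for s, t_arr in enumerate(tt[1]):
        wf[s, int(t_arr / dt)] = 1.0
    print(back_project(wf, dt, tt))  # node 1 stacks coherently -> larger power
    ```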

  5. The Key Role of Eyewitnesses in Rapid Impact Assessment of Global Earthquake

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.; Etivant, C.; Frobert, L.; Godey, S.

    2014-12-01

    Uncertainties in rapid impact assessments of global earthquakes are intrinsically large because they rely on three main elements (ground motion prediction models, building stock inventory and related vulnerability) whose values and/or spatial variations are poorly constrained. Furthermore, variations of hypocentral location and magnitude within their respective uncertainty domains can lead to significantly different shaking levels for centers of population and change the scope of the disaster. We present the strategy and methods implemented at the Euro-Med Seismological Centre (EMSC) to rapidly collect in-situ observations on earthquake effects from eyewitnesses in order to reduce the uncertainties of rapid earthquake impact assessment. It comprises crowdsourced information (online questionnaires, pictures) as well as information derived from real-time analysis of web traffic (flashsourcing technique), and more recently the deployment of QCN (Quake Catcher Network) low-cost sensors. We underline the importance of merging the results of different methods to improve the performance and reliability of the collected data. We try to better understand and respond to public demands and expectations after earthquakes through improved information services and a diversification of information tools (social networks, smartphone apps, browser add-ons, …), which, in turn, drive more eyewitnesses to our services and improve data collection. We will notably present our LastQuake Twitter feed (Quakebot) and smartphone applications (iOS and Android) which only report earthquakes that matter for the public and authorities, i.e., felt and damaging earthquakes identified thanks to citizen-generated information.

  6. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  7. Sensing the earthquake

    NASA Astrophysics Data System (ADS)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to implement these results in a choreographic model with the aim of converting earthquake sound into a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon, through the stimulation of hearing, eyesight and the perception of movement (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  8. Prospectively Evaluating the Collaboratory for the Study of Earthquake Predictability: An Evaluation of the UCERF2 and Updated Five-Year RELM Forecasts

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria

    2016-04-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying log-likelihoods (L-test) and magnitude distributions (M-test) consistent with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W-tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the number test (N-test) due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP
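
    The number test (N-test) referred to above has a compact form in the CSEP methodology (Zechar et al., 2010): the observed target-event count is scored against a Poisson distribution whose mean is the forecast's expected number. A minimal sketch, with illustrative numbers:

    ```python
    from scipy.stats import poisson

    def n_test(n_forecast, n_observed):
        """CSEP-style number test: quantile scores of the observed event count
        under a Poisson distribution with the forecast's expected number.
        delta1 = P(N >= n_obs) flags underprediction when small;
        delta2 = P(N <= n_obs) flags overprediction when small.
        Reject at 95% (two-sided) if either quantile is below 0.025."""
        delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)
        delta2 = poisson.cdf(n_observed, n_forecast)
        return delta1, delta2

    # Illustrative: forecast expects 12.4 target events, 23 observed
    d1, d2 = n_test(12.4, 23)
    print(f"delta1={d1:.4f}, delta2={d2:.4f}")  # small delta1 -> underprediction
    ```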

  9. Earthquake Fingerprints: Representing Earthquake Waveforms for Similarity-Based Detection

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2016-12-01

    New earthquake detection methods, such as Fingerprint and Similarity Thresholding (FAST), use fast approximate similarity search to identify similar waveforms in long-duration data without templates (Yoon et al. 2015). These methods have two key components: fingerprint extraction and an efficient search algorithm. Fingerprint extraction converts waveforms into fingerprints, compact signatures that represent short-duration waveforms for identification and search. Earthquakes are detected using an efficient indexing and search scheme, such as locality-sensitive hashing, that identifies similar waveforms in a fingerprint database. The quality of the search results, and thus the earthquake detection results, is strongly dependent on the fingerprinting scheme. Fingerprint extraction should map similar earthquake waveforms to similar waveform fingerprints to ensure a high detection rate, even under additive noise and small distortions. Additionally, fingerprints corresponding to noise intervals should have mutually dissimilar fingerprints to minimize false detections. In this work, we compare the performance of multiple fingerprint extraction approaches for the earthquake waveform similarity search problem. We apply existing audio fingerprinting (used in content-based audio identification systems) and time series indexing techniques and present modified versions that are specifically adapted for seismic data. We also explore data-driven fingerprinting approaches that can take advantage of labeled or unlabeled waveform data. For each fingerprinting approach we measure its ability to identify similar waveforms in a low signal-to-noise setting, and quantify the trade-off between true and false detection rates in the presence of persistent noise sources. We compare the performance using known event waveforms from eight independent stations in the Northern California Seismic Network.
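
    A toy sketch of the fingerprint-and-search idea under stated assumptions: here sign random projections (a standard LSH family for cosine similarity) stand in for FAST's Haar-wavelet/MinHash fingerprints, so this is a simplified analog rather than the FAST algorithm itself, and all names and data are mine:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fingerprint(window, planes):
        """Toy fingerprint: binarize a short waveform window by the signs of
        fixed random projections (sign-LSH). Similar windows yield
        fingerprints with small Hamming distance."""
        return (planes @ window > 0).astype(np.uint8)

    def hamming_similarity(fp1, fp2):
        # Fraction of matching bits; ~0.5 for unrelated windows
        return float(np.mean(fp1 == fp2))

    n = 256
    planes = rng.standard_normal((64, n))            # shared projection matrix
    event = rng.standard_normal(n)
    repeat = event + 0.3 * rng.standard_normal(n)    # similar waveform + noise
    noise = rng.standard_normal(n)                   # unrelated noise window

    fp_e, fp_r, fp_n = (fingerprint(w, planes) for w in (event, repeat, noise))
    print(hamming_similarity(fp_e, fp_r))  # high: repeated event collides
    print(hamming_similarity(fp_e, fp_n))  # ~0.5: dissimilar fingerprints
    ```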

  10. A comparison of classical and intelligent methods to detect potential thermal anomalies before the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4)

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-04-01

    In this paper, a number of classical and intelligent methods, including interquartile, autoregressive integrated moving average (ARIMA), artificial neural network (ANN) and support vector machine (SVM), have been proposed to quantify potential thermal anomalies around the time of the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4). The duration of the data set, which is comprised of Aqua-MODIS land surface temperature (LST) night-time snapshot images, is 62 days. In order to quantify variations of the LST data obtained from satellite images, the air temperature (AT) data derived from the meteorological station close to the earthquake epicenter have been taken into account. For the models examined here, the results indicate the following: (i) ARIMA models, which are the most widely used in the time series community for short-term forecasting, are quickly and easily implemented, and can efficiently act through linear solutions. (ii) A multilayer perceptron (MLP) feed-forward neural network can be a suitable non-parametric method to detect the anomalous changes of a non-linear time series such as variations of LST. (iii) Since SVMs are often used due to their many advantages for classification and regression tasks, it can be shown that, if the difference between the predicted value using the SVM method and the observed value exceeds the pre-defined threshold value, then the observed value could be regarded as an anomaly. (iv) ANN and SVM methods could be powerful tools in modeling complex phenomena such as earthquake precursor time series where we may not know what the underlying data-generating process is. There is good agreement among the results obtained from the different methods for quantifying potential anomalies in a given LST time series. This paper indicates that the detection of potential thermal anomalies derives credibility from the overall efficiency and complementarity of the four integrated methods.
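
    Of the four methods, the interquartile approach is simple enough to sketch directly: values outside quartile-based bounds are flagged as potential anomalies. The function and the series below are synthetic and illustrative only, not the paper's data or thresholds:

    ```python
    import numpy as np

    def iqr_anomalies(series, k=1.5):
        """Interquartile anomaly detection: flag values outside
        [Q1 - k*IQR, Q3 + k*IQR] as potential anomalies."""
        q1, q3 = np.percentile(series, [25, 75])
        iqr = q3 - q1
        lo, hi = q1 - k * iqr, q3 + k * iqr
        return (series < lo) | (series > hi)

    # Illustrative 62-day LST-like series with an injected pre-event excursion
    rng = np.random.default_rng(3)
    lst = 22 + 2 * rng.standard_normal(62)
    lst[50] += 9.0  # hypothetical thermal anomaly a few days before the event
    print(np.flatnonzero(iqr_anomalies(lst)))  # typically flags index 50
    ```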

  11. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a

  12. Maximum magnitude estimations of induced earthquakes at Paradox Valley, Colorado, from cumulative injection volume and geometry of seismicity clusters

    NASA Astrophysics Data System (ADS)

    Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.

    2015-01-01

    The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The maximum magnitude of induced earthquakes is difficult to constrain because, unlike that of naturally occurring earthquakes, it changes over time and is affected by injection parameters. We investigate temporal variations in the maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of the largest observed earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the locations of induced events delineate volumes of sufficiently high pore pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection-volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly Mw 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods
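
    The abstract does not give the exact volume-magnitude relation used; one widely cited bound of this type is McGarr's (2014) relation, in which the maximum seismic moment equals the shear modulus times the cumulative injected volume. The sketch below implements that published bound as a stand-in, not necessarily the PVU study's formulation; the injected volume is illustrative:

    ```python
    import math

    def mcgarr_max_magnitude(delta_v_m3, shear_modulus_pa=3.0e10):
        """Upper-bound moment magnitude from cumulative injected volume,
        per McGarr (2014): M0_max = G * dV, then Mw = (log10(M0) - 9.1) / 1.5.
        This is one published volume-based bound, offered here only as an
        example of the class of relation the study describes."""
        m0_max = shear_modulus_pa * delta_v_m3   # maximum seismic moment, N m
        return (math.log10(m0_max) - 9.1) / 1.5

    # Illustrative: ~8 million cubic meters of brine injected
    print(f"Mw_max ~ {mcgarr_max_magnitude(8.0e6):.1f}")
    ```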

  13. Media exposure related to the 2008 Sichuan Earthquake predicted probable PTSD among Chinese adolescents in Kunming, China: A longitudinal study.

    PubMed

    Yeung, Nelson C Y; Lau, Joseph T F; Yu, Nancy Xiaonan; Zhang, Jianping; Xu, Zhening; Choi, Kai Chow; Zhang, Qi; Mak, Winnie W S; Lui, Wacy W S

    2018-03-01

    This study examined the prevalence and the psychosocial predictors of probable PTSD among Chinese adolescents in Kunming (approximately 444 miles from the epicenter), China, who were indirectly exposed to the Sichuan Earthquake in 2008. Using a longitudinal study design, primary and secondary school students (N = 3577) in Kunming completed questionnaires at baseline (June 2008) and 6 months afterward (December 2008) in classroom settings. Participants' exposure to earthquake-related imagery and content, perceptions and emotional reactions related to the earthquake, and posttraumatic stress symptoms were measured. Univariate and forward stepwise multivariable logistic regression models were fit to identify significant predictors of probable PTSD at the 6-month follow-up. Prevalences of probable PTSD (with a Children's Revised Impact of Event Scale score ≥30) among the participants at baseline and 6-month follow-up were 16.9% and 11.1%, respectively. In the multivariable analysis, those who were frequently exposed to distressing imagery, had experienced at least two types of negative life events, perceived that teachers were distressed due to the earthquake, believed that the earthquake resulted from damage to the ecosystem, and felt apprehensive and emotionally disturbed due to the earthquake reported a higher risk of probable PTSD at the 6-month follow-up (all ps < .05). Exposure to distressing media images, emotional responses, and disaster-related perceptions at baseline were found to be predictive of probable PTSD several months after indirect exposure to the event. Parents, teachers, and the mass media should be aware of the negative impacts of disaster-related media exposure on adolescents' psychological health.

  14. Probabilistic Tsunami Hazard Assessment along Nankai Trough (1) An assessment based on the information of the forthcoming earthquake that Earthquake Research Committee(2013) evaluated

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.

    2015-12-01

    The Earthquake Research Committee (ERC)/HERP, Government of Japan (2013) revised their long-term evaluation of the forthcoming large earthquake along the Nankai Trough: the next earthquake is estimated at M8 to 9 class, and the probability (P30) that the next earthquake will occur within the next 30 years (from Jan. 1, 2013) is 60% to 70%. In this study, we assess tsunami hazards (maximum coastal tsunami heights) in the near future from the next earthquake along the Nankai Trough using a probabilistic approach, on the basis of ERC (2013)'s report. The probabilistic tsunami hazard assessment we applied is as follows. (1) Characterized earthquake fault models (CEFMs) are constructed on each of the 15 hypothetical source areas (HSAs) that ERC (2013) defined; the characterization rule follows Toyama et al. (2015, JpGU). As a result, we obtained a total of 1441 CEFMs. (2) We calculate tsunamis due to the CEFMs by solving nonlinear, finite-amplitude, long-wave equations with advection and bottom-friction terms by a finite-difference method; run-up computation on land is included. (3) A time-predictable model puts the recurrence interval of the present seismic cycle at T = 88.2 years (ERC, 2013). We fix P30 = 67% by applying a renewal process based on the BPT distribution with mean T and aperiodicity alpha = 0.24. (4) We divide the probability P30 into P30(i) for the i-th subgroup, consisting of the earthquakes occurring in each of the 15 HSAs, by following a probability re-distribution concept (ERC, 2014). Each earthquake (CEFM) in the i-th subgroup is then assigned a probability P30(i)/N, where N is the number of CEFMs in the subgroup. Note that this re-distribution of probability is only tentative, because present seismology cannot provide knowledge deep enough to constrain it; an epistemic logic-tree approach may be required in the future. (5) We synthesize a number of tsunami hazard curves at every evaluation point on the coasts by integrating the information about 30-year occurrence
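
    Step (3) above rests on a BPT renewal calculation. As a minimal sketch of how such a 30-year conditional probability is computed: the BPT model is an inverse-Gaussian distribution with mean T and aperiodicity alpha, and P30 follows from its CDF given the time elapsed since the last event. The elapsed time used below is an assumed illustration value, not a figure from the abstract.

```python
from scipy.stats import invgauss

def p30_bpt(T, alpha, t_elapsed, window=30.0):
    """30-year conditional probability from a BPT renewal model.

    BPT(T, alpha) is an inverse Gaussian with mean T and shape T / alpha**2;
    in scipy's parameterization that is invgauss(mu=alpha**2, scale=T/alpha**2).
    Returns P(next event within `window` years | quiet for t_elapsed years).
    """
    dist = invgauss(alpha**2, scale=T / alpha**2)
    survival = dist.sf(t_elapsed)  # P(X > t_elapsed)
    return (dist.cdf(t_elapsed + window) - dist.cdf(t_elapsed)) / survival

# T = 88.2 yr and alpha = 0.24 as in the abstract; the elapsed time is a
# hypothetical value for illustration.
print(f"P30 ~ {p30_bpt(T=88.2, alpha=0.24, t_elapsed=66.0):.2f}")
```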

  15. A seismoacoustic study of the 2011 January 3 Circleville earthquake

    NASA Astrophysics Data System (ADS)

    Arrowsmith, Stephen J.; Burlacu, Relu; Pankow, Kristine; Stump, Brian; Stead, Richard; Whitaker, Rod; Hayward, Chris

    2012-05-01

    We report on a unique set of infrasound observations from a single earthquake, the 2011 January 3 Circleville earthquake (Mw 4.7, depth of 8 km), which was recorded by nine infrasound arrays in Utah. Based on an analysis of the signal arrival times and backazimuths at each array, we find that the infrasound arrivals at six arrays can be associated with the same source and that the source location is consistent with the earthquake epicentre. Results of propagation modelling indicate that the lack of associated arrivals at the remaining three arrays is due to path effects. Based on these findings we form the working hypothesis that the infrasound is generated by body waves causing the epicentral region to pump the atmosphere, akin to a baffled piston. To test this hypothesis, we have developed a numerical seismoacoustic model to simulate the generation of epicentral infrasound from earthquakes. We model the generation of seismic waves using a 3-D finite difference algorithm that accounts for the earthquake moment tensor, source time function, depth and local geology. The resultant acceleration-time histories on a 2-D grid at the surface then provide the initial conditions for modelling the near-field infrasonic pressure wave using the Rayleigh integral. Finally, we propagate the near-field source pressure through the Ground-to-Space atmospheric model using a time-domain Parabolic Equation technique. By comparing the resultant predictions with the six epicentral infrasound observations from the 2011 January 3 Circleville earthquake, we show that the observations agree well with our predictions. The predicted and observed amplitudes are within a factor of 2 (on average, the synthetic amplitudes are a factor of 1.6 larger than the observed amplitudes). In addition, arrivals are predicted at all six arrays where signals are observed, and importantly not predicted at the remaining three arrays. Durations are typically predicted to within a factor of 2, and in some cases
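
    The Rayleigh-integral step of the modeling chain lends itself to a compact numerical sketch: treat the epicentral region as a baffled piston, discretize it into surface cells, and sum retarded-time acceleration contributions. All parameter values below (patch size, acceleration amplitude, observation point) are assumptions for illustration, not the study's inputs.

```python
import numpy as np

def rayleigh_pressure(accel_fn, t, obs_xyz, patch_xy, cell_area,
                      rho0=1.2, c=340.0):
    """Near-field pressure above a vibrating surface via the Rayleigh integral:
    p(x, t) = (rho0 / 2 pi) * sum_j a_n(t - |x - r_j| / c) * dS / |x - r_j|.

    accel_fn : normal surface acceleration a_n(t) (uniform piston assumed)
    patch_xy : (N, 2) array of surface-cell centers at z = 0
    """
    dx = obs_xyz[0] - patch_xy[:, 0]
    dy = obs_xyz[1] - patch_xy[:, 1]
    r = np.sqrt(dx**2 + dy**2 + obs_xyz[2]**2)
    retarded = accel_fn(t - r / c)  # acceleration at the retarded time
    return rho0 / (2 * np.pi) * np.sum(retarded * cell_area / r)

# Illustration: a 10 km x 10 km epicentral patch pumping the atmosphere with
# a 1 Hz, 0.01 m/s^2 sinusoidal acceleration (hypothetical values).
xs = np.linspace(-5e3, 5e3, 50)
gx, gy = np.meshgrid(xs, xs)
patch = np.column_stack([gx.ravel(), gy.ravel()])
cell = (xs[1] - xs[0])**2
a_n = lambda tau: 0.01 * np.sin(2 * np.pi * 1.0 * tau)
p = rayleigh_pressure(a_n, t=30.0, obs_xyz=np.array([0.0, 0.0, 2e3]),
                      patch_xy=patch, cell_area=cell)
print(f"pressure at 2 km altitude: {p:.4f} Pa")
```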

  16. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    NASA Astrophysics Data System (ADS)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    getting a hit is N%" or "the probability of an earthquake is N%" involves specifying the assumptions made. Different plausible assumptions yield a wide range of estimates. In both seismology and sports, how to better predict future performance remains an important question.

  17. Long-Term Impact of Earthquakes on Sleep Quality

    PubMed Central

    Tempesta, Daniela; Curcio, Giuseppe; De Gennaro, Luigi; Ferrara, Michele

    2013-01-01

    Purpose We investigated the impact of the 6.3 magnitude 2009 L'Aquila (Italy) earthquake on standardized self-report measures of sleep quality (Pittsburgh Sleep Quality Index, PSQI) and frequency of disruptive nocturnal behaviours (Pittsburgh Sleep Quality Index-Addendum, PSQI-A) two years after the natural disaster. Methods Self-reported sleep quality was assessed in 665 L'Aquila citizens exposed to the earthquake and compared with a different sample (n = 754) of L'Aquila citizens tested 24 months before the earthquake. In addition, sleep quality and disruptive nocturnal behaviours (DNB) of people exposed to the traumatic experience were compared with those of people who in the same period lived in areas between 40 and 115 km from the earthquake epicenter (n = 3574). Results The comparison between L'Aquila citizens before and after the earthquake showed a significant deterioration of sleep quality after exposure to the trauma. In addition, two years after the earthquake L'Aquila citizens showed the highest PSQI scores and the highest incidence of DNB compared to subjects living in the surroundings. Interestingly, above-threshold PSQI scores were found in participants living within 70 km of the epicenter, while trauma-related DNBs were found in people living within 40 km. Multiple regressions confirmed that proximity to the epicenter is predictive of sleep disturbances and DNB, also suggesting a possible mediating effect of depression on PSQI scores. Conclusions The psychological effects of an earthquake may be much more pervasive and long-lasting than its building destruction, persisting for years and involving a much larger population. Reduced sleep quality and an increased frequency of DNB after two years may be a risk factor for the development of depression and posttraumatic stress disorder. PMID:23418478

  18. Predictability of catastrophic events: Material rupture, earthquakes, turbulence, financial crashes, and human birth

    PubMed Central

    Sornette, Didier

    2002-01-01

    We propose that catastrophic events are “outliers” with statistically different properties than the rest of the population and result from mechanisms involving amplifying critical cascades. We describe a unifying approach for modeling and predicting these catastrophic events or “ruptures,” that is, sudden transitions from a quiescent state to a crisis. Such ruptures involve interactions between structures at many different scales. Applications and the potential for prediction are discussed in relation to the rupture of composite materials, great earthquakes, turbulence, and abrupt changes of weather regimes, financial crashes, and human parturition (birth). Future improvements will involve combining ideas and tools from statistical physics and artificial/computational intelligence, to identify and classify possible universal structures that occur at different scales, and to develop application-specific methodologies to use these structures for prediction of the “crises” known to arise in each application of interest. We live on a planet and in a society with intermittent dynamics rather than a state of equilibrium, and so there is a growing and urgent need to sensitize students and citizens to the importance and impacts of ruptures in their multiple forms. PMID:11875205

  19. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth's surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  20. Simulating Earthquakes for Science and Society: New Earthquake Visualizations Ideal for Use in Science Communication

    NASA Astrophysics Data System (ADS)

    de Groot, R. M.; Benthien, M. L.

    2006-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. These types of visualizations are becoming pervasive in the teaching and learning of concepts related to earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7 -- the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), visualization should theoretically make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  1. Interevent times in a new alarm-based earthquake forecasting model

    NASA Astrophysics Data System (ADS)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    occurrence region of the 2011 Mw 9.0 Tohoku earthquake, whereas the RI method did not. Cases where a period of quiescent seismicity occurred before the target event often lead to low MR scores, meaning that the target event was not predicted and indicating that our model could be further improved by taking into account quiescent periods in the alarm strategy.

  2. A Trial for Earthquake Prediction by Precise Monitoring of Deep Ground Water Temperature

    NASA Astrophysics Data System (ADS)

    Nasuhara, Y.; Otsuki, K.; Yamauchi, T.

    2006-12-01

    A large earthquake is estimated to occur off Miyagi Prefecture, northeast Japan, within 20 years with a probability of about 80%. In order to predict this earthquake, we have observed groundwater temperature in a borehole at Sendai city, 100 km west of the asperity. This borehole penetrates the fault zone of the NE-trending active reverse fault, the Nagamachi-Rifu fault zone, at 820 m depth. Our concept of the groundwater observation is that fault zones are natural amplifiers of crustal strain; hence, at 820 m depth we set a very precise quartz temperature sensor with a resolution of 0.0002 deg. C. We confirmed that our observation system works normally by both pumping tests and the systematic temperature changes at different depths. Since the observation started on June 20, 2004, we have found mysterious intermittent temperature fluctuations of two types: one with a period of 5-10 days and an amplitude of ca. 0.1 deg. C, and the other with a period of 11-21 days and an amplitude of ca. 0.2 deg. C. Based on an examination using the product of the Grashof and Prandtl numbers, natural convection of water can occur in the borehole. However, since these temperature fluctuations are observed only at depths around 820 m, it is likely that they represent hydrological behavior specific to the Nagamachi-Rifu fault zone. It is noteworthy that small temperature changes correlatable with the earth tide are superposed on the long-term, large-amplitude fluctuations. The amplitude on the days of the full moon and new moon is ca. 0.001 deg. C. The bottoms of these temperature fluctuations always lag about 6 hours behind the peaks of the earth tide. This is interpreted as water in the borehole being drawn into the fault zone, on which tensional normal stress acts on the days of the full moon and new moon. The amplitude of the crustal strain due to the earth tide was measured at ca. 2×10^-8 strain near our observation site. High-frequency temperature noise of

  3. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as subcontractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters, and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporating strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macroeconomic loss quantifiers) in urban areas. The basic Shake Mapping is similar to the Level 0 and Level 1 analysis; however, options are available for more sophisticated treatment of site response through externally entered data and improvement of the shake map through incorporation

  4. Critical behavior in earthquake energy dissipation

    NASA Astrophysics Data System (ADS)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes bounded by latitudes 29° S and 35.5° S, and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, the time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for the probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place due to the largest bursts of activity, such as major earthquakes, as opposed to smaller activations, which contribute less significantly though they have greater relative occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.

  5. The Application of Speaker Recognition Techniques in the Detection of Tsunamigenic Earthquakes

    NASA Astrophysics Data System (ADS)

    Gorbatov, A.; O'Connell, J.; Paliwal, K.

    2015-12-01

    Tsunami warning procedures adopted by national tsunami warning centres largely rely on the classical approach of earthquake location, magnitude determination, and the consequent modelling of tsunami waves. Although this approach is based on known physical theories of earthquake and tsunami generation processes, this reliance may also be its main shortcoming, owing to the minimum seismic data requirements that must be satisfied to estimate those physical parameters. At least four seismic stations are necessary to locate the earthquake, and a minimum of approximately 10 minutes of seismic waveform observation is needed to reliably estimate the magnitude of a large earthquake similar to the 2004 Indian Ocean Tsunami Earthquake of M9.2. Consequently, the total time to tsunami warning could be more than half an hour. In an attempt to reduce the time to tsunami alert, a new approach is proposed based on the classification of tsunamigenic and non-tsunamigenic earthquakes using speaker recognition techniques. A Tsunamigenic Dataset (TGDS) was compiled to promote the development of machine learning techniques for application to seismic trace analysis and, in particular, tsunamigenic event detection, and to compare them to existing seismological methods. The TGDS contains 227 offshore events (87 tsunamigenic and 140 non-tsunamigenic earthquakes with M≥6) from Jan 2000 to Dec 2011, inclusive. A Support Vector Machine classifier using a radial-basis function kernel was applied to spectral features derived from 400 sec frames of 3-comp. 1-Hz broadband seismometer data. Ten-fold cross-validation was used during training to choose classifier parameters. Voting was applied to the classifier predictions provided from each station to form an overall prediction for an event. The F1 score (harmonic mean of precision and recall) was chosen to rate each classifier as it provides a compromise between type-I and type-II errors, and due to the imbalance between the representative number of events in the tsunamigenic and non
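
    A minimal sketch of the classification set-up described (RBF-kernel SVM, ten-fold cross-validation, F1 scoring), assuming scikit-learn; the random arrays stand in for the TGDS spectral features, and the actual feature extraction from the 400 s seismogram frames is not shown.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: spectral feature vectors from 400 s, 3-component frames; y: 1 = tsunamigenic.
# Placeholder random data stands in for the TGDS features (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(227, 40))
y = rng.integers(0, 2, size=227)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(
    pipe,
    {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]},
    scoring="f1",  # harmonic mean of precision and recall
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
# A per-event prediction would then be a majority vote over station-level outputs.
```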

  6. Application of a long-range forecasting model to earthquakes in the Japan mainland testing region

    NASA Astrophysics Data System (ADS)

    Rhoades, David A.

    2011-03-01

    The Every Earthquake a Precursor According to Scale (EEPAS) model is a long-range forecasting method which has been previously applied to a number of regions, including Japan. The Collaboratory for the Study of Earthquake Predictability (CSEP) forecasting experiment in Japan provides an opportunity to test the model at lower magnitudes than previously and to compare it with other competing models. The model sums contributions to the rate density from past earthquakes based on predictive scaling relations derived from the precursory scale increase phenomenon. Two features of the earthquake catalogue in the Japan mainland region create difficulties in applying the model, namely magnitude-dependence in the proportion of aftershocks and in the Gutenberg-Richter b-value. To accommodate these features, the model was fitted separately to earthquakes in three different target magnitude classes over the period 2000-2009. There are some substantial unexplained differences in parameters between classes, but the time and magnitude distributions of the individual earthquake contributions are such that the model is suitable for three-month testing at M ≥ 4 and for one-year testing at M ≥ 5. In retrospective analyses, the mean probability gain of the EEPAS model over a spatially smoothed seismicity model increases with magnitude. The same trend is expected in prospective testing. The Proximity to Past Earthquakes (PPE) model has been submitted to the same testing classes as the EEPAS model. Its role is that of a spatially-smoothed reference model, against which the performance of time-varying models can be compared.

  7. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation
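
    The reconstruction step described for topic (i) can be sketched compactly: given a Fourier amplitude spectrum and simulated phase differences, cumulative summation recovers the phase angles and an inverse FFT yields the time history. The spectral shape and phase-difference sampling below are placeholders, not the dissertation's fitted models.

```python
import numpy as np

def synthesize(amplitudes, phase_diffs):
    """Build an acceleration time history by inverse FFT.

    amplitudes  : one-sided Fourier amplitude spectrum |A_k|, k = 0..N/2
    phase_diffs : simulated differences between adjacent phase angles;
                  cumulative summation recovers the phase spectrum.
    """
    phases = np.cumsum(phase_diffs)  # phase angles from phase differences
    spectrum = amplitudes * np.exp(1j * phases)
    return np.fft.irfft(spectrum)

# Placeholder spectra for illustration: a band-limited amplitude shape and
# random phase differences (real models condition these on the amplitude).
n = 1024
freqs = np.fft.rfftfreq(n, d=0.01)
amp = np.exp(-((freqs - 2.0) / 2.0) ** 2)  # hypothetical spectral shape
dphi = np.random.default_rng(1).uniform(-np.pi, np.pi, size=freqs.size)
acc = synthesize(amp, dphi)
print(acc.shape)  # (1024,) samples at 100 Hz
```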

  8. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made of what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step toward establishing a methodology for forecasting large earthquakes.

  9. Energy Partition and Variability of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kanamori, H.

    2003-12-01

    During an earthquake the potential energy (strain energy + gravitational energy + rotational energy) is released, and the released potential energy (ΔW) is partitioned into radiated energy (ER), fracture energy (EG), and thermal energy (EH). How ΔW is partitioned into these energies controls the behavior of an earthquake. The merit of the slip-weakening concept is that only ER and EG control the dynamics, and EH can be treated separately to discuss the thermal characteristics of an earthquake. In general, if EG/ER is small, the event is "brittle"; if EG/ER is large, the event is "quasi-static" or, in more common terms, a "slow earthquake" or "creep". If EH is very large, the event may well be called a thermal runaway rather than an earthquake. The difference in energy partition has important implications for rupture initiation, evolution, and the excitation of long-period ground motions from very large earthquakes. We review the current state of knowledge on this problem in light of seismological observations and the basic physics of fracture. With seismological methods, we can measure only ER and the lower bound of ΔW, ΔW0; estimation of the other energies involves many assumptions. ER: Although ER can be directly measured from the radiated waves, its determination is difficult because a large fraction of the energy radiated at the source is attenuated during propagation. With the commonly used teleseismic and regional methods, only for events with MW > 7 and MW > 4, respectively, can we directly measure more than 10% of the total radiated energy. The rest must be estimated after correction for attenuation. Thus, large uncertainties are involved, especially for small earthquakes. ΔW0: To estimate ΔW0, estimation of the source dimension is required. Again, only for large earthquakes can the source dimension be estimated reliably. With the source dimension, the static stress drop, ΔσS, and ΔW0 can be estimated. EG: Seismologically, EG is the energy

  10. Perception of earthquake risk in Taiwan: effects of gender and past earthquake experience.

    PubMed

    Kung, Yi-Wen; Chen, Sue-Huei

    2012-09-01

    This study explored how individuals in Taiwan perceive the risk of earthquakes and the relationship of past earthquake experience and gender to risk perception. Participants (n = 1,405), including earthquake survivors and those in the general population without prior direct earthquake exposure, were selected and interviewed through a computer-assisted telephone interviewing procedure using a random sampling and stratification method covering all 24 regions of Taiwan. A factor analysis of the interview data yielded a two-factor structure of risk perception in regard to earthquakes. The first factor, "personal impact," encompassed perception of threat and fear related to earthquakes. The second factor, "controllability," encompassed a sense of efficacy of self-protection in regard to earthquakes. The findings indicated that prior earthquake survivors and females reported higher scores on the personal impact factor than males and those with no prior direct earthquake experience, although there were no group differences on the controllability factor. The findings support that risk perception has multiple components, and suggest that past experience (survivor status) and gender (female) affect the perception of risk. Exploration of the potential contributions of other demographic factors such as age, education, and marital status to personal impact, especially for females and survivors, is discussed. Future research on, and intervention programs addressing, risk perception are suggested accordingly.

  11. a New Quantitative Method for the Rapid Evaluation of Buildings against Earthquakes

    NASA Astrophysics Data System (ADS)

    Mahmoodzadeh, Amir; Mazaheri, Mohammad Mehdi

    2008-07-01

    At the present time there exist numerous weak buildings which are not able to withstand earthquakes. At the same time, both private and public developers are trying to use scientific methods to prioritize and allocate budgets for reinforcing the above-mentioned structures, given limited financial resources and time. In recent years the procedure of seismic assessment before rehabilitation of vulnerable buildings has been implemented in many countries. It now seems logical to reinforce the existing procedures with the mass of available data about the effects of earthquakes on buildings. The main idea is derived from FMEA (Failure Mode and Effect Analysis) in quality management, where the main procedure is to recognize the failure, its causes, and the priority of each cause and failure. Specifying the causes and effects which lead to a certain shortcoming in structural behavior during earthquakes, an inventory is developed and each building is rated through a yes-or-no procedure. In this way, the rating of the structure is based on standard forms which, along with relative weights, are developed in this study. The resulting criteria from the rapid assessment indicate whether the structure is to be demolished, has high, medium or low vulnerability, or is invulnerable.
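
    A toy sketch of a weighted yes-or-no rating of this kind is given below; the checklist items, weights, and cutoffs are invented placeholders, not the standard forms developed in the study.

```python
# Hypothetical rapid-screening checklist: (deficiency observed?, relative weight)
checklist = {
    "soft_story": (True, 0.30),
    "plan_irregularity": (False, 0.15),
    "poor_material_condition": (True, 0.25),
    "short_columns": (False, 0.10),
    "pounding_risk": (True, 0.20),
}

# Sum the weights of the deficiencies answered "yes".
score = sum(w for present, w in checklist.values() if present)

# Placeholder cutoffs mapping the score to the paper's outcome classes.
if score > 0.8:
    verdict = "demolish"
elif score > 0.5:
    verdict = "high vulnerability"
elif score > 0.25:
    verdict = "medium vulnerability"
elif score > 0.1:
    verdict = "low vulnerability"
else:
    verdict = "invulnerable"
print(f"score = {score:.2f} -> {verdict}")
```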

  12. A prototype of the procedure of strong ground motion prediction for intraslab earthquake based on characterized source model

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Sekiguchi, H.

    2011-12-01

    We propose a prototype of the procedure to construct source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on the empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area; Miyake et al., 2003) and the asperity (large-slip area). Iwata and Asano (2011) obtained empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependence of S and Sa on Mo: S (km^2) = 6.57 × 10^-11 × Mo^(2/3) (1) and Sa (km^2) = 1.04 × 10^-11 × Mo^(2/3) (2), with Mo in N m. Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on the empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction: [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume a square rupture area and square asperities. [4] Assume the source mechanism to be the same as that of small events in the source region. [5] Prepare plural scenarios covering a variety of asperity numbers and rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
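
    Relations (1) and (2) translate directly into code. A small helper implementing steps [1] and [2], using the coefficients quoted above; the example moment, converted from an assumed Mw 7.0, is for illustration only.

```python
import math

def intraslab_areas(m0_nm):
    """Rupture area S and total asperity area Sa (km^2) from seismic moment
    M0 (N m), using the Iwata and Asano (2011) scaling quoted above:
    S = 6.57e-11 * M0**(2/3), Sa = 1.04e-11 * M0**(2/3)."""
    s = 6.57e-11 * m0_nm ** (2.0 / 3.0)
    sa = 1.04e-11 * m0_nm ** (2.0 / 3.0)
    return s, sa

mw = 7.0                              # assumed example magnitude
m0 = 10 ** (1.5 * mw + 9.1)           # Mw -> M0 (N m)
s, sa = intraslab_areas(m0)
print(f"M0 = {m0:.2e} N m, S = {s:.0f} km^2, Sa = {sa:.0f} km^2")
```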

  13. Continuous borehole strain and pore pressure in the near field of the 28 September 2004 M 6.0 parkfield, California, earthquake: Implications for nucleation, fault response, earthquake prediction and tremor

    USGS Publications Warehouse

    Johnston, M.J.S.; Borcherdt, R.D.; Linde, A.T.; Gladwin, M.T.

    2006-01-01

    Near-field observations of high-precision borehole strain and pore pressure show no indication of coherent accelerating strain or pore pressure during the weeks to seconds before the 28 September 2004 M 6.0 Parkfield earthquake. Minor changes in strain rate did occur at a few sites during the last 24 hr before the earthquake, but these changes are neither significant nor have the form expected for strain during slip coalescence initiating fault failure. Seconds before the event, strain is stable at the 10^-11 level. Final prerupture nucleation slip in the hypocentral region is constrained to have a moment less than 2 × 10^12 N m (M 2.2) and a source size less than 30 m. Ground displacement data indicate similar constraints. Localized rupture nucleation and runaway preclude useful prediction of damaging earthquakes. Coseismic dynamic strains of about 10 microstrain peak-to-peak were superimposed on volumetric strain offsets of about 0.5 microstrain to the northwest of the epicenter and about 0.2 microstrain to the southeast of the epicenter, consistent with right-lateral slip. Observed strain and Global Positioning System (GPS) offsets can be simply fit with 20 cm of slip between 4 and 10 km on a 20-km segment of the fault north of Gold Hill (M0 = 7 × 10^17 N m). Variable slip inversion models using GPS data and seismic data indicate similar moments. Observed postseismic strain is 60% to 300% of the coseismic strain, indicating incomplete release of accumulated strain. No measurable change in fault zone compliance preceding or following the earthquake is indicated by stable earth tidal response. No indications of strain change accompany nonvolcanic tremor events reported prior to and following the earthquake.

  14. Stress drop variation of M > 4 earthquakes on the Blanco oceanic transform fault using a phase coherence method

    NASA Astrophysics Data System (ADS)

    Williams, J. R.; Hawthorne, J.; Rost, S.; Wright, T. J.

    2017-12-01

    Earthquakes on oceanic transform faults often show unusual behaviour. They tend to occur in swarms, have large numbers of foreshocks, and have high stress drops. We estimate stress drops for approximately 60 M > 4 earthquakes along the Blanco oceanic transform fault, a right-lateral fault separating the Juan de Fuca and Pacific plates offshore of Oregon. We find stress drops with a median of 4.4 ± 19.3 MPa and examine how they vary with earthquake moment. We calculate stress drops using a recently developed method based on inter-station phase coherence. We compare seismic records of co-located earthquakes at a range of stations. At each station, we apply an empirical Green's function (eGf) approach to remove phase path effects and isolate the relative apparent source time functions. The apparent source time functions at each earthquake should vary among stations at periods shorter than a P wave's travel time across the earthquake rupture area. Therefore, we compute the rupture length of the larger earthquake by identifying the frequency at which the relative apparent source time functions start to vary among stations, leading to low inter-station phase coherence. We determine a stress drop from the rupture length and moment of the larger earthquake. Our initial stress drop estimates increase with increasing moment, suggesting that earthquakes on the Blanco fault are not self-similar. However, these stress drops may be biased by several factors, including depth phases, trace alignment, and source co-location. We find that the inclusion of depth phases (such as pP) in the analysis time window has a negligible effect on the phase coherence of our relative apparent source time functions. We find that trace alignment must be accurate to within 0.05 s to allow us to identify variations in the apparent source time functions at periods relevant for M > 4 earthquakes. We check that the alignments are accurate enough by comparing P wave arrival times across groups of
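
    The last step, a stress drop from rupture length and moment, is commonly done with a circular-crack model. A minimal sketch assuming Eshelby's relation delta_sigma = 7 M0 / (16 r^3); the magnitude and radii below are illustration values, not the paper's measurements.

```python
def circular_crack_stress_drop(m0_nm, radius_m):
    """Static stress drop (Pa) for a circular crack (Eshelby, 1957):
    delta_sigma = 7 * M0 / (16 * r**3)."""
    return 7.0 * m0_nm / (16.0 * radius_m ** 3)

# Illustration: an M 4.5 earthquake with rupture radii inferred from the
# coherence roll-off frequency (hypothetical values).
m0 = 10 ** (1.5 * 4.5 + 9.1)  # Mw -> M0 (N m)
for r in (400.0, 600.0, 900.0):
    dsig = circular_crack_stress_drop(m0, r)
    print(f"r = {r:.0f} m -> stress drop ~ {dsig / 1e6:.1f} MPa")
```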

  15. Seismogeodesy for rapid earthquake and tsunami characterization

    NASA Astrophysics Data System (ADS)

    Bock, Y.

    2016-12-01

    Rapid estimation of earthquake magnitude and fault mechanism is critical for earthquake and tsunami warning systems. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of earthquake onset time. Direct measurements of displacements by GPS networks at subduction zones allow for rapid magnitude and slip estimates in the near-source region that are not affected by the instrumental limitations and magnitude saturation experienced by local seismic networks. However, GPS displacements by themselves are too noisy for strict earthquake early warning (P-wave detection). Optimally combining high-rate GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude scaling relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), finite-source CMT solutions and fault slip models as input for tsunami warnings and models. For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of

  16. A prospective earthquake forecast experiment for Japan

    NASA Astrophysics Data System (ADS)

    Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi

    2013-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the first earthquake forecast testing experiment for the Japan area. We use the unified JMA catalogue compiled by the Japan Meteorological Agency as the authoritative catalogue. The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the CSEP official suite of tests of forecast performance. In this presentation, we show the results of the experiment for the 3-month testing class over 5 rounds. The HIST-ETAS7pa, MARFS, and RI10K models showed the best scores, based on total log-likelihood, in the All Japan, Mainland, and Kanto regions, respectively. It is also clarified that time dependence of model parameters is not an effective factor in passing the CSEP consistency tests for the 3-month testing class in any region. In particular, the spatial distribution in the All Japan region was too difficult to pass the consistency test because of multiple events in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations in all rounds, which resulted in rejections in the consistency tests because of overestimation. In the Kanto region, pass ratios of the consistency tests exceeded 80% for each model, which was associated with well-balanced forecasting of event
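
    The total log-likelihood used to rank models is straightforward to compute for a gridded rate forecast with independent Poisson bins. A minimal sketch; the forecast rates and observed counts are placeholder arrays, not CSEP-Japan data.

```python
import numpy as np
from scipy.special import gammaln

def poisson_joint_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed bin counts under a gridded Poisson
    forecast: sum_i [ -lambda_i + n_i * ln(lambda_i) - ln(n_i!) ]."""
    lam = np.asarray(forecast_rates, dtype=float)
    n = np.asarray(observed_counts, dtype=float)
    return np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0))

# Placeholder example: expected rates and observed counts in four bins.
lam = np.array([0.1, 0.5, 0.02, 1.3])
n = np.array([0, 1, 0, 2])
print(f"log-likelihood = {poisson_joint_log_likelihood(lam, n):.3f}")
```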

  17. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
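
    A minimal sketch of the counting step described above: treat the number of small events since the last large event as the elapsed "time" and convert it into a probability with a Weibull law. The shape and scale values are placeholders, not the fitted California-Nevada parameters.

```python
import math

def large_event_probability(n_small, scale, shape):
    """Weibull law applied to a small-event count: treating the number of
    small events since the last large event as the elapsed 'time' n, the
    probability that the next large event arrives by count n is
    F(n) = 1 - exp(-(n / scale)**shape)."""
    return 1.0 - math.exp(-((n_small / scale) ** shape))

# Hypothetical parameters: scale ~ mean number of small events between large
# ones; shape > 1 implies quasi-periodic recurrence.
for n in (50, 150, 300):
    p = large_event_probability(n, scale=200.0, shape=1.5)
    print(f"{n} small events since last large one -> P = {p:.2f}")
```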

  18. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, caused by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  19. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. On the basis of the estimated cost of damage, one is most suitable for domestic events; the other, on the basis of estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lead to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
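
    The dual thresholds quoted above map directly onto alert levels. A compact sketch of that lookup follows, with the fatality and loss thresholds exactly as stated; taking the more severe of the two alerts as the combined level is an added assumption, not a rule from the abstract.

```python
def alert_level(value, thresholds):
    """Return green/yellow/orange/red given ascending (threshold, level) pairs."""
    level = "green"
    for threshold, name in thresholds:
        if value >= threshold:
            level = name
    return level

FATALITY_THRESHOLDS = [(1, "yellow"), (100, "orange"), (1000, "red")]
LOSS_THRESHOLDS = [(1e6, "yellow"), (1e8, "orange"), (1e9, "red")]  # USD

est_fatalities, est_losses_usd = 40, 2.5e8  # hypothetical PAGER estimates
fatal_alert = alert_level(est_fatalities, FATALITY_THRESHOLDS)
loss_alert = alert_level(est_losses_usd, LOSS_THRESHOLDS)
order = ["green", "yellow", "orange", "red"]
print(max(fatal_alert, loss_alert, key=order.index))  # more severe of the two
```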

  20. Ground Motions Due to Earthquakes on Creeping Faults

    NASA Astrophysics Data System (ADS)

    Harris, R.; Abrahamson, N. A.

    2014-12-01

    We investigate the peak ground motions from the largest well-recorded earthquakes on creeping strike-slip faults in active-tectonic continental regions. Our goal is to evaluate if the strong ground motions from earthquakes on creeping faults are smaller than the strong ground motions from earthquakes on locked faults. Smaller ground motions might be expected from earthquakes on creeping faults if the fault sections that strongly radiate energy are surrounded by patches of fault that predominantly absorb energy. For our study we used the ground motion data available in the PEER NGA-West2 database, and the ground motion prediction equations that were developed from the PEER NGA-West2 dataset. We analyzed data for the eleven largest well-recorded creeping-fault earthquakes, that ranged in magnitude from M5.0-6.5. Our findings are that these earthquakes produced peak ground motions that are statistically indistinguishable from the peak ground motions produced by similar-magnitude earthquakes on locked faults. These findings may be implemented in earthquake hazard estimates for moderate-size earthquakes in creeping-fault regions. Further investigation is necessary to determine if this result will also apply to larger earthquakes on creeping faults. Please also see: Harris, R.A., and N.A. Abrahamson (2014), Strong ground motions generated by earthquakes on creeping faults, Geophysical Research Letters, vol. 41, doi:10.1002/2014GL060228.

  1. Methodology to determine the parameters of historical earthquakes in China

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Lin, Guoliang; Zhang, Zhe

    2017-12-01

    China is one of the countries with the longest cultural traditions. Meanwhile, China has suffered very heavy earthquake disasters, so there are abundant earthquake records. In this paper, we sketch out historical earthquake sources and research achievements in China. We introduce some basic information about the collections of historical earthquake sources, the establishment of an intensity scale, and the editions of historical earthquake catalogues. Spatial-temporal and magnitude distributions of historical earthquakes are analyzed briefly. Besides traditional methods, we also illustrate a new approach to amend the parameters of historical earthquakes or even identify candidate zones for large historical or palaeo-earthquakes. In the new method, a relationship between instrumentally recorded small earthquakes and strong historical earthquakes is built up. The abundant historical earthquake sources and the achievements of historical earthquake research in China are a valuable cultural heritage for the world.

  2. Dynamic strains for earthquake source characterization

    USGS Publications Warehouse

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.
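
    A minimal sketch of the kind of regression described, predicting log peak strain from magnitude and hypocentral distance by least squares; the functional form and the synthetic placeholder data are assumptions, not the published regression or its coefficients.

```python
import numpy as np

# Placeholder observations: magnitudes, distances (km), and peak strains.
rng = np.random.default_rng(2)
mags = rng.uniform(4.5, 7.2, size=146)
dists = rng.uniform(20.0, 500.0, size=146)
log_strain = -10.0 + 0.7 * mags - 1.3 * np.log10(dists) \
             + rng.normal(0.0, 0.2, size=146)  # synthetic, illustration only

# Fit log10(peak strain) = a + b*M + c*log10(R) by least squares.
G = np.column_stack([np.ones_like(mags), mags, np.log10(dists)])
coef, *_ = np.linalg.lstsq(G, log_strain, rcond=None)
a, b, c = coef
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")
# Site-station and source-path biases would enter as extra indicator columns in G.
```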

  3. Simulating and analyzing engineering parameters of Kyushu Earthquake, Japan, 1997, by empirical Green function method

    NASA Astrophysics Data System (ADS)

    Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei

    2017-03-01

    Earthquake engineering parameters are very important in the engineering field, especially in engineering anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two direction components. The broadband frequency range of the ground motion simulation is 0.1 to 20 Hz. By comparing observed parameters and synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters. From the comparison, the simulated waveform shows high similarity to the observed waveform. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with strip-like radiation patterns in the fault-parallel component, whereas the fault-normal radiation pattern is circular; PGV shows good agreement between observed and synthetic records but has different distribution characteristics in the two components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias intensity attenuates with increasing epicentral distance; observed values show high similarity to synthetic values. (4) The predominant period is very different in part of Kyushu in the fault-normal component; it is affected greatly by site conditions. (5) Most parameters have good reference value where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion. Not all parameters fit well, however. Given these synthetic ground motion parameters, seismic hazard analysis can be performed and earthquake disaster analysis can be conducted in future urban planning.

  4. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes.

    PubMed

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found to be more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a possible decrease in the expected number of casualties.
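
    A minimal sketch of how an effect measure can be entered into an existing rate algorithm through a logistic-regression-style adjustment, as the abstract describes; the baseline rate and odds ratios below are invented placeholders, not values from HAZUS or the meta-analysis.

```python
import math

def adjust_rate(baseline_rate, odds_ratios):
    """Apply odds ratios for human-related risk factors to a baseline
    casualty probability on the logit scale:
    logit(p') = logit(p) + sum_k ln(OR_k)."""
    logit = math.log(baseline_rate / (1.0 - baseline_rate))
    logit += sum(math.log(or_k) for or_k in odds_ratios.values())
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical subgroup: elderly, female, low socioeconomic status.
ors = {"age_65_plus": 1.8, "female": 1.3, "low_ses": 1.5}
print(f"adjusted casualty rate = {adjust_rate(0.02, ors):.4f}")
```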

  5. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
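
    The Gutenberg-Richter law mentioned above states that the number of events of magnitude at least M falls off as log10 N(>=M) = a - bM. When working with a catalog such as the Taiwan one, the b-value is commonly estimated with Aki's maximum-likelihood formula; a short sketch, assuming the catalog is complete above a cutoff magnitude m_c:

```python
import numpy as np

def aki_b_value(magnitudes, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965), with the half-bin
    correction for magnitudes rounded to the nearest dm."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]                       # keep only the complete part
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
```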

  6. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.

  7. An autocorrelation method to detect low frequency earthquakes within tremor

    USGS Publications Warehouse

    Brown, J.R.; Beroza, G.C.; Shelly, D.R.

    2008-01-01

    Recent studies have shown that deep tremor in the Nankai Trough under western Shikoku consists of a swarm of low frequency earthquakes (LFEs) that occur as slow shear slip on the down-dip extension of the primary seismogenic zone of the plate interface. The similarity of tremor in other locations suggests a similar mechanism, but the absence of cataloged low frequency earthquakes prevents a similar analysis. In this study, we develop a method for identifying LFEs within tremor. The method employs a matched-filter algorithm, similar to the technique used to infer that tremor in parts of Shikoku is composed of LFEs; however, in this case we do not assume the origin times or locations of any LFEs a priori. We search for LFEs using the running autocorrelation of tremor waveforms for 6 Hi-net stations in the vicinity of the tremor source. Time lags showing strong similarity in the autocorrelation represent either repeats, or near repeats, of LFEs within the tremor. We test the method on an hour of Hi-net recordings of tremor and demonstrate that it extracts both known and previously unidentified LFEs. Once identified, we cross correlate waveforms to measure relative arrival times and locate the LFEs. The results explain most of the tremor as a swarm of LFEs, and the locations of newly identified events appear to fill a gap in the spatial distribution of known LFEs. This method should allow us to extend the analysis of Shelly et al. (2007a) to parts of the Nankai Trough in Shikoku that have sparse LFE coverage, and may also allow us to extend our analysis to other regions that experience deep tremor, but where LFEs have not yet been identified. Copyright 2008 by the American Geophysical Union.
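
    A bare-bones sketch of the running-autocorrelation idea: every window of a tremor trace is correlated against every later window, and strongly similar pairs are flagged as candidate repeats of the same LFE. The window length, step, and threshold below are arbitrary illustrative choices; a production version would work station by station and stack correlation functions across the network.

```python
import numpy as np

def find_repeats(trace, win, step, threshold=0.8):
    """Flag pairs of windows in a tremor trace whose normalized correlation
    exceeds `threshold`; such pairs are candidate repeats (or near repeats)
    of low frequency earthquakes buried in the tremor."""
    starts = list(range(0, len(trace) - win, step))
    wins = []
    for s in starts:
        w = trace[s:s + win] - trace[s:s + win].mean()   # demean
        wins.append(w / (np.linalg.norm(w) + 1e-12))     # unit norm
    pairs = []
    for a in range(len(wins)):
        for b in range(a + 1, len(wins)):
            cc = float(np.dot(wins[a], wins[b]))         # normalized CC
            if cc > threshold:
                pairs.append((starts[a], starts[b], cc))
    return pairs
```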

  8. The spectral cell method in nonlinear earthquake modeling

    NASA Astrophysics Data System (ADS)

    Giraldo, Daniel; Restrepo, Doriam

    2017-12-01

    This study examines the applicability of the spectral cell method (SCM) to compute the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yield criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked against results from overkill solutions and against MIDAS GTS NX, a finite element software package for geotechnical applications. Our findings show good agreement between the two sets of results. Traditional spectral element implementations allow points per wavelength as low as PPW = 4.5 for high-order polynomials. Our findings show that in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions of PPW ≥ 10 to keep displacement errors below 10%.
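
    The PPW requirement translates directly into a bound on mesh size: the shortest wavelength to be resolved is lambda_min = Vs,min / f_max, and the node spacing must not exceed lambda_min / PPW. A quick sketch of that bookkeeping, with illustrative values:

```python
def max_node_spacing(vs_min, f_max, ppw):
    """Upper bound on node spacing needed to resolve the shortest
    wavelength vs_min / f_max with ppw points per wavelength."""
    return vs_min / (f_max * ppw)

# e.g. a soft nonlinear basin with Vs = 200 m/s simulated up to 5 Hz
# at PPW = 10 requires roughly 4 m node spacing
dx = max_node_spacing(200.0, 5.0, 10.0)
```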

  9. Attenuation Characteristics of Strong Motions during the 2016 Kumamoto Earthquakes including Near-Field Records

    NASA Astrophysics Data System (ADS)

    Si, H.; Koketsu, K.; Miyake, H.; Ibrahim, R.

    2016-12-01

    During the two major earthquakes that occurred in Kumamoto prefecture at 21:26 on 14 April 2016 (Mw 6.2, GCMT) and at 1:25 on 16 April 2016 (Mw 7.0, GCMT), a large number of strong ground motion records were obtained, including records very close to the surface fault. In this study, we discuss the attenuation characteristics of the strong ground motions observed during these earthquakes. The data used in this study were observed mainly by K-NET, KiK-net, Osaka University, JMA, and Kumamoto prefecture. The 5% damped acceleration response spectra (GMRotI50) are calculated based on the method proposed by Boore et al. (2006). PGA and PGV are defined as the larger of the PGAs and PGVs of the two horizontal components. The PGA, PGV, and GMRotI50 data were corrected to the bedrock with Vs of 1.5 km/s based on the method proposed by Si et al. (2016), using the average shear wave velocity (Vs30) and the thickness of sediments over the bedrock. The thickness is estimated from the velocity structure model provided by J-SHIS. We use a source model proposed by Koketsu et al. (2016) to calculate the fault distance and the median distance (MED), which is defined as the closest distance from a station to the median line of the fault plane (Si et al., 2014). We compared the observed PGAs, PGVs, and GMRotI50 with the GMPEs developed in Japan using MED (Si et al., 2014). The predictions by the GMPEs are generally consistent with the observations during the two Kumamoto earthquakes. The comparison also indicated that (1) strong motion records from the earthquake on April 14 are generally consistent with the GMPE predictions; however, at periods of 0.5 to 2 s, several records close to the fault plane show larger amplitudes than the GMPE predictions, including the KiK-net station Mashiki (KMMH16); (2) for the earthquake on April 16, the PGAs and GMRotI50 at periods from 0.1 s to 0.4 s at short distances from the fault plane are slightly smaller than the predictions by

  10. Instrumental intensity distribution for the Hector Mine, California, and the Chi-Chi, Taiwan, earthquakes: Comparison of two methods

    USGS Publications Warehouse

    Sokolov, V.; Wald, D.J.

    2002-01-01

    We compare two methods of seismic-intensity estimation from ground-motion records for two recent strong earthquakes: the 1999 (M 7.1) Hector Mine, California, and the 1999 (M 7.6) Chi-Chi, Taiwan, events. The first technique utilizes peak ground acceleration (PGA) and velocity (PGV) and is used for rapid generation of the instrumental intensity map in California. The other method is based on revised relationships between intensity and the Fourier amplitude spectrum (FAS). The results of the two methods are compared with independently observed data and with each other. For the Hector Mine earthquake, the calculated intensities generally agree with the observed values. For the Chi-Chi earthquake, the areas of maximum calculated intensity correspond to the areas of greatest damage and the highest number of fatalities. However, the FAS method produces higher intensity values than the peak-amplitude method. The specific features of ground-motion excitation during the large, shallow, thrust earthquake may be considered a reason for the discrepancy. The use of PGA and PGV is simple; however, the use of FAS provides a natural means of accounting for site amplification through generalized or site-specific spectral ratios. Because the calculation of seismic-intensity maps requires rapid processing of data from a large network, it is very practical to generate a "first-order" map from the recorded peak motions. Then, a "second-order" map may be compiled using the amplitude-spectra method on the basis of available records and numerical modeling of the site-dependent spectra for regions of sparse station spacing.
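
    For the first technique, instrumental intensity is obtained from recorded peaks through empirical regressions; the Wald et al. (1999) California relations have the form sketched below. The coefficients are quoted from memory and should be checked against the original paper; the averaging of the two estimates is a simplification of ShakeMap practice, where PGA dominates at low intensity and PGV at high intensity.

```python
import math

def mmi_from_peaks(pga_cm_s2, pgv_cm_s):
    """Instrumental intensity from peak motions, using regressions of the
    Wald et al. (1999) form (coefficients quoted from memory; verify
    before real use)."""
    mmi_pga = 3.66 * math.log10(pga_cm_s2) - 1.66
    mmi_pgv = 3.47 * math.log10(pgv_cm_s) + 2.35
    # crude combination for illustration; ShakeMap weights by intensity level
    return 0.5 * (mmi_pga + mmi_pgv)
```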

  11. Operational Earthquake Forecasting and Decision-Making in a Low-Probability Environment

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; the International Commission on Earthquake ForecastingCivil Protection

    2011-12-01

    Operational earthquake forecasting (OEF) is the dissemination of authoritative information about the time dependence of seismic hazards to help communities prepare for potentially destructive earthquakes. Most previous work on the public utility of OEF has anticipated that forecasts would deliver high probabilities of large earthquakes; i.e., deterministic predictions with low error rates (false alarms and failures-to-predict) would be possible. This expectation has not been realized. An alternative to deterministic prediction is probabilistic forecasting based on empirical statistical models of aftershock triggering and seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains in excess of 100 relative to long-term forecasts. The utility of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing OEF in this sort of "low-probability environment." The need to move more quickly has been underscored by recent seismic crises, such as the 2009 L'Aquila earthquake sequence, in which an anxious public was confused by informal and inaccurate earthquake predictions. After the L'Aquila earthquake, the Italian Department of Civil Protection appointed an International Commission on Earthquake Forecasting (ICEF), which I chaired, to recommend guidelines for OEF utilization. Our report (Ann. Geophys., 54, 4, 2011; doi: 10.4401/ag-5350) concludes: (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and need to convey epistemic uncertainties. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. (c) All operational models should be evaluated

  12. Historical earthquake research in Austria

    NASA Astrophysics Data System (ADS)

    Hammerl, Christa

    2017-12-01

    Austria has moderate seismicity, and on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently; the average recurrence period is about 75 years. For this reason, historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly along with the most important studies, and finally, as an example of a recently completed case study, one of the strongest past earthquakes in Austria, the earthquake of 17 July 1670, is presented. Research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.

  13. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  14. Simple Physical Model for the Probability of a Subduction- Zone Earthquake Following Slow Slip Events and Earthquakes: Application to the Hikurangi Megathrust, New Zealand

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.

    2018-05-01

    Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce transient stress perturbations. This method has been applied to the locked Hikurangi megathrust (New Zealand), surrounded on all sides by the 2016 Kaikoura earthquake and by SSEs. Our models indicate that the probability of an M ≥ 7.8 earthquake over the year after the Kaikoura earthquake increases by a factor of 1.3-18 relative to the pre-Kaikoura probability, and the absolute probability is in the range of 0.6-7%. We find that the probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.
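
    In simulation-based schemes of this kind, once a time-varying rate of megathrust events has been obtained, the probability of at least one event over a forecast window follows from the non-homogeneous Poisson assumption, P = 1 - exp(-integral of the rate over the window). A minimal sketch of that final step (not the authors' code; the rate values are placeholders):

```python
import numpy as np

def window_probability(rates, dt_years):
    """Probability of at least one event in a forecast window, given a
    modelled event rate (events/year) sampled every dt_years
    (non-homogeneous Poisson assumption)."""
    return 1.0 - np.exp(-np.sum(np.asarray(rates)) * dt_years)

# e.g. a rate elevated by a transient stress perturbation that decays
p = window_probability([0.04, 0.02, 0.015, 0.012], dt_years=0.25)
```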

  15. Pre-seismic anomalies in remotely sensed land surface temperature measurements: The case study of 2003 Boumerdes earthquake

    NASA Astrophysics Data System (ADS)

    Bellaoui, Mebrouk; Hassini, Abdelatif; Bouchouicha, Kada

    2017-05-01

    Detection of thermal anomalies prior to earthquake events has been widely confirmed by researchers over the past decade. One popular approach for anomaly detection is the Robust Satellite Technique (RST). In this paper, we apply this method to a collection of six years of MODIS land surface temperature (LST) images to predict the 21 May 2003 Boumerdes (Algeria) earthquake. The thermal anomaly results were compared with the ambient temperature variations measured at three meteorological stations of the Algerian National Office of Meteorology (ONM) (DELLYS-AFIR, TIZI-OUZOU, and DAR-EL-BEIDA). The results confirm the value of RST as a highly effective approach for earthquake monitoring.
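
    The heart of RST-type approaches is a per-pixel standardization of the current image against a multi-year reference field built from homogeneous observations (same sensor, same calendar period); pixels whose deviation exceeds a few reference standard deviations are flagged as thermally anomalous. A compact sketch of that index; the threshold and array layout are assumptions.

```python
import numpy as np

def rst_anomaly(lst_stack, lst_current, k=2.0):
    """RST-style anomaly index: per-pixel deviation of the current LST
    image from its multi-year reference field, in units of the reference
    standard deviation; pixels with index > k are flagged.

    lst_stack:   (n_years, ny, nx) historical images for the same calendar date
    lst_current: (ny, nx) image under test
    """
    mu = np.nanmean(lst_stack, axis=0)
    sigma = np.nanstd(lst_stack, axis=0)
    index = (lst_current - mu) / np.where(sigma > 0, sigma, np.nan)
    return index, index > k
```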

  16. The 2015 Illapel earthquake, central Chile: A type case for a characteristic earthquake?

    NASA Astrophysics Data System (ADS)

    Tilmann, F.; Zhang, Y.; Moreno, M.; Saul, J.; Eckelmann, F.; Palo, M.; Deng, Z.; Babeyko, A.; Chen, K.; Baez, J. C.; Schurr, B.; Wang, R.; Dahm, T.

    2016-01-01

    On 16 September 2015, the MW = 8.2 Illapel megathrust earthquake ruptured the Central Chilean margin. Combining inversions of displacement measurements and seismic waveforms with high frequency (HF) teleseismic backprojection, we derive a comprehensive description of the rupture, which also predicts deep ocean tsunami wave heights. We further determine moment tensors and obtain accurate depth estimates for the aftershock sequence. The earthquake nucleated near the coast but then propagated to the north and updip, attaining a peak slip of 5-6 m. In contrast, HF seismic radiation is mostly emitted downdip of the region of intense slip and arrests earlier than the long period rupture, indicating smooth slip along the shallow plate interface in the final phase. A superficially similar earthquake in 1943 with a similar aftershock zone had a much shorter source time function, which matches the duration of HF seismic radiation in the recent event, indicating that the 1943 event lacked the shallow slip.

  17. Update of the Graizer-Kalkan ground-motion prediction equations for shallow crustal continental earthquakes

    USGS Publications Warehouse

    Graizer, Vladimir; Kalkan, Erol

    2015-01-01

    A ground-motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration and 5-percent damped pseudo spectral acceleration response ordinates of maximum horizontal component of randomly oriented ground motions was developed by Graizer and Kalkan (2007, 2009) to be used for seismic hazard analyses and engineering applications. This GMPE was derived from the greatly expanded Next Generation of Attenuation (NGA)-West1 database. In this study, Graizer and Kalkan’s GMPE is revised to include (1) an anelastic attenuation term as a function of quality factor (Q0) in order to capture regional differences in large-distance attenuation and (2) a new frequency-dependent sedimentary-basin scaling term as a function of depth to the 1.5-km/s shear-wave velocity isosurface to improve ground-motion predictions for sites on deep sedimentary basins. The new model (GK15), developed to be simple, is applicable to the western United States and other regions with shallow continental crust in active tectonic environments and may be used for earthquakes with moment magnitudes 5.0–8.0, distances 0–250 km, average shear-wave velocities 200–1,300 m/s, and spectral periods 0.01–5 s. Directivity effects are not explicitly modeled but are included through the variability of the data. Our aleatory variability model captures inter-event variability, which decreases with magnitude and increases with distance. The mixed-effects residuals analysis shows that the GK15 reveals no trend with respect to the independent parameters. The GK15 is a significant improvement over Graizer and Kalkan (2007, 2009), and provides a demonstrable, reliable description of ground-motion amplitudes recorded from shallow crustal earthquakes in active tectonic regions over a wide range of magnitudes, distances, and site conditions.

  18. Comparative study of earthquake-related and non-earthquake-related head traumas using multidetector computed tomography

    PubMed Central

    Chu, Zhi-gang; Yang, Zhi-gang; Dong, Zhi-hui; Chen, Tian-wu; Zhu, Zhi-yu; Shao, Heng

    2011-01-01

    OBJECTIVE: The features of earthquake-related head injuries may differ from those of injuries sustained in daily life because of differences in circumstances. We aim to compare the features of head traumas caused by the Sichuan earthquake with those of other common head traumas using multidetector computed tomography. METHODS: In total, 221 patients with earthquake-related head traumas (the earthquake group) and 221 patients with other common head traumas (the non-earthquake group) were enrolled in our study, and their computed tomographic findings were compared. We focused on the differences in fractures and intracranial injuries and on the relationships between extracranial and intracranial injuries. RESULTS: More earthquake-related cases had only extracranial soft tissue injuries (50.7% vs. 26.2%, RR = 1.9), and fewer cases had intracranial injuries (17.2% vs. 50.7%, RR = 0.3) compared with the non-earthquake group. Among patients with fractures and intracranial injuries, there were fewer cases with craniocerebral injuries in the earthquake group (60.6% vs. 77.9%, RR = 0.8), and the earthquake-injured patients had fewer fractures and intracranial injuries overall (1.5±0.9 vs. 2.5±1.8; 1.3±0.5 vs. 2.1±1.1). Compared with the non-earthquake group, the incidences of soft tissue injuries and of cranial fractures combined with intracranial injuries in the earthquake group were significantly lower (9.8% vs. 43.7%, RR = 0.2; 35.1% vs. 82.2%, RR = 0.4). CONCLUSION: As depicted with computed tomography, the severity of earthquake-related head traumas in survivors was milder, and isolated extracranial injuries were more common in earthquake-related head traumas than in non-earthquake-related injuries, which may be the result of different injury causes, mechanisms and settings. PMID:22012045

  19. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Usable and realistic ground motion maps for urban areas are generated either from the assumption of a "reference earthquake" or directly, by showing the values of macroseismic intensity generated by a damaging, real earthquake. In this study, applying a deterministic approach, an earthquake scenario in macroseismic intensity (a "model" earthquake scenario) is generated for the city of Sofia. The deterministic "model" intensity scenario, based on the assumption of a "reference earthquake", is compared with a scenario based on the observed macroseismic effects caused by the damaging 2012 earthquake (Mw 5.6). The difference between observed (Io) and predicted (Ip) intensity values is analyzed.

  20. Satellite Relay Telemetry of Seismic Data in Earthquake Prediction and Control

    NASA Technical Reports Server (NTRS)

    Jackson, W. H.; Eaton, J. P.

    1971-01-01

    The Satellite Telemetry Earthquake Monitoring Program was started to evaluate the applicability of satellite relay telemetry to the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research in 3 regions along the San Andreas fault in central California, and the experience of installing and operating the clusters, and of reducing and analyzing the seismic data from them, was to provide the raw material for evaluation in the satellite relay telemetry project. The principal advantages of the satellite relay system over commercial telephone or microwave systems were: (1) it could be made less prone to massive failure during a major earthquake; (2) it could be extended readily into undeveloped regions; and (3) it could provide flexible, uniform communications over large sections of major global tectonic zones. Fundamental characteristics of a communications system to cope with the large volume of raw data collected by a short-period seismograph network are discussed.

  1. Reply to “Earthquake prediction evaluation standards applied to the VAN Method,” by D. D. Jackson

    NASA Astrophysics Data System (ADS)

    Varotsos, P.; Lazaridou, M.; Hadjicontis, V.

    Our earlier publications show that the VAN method does not fail requirements (1) and (2) suggested by Jackson [1996]. No subjective ex post facto decision was necessary for the evaluation of success because, for the large majority of VAN predictions, the values of ΔM, Δr and Δt were published before the period 1987-1989 under discussion; in a few cases only (three out of 29), related to the observation of the new phenomenon of SES electrical activity, the value of Δt was determined in 1988. Furthermore, a careful inspection, from a physical point of view, shows that the three plausibility criteria suggested by Jackson (to be obeyed by a candidate prediction technique) are actually met by the VAN method.

  2. A new algorithm to detect earthquakes outside the seismic network: preliminary results

    NASA Astrophysics Data System (ADS)

    Giudicepietro, Flora; Esposito, Antonietta Maria; Ricciolino, Patrizia

    2017-04-01

    Here we present a new technique for detecting earthquakes outside the seismic network, which are often the cause of failures in automatic analysis systems. Our goal is to develop a robust method that provides the discrimination result as quickly as possible. We discriminate local earthquakes from regional earthquakes, both recorded at the SGG station, which is equipped with short-period sensors and operated by Osservatorio Vesuviano (INGV) in the Southern Apennines (Italy). The technique uses a Multi Layer Perceptron (MLP) neural network with an architecture composed of an input layer, a hidden layer, and a single-node output layer. We pre-processed the data using the Linear Predictive Coding (LPC) technique to extract the spectral features of the signals in a compact form. We performed several experiments, shortening the signal window length; in particular, we used windows of 4, 2 and 1 s containing the onsets of the local and regional earthquakes. We used a dataset of 103 local earthquakes and 79 regional earthquakes, most of which occurred in Greece, Albania and Crete. We split the dataset into a training set, for network training, and a testing set, to evaluate the network's discrimination capacity. To assess the network's stability, we repeated this procedure six times, randomly changing the composition of the training and testing sets and the initial weights of the net. We estimated the performance of the method by calculating the average of the correct detection percentages obtained for each of the six permutations. The average performances are 99.02%, 98.04% and 98.53% for the experiments carried out on 4, 2 and 1 s signal windows, respectively. The results show that our method is able to recognize earthquakes outside the seismic network using only the first second of the seismic records, with a suitable percentage of correct detections. Therefore, this algorithm can be profitably used to make earthquake automatic
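
    A compact sketch of one train/test permutation of such an experiment, assuming the LPC feature vectors and class labels have already been extracted. scikit-learn is used here as a stand-in for whichever MLP implementation the authors employed, and the hidden-layer size and split fraction are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def evaluate_once(X, y, hidden=20, seed=0):
    """One train/test permutation of a single-hidden-layer MLP
    discrimination experiment; returns percent correct on the test set.
    X: LPC coefficient vectors, one row per event; y: 0 = local, 1 = regional."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                        random_state=seed).fit(Xtr, ytr)
    return 100.0 * net.score(Xte, yte)

# averaging evaluate_once over several seeds mirrors the six random
# permutations used to assess network stability above
```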

  3. Classification of Earthquake-triggered Landslide Events - Review of Classical and Particular Cases

    NASA Astrophysics Data System (ADS)

    Braun, A.; Havenith, H. B.; Schlögel, R.

    2016-12-01

    Seismically induced landslides often contribute to a significant degree to the losses related to earthquakes. The identification of the possible extents of landslide-affected areas can help to target emergency measures when an earthquake occurs and to improve the resilience of inhabited areas and critical infrastructure in zones of high seismic hazard. Moreover, landslide event sizes are an important proxy for the estimation of the intensity and magnitude of past earthquakes in paleoseismic studies, allowing us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. Inspired by classical reviews of earthquake-induced landslides, e.g. by Keefer or Jibson, we present here a review of factors contributing to earthquake-triggered slope failures based on an 'event-by-event' classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake occurs. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970), the combination and relative weights of the factors were calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked. We present cases where our prediction model performs well and discuss particular cases

  4. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who hurry to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivist viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure") and, therefore, cannot quantify objectively the performance of a forecast/prediction method. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by an accurate verification of generic probability models against the empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor provides a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for 'tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in the automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency

  5. Repeating Earthquakes Following an Mw 4.4 Earthquake Near Luther, Oklahoma

    NASA Astrophysics Data System (ADS)

    Clements, T.; Keranen, K. M.; Savage, H. M.

    2015-12-01

    An Mw 4.4 earthquake on April 16, 2013 near Luther, OK was one of the earliest M4+ earthquakes in central Oklahoma, following the Prague sequence in 2011. A network of four local broadband seismometers deployed within a day of the Mw 4.4 event, along with six Oklahoma NetQuakes stations, recorded more than 500 aftershocks in the two weeks following the Luther earthquake. Here we use HypoDD (Waldhauser & Ellsworth, 2000) and waveform cross-correlation to obtain precise aftershock locations. The location uncertainty, calculated using the SVD method in HypoDD, is ~15 m horizontally and ~35 m vertically. The earthquakes define a near-vertical, NE-SW striking fault plane. Events occur at depths from 2 km to 3.5 km within the granitic basement, with a small fraction of events shallower, near the sediment-basement interface. Earthquakes occur within a zone of ~200 m thickness on either side of the best-fitting fault surface. We use an equivalency class algorithm to identify clusters of repeating events, defined as event pairs with median three-component correlation > 0.97 across common stations (Aster & Scott, 1993). Repeating events occur as doublets of only two events in over 50% of cases; overall, 41% of the earthquakes recorded occur as repeating events. The recurrence intervals for the repeating events range from minutes to days, with common recurrence intervals of less than two minutes. While clusters occupy tight dimensions, commonly of 80 m x 200 m, aftershocks occur in 3 distinct ~2 km x 2 km patches along the fault. Our analysis suggests that, with rapidly deployed local arrays, the plethora of ~Mw 4 earthquakes occurring in Oklahoma and southern Kansas can be used to investigate the earthquake rupture process and the role of damage zones.
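
    The equivalency-class step can be sketched as a transitive closure over the similar pairs: any two events linked, directly or through intermediaries, by a median three-component correlation above the threshold fall into the same repeating-event cluster. A minimal union-find sketch, assuming the pair list has already been built from the correlation measurements:

```python
def cluster_repeaters(n_events, similar_pairs):
    """Group events into repeating-event clusters: events linked (directly
    or through intermediaries) by an above-threshold correlation end up
    in the same equivalence class (union-find with path compression)."""
    parent = list(range(n_events))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for a, b in similar_pairs:              # pairs with median CC > 0.97
        parent[find(a)] = find(b)           # union the two classes

    clusters = {}
    for i in range(n_events):
        clusters.setdefault(find(i), []).append(i)
    return [c for c in clusters.values() if len(c) > 1]
```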

  6. A classifying method analysis on the number of returns for given pulse of post-earthquake airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang

    2016-11-01

    Compared with remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy with which destroyed buildings are identified. However, after an earthquake, damaged buildings show so many different characteristics that the most commonly used pre-processing methods currently cannot distinguish between tree points and damaged-building points. In this study, we analyse the number of returns per emitted pulse for tree and damaged-building point clouds and explore methods to distinguish between them. We propose a new method that searches a neighbourhood around each point and calculates the ratio (R) of neighbourhood points whose number of returns per pulse is greater than 1, in order to separate trees from buildings. We selected point clouds of typical undamaged buildings, collapsed buildings and trees as samples, by human-computer interaction, from the airborne LiDAR point cloud data acquired after the 2010 Mw 7.0 Haiti earthquake. We determined the R value that distinguishes trees from buildings and applied it to test areas. The experimental results show that the proposed method can effectively distinguish building points (undamaged and damaged) from tree points, but it is limited in areas where buildings are varied, damage is complex and trees are dense, so the method will require further improvement.
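
    A sketch of the proposed ratio computed with a k-d tree neighbourhood search. The fixed search radius is an illustrative assumption; since the study searches "a certain number of neighbourhood space", a fixed-k nearest-neighbour query would serve equally well.

```python
import numpy as np
from scipy.spatial import cKDTree

def multi_return_ratio(xyz, num_returns, radius=2.0):
    """For each LiDAR point, the fraction R of neighbours (within `radius`
    metres) whose pulse produced more than one return. Vegetation tends
    to give high R; intact or collapsed roofs tend to give low R."""
    xyz = np.asarray(xyz)
    multi = (np.asarray(num_returns) > 1).astype(float)
    tree = cKDTree(xyz)
    ratios = np.empty(len(xyz))
    for i, neighbours in enumerate(tree.query_ball_point(xyz, r=radius)):
        ratios[i] = multi[neighbours].mean()   # neighbourhood includes point i
    return ratios
```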

  7. Implications of ground water chemistry and flow patterns for earthquake studies.

    PubMed

    Guangcai, Wang; Zuochen, Zhang; Min, Wang; Cravotta, Charles A; Chenglong, Liu

    2005-01-01

    Ground water can facilitate earthquake development and respond physically and chemically to tectonism. Thus, an understanding of ground water circulation in seismically active regions is important for earthquake prediction. To investigate the roles of ground water in the development and prediction of earthquakes, geological and hydrogeological monitoring was conducted in a seismogenic area in the Yanhuai Basin, China. This study used isotopic and hydrogeochemical methods to characterize ground water samples from six hot springs and two cold springs. The hydrochemical data and associated geological and geophysical data were used to identify possible relations between ground water circulation and seismically active structural features. The data for δ18O, δD, tritium, and 14C indicate ground water from hot springs is of meteoric origin with subsurface residence times of 50 to 30,320 years. The reservoir temperature and circulation depths of the hot ground water are 57 °C to 160 °C and 1600 to 5000 m, respectively, as estimated by quartz and chalcedony geothermometers and the geothermal gradient. Various possible origins of noble gases dissolved in the ground water also were evaluated, indicating mantle and deep crust sources consistent with tectonically active segments. A hard intercalated stratum, where small to moderate earthquakes frequently originate, is present between a deep (10 to 20 km), high-electrical conductivity layer and the zone of active ground water circulation. The ground water anomalies are closely related to the structural peculiarity of each monitoring point. These results could have implications for ground water and seismic studies in other seismogenic areas.

  8. Implications of ground water chemistry and flow patterns for earthquake studies

    USGS Publications Warehouse

    Guangcai, W.; Zuochen, Z.; Min, W.; Cravotta, C.A.; Chenglong, L.

    2005-01-01

    Ground water can facilitate earthquake development and respond physically and chemically to tectonism. Thus, an understanding of ground water circulation in seismically active regions is important for earthquake prediction. To investigate the roles of ground water in the development and prediction of earthquakes, geological and hydrogeological monitoring was conducted in a seismogenic area in the Yanhuai Basin, China. This study used isotopic and hydrogeochemical methods to characterize ground water samples from six hot springs and two cold springs. The hydrochemical data and associated geological and geophysical data were used to identify possible relations between ground water circulation and seismically active structural features. The data for δ18O, δD, tritium, and 14C indicate ground water from hot springs is of meteoric origin with subsurface residence times of 50 to 30,320 years. The reservoir temperature and circulation depths of the hot ground water are 57 °C to 160 °C and 1600 to 5000 m, respectively, as estimated by quartz and chalcedony geothermometers and the geothermal gradient. Various possible origins of noble gases dissolved in the ground water also were evaluated, indicating mantle and deep crust sources consistent with tectonically active segments. A hard intercalated stratum, where small to moderate earthquakes frequently originate, is present between a deep (10 to 20 km), high-electrical conductivity layer and the zone of active ground water circulation. The ground water anomalies are closely related to the structural peculiarity of each monitoring point. These results could have implications for ground water and seismic studies in other seismogenic areas. Copyright © 2005 National Ground Water Association.

  9. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
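
    Written out, the regimes under discussion can be summarized schematically; the transition form is the one quoted above, while the end members follow standard constant-stress-drop and width-saturation arguments.

```latex
% Schematic moment-area scaling regimes for strike-slip earthquakes
\begin{align*}
  M_0 &\propto A^{3/2} && \text{small earthquakes (self-similar, constant stress drop)}\\
  M_0 &\propto A^{2}   && \text{transition regime examined in this study}\\
  M_0 &\propto A       && \text{very large earthquakes (width-saturated W-model)}
\end{align*}
```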

  10. [Comment on Earthquake precursors: Banished forever?] Comment: Unpredictability of earthquakes-Truth or fiction?

    NASA Astrophysics Data System (ADS)

    Lomnitz, Cinna

    I was delighted to read Alexander Gusev's opinions on what he calls the “unpredictability paradigm” of earthquakes (Eos, February 10, 1998, p. 71). I always enjoy hearing from a good friend in the pages of Eos. I immediately looked up “paradigm” in my Oxford Dictionary and found this: paradigm n 1) set of all the different forms of a word: verb paradigms. 2) Type of something; pattern; model: a paradigm for others to copy.I wonder whether Sasha Gusev actually believes that branding earthquake prediction a “proven nonscience” [Geller, 1997] is a paradigm for others to copy. As for me, I choose to refrain from climbing on board this particular bandwagon for the following reasons.

  11. Do weak global stresses synchronize earthquakes?

    NASA Astrophysics Data System (ADS)

    Bendick, R.; Bilham, R.

    2017-08-01

    Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time predictable, but still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.

  12. A hypothesis for delayed dynamic earthquake triggering

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    It's uncertain whether more near-field earthquakes are triggered by static or dynamic stress changes. This ratio matters because static earthquake interactions are increasingly incorporated into probabilistic forecasts. Recent studies were unable to demonstrate all predictions from the static-stress-change hypothesis, particularly seismicity rate reductions. However, current dynamic stress change hypotheses do not explain delayed earthquake triggering and Omori's law. Here I show numerically that if seismic waves can alter some frictional contacts in neighboring fault zones, then dynamic triggering might cause delayed triggering and an Omori-law response. The hypothesis depends on faults following a rate/state friction law, and on seismic waves changing the mean critical slip distance (Dc) at nucleation zones.
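
    For reference, the rate/state friction law invoked by the hypothesis, written with the aging (Dieterich) state-evolution equation in which the critical slip distance Dc appears, is conventionally:

```latex
% Rate-and-state friction; the hypothesis above posits that passing
% seismic waves perturb the critical slip distance D_c at nucleation zones.
\begin{align*}
  \mu &= \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c}, \\
  \dot{\theta} &= 1 - \frac{V\,\theta}{D_c}.
\end{align*}
```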

  13. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2010-12-01

    The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and

  14. Remote monitoring of the earthquake cycle using satellite radar interferometry.

    PubMed

    Wright, Tim J

    2002-12-15

    The earthquake cycle is poorly understood. Earthquakes continue to occur on previously unrecognized faults. Earthquake prediction seems impossible. These remain the facts despite nearly 100 years of intensive study since the earthquake cycle was first conceptualized. Using data acquired from satellites in orbit 800 km above the Earth, a new technique, radar interferometry (InSAR), has the potential to solve these problems. For the first time, detailed maps of the warping of the Earth's surface during the earthquake cycle can be obtained with a spatial resolution of a few tens of metres and a precision of a few millimetres. InSAR does not need equipment on the ground or expensive field campaigns, so it can gather crucial data on earthquakes and the seismic cycle from some of the remotest areas of the planet. In this article, I review some of the remarkable observations of the earthquake cycle already made using radar interferometry and speculate on breakthroughs that are tantalizingly close.

  15. Risk Communication on Earthquake Prediction Studies -"No L'Aquila quake risk" experts probed in Italy in June 2010

    NASA Astrophysics Data System (ADS)

    Oki, S.; Koketsu, K.; Kuwabara, E.; Tomari, J.

    2010-12-01

    For the 6 months before the L'Aquila earthquake of 6 April 2009, seismicity in the region had been active. After activity increased further and reached a magnitude 4 earthquake on 30 March, the government convened the Major Risks Committee, part of the Civil Protection Department, which is tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventive recommendations. At the press conference immediately after the committee meeting, they reported that "The scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favorable." Six days later, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3 June of the following year, prosecutors opened an investigation after victims complained that far more people would have fled their homes that night had the Major Risks Committee not issued reassurances the previous week. The issue became widely known in the seismological community, especially after an email titled "Letter of Support for Italian Earthquake Scientists" from seismologists at the National Geophysics and Volcanology Institute (INGV) was sent worldwide. It states that the L'Aquila prosecutor's office indicted the members of the Major Risks Committee for manslaughter and that the charges are for failing to provide a short-term alarm to the population before the earthquake struck. It is true that there is no general method for predicting earthquakes, but failure to issue a short-term alarm was not the reason for the investigation of the scientists. The chief prosecutor stated that "the committee could have provided the people with better advice", and "it wasn't the case that they did not receive any warnings, because there had been tremors". The email also requested sign-on support for an open letter to the president of Italy from Earth science colleagues around the world and collected more than 5000 signatures

  16. Insights into earthquake hazard map performance from shaking history simulations

    NASA Astrophysics Data System (ADS)

    Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.

    2017-12-01

    Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher
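
    The simulation logic lends itself to a compact sketch: draw Poisson numbers of events from a truncated Gutenberg-Richter model, convert each to site shaking with a toy attenuation, and compare the per-history maxima against the mapped value. Every number below is an illustrative assumption, not a parameter from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_shaking_histories(years, rate_m4, n_hist=1000, b=1.0, m_max=7.5):
    """Monte Carlo shaking histories: Poisson numbers of M>=4 events,
    magnitudes drawn from a truncated Gutenberg-Richter distribution,
    and a toy attenuation giving shaking ~ 10**(0.5*M) / R**1.3 at random
    epicentral distances R. Returns the maximum shaking per history."""
    peaks = np.zeros(n_hist)
    for h in range(n_hist):
        n = rng.poisson(rate_m4 * years)
        u = rng.random(n)
        # inverse-CDF sampling of the truncated GR magnitude distribution
        mags = 4.0 - np.log10(1.0 - u * (1.0 - 10 ** (-b * (m_max - 4.0)))) / b
        dist = rng.uniform(5.0, 100.0, n)
        shaking = 10 ** (0.5 * mags) / dist ** 1.3
        peaks[h] = shaking.max() if n else 0.0
    return peaks

# compare, e.g., np.quantile(peaks, 0.9) with a mapped 10%-in-`years` value
peaks = peak_shaking_histories(years=50.0, rate_m4=0.2)
```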

  17. Application and analysis of debris-flow early warning system in Wenchuan earthquake-affected area

    NASA Astrophysics Data System (ADS)

    Liu, D. L.; Zhang, S. J.; Yang, H. J.; Zhao, L. Q.; Jiang, Y. H.; Tang, D.; Leng, X. P.

    2016-02-01

    Debris flow (DF) activity in the Wenchuan earthquake-affected area increased significantly after the earthquake of 12 May 2008, and DFs threaten the lives and property of local people. A physics-based early warning system (EWS) for DF forecasting was developed and applied in this earthquake area. This paper introduces an application of the system in the Wenchuan earthquake-affected area and analyzes the prediction results by comparison with the DF events triggered by strong rainfall events reported by the local government. The prediction accuracy and efficiency were first compared with those of a contribution-factor-based system currently used by the weather bureau of Sichuan province, using the storm of 17 August 2012 as a case study. The comparison shows that the false negative rate and false positive rate of the new system are, respectively, 19% and 21% lower than those of the contribution-factor-based system. Consequently, the prediction accuracy is clearly higher than that of the contribution-factor-based system, with higher operational efficiency as well. At the invitation of the weather bureau of Sichuan province, the authors upgraded the DF prediction system to the new system before the 2013 monsoon in the Wenchuan earthquake-affected area. Two prediction cases, on 9 July 2013 and 10 July 2014, were chosen to further demonstrate that the new EWS has high stability, efficiency, and prediction accuracy.

  18. Improved nonlinear prediction method

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have been widely addressed by researchers. Many techniques have been developed for application in various areas, such as weather forecasting, financial markets and hydrological phenomena, involving data that are contaminated by noise, and various refinements have been introduced to analyze and predict such data. In view of the importance of analysis and the accuracy of prediction results, a study was undertaken to test the effectiveness of an improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Phase space reconstruction is then performed on the one-dimensional composite data to reconstruct a higher-dimensional phase space. Finally, the local linear approximation method is employed to make predictions based on the reconstructed phase space. The improved method was tested on logistic-map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that the predictions of the improved method are in close agreement with the observed values, and the correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improved way to analyze and predict noisy time series data, without any separate noise-reduction step, was introduced.
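
    A sketch of the pipeline on exactly the kind of test data described: a noisy logistic-map series is differenced, delay-embedded, and the next differenced value is predicted from the nearest neighbours of the current state. The local mean of neighbour successors is used here as a simplified stand-in for the local linear approximation step, and the embedding parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# noisy logistic-map series, as in the tests described above
x = np.empty(600); x[0] = 0.3
for t in range(599):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
x += 0.05 * x.std() * rng.standard_normal(x.size)   # roughly 5% noise

s = np.diff(x)                # composite series of successive differences
dim, tau = 3, 1               # embedding dimension and delay (assumed)
emb = np.column_stack([s[i:len(s) - (dim - 1) * tau + i:tau] for i in range(dim)])

def predict_next(k=10):
    """Predict the next differenced value from the k nearest neighbours
    of the last embedded point (local-mean stand-in for local linear)."""
    query, history = emb[-1], emb[:-1]
    nearest = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
    return s[nearest + (dim - 1) * tau + 1].mean()   # neighbours' successors
```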

  19. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as the ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  20. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip-rate, and geodetic strain-rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
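
    A modified Gutenberg-Richter law with a corner magnitude can be illustrated as below; the moment-space exponential taper used here is one common choice and is an assumption of this sketch, since the abstract does not give the authors' exact parameterization.

        import numpy as np

        def moment(mw):                      # seismic moment in N*m
            return 10.0 ** (1.5 * mw + 9.05)

        def tapered_gr_rate(m, a_value, b_value, m_corner, m_min=5.4):
            # Rate of events with magnitude >= m: a power law in moment,
            # tapered by an exponential roll-off above the corner magnitude.
            beta = 2.0 * b_value / 3.0
            mm, m0, mc = moment(m_min), moment(m), moment(m_corner)
            return a_value * (mm / m0) ** beta * np.exp((mm - m0) / mc)

        for m in (5.4, 6.0, 7.0, 8.0):
            print(m, tapered_gr_rate(m, a_value=1.0, b_value=1.0, m_corner=8.0))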

  1. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  2. Numerical Modeling and Forecasting of Strong Sumatra Earthquakes

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Yin, C.

    2007-12-01

    ESyS-Crustal, a finite element based computational model and software, has been developed and applied to simulate complex nonlinear interacting fault systems with the goal of accurately predicting earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, the simulation results using the developed software have clearly indicated that the shallow part of the subduction zone in the Sumatra region between latitude 6S and 2N has been locked for a long time, and remained locked even after the northern part of the zone underwent a major slip event resulting in the infamous Boxing Day tsunami. Two strong historical earthquakes in this region (between 6S and 1S), in 1797 (M8.2) and 1833 (M9.0), are indicative of the high potential for very large destructive earthquakes to occur in this region with relatively long periods of quiescence in between. The results were presented at the 5th ACES International Workshop in 2006, before the 2007 Sumatra earthquakes occurred, which fell exactly within the predicted zone (see the following web site for ACES2006 and the detailed presentation file through the workshop agenda). The preliminary simulation results obtained so far have shown that there seem to be a few obvious events around the previously locked zone before it is totally ruptured, but apparently no indication of a giant earthquake similar to the 2004 M9 event in the near future, which several earthquake scientists believe will happen. Further detailed simulations will be carried out and presented in the meeting.

  3. Development of a Low Cost Earthquake Early Warning System in Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Y. M.

    2017-12-01

    The National Taiwan University (NTU) has been developing an earthquake early warning (EEW) system for research purposes using low-cost accelerometers (P-Alert) since 2010. As of 2017, a total of 650 stations have been deployed and configured. The NTU system can provide earthquake information within 15 s of an earthquake occurrence, and thus may provide early warnings for cities located more than 50 km from the epicenter. Additionally, the NTU system has an onsite alert function that triggers a warning for incoming P-waves greater than a certain magnitude threshold, providing a 2-3 s lead time before peak ground acceleration (PGA) for regions close to the epicenter. Detailed shaking maps are produced by the NTU system within one or two minutes after an earthquake. Recently, a new module named ShakingAlarm has been developed. Equipped with real-time acceleration signals and a time-dependent anisotropic attenuation relationship for the PGA, ShakingAlarm can provide an accurate PGA estimate immediately before the arrival of the observed PGA. This unique advantage yields sufficient lead time for hazard assessment and emergency response, which is unavailable from traditional shakemaps based only on the PGA observed in real time. The performance of ShakingAlarm was tested with six M > 5.5 inland earthquakes from 2013 to 2016. Taking the 2016 M6.4 Meinong earthquake simulation as an example, the predicted PGA converges to a stable value, producing a predicted shake map and an isocontour map of the predicted PGA within 16 seconds of earthquake occurrence. Compared with a traditional regional EEW system, ShakingAlarm can effectively identify possible damage regions and provide valuable early warning information (magnitude and PGA) for risk mitigation.
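
    The idea behind ShakingAlarm, predicting PGA at distant sites from accelerations already observed near the source, can be sketched as follows; the attenuation relation ln_pga and its coefficients are purely illustrative stand-ins for the time-dependent anisotropic relationship the system actually uses.

        import numpy as np

        def ln_pga(m, r):
            # Illustrative isotropic attenuation relation ln PGA(M, R);
            # coefficients are made up for this sketch.
            return -1.0 + 1.2 * m - 1.6 * np.log(r + 10.0)

        def predict_pga(obs_pga, obs_r, target_r):
            # Fit an effective magnitude to PGAs already observed near the
            # source, then extrapolate the PGA expected at farther sites
            # before the strong shaking arrives there.
            m_grid = np.linspace(4.0, 8.0, 401)
            err = [np.sum((np.log(obs_pga) - ln_pga(m, np.asarray(obs_r))) ** 2)
                   for m in m_grid]
            m_eff = m_grid[int(np.argmin(err))]
            return np.exp(ln_pga(m_eff, np.asarray(target_r))), m_eff

        pga_far, m_eff = predict_pga([0.8, 0.5], [15.0, 25.0], [50.0, 100.0])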

  4. Qualitative Investigation of the Earthquake Precursors Prior to the March 14, 2012 Earthquake in Japan

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Shailesh Kumar; Gwal, Ashok Kumar

    In this study we used the Empirical Mode Decomposition (EMD) method in conjunction with cross-correlation analysis to analyze the ionospheric foF2 parameter prior to the Japan earthquake of magnitude M = 6.9. The data were collected from the Kokubunji (35.70N, 139.50E) and Yamakawa (31.20N, 130.60E) ionospheric stations. The EMD method was used to remove geophysical noise from the foF2 data before calculating the correlation coefficient between the two stations. It was found that the ionospheric foF2 parameter shows anomalous changes a few days before the earthquake. The results agree with theoretical models evidencing ionospheric modification prior to the Japan earthquake in a certain area around the epicenter.
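
    A rough sketch of this processing chain, assuming the PyEMD package (EMD-signal) for the decomposition; which intrinsic mode functions to discard as noise is a choice of this sketch, not of the study.

        import numpy as np
        from PyEMD import EMD            # EMD-signal package; assumed available

        def denoise_fof2(series, drop=2):
            # Decompose into intrinsic mode functions and discard the fastest
            # `drop` modes, treated here as noise.
            imfs = EMD().emd(np.asarray(series, dtype=float))
            return imfs[drop:].sum(axis=0)

        def station_correlation(fof2_a, fof2_b):
            # Correlation coefficient between the denoised foF2 series of two
            # stations (e.g. Kokubunji and Yamakawa).
            a, b = denoise_fof2(fof2_a), denoise_fof2(fof2_b)
            return np.corrcoef(a, b)[0, 1]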

  5. Satellite relay telemetry of seismic data in earthquake prediction and control

    USGS Publications Warehouse

    Jackson, Wayne H.; Eaton, Jerry P.

    1971-01-01

    The Satellite Telemetry Earthquake Monitoring Program was started in FY 1968 to evaluate the applicability of satellite relay telemetry in the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research (NCER) in 3 regions along the San Andreas fault in central California; and the experience of installing and operating the clusters and in reducing and analyzing the seismic data from them was to provide the raw materials for evaluation in the satellite relay telemetry project.

  6. Magnitude Dependent Seismic Quiescence of 2008 Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sacks, S. I.; Takanami, T.; Smith, D. E.; Rydelek, P. A.

    2014-12-01

    The change in seismicity leading to the Wenchuan Earthquake in 2008 (Mw 7.9) has been studied by various authors based on statistics and/or pattern recognition (Huang, 2008; Yan et al., 2009; Chen and Wang, 2010; Yi et al., 2011). We show, in particular, that magnitude-dependent seismic quiescence is observed for the Wenchuan earthquake and that it adds to other similar observations. Such studies of seismic quiescence prior to major earthquakes include the 1982 Urakawa-Oki earthquake (M 7.1) (Taylor et al., 1992), the 1994 Hokkaido-Toho-Oki earthquake (Mw=8.2) (Takanami et al., 1996), and the 2011 Tohoku earthquake (Mw=9.0) (Katsumata, 2011). Smith and Sacks (2013) proposed a magnitude-dependent quiescence based on a physical earthquake model (Rydelek and Sacks, 1995) and demonstrated that the quiescence can be reproduced by the introduction of "asperities" (dilatancy-hardened zones). Actual observations indicate that the change occurs in a broader area than the eventual earthquake fault zone. In order to accept this explanation, we need to verify the model, as it predicts somewhat controversial features of earthquakes, such as magnitude-dependent stress drop at the lower magnitude range and dynamically appearing asperities with repeating slips in some parts of the rupture zone. We show supportive observations. We will also need to verify that dilatancy diffusion is taking place; so far the evidence is only indirect and needs to be substantiated more quantitatively.

  7. A three-step Maximum-A-Posteriori probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods; it shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, while the slip parameters are initially inverted for by least squares without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion quickly brings the inversion close to the 'true' solution and jumps over local maxima in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45° dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
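
    The first step can be sketched as below, with SciPy's dual_annealing standing in for Adaptive Simulated Annealing and greens_fn a hypothetical routine that builds elastic Green's functions for a trial fault geometry.

        import numpy as np
        from scipy.optimize import dual_annealing

        def misfit(geometry, insar_obs, greens_fn):
            # Negative log a-posteriori (up to constants) for one trial
            # geometry: slip is solved by unconstrained least squares, damped
            # to a physical (non-negative) range, and the residual is scored.
            G = greens_fn(geometry)          # hypothetical Green's function builder
            slip, *_ = np.linalg.lstsq(G, insar_obs, rcond=None)
            slip = np.clip(slip, 0.0, None)
            resid = insar_obs - G @ slip
            return float(resid @ resid)

        def step_one(insar_obs, greens_fn, bounds):
            # Global search over the non-slip (fault geometry) parameters.
            result = dual_annealing(misfit, bounds, args=(insar_obs, greens_fn))
            return result.x, result.fun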

  8. A Statistical Study of Total Electron Content Changes in the Ionosphere Prior to Earthquake Occurrences

    NASA Astrophysics Data System (ADS)

    Thomas, J. N.; Huard, J.; Masci, F.

    2015-12-01

    There are many published reports of anomalous changes in the ionosphere prior to large earthquakes. However, whether or not these ionospheric changes are reliable precursors that could be useful for earthquake prediction is controversial within the scientific community. To test a possible statistical relationship between the ionosphere and earthquakes, we compare changes in the total electron content (TEC) of the ionosphere with occurrences of M≥6.0 earthquakes globally for a multiyear period. We use TEC data from a global ionosphere map (GIM) and an earthquake list declustered for aftershocks. For each earthquake, we look for anomalous changes in TEC within ±30 days of the earthquake time and within 2.5° latitude and 5.0° longitude of the earthquake location (the spatial resolution of GIM). Our preliminary analysis, using global TEC and earthquake data for 2002-2010, has not found any statistically significant changes in TEC prior to earthquakes. Thus, we have found no evidence that would suggest that TEC changes are useful for earthquake prediction. Our results are discussed in the context of prior statistical and case studies. They agree with Dautermann et al. (2007), who found no relationship between TEC changes and earthquakes in the San Andreas fault region, but disagree with Le et al. (2011), who found an increased rate of TEC anomalies within a few days before global earthquakes of M≥6.0.
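
    A minimal version of such a test might look like the following, where anomalies are flagged against a trailing median and the hit rate before earthquakes is compared with the chance rate at random dates; the thresholds and window lengths are illustrative, not the study's.

        import numpy as np

        def anomaly_days(tec, window=15, nsigma=2.0):
            # Flag days whose TEC departs from a trailing median by more than
            # nsigma trailing standard deviations.
            tec = np.asarray(tec, dtype=float)
            flags = np.zeros(len(tec), dtype=bool)
            for i in range(window, len(tec)):
                base = tec[i - window:i]
                flags[i] = abs(tec[i] - np.median(base)) > nsigma * base.std()
            return flags

        def precursor_rate(flags, quake_days, lead=30):
            # Fraction of earthquakes preceded by an anomaly within `lead`
            # days, against the chance rate at randomly drawn dates.
            rng = np.random.default_rng(0)
            hits = sum(flags[max(0, d - lead):d].any() for d in quake_days)
            rand = rng.integers(lead, len(flags), size=10000)
            chance = np.mean([flags[r - lead:r].any() for r in rand])
            return hits / len(quake_days), chance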

  9. Development of direct multi-hazard susceptibility assessment method for post-earthquake reconstruction planning in Nepal

    NASA Astrophysics Data System (ADS)

    Mavrouli, Olga; Rana, Sohel; van Westen, Cees; Zhang, Jianqiang

    2017-04-01

    After the devastating 2015 Gorkha earthquake in Nepal, reconstruction activities were delayed considerably for many reasons of a political, organizational and technical nature. Due to the widespread occurrence of co-seismic landslides, and the expectation that these may be aggravated or re-activated in future years during the intense monsoon periods, there is a need to evaluate for thousands of sites whether they are suited for reconstruction. This evaluation should take multi-hazards, such as rockfall, landslides, debris flows and flash floods, into account. Indirect knowledge-based, data-driven or physically-based approaches are not suitable here for several reasons. Physically-based models generally require a large number of parameters for which data are not available. Data-driven, statistical methods depend on historical information, which is less useful after the occurrence of a major event such as an earthquake; moreover, they would lead to unacceptable levels of generalization, as the analysis is based on rather general causal-factor maps. The same holds for indirect knowledge-driven methods. What is required instead is location-specific hazard analysis using a simple method that can be applied by many people at the local level. In this research, a direct scientific method was developed with which local technical staff can easily and quickly assess post-earthquake multi-hazards following a decision-tree approach, using an app on a smartphone or tablet. The method assumes that a central organization, such as the Department of Soil Conservation and Watershed Management, generates spatial information beforehand that is used in the direct assessment at a given location. Pre-earthquake, co-seismic and post-seismic landslide inventories are generated through the interpretation of Google Earth multi-temporal images, using anaglyph methods. Spatial data, such as Digital Elevation Models, land cover maps, and geological maps are

  10. The Determination Method of Extreme Earthquake Disaster Area Based on the Dust Detection Result from GF-4 Data

    NASA Astrophysics Data System (ADS)

    Dou, A.; Ding, L.; Chen, M.; Wang, X.

    2018-04-01

    Remote sensing has played an important role in many earthquake emergencies by rapidly providing information on building damage, road damage, landslides and other disaster effects. Earthquakes in mountainous regions often loosen the slopes and raise dust in the epicentral area, where dust concentrations are considerably higher than in other parts of the disaster area. Based on an analysis of the anomalous spectral characteristics, dust detection methods for medium- and high-resolution satellite imagery are studied in order to determine the extreme earthquake disaster area. The results indicate that the distribution of extreme damage can be derived from the dust detection information in the imagery, which can greatly assist disaster intensity assessment.

  11. Receiver Operating Characteristic curves of the seismo-ionospheric precursors in GIM TEC associated with magnitude greater than 6.0 earthquakes in China during 1998-2013.

    NASA Astrophysics Data System (ADS)

    Huang, C. H.; Chen, Y. I.; Liu, J. Y. G.; Huang, Y. H.

    2014-12-01

    Statistical evidence for Seismo-Ionospheric Precursors (SIPs) is reported by investigating the relationship between the Total Electron Content (TEC) in the Global Ionosphere Map (GIM) and 56 M≥6.0 earthquakes in China during 1998-2013. A median-based method and a z test are employed to detect the overall earthquake signatures. It is found that a reduction of positive signatures and an enhancement of negative signatures appear simultaneously 3-5 days prior to the earthquakes in China. Finally, receiver operating characteristic (ROC) curves are used to measure the power of TEC for predicting M≥6.0 earthquakes in China.
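
    An ROC curve for an alarm score of this kind can be computed directly; the sketch below assumes a per-day precursor score and binary earthquake labels, which are stand-ins for the study's data.

        import numpy as np

        def roc_curve(scores, labels):
            # Sweep the alarm threshold from high to low, accumulating true-
            # and false-positive rates; labels are 1 on earthquake days.
            order = np.argsort(scores)[::-1]
            y = np.asarray(labels, dtype=float)[order]
            tpr = np.cumsum(y) / y.sum()
            fpr = np.cumsum(1.0 - y) / (1.0 - y).sum()
            return fpr, tpr

        scores = np.array([0.9, 0.8, 0.35, 0.3, 0.1])
        labels = np.array([1, 0, 1, 0, 0])
        fpr, tpr = roc_curve(scores, labels)
        print(np.trapz(tpr, fpr))        # area under the ROC curve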

  12. Tokyo Metropolitan Earthquake Preparedness Project - A Progress Report

    NASA Astrophysics Data System (ADS)

    Hayashi, H.

    2010-12-01

    Munich Re once ranked the Tokyo metropolitan region, the capital of Japan, as the most vulnerable area in the world for earthquake disasters, followed by the San Francisco Bay Area, US, and Osaka, Japan. Seismologists also predict that the Tokyo metropolitan region may have at least one near-field earthquake with a probability of 70% within the next 30 years. Given this prediction, the Japanese Government took it seriously, conducted damage estimations, and revealed that, in the worst case scenario, a magnitude 7.3 earthquake striking under heavy winds would kill a total of 11,000 people, and total direct and indirect losses would amount to 112,000,000,000,000 yen (about 1.3 trillion US dollars at 85 yen per dollar). In addition to the mortality and financial losses, a total of 25 million people in four prefectures would be severely impacted by this earthquake. If this earthquake occurs, 300,000 elevators would stop suddenly, confining 12,500 people in them for a long time. Seven million people would come to use the over 20,000 public shelters spread over the impacted area. Over one million temporary housing units would have to be built to accommodate the 4.6 million people who lost their dwellings, and 2.5 million people would relocate outside the damaged area. In short, an earthquake disaster of unprecedented scale is expected, and we must prepare for it. Even though disaster mitigation is undoubtedly the best solution, it is more realistic to assume that the expected earthquake will strike before that work is complete. In other words, we must pursue another solution as well: making the people and the assets in this region more resilient to the Tokyo metropolitan earthquake. This is the question we have been tackling for the last four years. To increase societal resilience to the Tokyo metropolitan earthquake, we adopted a holistic approach that integrates both emergency response and long-term recovery. There are three goals for long-term recovery, which consists of Physical recovery, Economic

  13. Retrospective stress-forecasting of earthquakes

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Crampin, Stuart

    2015-04-01

    Observations of changes in azimuthally varying shear-wave splitting (SWS) above swarms of small earthquakes monitor stress-induced changes to the stress-aligned vertical microcracks pervading the upper crust, lower crust, and uppermost ~400 km of the mantle. (The microcracks are intergranular films of hydrolysed melt in the mantle.) Earthquakes release stress, and an appropriate amount of stress for the relevant magnitude must accumulate before each event. Iceland is on an extension of the Mid-Atlantic Ridge where, uniquely, two transform zones run onshore. These onshore transform zones provide semi-continuous swarms of small earthquakes, making them the only places worldwide where SWS can be routinely monitored. Elsewhere SWS must be monitored above temporally active occasional swarms of small earthquakes, or in infrequent SKS and other teleseismic reflections from the mantle. Observed changes in SWS time-delays are attributed to stress-induced changes in crack aspect-ratios, allowing stress-accumulation and stress-relaxation to be identified. Monitoring SWS in SW Iceland in 1988, stress-accumulation before an impending earthquake was recognised and emails were exchanged between the University of Edinburgh (EU) and the Iceland Meteorological Office (IMO). On 10th November 1988, EU emailed IMO that an M5 earthquake could occur soon on a seismically-active fault plane where seismicity was still continuing following an M5.1 earthquake six months earlier. Three days later, IMO emailed EU that an M5 earthquake had just occurred on the specified fault plane. We suggest this is a successful earthquake stress-forecast, and we refer to the procedure as stress-forecasting earthquakes, as opposed to predicting or forecasting, to emphasise the different formalism. Lack of funds has prevented us from monitoring SWS on Iceland seismograms; however, we have identified similar characteristic behaviour of SWS time-delays above swarms of small earthquakes which have enabled us to

  14. Regional and Local Glacial-Earthquake Patterns in Greenland

    NASA Astrophysics Data System (ADS)

    Olsen, K.; Nettles, M.

    2016-12-01

    Icebergs calved from marine-terminating glaciers currently account for up to half of the 400 Gt of ice lost annually from the Greenland ice sheet (Enderlin et al., 2014). When large capsizing icebergs (~1 Gt of ice) calve, they produce elastic waves that propagate through the solid earth and are observed as teleseismically detectable MSW ~5 glacial earthquakes (e.g., Ekström et al., 2003; Nettles & Ekström, 2010; Tsai & Ekström, 2007; Veitch & Nettles, 2012). The annual number of these events has increased dramatically over the past two decades. We analyze glacial earthquakes from 2011-2013, expanding the glacial-earthquake catalog by 50%. The number of glacial-earthquake solutions now available allows us to investigate regional patterns across Greenland and link earthquake characteristics to changes in ice dynamics at individual glaciers. During the years of our study, Greenland's west coast dominated glacial-earthquake production. Kong Oscar Glacier, Upernavik Isstrøm, and Jakobshavn Isbræ all produced more glacial earthquakes during this time than in preceding years. We link patterns in glacial-earthquake production and cessation to the presence or absence of floating ice tongues at glaciers on both coasts of Greenland. The calving model predicts glacial-earthquake force azimuths oriented perpendicular to the calving front, and comparisons between seismic data and satellite imagery confirm this in most instances. At two glaciers we document force azimuths that have recently changed orientation and confirm that similar changes have occurred in the calving-front geometry. We also document glacial earthquakes at one previously quiescent glacier. Consistent with previous work, we model the glacial-earthquake force-time function as a boxcar with horizontal and vertical force components that vary synchronously. We investigate limitations of this approach and explore improvements that could lead to a more accurate representation of the glacial earthquake source.

  15. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10^-3. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
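
    Adaptive smoothing of this general kind can be sketched as follows; the k-th-neighbour bandwidth with a minimum-distance floor is a plausible reading of the stabilising modification, not the paper's exact scheme. Inputs are (n, 2) arrays of map coordinates in km.

        import numpy as np

        def adaptive_rate(grid_xy, quake_xy, k=2, d_min=5.0):
            # Spread each epicenter with a 2-D Gaussian kernel whose width is
            # its distance to its k-th nearest neighbouring epicenter,
            # floored at d_min km for sparse regions.
            rate = np.zeros(len(grid_xy))
            for q in quake_xy:
                d_all = np.sort(np.linalg.norm(quake_xy - q, axis=1))
                sigma = max(d_all[min(k, len(d_all) - 1)], d_min)
                r2 = ((grid_xy - q) ** 2).sum(axis=1)
                rate += np.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
            return rate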

  16. Transient triggering of near and distant earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Blanpied, M.L.; Beeler, N.M.

    1997-01-01

    We demonstrate qualitatively that frictional instability theory provides a context for understanding how earthquakes may be triggered by transient loads associated with seismic waves from near and distant earthquakes. We assume that earthquake triggering is a stick-slip process and test two hypotheses about the effect of transients on the timing of instabilities using a simple spring-slider model and a rate- and state-dependent friction constitutive law. A critical triggering threshold is implicit in such a model formulation. Our first hypothesis is that transient loads lead to clock advances; i.e., transients hasten the time of earthquakes that would have happened eventually due to constant background loading alone. Modeling results demonstrate that transient loads do lead to clock advances and that the triggered instabilities may occur after the transient has ceased (i.e., triggering may be delayed). These simple "clock-advance" models predict complex relationships between the triggering delay, the clock advance, and the transient characteristics. The triggering delay and the degree of clock advance both depend nonlinearly on when in the earthquake cycle the transient load is applied. This implies that the stress required to bring about failure does not depend linearly on loading time, even when the fault is loaded at a constant rate. The timing of instability also depends nonlinearly on the transient loading rate, with faster rates hastening instability more rapidly. This implies that higher-frequency and/or longer-duration seismic waves should increase the amount of clock advance. These modeling results and simple calculations suggest that near (tens of kilometers) small/moderate earthquakes and remote (thousands of kilometers) earthquakes with magnitudes 2 to 3 units larger may be equally effective at triggering seismicity. Our second hypothesis is that some triggered seismicity represents earthquakes that would not have happened without the transient load (i
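
    The spring-slider experiment can be reproduced schematically with a rate- and state-dependent friction law (aging form) and a transient stressing pulse standing in for a passing wave train; all parameter values below are illustrative, not the study's.

        import numpy as np
        from scipy.integrate import solve_ivp

        a, b, dc = 0.008, 0.012, 1.0e-5      # friction parameters; dc in m
        k, sigma = 1.0e6, 50.0e6             # spring stiffness (Pa/m), normal stress (Pa)
        v_load = 1.0e-9                      # background loading velocity (m/s)

        def transient(t):
            # Transient stressing rate (Pa/s) from the passing waves.
            return 1.0 if 2.0e6 < t < 2.1e6 else 0.0

        def rhs(t, y):
            v, theta = y
            dtheta = 1.0 - v * theta / dc                  # aging evolution law
            dtau = k * (v_load - v) + transient(t)         # spring + transient load
            dv = (dtau - sigma * b * dtheta / theta) * v / (sigma * a)
            return [dv, dtheta]

        def fast_slip(t, y):                 # stop once sliding becomes seismic
            return y[0] - 1.0e-3
        fast_slip.terminal = True

        sol = solve_ivp(rhs, (0.0, 1.0e7), [v_load, dc / v_load], method="LSODA",
                        rtol=1e-8, atol=1e-14, max_step=5.0e4, events=fast_slip)
        print("instability reached at t =", sol.t_events[0], "s")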

  17. Toward a Better Nutritional Aiding in Disasters: Relying on Lessons Learned during the Bam Earthquake.

    PubMed

    Nekouie Moghadam, Mahmoud; Amiresmaieli, Mohammadreza; Hassibi, Mohammad; Doostan, Farideh; Khosravi, Sajad

    2017-08-01

    Examining the various problems that arise in the aftermath of disasters is very important to disaster victims. Managing and coordinating the food supply and its distribution among the victims is one of the most important problems after an earthquake. Therefore, the purpose of this study was to identify problems and experiences in the field of nutritional aiding during an earthquake. This qualitative study was of phenomenological type. Using the purposive sampling method, 10 people who had experienced nutritional aiding during the Bam Earthquake (Iran; 2003) were interviewed. Colaizzi's method of analysis was used to analyze the interview data. The findings of this study identified four main categories and 19 sub-categories of challenges in nutritional aiding during the Bam Earthquake. The main topics included managerial, aiding, infrastructural, and administrative problems. The major problems in nutritional aiding include the lack of prediction and development of a specific program for a suitable nutritional pattern and for nutritional assessment of the victims in critical conditions. Forming specialized teams, educating team members about nutrition, and making use of experts' knowledge are the most important steps to resolve these problems in critical conditions; these measures are the duties of the relevant authorities. Nekouie Moghadam M, Amiresmaieli M, Hassibi M, Doostan F, Khosravi S. Toward a better nutritional aiding in disasters: relying on lessons learned during the Bam Earthquake. Prehosp Disaster Med. 2017;32(4):382-386.

  18. Postseismic deformation and stress changes following the 1819 Rann of Kachchh, India earthquake: Was the 2001 Bhuj earthquake a triggered event?

    USGS Publications Warehouse

    To, A.; Burgmann, R.; Pollitz, F.

    2004-01-01

    The 2001 Mw 7.6 Bhuj earthquake occurred in an intraplate region with rather unusual active seismicity, including an earlier major earthquake, the 1819 Rann of Kachchh earthquake (M7.7). We examine whether static coseismic and transient postseismic deformation following the 1819 earthquake contributed to the enhanced seismicity in the region and the occurrence of the 2001 Bhuj earthquake, ~100 km away and almost two centuries later. Based on the Indian shield setting, the great rupture depth of the 2001 event and the lack of significant early postseismic deformation measured following the 2001 event, we infer that little viscous relaxation occurs in the lower crust and choose an upper mantle effective viscosity of 10^19 Pa s. The predicted Coulomb failure stress change (ΔCFS) on the rupture plane of the 2001 event increased by more than 0.1 bar at 20 km depth, which is a small but possibly significant amount. Stress change from the 1819 event may have also affected the occurrence of other historic earthquakes in this region. We also evaluate the postseismic deformation and ΔCFS in this region due to the 2001 event. Positive ΔCFS from the 2001 event occur to the NW and SE of the Bhuj earthquake rupture. Copyright 2004 by the American Geophysical Union.

  19. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    The proposed process provides a more consistent model of gradual strain accumulation and non-uniform release through large earthquakes, and can be applied in the evaluation of seismic risk. The cumulative seismic energy released by major earthquakes over the 110-year period from 1897 to 2007 is calculated and plotted for each of the zones, giving a characteristic curve for each zone. Each curve is irregular, reflecting occasional high activity. The maximum earthquake energy available at a particular time in a given area is given by S. The difference between the theoretical upper limit S and the cumulative energy released up to that time gives the maximum magnitude of an earthquake which can occur in the future. The energy blocked in the three source regions, available as a supply for potential earthquakes in due course of time, is 1.35×10^17 J, 4.25×10^17 J and 0.12×10^17 J for source zones 1, 2 and 3, respectively. The predicted maximum magnitude (mmax) obtained for the source zones AYZ, HZ, and SPZ is 8.2, 8.6, and 8.4, respectively. This study is also consistent with results previously predicted by other workers.
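
    Assuming the classical Gutenberg-Richter energy-magnitude relation log10 E = 1.5 M + 4.8, the blocked energies quoted above convert to maximum magnitudes as sketched below; this reproduces the first two quoted values, so the constants used for the third zone presumably differ in the authors' calibration and the relation here is only illustrative.

        import numpy as np

        def energy_joules(m):
            # Gutenberg-Richter energy-magnitude relation, log10 E = 1.5 M + 4.8.
            return 10.0 ** (1.5 * m + 4.8)

        def max_magnitude(blocked_energy):
            # Largest single event the stored (blocked) energy could supply.
            return (np.log10(blocked_energy) - 4.8) / 1.5

        for zone, e_blocked in (("zone 1", 1.35e17), ("zone 2", 4.25e17),
                                ("zone 3", 0.12e17)):
            print(zone, round(max_magnitude(e_blocked), 1))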

  20. e-Science on Earthquake Disaster Mitigation by EUAsiaGrid

    NASA Astrophysics Data System (ADS)

    Yen, Eric; Lin, Simon; Chen, Hsin-Yen; Chao, Li; Huang, Bor-Shoh; Liang, Wen-Tzong

    2010-05-01

    Although earthquakes are not predictable at this moment, with the aid of accurate seismic wave propagation analysis we can simulate the potential hazards at all distances from possible fault sources by understanding the source rupture process during large earthquakes. With the integration of a strong ground-motion sensor network, an earthquake data center and seismic wave propagation analysis over the gLite e-Science infrastructure, we can gain much better knowledge of the impact and vulnerability of potential earthquake hazards. This application also demonstrates the e-Science way to investigate unknown earth structure. Regional integration of earthquake sensor networks can aid fast event reporting and accurate event data collection. Federation of earthquake data centers entails the consolidation and sharing of seismological and geological knowledge, and building capability in seismic wave propagation analysis implies the predictability of potential hazard impacts. Within the gLite infrastructure and the EUAsiaGrid collaboration framework, earth scientists from Taiwan, Vietnam, the Philippines and Thailand are working together to alleviate potential seismic threats by making use of Grid technologies, and to support seismology research by e-Science. A cross-continental e-infrastructure, based on EGEE and EUAsiaGrid, has been established for seismic wave forward simulation and risk estimation. Both the computing challenge of seismic wave analysis among five European and Asian partners and the data challenge of data center federation have been exercised and verified. A Seismogram-on-Demand service has also been developed for the automatic generation of seismograms at any sensor point for a specific epicenter. To ease access to all the services based on users' workflows and to retain maximal flexibility, a Seismology Science Gateway integrating data, computation, workflow, services and user communities will be implemented based on typical use cases. In the future, extension of the

  1. Monitoring of ULF (ultra-low-frequency) Geomagnetic Variations Associated with Earthquakes

    PubMed Central

    Hayakawa, Masashi; Hattori, Katsumi; Ohta, Kenji

    2007-01-01

    ULF (ultra-low-frequency) electromagnetic emission has recently been recognized as one of the most promising candidates for short-term earthquake prediction. This paper reviews previous convincing evidence of the presence of ULF emissions before a few large earthquakes. We then present our ULF monitoring network in the Tokyo area, describing our ULF magnetic sensors, and finally present a few of the latest results on seismogenic electromagnetic emissions for recent large earthquakes obtained with sophisticated signal processing.

  2. Stress Drop and Depth Controls on Ground Motion From Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Rubinstein, J. L.; Terra, F. M.; Hanks, T. C.; Herrmann, R. B.

    2015-12-01

    Induced earthquakes in the central United States pose a risk to local populations, but there is not yet agreement on how to portray their hazard. A large source of uncertainty in the hazard arises from ground motion prediction, which depends on the magnitude and distance of the causative earthquake. However, ground motion models for induced earthquakes may be very different from models previously developed for either the eastern or western United States. A key question is whether ground motions from induced earthquakes are similar to those from natural earthquakes, yet there is little history of natural events in the same region with which to compare the induced ground motions. To address these problems, we explore how earthquake source properties, such as stress drop or depth, affect the recorded ground motion of induced earthquakes. Typically, because stress drop increases with depth, ground motion prediction equations model shallower events as having smaller ground motions at the same absolute hypocentral distance to the station. Induced earthquakes tend to occur at shallower depths than natural eastern US earthquakes and may also exhibit lower stress drops, which raises the question of how these two parameters interact to control ground motion. Can the ground motions of induced earthquakes simply be understood by scaling our known source-ground motion relations to account for the shallow depth and potentially smaller stress drops of these induced earthquakes, or is there an inherently different mechanism at play? We study peak ground-motion velocity (PGV) and acceleration (PGA) from induced earthquakes in Oklahoma and Kansas, recorded by USGS networks at source-station distances of less than 20 km, in order to model the source effects. We compare these records to those in both the NGA-West2 database (primarily from California) as well as NGA-East, which covers the central and eastern United States and Canada

  3. Research on response spectrum of dam based on scenario earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoliang; Zhang, Yushan

    2017-10-01

    Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. Firstly, the potential source zone contributing most to the site hazard is identified from the results of probabilistic seismic hazard analysis (PSHA). Secondly, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and the historical earthquakes of that potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum obtained with the scenario-earthquake method is lower than the probability-consistent response spectrum obtained by the PSHA method. The empirical analysis shows that the scenario-earthquake response spectrum reflects both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is easy to accept and provides a basis for the seismic design of hydraulic engineering works.

  4. Earthquake location in island arcs

    USGS Publications Warehouse

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  5. Prompt identification of tsunamigenic earthquakes from 3-component seismic data

    NASA Astrophysics Data System (ADS)

    Kundu, Ajit; Bhadauria, Y. S.; Basu, S.; Mukhopadhyay, S.

    2016-10-01

    An Artificial Neural Network (ANN) based algorithm for prompt identification of shallow-focus (depth < 70 km) tsunamigenic earthquakes at regional distances is proposed in the paper. Promptness here refers to decision making as fast as 5 min after the arrival of the LR phase in the seismogram. The root mean square amplitudes of seismic phases recorded by a single 3-component station are used as inputs, along with location and magnitude. The trained ANN was found to categorize 100% of the new earthquakes correctly as tsunamigenic or non-tsunamigenic. The proposed method is corroborated by an alternative technique for estimating earthquake category, which involves computing the focal parameters, estimating the water volume displaced at the source and, from these, deciding the category of the earthquake; this second method identified 95% of the new earthquakes correctly. Both methods were tested using three-component broadband seismic data recorded at the PALK (Pallekele, Sri Lanka) station, provided by IRIS, for earthquakes of magnitude 6 and above originating from the Sumatra region. The fair agreement between the methods indicates that a prompt alert system could be developed based on the proposed method. The method would prove extremely useful for regions that are not adequately instrumented for azimuthal coverage.
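
    A classifier of this general shape is straightforward to set up with scikit-learn; the features and labels below are synthetic stand-ins, since the study's amplitude data are not reproduced here, and the network architecture is not the authors'.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical feature matrix: per event, RMS amplitudes of P, S and
        # LR phases from one 3-component station, plus location and magnitude.
        rng = np.random.default_rng(1)
        X = rng.random((120, 6))
        y = (X[:, 5] > 0.5).astype(int)          # stand-in labels: 1 = tsunamigenic

        model = make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(10,),
                                            max_iter=2000, random_state=0))
        model.fit(X, y)
        print(model.predict(X[:5]))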

  6. Nurse willingness to report for work in the event of an earthquake in Israel.

    PubMed

    Ben Natan, Merav; Nigel, Simon; Yevdayev, Innush; Qadan, Mohamad; Dudkiewicz, Mickey

    2014-10-01

    To examine variables affecting nurse willingness to report for work in the event of an earthquake in Israel and whether this can be predicted through the Theory of Self-Efficacy. The nursing profession has a major role in preparing for earthquakes. Nurse willingness to report to work in the event of an earthquake has never before been examined. Self-administered questionnaires were distributed among a convenience sample of 400 nurses and nursing students in Israel during January-April 2012. High willingness to report to work in the event of an earthquake was declared by 57% of respondents. High perceived self-efficacy, level of knowledge and experience predict willingness to report to work in the event of an earthquake. Multidisciplinary collaboration and support were also cited as a meaningful factor. Perceived self-efficacy, level of knowledge, experience and the support of a multidisciplinary staff affect nurse willingness to report to work in the event of an earthquake. Nurse managers can identify factors that increase nurse willingness to report to work in the event of an earthquake and consequently develop strategies for more efficient management of their nursing workforce. © 2013 John Wiley & Sons Ltd.

  7. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set the calculation parameters needed to obtain loci with satisfactory completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from the nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the locus segments are then calculated with the minimum traveltime tree ray-tracing algorithm by repeatedly taking as an initial point the minimum-residual reference point among those not yet traced. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  8. Assessment of precast beam-column using capacity demand response spectrum subject to design basis earthquake and maximum considered earthquake

    NASA Astrophysics Data System (ADS)

    Ghani, Kay Dora Abd.; Tukiar, Mohd Azuan; Hamid, Nor Hayati Abdul

    2017-08-01

    Malaysia is surrounded by the tectonic features of the Sumatera area, which consists of two seismically active inter-plate boundaries: the Indo-Australian and Eurasian Plates on the west and the Philippine Plate on the east. Hence, Malaysia experiences tremors from distant earthquakes occurring in Banda Aceh, Nias Island, Padang and other parts of Sumatera, Indonesia. In order to predict the safety of precast buildings in Malaysia under near-field ground motion, response spectrum analysis can be used to deal with future earthquakes whose specific nature is unknown. This paper aims to develop capacity-demand response spectra subject to the Design Basis Earthquake (DBE) and Maximum Considered Earthquake (MCE) in order to assess the performance of precast beam-column joints. From the capacity-demand response spectrum analysis, it can be concluded that the precast beam-column joints would not survive earthquake excitation with a surface-wave magnitude of more than 5.5 (Type 1 spectra). This means that a beam-column joint designed using the current code of practice (BS8110) would be severely damaged under high earthquake excitation. The capacity-demand response spectrum analysis also shows that the precast beam-column joints in the prototype studied would be severely damaged when subjected to the Maximum Considered Earthquake (MCE) with PGA = 0.22g and a surface-wave magnitude of more than 5.5.
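
    The Type 1 demand spectrum referred to above can be generated from the Eurocode 8 elastic spectrum formulas; the soil parameters below are the ground-type-B values and eta = 1 (5% damping), which are assumptions of this sketch rather than the study's choices.

        import numpy as np

        def ec8_type1_sa(T, ag, S=1.2, tb=0.15, tc=0.5, td=2.0, eta=1.0):
            # Eurocode 8 Type 1 horizontal elastic response spectrum.
            if T <= tb:
                return ag * S * (1.0 + T / tb * (eta * 2.5 - 1.0))
            if T <= tc:
                return ag * S * eta * 2.5
            if T <= td:
                return ag * S * eta * 2.5 * tc / T
            return ag * S * eta * 2.5 * tc * td / T ** 2

        periods = np.linspace(0.01, 4.0, 200)
        demand = [ec8_type1_sa(T, ag=0.22 * 9.81) for T in periods]   # MCE, PGA = 0.22 g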

  9. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of
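
    The core likelihood computation for such gridded rate forecasts is compact; the sketch below scores two hypothetical forecast vectors against observed bin counts under the Poisson assumption, with significance in practice judged against catalogs simulated from each forecast.

        import numpy as np
        from scipy.stats import poisson

        def log_likelihood(rates, counts):
            # Joint Poisson log-likelihood of observed earthquake counts given
            # forecast rates, one entry per space-magnitude-time bin.
            return float(poisson.logpmf(counts, rates).sum())

        def compare(rates_a, rates_b, counts):
            # Positive values favour forecast A.
            return log_likelihood(rates_a, counts) - log_likelihood(rates_b, counts)

        counts = np.array([0, 2, 1, 0, 3])
        print(compare(np.array([0.1, 1.5, 0.9, 0.2, 2.5]),
                      np.array([0.5, 0.5, 0.5, 0.5, 0.5]), counts))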

  10. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

    Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. As the choices for the ionospheric baseline are generally different between previous works, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here; the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI and the TIE-GCM generally underestimating, approximately describing and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as the thermospheric coupling.
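
    The first baseline, the running 30-day median, can be computed as below for hourly foF2 data; the hour-by-hour arrangement and threshold choice are assumptions of this sketch.

        import numpy as np

        def running_median_baseline(fof2_hourly, days=30):
            # Baseline for each hour of day: the median of the same hour over
            # the preceding `days` days (the running 30-day median above).
            x = np.asarray(fof2_hourly, dtype=float).reshape(-1, 24)   # day x hour
            base = np.full_like(x, np.nan)
            for d in range(days, x.shape[0]):
                base[d] = np.nanmedian(x[d - days:d], axis=0)
            return base.ravel()

        # A perturbation can then be flagged where the observation departs
        # from this baseline by more than a chosen percentage threshold.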

  11. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6Vs except in the depth range of 530 to 600 km. This is consistent with the depth

  12. Prediction monitoring and evaluation program; a progress report

    USGS Publications Warehouse

    Hunter, R.N.; Derr, J.S.

    1978-01-01

    As part of an attempt to separate useful predictions from inaccurate guesses, we have kept score on earthquake predictions from all sources brought to our attention over the past year and a half. The program was outlined in "Earthquake Prediction: Fact and Fallacy" by Roger N. Hunter (Earthquake Information Bulletin, vol. 8, no. 5, September-October 1976, p. 24-25). The program attracted a great deal of public attention, and, as a result, our files now contain over 2500 predictions from more than 230 different people.

  13. Probabilistic liquefaction hazard analysis at liquefied sites of 1956 Dunaharaszti earthquake, in Hungary

    NASA Astrophysics Data System (ADS)

    Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor

    2017-04-01

    Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety, or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA: although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. By contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of the liquefaction hazard by taking into account the joint probability distribution of the PGA and magnitude of earthquake scenarios, both of which are key inputs to the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method using a performance-based earthquake engineering (PBEE) framework; its results are a direct estimate of the return period of liquefaction and liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the Kramer and Mayfield procedure to compute the conditional probability; however, there is no professional consensus about its applicability. We have therefore included not only Cetin's method but also the SPT-based procedure of Idriss and Boulanger (2012) and the CPT-based procedure of Boulanger and Idriss (2014) in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, Hungary. Its epicenter was located
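
    The PBEE combination at the heart of this approach can be sketched generically: sum the conditional probability of liquefaction over the annual rates of (PGA, magnitude) scenarios from the disaggregation. The conditional-probability model below is a made-up placeholder, not Cetin's or the Idriss-Boulanger relations.

        import numpy as np

        def liquefaction_return_period(disagg, p_liq):
            # disagg: dict mapping (pga, magnitude) -> annual rate of that
            # scenario, from the PSHA disaggregation; p_liq: conditional
            # probability model, e.g. an SPT- or CPT-based relation.
            rate = sum(r * p_liq(pga, m) for (pga, m), r in disagg.items())
            return 1.0 / rate if rate > 0.0 else np.inf

        # Toy example with an invented conditional-probability model:
        disagg = {(0.1, 5.5): 0.02, (0.2, 6.0): 0.008, (0.4, 6.5): 0.002}
        p_liq = lambda pga, m: min(1.0, pga * (m / 6.0))
        print(liquefaction_return_period(disagg, p_liq), "years")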

  14. Simulating subduction zone earthquakes using discrete element method: a window into elusive source processes

    NASA Astrophysics Data System (ADS)

    Blank, D. G.; Morgan, J.

    2017-12-01

    Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrates both their complexity and our limited understanding of fault processes and their controls. Numerical modeling provides a useful tool to simulate earthquakes and related slip events, and to make direct observations and correlations among the properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and of what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which, in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.
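
    As a rough illustration of how such a slip-event catalog might be assembled from simulation output, the sketch below thresholds a synthetic slip-rate history into discrete events and records each event's onset, duration, and cumulative slip. The time series, threshold, and units are invented for illustration; this is not the authors' DEM code.

    ```python
    # Sketch: building an event catalog from a fault's slip-rate history by
    # thresholding, separating stick-slip bursts from background creep.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.1                                     # model time step
    rate = rng.exponential(0.01, 5000)           # background creep (synthetic)
    rate[1200:1260] += 5.0                       # injected "stick-slip" burst
    rate[3400:3430] += 2.0                       # a second, smaller burst

    threshold = 1.0                              # slip-rate level defining an event
    active = rate > threshold

    events = []
    i = 0
    while i < len(active):
        if active[i]:
            j = i
            while j < len(active) and active[j]:
                j += 1
            events.append({"start": i * dt,
                           "duration": (j - i) * dt,
                           "slip": rate[i:j].sum() * dt})
            i = j
        else:
            i += 1

    for ev in events:
        print(ev)
    ```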

  15. A comparison of earthquake backprojection imaging methods for dense local arrays

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we
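
    A schematic of the delay-and-stack operation with the four pre-processing choices compared above, evaluated for a single candidate grid point. The station waveforms, travel-time shifts, and characteristic functions below are simplified synthetic placeholders, not the AIDA data or the study's processing chain.

    ```python
    # Sketch: delay-and-stack backprojection for one candidate source point,
    # comparing raw, envelope, STA/LTA, and kurtosis pre-processing.
    import numpy as np
    from scipy.signal import hilbert

    def envelope(x):
        return np.abs(hilbert(x))

    def sta_lta(x, ns=5, nl=50):
        e = x ** 2
        sta = np.convolve(e, np.ones(ns) / ns, mode="same")
        lta = np.convolve(e, np.ones(nl) / nl, mode="same") + 1e-12
        return sta / lta

    def kurtosis_cf(x, n=50):
        """Running-kurtosis characteristic function."""
        out = np.zeros_like(x)
        for i in range(n, len(x)):
            w = x[i - n:i]
            s = w.std() + 1e-12
            out[i] = np.mean(((w - w.mean()) / s) ** 4)
        return out

    def backproject(traces, shifts, transform=lambda x: x):
        """Shift each transformed trace by its predicted travel time and stack."""
        stack = np.zeros(traces.shape[1])
        for tr, s in zip(traces, shifts):
            stack += np.roll(transform(tr), -s)
        return stack / len(traces)

    rng = np.random.default_rng(1)
    nsta, npts = 8, 600
    traces = rng.normal(0, 1, (nsta, npts))
    shifts = rng.integers(40, 80, nsta)          # predicted travel times (samples)
    for tr, s in zip(traces, shifts):            # bury a common wavelet in the noise
        tr[s:s + 20] += 1.5 * np.hanning(20)

    for name, f in [("raw", lambda x: x), ("envelope", envelope),
                    ("sta/lta", sta_lta), ("kurtosis", kurtosis_cf)]:
        peak = backproject(traces, shifts, f).max()
        print(f"{name:9s} stack peak: {peak:.2f}")
    ```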

  16. Predictors of psychological resilience amongst medical students following major earthquakes.

    PubMed

    Carter, Frances; Bell, Caroline; Ali, Anthony; McKenzie, Janice; Boden, Joseph M; Wilkinson, Timothy

    2016-05-06

    To identify predictors of self-reported psychological resilience amongst medical students following major earthquakes in Canterbury in 2010 and 2011. Two hundred and fifty-three medical students from the Christchurch campus, University of Otago, were invited to participate in an electronic survey seven months following the most severe earthquake. Students completed the Connor-Davidson Resilience Scale, the Depression, Anxiety and Stress Scale, the Post-traumatic Stress Disorder Checklist, the Work and Adjustment Scale, and the Eysenck Personality Questionnaire. Likert scales and other questions were also used to assess a range of variables including demographic and historical variables (eg, self-rated resilience prior to the earthquakes), plus the impacts of the earthquakes. The response rate was 78%. Univariate analyses identified multiple variables that were significantly associated with higher resilience. Multiple linear regression analyses produced a fitted model that was able to explain 35% of the variance in resilience scores. The best predictors of higher resilience were: retrospectively-rated personality prior to the earthquakes (higher extroversion and lower neuroticism); higher self-rated resilience prior to the earthquakes; not being exposed to the most severe earthquake; and less psychological distress following the earthquakes. Psychological resilience amongst medical students following major earthquakes could be predicted to a moderate extent.

  17. Earthquake Building Damage Mapping Based on Feature Analyzing Method from Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    An, L.; Zhang, J.; Gong, L.

    2018-04-01

    Playing an important role in gathering information on damage to social infrastructure, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method, comparing post-seismic to pre-seismic data, has become common. However, multi-temporal SAR processing is not always achievable. Developing a method for building damage detection that uses post-seismic data only is therefore of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based feature-analysing classification method for building damage recognition.

  18. Gas and Dust Phenomena of Mega-earthquakes and the Cause

    NASA Astrophysics Data System (ADS)

    Yue, Z.

    2013-12-01

    A mega-earthquake suddenly releases a large to extremely large amount of kinetic energy within a few tens to two hundred seconds, over distances of ten to hundreds of kilometers in the Earth's crust and on the ground surface. It also generates seismic waves that can be received globally and co-seismic ground damage such as co-seismic ruptures and landslides. However, such vast, dramatic, and devastating kinetic actions in the Earth's crustal rocks and ground soils cannot be known or predicted a few weeks, days, hours, or minutes before they happen. Although seismologists can develop and use seismometers to report the locations and magnitudes of earthquakes within minutes of their occurrence, they cannot predict earthquakes at present. Therefore, damaging earthquakes have caused and will continue to cause huge disasters, fatalities, and injuries. This problem may indicate that it is necessary to re-examine the cause of mega-earthquakes in addition to the conventional cause of active fault elastic rebounding. In the last ten years, many mega-earthquakes occurred in China and around the Pacific Ocean, causing many casualties and devastating environmental disasters. The author gives a brief review of the impacts of the mega-earthquakes of recent years. He then presents many gas- and dust-related phenomena associated with the sudden occurrences of these mega-earthquakes, including the 2001 Kunlunshan Earthquake M8.1, the 2008 Wenchuan Earthquake M8.0, and the 2010 Yushu Earthquake M7.1 in China, and the 2010 Haiti Earthquake M7.0, the 2010 Mexicali Earthquake M7.2, the 2010 Chile Earthquake M8.8, the 2011 Christchurch Earthquake M6.3, and the 2011 Japan Earthquake M9.0 around the Pacific Ocean. He discusses the cause of these gas- and dust-related phenomena, and uses these phenomena and their common cause to argue that the earthquakes were caused by the rapid migration and expansion of highly compressed and

  19. Seismic, structural, and individual factors associated with earthquake related injury

    PubMed Central

    Peek-Asa, C; Ramirez, M; Seligson, H; Shoaf, K

    2003-01-01

    Background: Earthquakes cause thousands of deaths worldwide every year, and systematic study of the causes of these deaths can lead to their prevention. Few studies have examined how multiple types of risk factors are related to physical injury during an earthquake. Methods: A population based case-control study was conducted to examine how individual characteristics, building characteristics, and seismic features of the 1994 Northridge, California, earthquake contributed to physical injury. Cases included fatal and hospital-admitted injuries caused by the earthquake. Controls were drawn from a population based phone survey of county residents. Cases were individually matched to two sets of controls: one matched by age and gender and one matched by location at the time of the earthquake. Results: Individuals over age 65 had 2.9 times the risk of injury as younger people (95% confidence interval (CI) 1.2 to 7.4) and women had a 2.4 times greater risk than men (95% CI 1.2 to 5.1). Location in multiple unit residential and commercial structures each led to increased injury risk compared with single unit residential structures, but the exact estimate varied depending on the control group used. With every increase in ground motion of 10%g, injury risk increased 2.2 times (95% CI 1.6 to 3.3). Conclusions: Controlling for other factors, it was found that individual, building, and seismic characteristics were independently predictive of increased injury risk. Prevention and preparedness efforts should focus on each of these as potential points of intervention. PMID:12642562

  20. Shaking intensity from injection-induced versus tectonic earthquakes in the central-eastern United States

    USGS Publications Warehouse

    Hough, Susan E.

    2015-01-01

    Although instrumental recordings of earthquakes in the central and eastern United States (CEUS) remain sparse, the U.S. Geological Survey's "Did you feel it?" (DYFI) system now provides excellent characterization of shaking intensities caused by induced and tectonic earthquakes. Seventeen CEUS events between 2013 and 2015 are considered. It is shown that for 15 events, observed intensities at epicentral distances greater than ≈ 10 km are lower than expected given a published intensity-prediction equation for the region. Using simple published relations among intensity, magnitude, and stress drop, the results suggest that 15 of the 17 events have low stress drop. For those 15 events, intensities within ≈ 10-km epicentral distance are closer to predicted values, which can be explained as a consequence of relatively shallow source depths. The results suggest that those 15 events, most of which occurred in areas where induced earthquakes have occurred previously, were likely induced. Although moderate injection-induced earthquakes in the central and eastern United States will be felt widely because of low regional attenuation, the damage from shallow earthquakes induced by injection will be more localized to event epicenters than shaking from tectonic earthquakes, which tend to be somewhat deeper. Within approximately 10 km of the epicenter, intensities are generally commensurate with predicted levels expected for the event magnitude.

  1. Earthquake ground motion simulation at Zoser pyramid using the stochastic method: A step toward the preservation of an ancient Egyptian heritage

    NASA Astrophysics Data System (ADS)

    Khalil, Amin E.; Abdel Hafiez, H. E.; Girgis, Milad; Taha, M. A.

    2017-06-01

    Strong ground shaking during earthquakes can greatly affect ancient monuments and thereby demolish human heritage. On October 12th, 1992, a moderate earthquake (Ms = 5.8) struck the greater Cairo area, causing widespread damage. Unfortunately, the focus of that earthquake was located about 14 km to the south of the Zoser pyramid. After the earthquake, the Egyptian Supreme Council of Antiquities issued an alarm that the Zoser pyramid had partially collapsed, and international and national efforts were exerted to restore this important human heritage, built about 4000 years ago. Engineering and geophysical work is thus needed for the restoration process. The definition of the strong-motion parameters is one of the required studies, since a seismically active zone is present in the near vicinity. The present study adopted the stochastic method to determine the peak ground motion (acceleration, velocity, and displacement) for the three largest earthquakes in Egypt's seismological history: the Shedwan earthquake with magnitude Ms = 6.9, the Aqaba earthquake with magnitude Mw = 7.2, and the Cairo (Dahshour) earthquake with magnitude Ms = 5.8. The former two major earthquakes took place a few hundred kilometers away. It is logical to expect the predominant effects to come from the epicentral location of the Cairo earthquake; however, the authors also wanted to test the long-period effects expected from the two large, distant earthquakes under consideration. In addition, the dynamic site response was studied using the horizontal-to-vertical spectral ratio (HVSR) technique. HVSR can successfully provide information about the fundamental frequency; however, its amplification estimates are not widely accepted. The results, represented as either peak ground motion parameters or response spectra, indicate that the effects from the Cairo earthquake epicenter are the largest for all periods considered in the present study. The level of strong motion as
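
    For orientation, a minimal sketch of the stochastic simulation idea: shape the phase of windowed Gaussian noise to an omega-squared (Brune) target spectrum with simple path and site attenuation, then read off the peak acceleration. All parameter values below are illustrative, not those adopted for the Egyptian events.

    ```python
    # Sketch of a Boore-type stochastic ground-motion simulation (one realization).
    import numpy as np

    def target_accel_spectrum(f, m0, stress_drop, beta, rho, r_hyp, q, kappa):
        """Brune source acceleration spectrum times simple path/site terms (SI units)."""
        r_src = (7.0 / 16.0 * m0 / stress_drop) ** (1.0 / 3.0)  # source radius (m)
        fc = 2.34 * beta / (2.0 * np.pi * r_src)                # corner frequency (Hz)
        # 0.55: radiation pattern, 2.0: free surface, 0.71: component partition
        c = 0.55 * 2.0 * 0.71 / (4.0 * np.pi * rho * beta ** 3 * r_hyp)
        source = c * m0 * (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
        return source * np.exp(-np.pi * f * r_hyp / (q * beta)) * np.exp(-np.pi * kappa * f)

    n, dt = 8192, 0.01
    t = np.arange(n) * dt
    rng = np.random.default_rng(2)
    noise = rng.normal(0.0, 1.0, n) * (t / 8.0) * np.exp(1.0 - t / 8.0)  # shaped window

    freq = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(noise)
    spec[1:] /= np.abs(spec[1:])                 # keep only the (windowed) phase
    spec[1:] *= target_accel_spectrum(freq[1:], m0=1e18, stress_drop=3e6,
                                      beta=3500.0, rho=2700.0, r_hyp=20e3,
                                      q=600.0, kappa=0.03)
    spec[0] = 0.0
    acc = np.fft.irfft(spec, n) / dt             # time series matching the target spectrum
    print(f"simulated PGA ~ {np.abs(acc).max():.3f} m/s^2")
    ```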

  2. Gravitational body forces focus North American intraplate earthquakes

    USGS Publications Warehouse

    Levandowski, William Brower; Zellman, Mark; Briggs, Richard

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard.

  3. Gravitational body forces focus North American intraplate earthquakes

    PubMed Central

    Levandowski, Will; Zellman, Mark; Briggs, Rich

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard. PMID:28211459

  4. Geoethical suggestions for reducing risk of next (not only strong) earthquakes

    NASA Astrophysics Data System (ADS)

    Nemec, Vaclav

    2013-04-01

    Three relatively recent examples of earthquakes can be used as a background for bringing geoethical views into any prediction accompanied by a risk analysis. The L'Aquila earthquake (Italy, 2009): L'Aquila was largely destroyed by earthquakes in 1315, 1319, 1452, 1461, 1501, 1646, 1703 (until that time altogether about 3000 victims) and 1786 (about 6000 victims of this event alone). The city was rebuilt and remained stable until October 2008, when tremors began again. From January 1 through April 5, 2009, an additional 304 tremors were reported. When, after measuring increased levels of radon emitted from the ground, a local citizen (who had worked for many years for the Italian National Institute of Astrophysics) predicted a major earthquake on Italian television, he was accused of being alarmist. Italy's National Commission for Prediction and Prevention of Major Risks met in L'Aquila for one hour on March 31, 2009, without really evaluating and characterising the risks that were present. On April 6, a magnitude 6.3 earthquake struck L'Aquila and nearby towns, killing 309 people and injuring more than 1,500. The quake also destroyed roughly 20,000 buildings, temporarily displacing another 65,000 people. In July 2010, prosecutor Fabio Picuti charged the Commission members with manslaughter and negligence for failing to warn the public of the impending risk. Many international organizations joined the chorus of criticism, wrongly interpreting the first-stage accusation and sentence as resting on the impossibility of predicting earthquakes. - The Eyjafjallajokull volcano eruption (Iceland, 2010) is a reminder that in our globalized, interconnected world, because of the increased sensitivity of new technology, even a relatively small natural disaster may cause an unexpected range of problems. - The earthquake and tsunami (Japan, 2011): the most powerful earthquake known to have hit Japan struck on March 11. Whereas the earthquake proper, with its magnitude of 9.0, caused a minimum of

  5. Strong ground motion of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Kunugi, T.; Suzuki, W.; Kubo, H.; Morikawa, N.; Fujiwara, H.

    2016-12-01

    The 2016 Kumamoto earthquake sequence comprises an Mw 6.1 event that occurred in the Kumamoto region at 21:26 on April 14 and an Mw 7.1 event that occurred 28 hours later, at 1:25 on April 16, 2016 (JST). These earthquakes are considered to have ruptured mainly the Hinagu fault zone (Mw 6.1 event) and the Futagawa fault zone (Mw 7.1 event), for which the Headquarters for Earthquake Research Promotion had performed long-term evaluation as well as seismic hazard assessment prior to the 2016 Kumamoto earthquake. Strong shaking with seismic intensity 7 on the JMA scale was observed four times in total: at Mashiki town for the Mw 6.1 and Mw 7.1 events, at Nishihara village for the Mw 7.1 event, and at NIED/KiK-net Mashiki (KMMH16) for the Mw 7.1 event. KiK-net Mashiki (KMMH16) recorded peak ground acceleration of more than 1000 cm/s/s, and Nishihara village recorded peak ground velocity of more than 250 cm/s. Ground motions were observed over a wider area for the Mw 7.1 event than for the Mw 6.1 event. Peak ground accelerations and peak ground velocities at K-NET/KiK-net stations are consistent with the ground motion prediction equations of Si and Midorikawa (1999). Peak ground velocities at distances greater than 200 km attenuate slowly, which can be attributed to the large Love wave with a dominant period around 10 seconds. The 5%-damped pseudo-spectral velocity at Mashiki town shows a peak at periods of 1-2 s that exceeds the ground motion response of JR Takatori in the 1995 Kobe earthquake and of Kawaguchi town in the 2004 Chuetsu earthquake. The 5%-damped pseudo-spectral velocity at Nishihara village shows a 350 cm/s peak at periods of 3-4 s, similar to several stations in the Kathmandu basin reported by Takai et al. (2016) for the 2015 Gorkha earthquake in Nepal. Ground motions at several stations in Oita exceed the ground motion prediction equations due to an earthquake induced by the Mw 7.1 event. Peak ground acceleration at K-NET Yufuin (OIT009) records 90 cm/s/s for the Mw 7

  6. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
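
    A toy version of the fingerprint-and-hash idea, not the published FAST implementation: binarize spectral features of sliding windows, bucket them with locality-sensitive hashing so that near-duplicate windows are likely to collide, and then verify candidate pairs on the raw waveforms. All data below are synthetic, and the band sizes and thresholds are arbitrary choices for the demonstration.

    ```python
    # Schematic fingerprint-and-similarity-search detector on synthetic data.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(3)
    data = rng.normal(0.0, 1.0, 20000)
    wavelet = np.sin(np.linspace(0.0, 12.0 * np.pi, 300)) * np.hanning(300)
    for onset in (4000, 12500):                  # two repeats of the same event
        data[onset:onset + 300] += 3.5 * wavelet

    win, step = 400, 100

    def fingerprint(x):
        """Binary fingerprint: signs of successive magnitude-spectrum differences."""
        mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))[:64]
        return (np.diff(mag) > 0).astype(np.uint8)

    starts = range(0, len(data) - win, step)
    fps = {s: fingerprint(data[s:s + win]) for s in starts}

    buckets = defaultdict(list)                  # LSH: identical short bands collide
    for s, fp in fps.items():
        for b in range(0, 60, 4):                # fifteen 4-bit bands per fingerprint
            buckets[(b, fp[b:b + 4].tobytes())].append(s)

    candidates = {(i, j) for group in buckets.values()
                  for i in group for j in group if i < j and j - i >= win}

    def similarity(i, j):                        # verify candidates on raw waveforms
        a, b = data[i:i + win], data[j:j + win]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    detections = [(i, j) for i, j in sorted(candidates) if similarity(i, j) > 0.5]
    print("verified similar-window pairs:", detections)
    ```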

  7. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    NASA Astrophysics Data System (ADS)

    Pasten, D.

    2017-12-01

    Earthquake complex networks have been shown to be able to find specific features in seismic data sets. In space, these networks have shown scale-free behavior for the probability distribution of connectivity in directed networks, and small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake in the north of Chile (near Iquique) in April, 2014. An earthquake complex network is made by dividing the three-dimensional space into cubic cells; if one of these cells contains a hypocenter, we name this cell a node. The connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. We then have two different networks: a directed and an undirected network. The directed network takes into consideration the time-direction of the connections, which is very important for the connectivity of the network: we consider the connectivity ki of the i-th node to be the number of connections going out of node i plus the self-connections (if two seismic events occurred successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network. For undirected networks, we consider only whether two nodes are connected or not. We have built a directed complex network and an undirected complex network, before and after the large earthquake in Iquique, using magnitudes greater than Mw = 1.0 and Mw = 3.0. We found that this method can recognize the influence of these small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake in this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
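
    The construction described above is simple to state in code. The sketch below bins a synthetic, time-ordered hypocenter list into cubic cells, counts directed connectivity (including self-connections), and builds the undirected version without them; the catalog and cell size are placeholders.

    ```python
    # Sketch: earthquake complex network from a time-ordered hypocenter list.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(4)
    xyz = rng.uniform(0, 100, (500, 3))          # event coordinates (km), in time order
    cell = 10.0                                  # cubic cell size (km)
    nodes = [tuple((p // cell).astype(int)) for p in xyz]

    k_dir = defaultdict(int)                     # directed connectivity (out + self)
    und_edges = set()
    for a, b in zip(nodes[:-1], nodes[1:]):
        k_dir[a] += 1                            # edge a -> b; self-loops count too
        if a != b:
            und_edges.add(frozenset((a, b)))     # undirected: drop self-loops

    k_und = defaultdict(int)
    for e in und_edges:
        for node in e:
            k_und[node] += 1

    print("max directed connectivity:", max(k_dir.values()))
    print("max undirected degree:   ", max(k_und.values()))
    ```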

  8. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that the potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  9. Retrospective Cohort Analysis of Chest Injury Characteristics and Concurrent Injuries in Patients Admitted to Hospital in the Wenchuan and Lushan Earthquakes in Sichuan, China

    PubMed Central

    Yuan, Yong; Zhao, Yong-Fan

    2014-01-01

    Background The aim of this study was to compare retrospectively the characteristics of chest injuries and frequencies of other, concurrent injuries in patients after earthquakes of different seismic intensity. Methods We compared the cause, type, and body location of chest injuries as well as the frequencies of other, concurrent injuries in patients admitted to our hospital after the Wenchuan and Lushan earthquakes in Sichuan, China. We explored possible relationships between seismic intensity and the causes and types of injuries, and we assessed the ability of the Injury Severity Score, New Injury Severity Score, and Chest Injury Index to predict respiratory failure in chest injury patients. Results The incidence of chest injuries was 9.9% in the stronger Wenchuan earthquake and 22.2% in the less intensive Lushan earthquake. The most frequent cause of chest injuries in both earthquakes was being accidentally struck. Injuries due to falls were less prevalent in the stronger Wenchuan earthquake, while injuries due to burial were more prevalent. The distribution of types of chest injury did not vary significantly between the two earthquakes, with rib fractures and pulmonary contusions the most frequent types. Spinal and head injuries concurrent with chest injuries were more prevalent in the less violent Lushan earthquake. All three trauma scoring systems showed poor ability to predict respiratory failure in patients with earthquake-related chest injuries. Conclusions Previous studies may have underestimated the incidence of chest injury in violent earthquakes. The distributions of types of chest injury did not differ between these two earthquakes of different seismic intensity. Earthquake severity and interval between rescue and treatment may influence the prevalence and types of injuries that co-occur with the chest injury. Trauma evaluation scores on their own are inadequate predictors of respiratory failure in patients with earthquake-related chest injuries. PMID

  10. Pore-fluid migration and the timing of the 2005 M8.7 Nias earthquake

    USGS Publications Warehouse

    Hughes, K.L.H.; Masterlark, Timothy; Mooney, W.D.

    2011-01-01

    Two great earthquakes have occurred recently along the Sunda Trench, the 2004 M9.2 Sumatra-Andaman earthquake and the 2005 M8.7 Nias earthquake. These earthquakes ruptured over 1600 km of adjacent crust within 3 mo of each other. We quantitatively present poroelastic deformation analyses suggesting that postseismic fluid flow and recovery induced by the Sumatra-Andaman earthquake advanced the timing of the Nias earthquake. Simple back-slip simulations indicate that the megapascal (MPa)-scale pore-pressure recovery is equivalent to 7 yr of interseismic Coulomb stress accumulation near the Nias earthquake hypocenter, implying that pore-pressure recovery of the Sumatra-Andaman earthquake advanced the timing of the Nias earthquake by ~7 yr. That is, in the absence of postseismic pore-pressure recovery, we predict that the Nias earthquake would have occurred in 2011 instead of 2005. © 2011 Geological Society of America.
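
    The clock-advance argument reduces to one division: a stress (or pore-pressure) change divided by the interseismic stressing rate. The two numbers below are illustrative stand-ins chosen to reproduce the order of magnitude quoted above, not the paper's modeled values.

    ```python
    # Back-of-envelope clock advance: stress step / interseismic stressing rate.
    delta_stress_mpa = 1.0              # MPa-scale pore-pressure recovery (assumed)
    stressing_rate_mpa_per_yr = 0.14    # assumed interseismic Coulomb stressing rate
    print(f"clock advance ~ {delta_stress_mpa / stressing_rate_mpa_per_yr:.1f} yr")
    ```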

  11. Stress drops of induced and tectonic earthquakes in the central United States are indistinguishable.

    PubMed

    Huang, Yihe; Ellsworth, William L; Beroza, Gregory C

    2017-08-01

    Induced earthquakes currently pose a significant hazard in the central United States, but there is considerable uncertainty about the severity of their ground motions. We measure stress drops of 39 moderate-magnitude induced and tectonic earthquakes in the central United States and eastern North America. Induced earthquakes, more than half of which are shallower than 5 km, show a comparable median stress drop to tectonic earthquakes in the central United States that are dominantly strike-slip but a lower median stress drop than that of tectonic earthquakes in the eastern North America that are dominantly reverse-faulting. This suggests that ground motion prediction equations developed for tectonic earthquakes can be applied to induced earthquakes if the effects of depth and faulting style are properly considered. Our observation leads to the notion that, similar to tectonic earthquakes, induced earthquakes are driven by tectonic stresses.

  12. Earthquake source nucleation process in the zone of a permanently creeping deep fault

    NASA Astrophysics Data System (ADS)

    Lykov, V. I.; Mostryukov, A. O.

    2008-10-01

    The worldwide practice of earthquake prediction, which began in the 1970s, shows that spatial manifestations of various precursors under real seismotectonic conditions are very irregular. As noted in [Kurbanov et al., 1980], zones of bending, intersection, and branching of deep faults, where conditions are favorable for increasing tangential tectonic stresses, serve as "natural amplifiers" of precursory effects. The earthquake of September 28, 2004, occurred on the Parkfield segment of the San Andreas deep fault in the area of a local bend of its plane. The fault segment, about 60 km long, and its vicinity are the oldest prognostic area in California. Results of observations before and after the earthquake were promptly analyzed and published in a special issue of Seismological Research Letters (2005, Vol. 76, no. 1). We have an original method enabling the monitoring of the integral rigidity of seismically active rock massifs. The integral rigidity is determined from the relative numbers of brittle and viscous failure acts during the formation of source ruptures of background earthquakes in a given massif. Fracture mechanisms are diagnosed from the steepness of the first arrival of the direct P wave. The principles underlying our method are described in [Lykov and Mostryukov, 1996, 2001, 2003]. Results of monitoring have been directly displayed at the site of the Laboratory ( http://wwwbrk.adm.yar.ru/russian/1_512/index.html ) since the mid-1990s. It seems that this information has not attracted the attention of American seismologists. This paper assesses the informativeness of the rigidity monitoring at the stage of formation of a strong earthquake source in relation to other methods.

  13. Earthquake Potential in Myanmar

    NASA Astrophysics Data System (ADS)

    Aung, Hla Hla

    The Myanmar region is generally believed to be an area of high earthquake potential, even though its seismic activity has been low compared to surrounding regions like Indonesia, China, and Pakistan. Geoscientists and seismologists have predicted earthquakes to occur in the area north of the Sumatra-Andaman Islands, i.e., the southwest and west part of Myanmar. Myanmar's tectonic setting relative to East and SE Asia is rather peculiar and unique, described by different plate tectonic models, but similar to the setting of the western part of North America. Myanmar crustal blocks are caught between the two lithospheric plates of India and Indochina, experiencing oblique subduction with major dextral strike-slip faulting along the Sagaing fault. Seismic tomography and the thermal structure of the India plate along the Sunda subduction zone vary from south to north. Strong partitioning occurs in the central Andaman basin, where crustal fragmentation and northward dispersion of the Burma plate by a back-arc spreading mechanism have been operating since the Neogene. Northward motion of the Burma plate relative to SE Asia would dock it against the major continent further north and might have caused the accumulation of strain, which in turn will be released as earthquakes in the future.

  14. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  15. Centrality in earthquake multiplex networks

    NASA Astrophysics Data System (ADS)

    Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.

    2018-06-01

    Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
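
    A compact sketch of eigenvector centrality on a two-layer multiplex: build the supra-adjacency matrix (layers on the diagonal, interlayer coupling on the off-diagonal) and run power iteration. The layer networks below are random stand-ins for the time-sliced earthquake networks, and the coupling strength is an arbitrary choice.

    ```python
    # Sketch: multiplex eigenvector centrality via power iteration.
    import numpy as np

    def random_layer(n, p, rng):
        """Random symmetric 0/1 adjacency with empty diagonal (toy layer)."""
        a = np.triu((rng.random((n, n)) < p).astype(float), 1)
        return a + a.T

    rng = np.random.default_rng(5)
    n, omega = 6, 1.0                      # nodes per layer, interlayer coupling
    layer1, layer2 = random_layer(n, 0.4, rng), random_layer(n, 0.4, rng)

    # supra-adjacency: layers on the diagonal, coupling on the off-diagonal
    supra = np.block([[layer1, omega * np.eye(n)],
                      [omega * np.eye(n), layer2]])

    v = np.ones(2 * n)
    for _ in range(200):                   # power iteration -> leading eigenvector
        v = supra @ v
        v /= np.linalg.norm(v)

    centrality = v[:n] + v[n:]             # aggregate each node's layer copies
    print("node centralities:", np.round(centrality / centrality.max(), 2))
    ```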

  16. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    Abstract: The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused a great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process, calculated the source spectral parameters, and first estimated the near-fault strong ground motion based on the Brune circular source model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion, with the associated fault rupture properties, at Boxing and Lushan city. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, which differs from the overshoot activity of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake field. The simulated maximum intensity is IX, and the region with intensity VII and above covers almost 16,000 km2, consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. Moreover, the estimation methods based on empirical relationships and the numerical modeling developed in this study have wide application in strong ground motion prediction and intensity estimation for earthquake rescue purposes. Keywords: Lushan, Ms7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
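
    For reference, the Brune-model bookkeeping used when turning spectral parameters into source properties: a corner frequency gives a source radius, and with the seismic moment a static stress drop. The input values below are hypothetical, not the paper's measurements.

    ```python
    # Sketch: Brune circular-source relations (radius from corner frequency,
    # stress drop from moment and radius).
    import numpy as np

    def brune_radius(fc_hz, beta_m_s=3500.0):
        """Source radius r = 2.34 * beta / (2 * pi * fc) for the Brune model."""
        return 2.34 * beta_m_s / (2.0 * np.pi * fc_hz)

    def stress_drop(m0_nm, r_m):
        """Static stress drop = (7/16) * M0 / r^3 (Eshelby circular crack)."""
        return 7.0 / 16.0 * m0_nm / r_m ** 3

    m0 = 3.98e19                 # N*m, roughly Mw 7.0
    fc = 0.12                    # Hz, hypothetical corner frequency
    r = brune_radius(fc)
    print(f"radius ~ {r / 1e3:.1f} km, stress drop ~ {stress_drop(m0, r) / 1e6:.1f} MPa")
    ```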

  17. Rapid Extraction of Landslide and Spatial Distribution Analysis after Jiuzhaigou Ms7.0 Earthquake Based on Uav Images

    NASA Astrophysics Data System (ADS)

    Jiao, Q. S.; Luo, Y.; Shen, W. H.; Li, Q.; Wang, X.

    2018-04-01

    The Jiuzhaigou earthquake caused mountain collapses and numerous landslides in the Jiuzhaigou scenic area and along surrounding roads, which blocked roads and caused serious ecological damage. Given the urgency of the rescue, the authors carried an unmanned aerial vehicle (UAV) into the disaster area as early as August 9 to obtain aerial images near the epicenter. After summarizing the characteristics of earthquake landslides in aerial images, we obtained landslide image objects by multi-scale segmentation using an object-oriented analysis method, and the feature rule set of each level was built automatically by the SEaTH (Separability and Thresholds) algorithm to realize rapid landslide extraction. Compared with visual interpretation, the object-oriented automatic landslide extraction achieved an accuracy of 94.3%. The spatial distribution of the earthquake landslides had a significant positive correlation with slope and relief, a negative correlation with roughness, and no obvious correlation with aspect; the probable reason no aspect relationship was found may be that the study area was too far from the seismogenic fault. This work provided technical support for earthquake field emergency response, earthquake landslide prediction, and disaster loss assessment.
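
    A sketch of the separability step behind SEaTH under the usual Gaussian assumption: compute the Jeffries-Matusita distance between the two class samples of one feature and place a threshold at the intersection of the fitted Gaussians. The feature samples below are synthetic placeholders.

    ```python
    # Sketch: Jeffries-Matusita separability and threshold for one feature.
    import numpy as np

    rng = np.random.default_rng(6)
    landslide = rng.normal(0.62, 0.08, 300)      # e.g. a brightness feature
    other = rng.normal(0.35, 0.10, 300)

    m1, v1 = landslide.mean(), landslide.var()
    m2, v2 = other.mean(), other.var()

    # Bhattacharyya distance and Jeffries-Matusita separability (Gaussian case)
    b = 0.125 * (m1 - m2) ** 2 / (0.5 * (v1 + v2)) \
        + 0.5 * np.log(0.5 * (v1 + v2) / np.sqrt(v1 * v2))
    jm = 2.0 * (1.0 - np.exp(-b))                # in [0, 2]; near 2 = separable

    # Threshold: intersection of the two fitted Gaussians between the means
    coeffs = [1.0 / v2 - 1.0 / v1,
              2.0 * (m1 / v1 - m2 / v2),
              m2 ** 2 / v2 - m1 ** 2 / v1 + np.log(v2 / v1)]
    roots = np.real(np.roots(coeffs))            # both roots are real here
    thr = roots[(roots > min(m1, m2)) & (roots < max(m1, m2))][0]
    print(f"JM separability {jm:.2f}, threshold {thr:.3f}")
    ```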

  18. Fault failure with moderate earthquakes

    USGS Publications Warehouse

    Johnston, M.J.S.; Linde, A.T.; Gladwin, M.T.; Borcherdt, R.D.

    1987-01-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10-8), with borehole dilatometers (resolution 10-10) and a 3-component borehole strainmeter (resolution 10-9). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure. © 1987.

  19. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisition. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or water of seas and lakes. The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using the SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul.

  20. Distribution and Characteristics of Repeating Earthquakes in Northern California

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.; Zechar, J. D.; Shaw, B. E.

    2012-12-01

    show burst-like behavior with mean recurrence times smaller than one month. 5% of the RES have mean recurrence times greater than one year and include more than 10 earthquakes. Earthquakes in the 50 most periodic sequences (CV<0.2) do not appear to be predictable by either time- or slip-predictable models, consistent with previous findings. We demonstrate that changes in recurrence intervals of repeating earthquakes can be routinely monitored. This is especially important for sequences with CV~0, as they may indicate changes in the loading rate. We also present results from retrospective forecast experiments based on near-real time hazard functions.
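
    The periodicity measure used above is the coefficient of variation (CV) of recurrence intervals: CV much less than 1 suggests quasi-periodic behavior, CV near or above 1 burst-like behavior. A minimal computation on made-up occurrence times:

    ```python
    # Sketch: CV of recurrence intervals for one repeating-earthquake sequence.
    import numpy as np

    occurrence_yr = np.array([1985.2, 1988.9, 1992.8, 1996.5, 2000.6, 2004.2])
    intervals = np.diff(occurrence_yr)
    cv = intervals.std() / intervals.mean()
    print(f"mean recurrence {intervals.mean():.1f} yr, CV = {cv:.2f}")
    ```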

  1. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  2. Earthquake-related versus non-earthquake-related injuries in spinal injury patients: differentiation with multidetector computed tomography

    PubMed Central

    2010-01-01

    Introduction In recent years, several massive earthquakes have occurred across the globe. Multidetector computed tomography (MDCT) is reliable in detecting spinal injuries. The purpose of this study was to compare the features of spinal injuries resulting from the Sichuan earthquake with those of non-earthquake-related spinal trauma using MDCT. Methods Features of spinal injuries of 223 Sichuan earthquake-exposed patients and 223 non-earthquake-related spinal injury patients were retrospectively compared using MDCT. The data for non-earthquake-related spinal injury patients were collected from 1 May 2009 to 22 July 2009 to avoid the confounding effects of seasonal activity and clothing. We focused on anatomic sites, injury types and neurologic deficits related to spinal injuries. Major injuries were classified according to the grid 3-3-3 scheme of the Magerl (AO) classification system. Results A total of 185 patients (82.96%) in the earthquake-exposed cohort experienced crush injuries. In the earthquake and control groups, 65 and 92 patients, respectively, had neurologic deficits. The anatomic distribution of these two cohorts was significantly different (P < 0.001). Cervical spinal injuries were more common in the control group (risk ratio (RR) = 2.12, P < 0.001), whereas lumbar spinal injuries were more common in the earthquake-related spinal injuries group (277 of 501 injured vertebrae; 55.29%). The major types of injuries were significantly different between these cohorts (P = 0.002). Magerl AO type A lesions made up most of the lesions seen in both of these cohorts. Type B lesions were more frequently seen in earthquake-related spinal injuries (RR = 1.27), while we observed type C lesions more frequently in subjects with non-earthquake-related spinal injuries (RR = 1.98, P = 0.0029). Conclusions Spinal injuries sustained in the Sichuan earthquake were located mainly in the lumbar spine, with a peak prevalence of type A lesions and a high occurrence of

  3. Directivity in NGA earthquake ground motions: Analysis using isochrone theory

    USGS Publications Warehouse

    Spudich, P.; Chiou, B.S.J.

    2008-01-01

    We present correction factors that may be applied to the ground motion prediction relations of Abrahamson and Silva, Boore and Atkinson, Campbell and Bozorgnia, and Chiou and Youngs (all in this volume) to model the azimuthally varying distribution of the GMRotI50 component of ground motion (commonly called 'directivity') around earthquakes. Our correction factors may be used for planar or nonplanar faults having any dip or slip rake (faulting mechanism). Our correction factors predict directivity-induced variations of spectral acceleration that are roughly half of the strike-slip variations predicted by Somerville et al. (1997), and use of our factors reduces record-to-record sigma by about 2-20% at 5 sec or greater period. © 2008, Earthquake Engineering Research Institute.

  4. Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake

    PubMed Central

    Montagner, Jean-Paul; Juhel, Kévin; Barsuglia, Matteo; Ampuero, Jean Paul; Chassande-Mottin, Eric; Harms, Jan; Whiting, Bernard; Bernard, Pascal; Clévédé, Eric; Lognonné, Philippe

    2016-01-01

    Transient gravity changes are expected to occur at all distances during an earthquake rupture, even before the arrival of seismic waves. Here we report on the search of such a prompt gravity signal in data recorded by a superconducting gravimeter and broadband seismometers during the 2011 Mw 9.0 Tohoku-Oki earthquake. During the earthquake rupture, a signal exceeding the background noise is observed with a statistical significance higher than 99% and an amplitude of a fraction of μGal, consistent in sign and order of magnitude with theoretical predictions from a first-order model. While prompt gravity signal detection with state-of-the-art gravimeters and seismometers is challenged by background seismic noise, its robust detection with gravity gradiometers under development could open new directions in earthquake seismology, and overcome fundamental limitations of current earthquake early-warning systems imposed by the propagation speed of seismic waves. PMID:27874858

  5. Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake.

    PubMed

    Montagner, Jean-Paul; Juhel, Kévin; Barsuglia, Matteo; Ampuero, Jean Paul; Chassande-Mottin, Eric; Harms, Jan; Whiting, Bernard; Bernard, Pascal; Clévédé, Eric; Lognonné, Philippe

    2016-11-22

    Transient gravity changes are expected to occur at all distances during an earthquake rupture, even before the arrival of seismic waves. Here we report on the search of such a prompt gravity signal in data recorded by a superconducting gravimeter and broadband seismometers during the 2011 Mw 9.0 Tohoku-Oki earthquake. During the earthquake rupture, a signal exceeding the background noise is observed with a statistical significance higher than 99% and an amplitude of a fraction of μGal, consistent in sign and order of magnitude with theoretical predictions from a first-order model. While prompt gravity signal detection with state-of-the-art gravimeters and seismometers is challenged by background seismic noise, its robust detection with gravity gradiometers under development could open new directions in earthquake seismology, and overcome fundamental limitations of current earthquake early-warning systems imposed by the propagation speed of seismic waves.

  6. Prompt gravity anomaly induced by the 2011 Tohoku-Oki earthquake

    NASA Astrophysics Data System (ADS)

    Montagner, Jean-Paul; Juhel, Kevin; Barsuglia, Matteo; Ampuero, Jean-Paul; Harms, Jan; Chassande-Mottin, Eric; Whiting, Bernard; Bernard, Pascal; Clévédé, Eric; Lognonné, Philippe

    2017-04-01

    Transient gravity changes are expected to occur at all distances during an earthquake rupture, even before the arrival of seismic waves. Here we report on the search of such a prompt gravity signal in data recorded by a superconducting gravimeter and broadband seismometers during the 2011 Mw 9.0 Tohoku-Oki earthquake. During the earthquake rupture, a signal exceeding the background noise is observed with a statistical significance higher than 99% and an amplitude of a fraction of μGal, consistent in sign and order-of-magnitude with theoretical predictions from a first-order model. While prompt gravity signal detection with state-of-the-art gravimeters and seismometers is challenged by background seismic noise, its robust detection with gravity gradiometers under development could open new directions in earthquake seismology, and overcome fundamental limitations of current earthquake early-warning systems (EEWS) imposed by the propagation speed of seismic waves.

  7. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if the station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as the area with a fault rupture distance of less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The information on the fault rupture location is a group of points where the stations are located; for practical purposes, however, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations into 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the
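
    The empirical classifier quoted above can be implemented directly. The regression coefficients are those given in the abstract; the input units are an assumption here (the abstract does not state them; cm/s/s for Za and cm/s for Hv are assumed), and the sample values are hypothetical.

    ```python
    # Near-source probability from peak vertical acceleration (Za) and peak
    # horizontal velocity (Hv), using the coefficients quoted in the abstract.
    import numpy as np

    def p_near_source(za_cm_s2, hv_cm_s):
        # assumed units: cm/s/s and cm/s (not stated in the abstract)
        f = 6.046 * np.log10(za_cm_s2) + 7.885 * np.log10(hv_cm_s) - 27.091
        return 1.0 / (1.0 + np.exp(-f))

    print(f"P(NS) = {p_near_source(800.0, 60.0):.2f}")   # strong shaking
    print(f"P(NS) = {p_near_source(30.0, 2.0):.2f}")     # weak shaking
    ```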

  8. Temporal stress changes caused by earthquakes: A review

    USGS Publications Warehouse

    Hardebeck, Jeanne L.; Okada, Tomomi

    2018-01-01

    Earthquakes can change the stress field in the Earth’s lithosphere as they relieve and redistribute stress. Earthquake-induced stress changes have been observed as temporal rotations of the principal stress axes following major earthquakes in a variety of tectonic settings. The stress changes due to the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake were particularly well documented. Earthquake stress rotations can inform our understanding of earthquake physics, most notably addressing the long-standing problem of whether the Earth’s crust at plate boundaries is “strong” or “weak.” Many of the observed stress rotations, including that due to the Tohoku-Oki earthquake, indicate near-complete stress drop in the mainshock. This implies low background differential stress, on the order of earthquake stress drop, supporting the weak crust model. Earthquake stress rotations can also be used to address other important geophysical questions, such as the level of crustal stress heterogeneity and the mechanisms of postseismic stress reloading. The quantitative interpretation of stress rotations is evolving from those based on simple analytical methods to those based on more sophisticated numerical modeling that can capture the spatial-temporal complexity of the earthquake stress changes.

  10. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on a rapid estimate of the P-wave magnitude, which generally carries large uncertainties and suffers from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, the JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent updates of the magnitude even decreased to M6.3-6.6, and the estimate stabilized at M8.1 only after about two minutes. This consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would theoretically have been possible to image the complex rupture process of the 2011 Tohoku earthquake automatically, soon after or even during the rupture. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help reduce false and missed warnings, and should therefore play an important role in future tsunami early warning and earthquake rapid response systems.

  11. Global risk of big earthquakes has not recently increased.

    PubMed

    Shearer, Peter M; Stark, Philip B

    2012-01-17

    The recent elevated rate of large earthquakes has fueled concern that the underlying global rate of earthquake activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of large (magnitude M≥7) earthquakes from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 earthquakes has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller earthquakes is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences--if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of large events. Together these facts suggest that the global risk of large earthquakes is no higher today than it has been in the past.
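
    One simple instance of the statistical testing described above (a sketch, not necessarily one of the authors' three tests) checks inter-event times of the declustered catalog against the exponential law implied by a homogeneous Poisson process:

      import numpy as np
      from scipy import stats

      def poisson_ks_test(event_times):
          # Kolmogorov-Smirnov test of inter-event times against an
          # exponential distribution, as a homogeneous Poisson process implies.
          gaps = np.diff(np.sort(event_times))
          return stats.kstest(gaps, "expon", args=(0.0, gaps.mean()))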

  13. Next-Level ShakeZoning for Earthquake Hazard Definition in Nevada

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Savran, W. H.; Flinchum, B. A.; Dudley, C.; Prina, N.; Pullammanappallil, S.; Pancha, A.

    2011-12-01

    We are developing "Next-Level ShakeZoning" procedures tailored for defining earthquake hazards in Nevada. The current federally sponsored tools (the USGS hazard maps and ShakeMap, and FEMA HAZUS) were developed as statistical summaries to match earthquake data from California, Japan, and Taiwan. The 2008 Wells and Mogul events in Nevada showed in particular that the generalized statistical approach taken by ShakeMap cannot match actual data on shaking from earthquakes in the Intermountain West, even to first order. Next-Level ShakeZoning relies on physics and geology to define earthquake shaking hazards, rather than statistics. It follows theoretical and computational developments made over the past 20 years to capitalize on detailed and specific local data sets to more accurately model the propagation and amplification of earthquake waves through the multiple geologic basins of the Intermountain West. Excellent new data sets are now available for Las Vegas Valley. Clark County, Nevada has completed the nation's first effort to map earthquake hazard class systematically through an entire urban area using Optim's SeisOpt ReMi technique, which was adapted for large-scale data collection. Using the new Parcel Map in computing shaking in the Valley for scenario earthquakes is crucial for obtaining realistic predictions of ground motions. In an educational element of the project, a dozen undergraduate students have been computing 50 separate earthquake scenarios affecting Las Vegas Valley, using the Next-Level ShakeZoning process. Despite affecting only the upper 30 meters, the Vs30 geotechnical shear velocity from the Parcel Map shows clear effects on 3-D shaking predictions computed so far at frequencies from 0.1 Hz up to 1.0 Hz. The effect of the Parcel Map is prominent even for the 0.1-Hz waves, despite the large mismatch of wavelength to geotechnical depths. Amplifications and de-amplifications affected by the Parcel Map exceed a factor of two, and are

  14. Understanding dynamic friction through spontaneously evolving laboratory earthquakes

    PubMed Central

    Rubino, V.; Rosakis, A. J.; Lapusta, N.

    2017-01-01

    Friction plays a key role in how ruptures unzip faults in the Earth's crust and release waves that cause destructive shaking. Yet dynamic friction evolution is one of the biggest uncertainties in earthquake science. Here we report on novel measurements of evolving local friction during spontaneously developing mini-earthquakes in the laboratory, enabled by our ultrahigh-speed full-field imaging technique. The technique captures the evolution of displacements, velocities and stresses of dynamic ruptures, whose rupture speeds range from sub-Rayleigh to supershear. The observed friction has a complex evolution, featuring initial velocity strengthening followed by substantial velocity weakening. Our measurements are consistent with rate-and-state friction formulations supplemented with flash heating, but not with widely used slip-weakening friction laws. This study develops a new approach for measuring the local evolution of dynamic friction and has important implications for understanding earthquake hazard, since laws governing the frictional resistance of faults are vital ingredients in physically based predictive models of the earthquake source. PMID:28660876

  15. Stress drops of induced and tectonic earthquakes in the central United States are indistinguishable

    PubMed Central

    Huang, Yihe; Ellsworth, William L.; Beroza, Gregory C.

    2017-01-01

    Induced earthquakes currently pose a significant hazard in the central United States, but there is considerable uncertainty about the severity of their ground motions. We measure stress drops of 39 moderate-magnitude induced and tectonic earthquakes in the central United States and eastern North America. Induced earthquakes, more than half of which are shallower than 5 km, show a median stress drop comparable to that of tectonic earthquakes in the central United States, which are dominantly strike-slip, but lower than that of tectonic earthquakes in eastern North America, which are dominantly reverse-faulting. This suggests that ground motion prediction equations developed for tectonic earthquakes can be applied to induced earthquakes if the effects of depth and faulting style are properly considered. Our observation leads to the notion that, similar to tectonic earthquakes, induced earthquakes are driven by tectonic stresses. PMID:28782040

  16. The Value, Protocols, and Scientific Ethics of Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Jordan, Thomas H.

    2013-04-01

    Earthquakes are different from other common natural hazards because precursory signals diagnostic of the magnitude, location, and time of impending seismic events have not yet been found. Consequently, the short-term, localized prediction of large earthquakes at high probabilities with low error rates (false alarms and failures-to-predict) is not yet feasible. An alternative is short-term probabilistic forecasting based on empirical statistical models of seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains up to 1000 relative to long-term forecasts. The value of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing operational forecasting protocols in this sort of "low-probability environment." This paper will explore the complex interrelations among the valuation of low-probability earthquake forecasting, which must account for social intangibles; the protocols of operational forecasting, which must factor in large uncertainties; and the ethics that guide scientists as participants in the forecasting process, who must honor scientific principles without doing harm. Earthquake forecasts possess no intrinsic societal value; rather, they acquire value through their ability to influence decisions made by users seeking to mitigate seismic risk and improve community resilience to earthquake disasters. According to the recommendations of the International Commission on Earthquake Forecasting (www.annalsofgeophysics.eu/index.php/annals/article/view/5350), operational forecasting systems should appropriately separate the hazard-estimation role of scientists from the decision-making role of civil protection authorities and individuals. They should

  17. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in the temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and the other is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards on different time scales can be estimated. This paper mainly deals with medium- and short-term precursors observed in Japan and China.
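
    Clustering by a fuzzy equivalent relation, one of the two approaches named above, can be sketched briefly (an illustration of the general technique, not the authors' code): build a reflexive, symmetric fuzzy similarity matrix from seismicity indices, take its max-min transitive closure, and cut at a chosen level lambda:

      import numpy as np

      def transitive_closure(r):
          # Max-min transitive closure of a fuzzy similarity matrix:
          # compose (R o R)[i, j] = max_k min(R[i, k], R[k, j]) until stable.
          while True:
              comp = np.max(np.minimum(r[:, None, :], r.T[None, :, :]), axis=2)
              r_new = np.maximum(r, comp)
              if np.allclose(r_new, r):
                  return r_new
              r = r_new

      def fuzzy_clusters(similarity, lam=0.8):
          # Cut the fuzzy equivalence relation at level lam; rows with the
          # same cut pattern fall in the same cluster.
          eq = transitive_closure(similarity) >= lam
          labels = -np.ones(len(similarity), dtype=int)
          for i in range(len(similarity)):
              if labels[i] < 0:
                  labels[eq[i]] = i
          return labels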

  18. Extracting 3-D Deformation Fields from Left- and Right-Looking InSAR with the SM-VCE Method: a Case Study of the October 21, 2016 Central Tottori Earthquake

    NASA Astrophysics Data System (ADS)

    Liu, J. H.; Hu, J.; Li, Z. W.

    2018-04-01

    Three-dimensional (3-D) deformation fields associated with the October 2016 Central Tottori earthquake are extracted in this paper from Interferometric Synthetic Aperture Radar (InSAR) observations acquired by ALOS-2 with four different imaging geometries, i.e., ascending/descending and left-/right-looking. In particular, the Strain Model and Variance Component Estimation (SM-VCE) method is developed to integrate the heterogeneous InSAR observations without being affected by the inconsistent coverage of the SAR images over the earthquake focal area. Compared with the classical weighted least squares (WLS) method, the SM-VCE method retrieves a more accurate and complete deformation field for the Central Tottori earthquake, as indicated by comparison with GNSS observations. In addition, the accuracies of the heterogeneous InSAR observations and of the 3-D deformation at each point are quantitatively provided by the SM-VCE method.
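
    The classical WLS baseline mentioned above is compact enough to sketch: each viewing geometry observes the projection of the 3-D displacement onto its line-of-sight (LOS) unit vector, so four geometries overdetermine the east/north/up components. The unit vectors and variances below are placeholders, and the SM-VCE refinement itself is not reproduced:

      import numpy as np

      def wls_3d_displacement(los, unit_vectors, variances):
          # los: (4,) LOS displacements from the four geometries;
          # unit_vectors: (4, 3) rows projecting (east, north, up) onto each LOS;
          # variances: (4,) a priori observation variances.
          A = np.asarray(unit_vectors)
          W = np.diag(1.0 / np.asarray(variances))
          N = A.T @ W @ A
          enu = np.linalg.solve(N, A.T @ W @ np.asarray(los))
          return enu, np.linalg.inv(N)   # solution and its covariance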

  19. Earthquake response analysis of 11-story RC building that suffered damage in 2011 East Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Shibata, Akenori; Masuno, Hidemasa

    2017-10-01

    An eleven-story RC apartment building suffered medium damage in the 2011 East Japan earthquake and was retrofitted for re-use. Strong motion records were obtained near the building. This paper discusses the inelastic earthquake response analysis of the building using an equivalent single-degree-of-freedom (1-DOF) system to account for the features of the damage. The method of converting the building frame into a 1-DOF system with tri-linear reducing-stiffness restoring-force characteristics is given. The inelastic response analysis of the building under the earthquake using the equivalent inelastic 1-DOF system interpreted the level of actual damage well.
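
    The time-stepping core of such a 1-DOF response analysis is standard; a minimal sketch using the Newmark average-acceleration method for a linear-elastic oscillator (the paper's tri-linear hysteretic spring would replace the constant stiffness k):

      import numpy as np

      def newmark_sdof(ag, dt, freq=1.0, zeta=0.05, beta=0.25, gamma=0.5):
          # Relative displacement of a unit-mass linear SDOF oscillator under
          # ground acceleration ag (m/s^2), Newmark average-acceleration scheme.
          w = 2.0 * np.pi * freq
          c, k = 2.0 * zeta * w, w ** 2
          u = np.zeros(len(ag))
          v, a = 0.0, -ag[0]
          keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt ** 2)
          for i in range(1, len(ag)):
              p = (-ag[i]
                   + (u[i-1] / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                   + c * (gamma * u[i-1] / (beta * dt) + (gamma / beta - 1.0) * v
                          + dt * (0.5 * gamma / beta - 1.0) * a))
              u[i] = p / keff
              v_new = (gamma * (u[i] - u[i-1]) / (beta * dt)
                       + (1.0 - gamma / beta) * v + dt * (1.0 - 0.5 * gamma / beta) * a)
              a = (u[i] - u[i-1]) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
              v = v_new
          return u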

  20. QuakeUp: An advanced tool for a network-based Earthquake Early Warning system

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo

    2017-04-01

    Currently developed and operational regional Earthquake Early Warning systems rest on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method which allows an alert to be issued based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent and spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., which recently gemmated from the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanded P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, and also of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM) and by mapping the measured and
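
    The early-warning parameters named above have standard definitions; a sketch of the usual τ_c and P_d measurement on an initial P-wave displacement window (an illustration of the common definition, τ_c = 2π/√r with r the ratio of velocity to displacement power, not QuakeUp code):

      import numpy as np

      def tau_c_and_pd(u, dt):
          # u: P-wave displacement window (m), sampled every dt seconds.
          v = np.gradient(u, dt)                      # velocity by differentiation
          r = np.trapz(v ** 2, dx=dt) / np.trapz(u ** 2, dx=dt)
          tau_c = 2.0 * np.pi / np.sqrt(r)            # characteristic period (s)
          return tau_c, np.abs(u).max()               # tau_c and peak displacement P_d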

  1. Development of a borehole stress meter for studying earthquake predictions and rock mechanics, and stress seismograms of the 2011 Tohoku earthquake (M 9.0)

    NASA Astrophysics Data System (ADS)

    Ishii, Hiroshi; Asai, Yasuhiro

    2015-02-01

    Although precursory signs can occur before an earthquake, it is difficult to observe such signs with precision, especially at the Earth's surface, where artificial noise and other factors complicate signal detection. One possible solution to this problem is to install monitoring instruments in deep bedrock where earthquakes are likely to begin. When evaluating earthquake occurrence, it is necessary to elucidate the processes of stress accumulation in a medium and its subsequent release as a fault (crack) is generated, and to do so, the stress must be observed continuously. However, continuous observations of stress have not yet been implemented in earthquake monitoring programs. Strain is a secondary physical quantity whose variations depend on the elastic coefficients of the medium, and it can yield potentially valuable information as well. This article describes the development of a borehole stress meter that is capable of recording both stress and strain continuously at a depth of about 1 km. Specifically, this paper introduces the design principles of the stress meter as well as its actual structure. It also describes a newly developed calibration procedure and the results obtained to date for stress and strain studies of deep boreholes at three locations in Japan. As examples of the observations, records of stress seismic waveforms generated by the 2011 Tohoku earthquake (M 9.0) are presented. The results demonstrate that the stress meter data have sufficient precision and reliability.

  2. On the reported ionospheric precursor of the Hector Mine, California earthquake

    USGS Publications Warehouse

    Thomas, J.N.; Love, J.J.; Komjathy, A.; Verkhoglyadova, O.P.; Butala, M.; Rivera, N.

    2012-01-01

    Using Global Positioning System (GPS) data from sites near the 16 Oct. 1999 Hector Mine, California earthquake, Pulinets et al. (2007) identified anomalous changes in the ionospheric total electron content (TEC) starting one week prior to the earthquake. Pulinets (2007) suggested that precursory phenomena of this type could be useful for predicting earthquakes. On the other hand, and in a separate analysis, Afraimovich et al. (2004) concluded that TEC variations near the epicenter were controlled by solar and geomagnetic activity that were unrelated to the earthquake. In an investigation of these very different results, we examine TEC time series of long duration from GPS stations near and far from the epicenter of the Hector Mine earthquake, and long before and long after the earthquake. While we can reproduce the essential time series results of Pulinets et al., we find that the signal they identified as being anomalous is not actually anomalous. Instead, it is just part of normal global-scale TEC variation. We conclude that the TEC anomaly reported by Pulinets et al. is unrelated to the Hector Mine earthquake.

  3. Simulation of ground motion using the stochastic method

    USGS Publications Warehouse

    Boore, D.M.

    2003-01-01

    A simple and powerful method for simulating ground motions is to combine parametric or functional descriptions of the ground motion's amplitude spectrum with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to the distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers (generally, f>0.1 Hz), and it is widely used to predict ground motions for regions of the world in which recordings of motion from potentially damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude and in diverse tectonic environments. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms. This provides a means by which the results of the rigorous studies reported in other papers in this volume can be incorporated into practical predictions of ground motion.
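
    The recipe the abstract summarizes can be sketched in a few lines (a toy version with an illustrative omega-squared target spectrum and a simple duration window; the real method's source, path, and site filters are richer):

      import numpy as np

      def stochastic_motion(n, dt, f0=1.0, gain=1.0, seed=0):
          # Toy stochastic-method simulation: windowed Gaussian noise whose
          # amplitude spectrum is forced to an omega-squared shape with corner
          # frequency f0, while the noise's random phase is kept.
          rng = np.random.default_rng(seed)
          t = np.arange(n) * dt
          window = t * np.exp(-t / (0.2 * n * dt))          # crude duration window
          noise = rng.standard_normal(n) * window
          spec = np.fft.rfft(noise)
          f = np.fft.rfftfreq(n, dt)
          target = gain * f ** 2 / (1.0 + (f / f0) ** 2)    # omega-squared shape
          spec *= target / np.maximum(np.abs(spec), 1e-12)  # keep phase, set amplitude
          return np.fft.irfft(spec, n)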

  4. Conversion of Local and Surface-Wave Magnitudes to Moment Magnitude for Earthquakes in the Chinese Mainland

    NASA Astrophysics Data System (ADS)

    Li, X.; Gao, M.

    2017-12-01

    The magnitude of an earthquake is one of its basic parameters and is a measure of its scale. It plays a significant role in seismology and earthquake engineering research, particularly in the calculations of the seismic rate and b-value in earthquake prediction and seismic hazard analysis. However, several types of magnitude currently used in seismology research, such as local magnitude (ML), surface wave magnitude (MS), and body-wave magnitude (MB), share a common limitation, the magnitude saturation phenomenon. Fortunately, the problem of magnitude saturation was solved by a formula for calculating the moment magnitude (MW) from the seismic moment, which describes the seismic source strength, and the moment magnitude is now very commonly used in seismology research. In China, however, the earthquake scale is primarily based on local and surface-wave magnitudes. In the present work, we studied the empirical relationships between moment magnitude (MW) and local magnitude (ML) as well as surface wave magnitude (MS) in the Chinese Mainland. The China Earthquake Networks Center (CENC) ML catalog, China Seismograph Network (CSN) MS catalog, ANSS Comprehensive Earthquake Catalog (ComCat), and Global Centroid Moment Tensor (GCMT) catalog are adopted to regress the relationships using the orthogonal regression method. The obtained relationships are as follows: MW=0.64+0.87MS; MW=1.16+0.75ML. Therefore, in China, if the moment magnitude of an earthquake is not reported by any agency in the world, the equations above can be used to convert ML and MS to MW. These relationships are very important, because they will allow the China earthquake catalogs to be used more effectively for seismic hazard analysis, earthquake prediction, and other seismology research. We also computed the relationships between the seismic moment Mo and MS, and between Mo and ML, by linear regression using the Global Centroid Moment Tensor catalog. The obtained relationships are as follows: logMo=18
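
    The two orthogonal-regression results quoted above translate directly into conversion helpers:

      def mw_from_ms(ms):
          # Chinese Mainland regression quoted above: MW = 0.64 + 0.87 MS.
          return 0.64 + 0.87 * ms

      def mw_from_ml(ml):
          # Chinese Mainland regression quoted above: MW = 1.16 + 0.75 ML.
          return 1.16 + 0.75 * ml

      # Example: ML 5.0 converts to MW ~4.9; MS 7.0 converts to MW ~6.7.
      print(mw_from_ml(5.0), mw_from_ms(7.0))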

  5. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  7. Thoracic Injuries in earthquake-related versus non-earthquake-related trauma patients: differentiation via Multi-detector Computed Tomography

    PubMed Central

    Dong, Zhi-hui; Yang, Zhi-gang; Chen, Tian-wu; Chu, Zhi-gang; Deng, Wen; Shao, Heng

    2011-01-01

    PURPOSE: Massive earthquakes are harmful to humankind. This study of a historical cohort aimed to investigate the differences between earthquake-related crush thoracic traumas and thoracic traumas unrelated to earthquakes using multi-detector computed tomography (CT). METHODS: We retrospectively compared an earthquake-exposed cohort of 215 thoracic trauma crush victims of the Sichuan earthquake to a cohort of 215 non-earthquake-related thoracic trauma patients, focusing on the lesions and coexisting injuries to the thoracic cage and the pulmonary parenchyma and pleura using multi-detector CT. RESULTS: The incidence of rib fracture was elevated in the earthquake-exposed cohort (143 vs. 66 patients in the non-earthquake-exposed cohort, Risk Ratio (RR) = 2.2; p<0.001). Among these patients, those with more than 3 fractured ribs (106/143 vs. 41/66 patients, RR = 1.2; p<0.05) or flail chest (45/143 vs. 11/66 patients, RR = 1.9; p<0.05) were more frequently seen in the earthquake cohort. Earthquake-related crush injuries more frequently resulted in bilateral rib fractures (66/143 vs. 18/66 patients, RR = 1.7; p<0.01). Additionally, the incidence of non-rib fracture was higher in the earthquake cohort (85 vs. 60 patients, RR = 1.4; p<0.01). Pulmonary parenchymal and pleural injuries were more frequently seen in earthquake-related crush injuries (117 vs. 80 patients, RR = 1.5 for parenchymal and 146 vs. 74 patients, RR = 2.0 for pleural injuries; p<0.001). Non-rib fractures and pulmonary parenchymal and pleural injuries had a significant positive correlation with rib fractures in these two cohorts. CONCLUSIONS: Thoracic crush traumas resulting from the earthquake were life-threatening, with a high incidence of bony thoracic fractures. The ribs were frequently involved in bilateral and severe types of fractures, which were accompanied by non-rib fractures and pulmonary parenchymal and pleural injuries. PMID:21789386
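
    The risk ratios quoted above follow directly from the cohort counts; a quick check of the first one:

      def risk_ratio(exposed_cases, exposed_n, control_cases, control_n):
          # Risk ratio between an exposed and a control cohort.
          return (exposed_cases / exposed_n) / (control_cases / control_n)

      # Rib fracture: 143 of 215 earthquake vs. 66 of 215 non-earthquake patients.
      print(round(risk_ratio(143, 215, 66, 215), 1))   # -> 2.2, as reported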

  8. Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps

    NASA Astrophysics Data System (ADS)

    Brooks, E. M.; Stein, S. A.; Spencer, B. D.

    2015-12-01

    The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well they actually perform is unknown. We explore this hotly contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between the maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform and why would be valuable for making more effective policy.
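
    Both performance metrics described above are simple to state in code; a sketch assuming per-site observed maximum shaking and map predictions are already in hand:

      import numpy as np

      def map_metrics(observed_max, predicted, target_fraction):
          # Metric 1: distance between the achieved exceedance fraction and the
          # fraction the map design targets. Metric 2: mean squared misfit
          # between observed maxima and map predictions.
          exceed = np.mean(np.asarray(observed_max) > np.asarray(predicted))
          m1 = abs(exceed - target_fraction)
          m2 = np.mean((np.asarray(observed_max) - np.asarray(predicted)) ** 2)
          return m1, m2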

  9. A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.

    2014-12-01

    A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution, with confidence bounds estimated using approximations given by Zaliapin et al. In this way, the available seismic moment is expressed in terms of reservoir volume change, and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field, calibrated using various geodetic data, allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. The predicted seismic moment is shown to be strongly dependent on planned gas production.
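
    The Monte Carlo sampling step has a standard inverse-transform form; a sketch for a doubly truncated Gutenberg-Richter magnitude law (parameter values are illustrative, not the Groningen calibration):

      import numpy as np

      def sample_gr_magnitudes(n, b=1.0, m_min=1.5, m_max=5.0, seed=0):
          # Inverse-transform sampling of a doubly truncated
          # Gutenberg-Richter magnitude distribution.
          rng = np.random.default_rng(seed)
          u = rng.uniform(size=n)
          span = 1.0 - 10 ** (-b * (m_max - m_min))
          return m_min - np.log10(1.0 - u * span) / b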

  10. Genetic algorithm for TEC seismo-ionospheric anomalies detection around the time of the Solomon (Mw = 8.0) earthquake of 06 February 2013

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-08-01

    On 6 February 2013, at 12:12:27 local time (01:12:27 UTC), a seismic event registering Mw 8.0 struck the Solomon Islands, located at the boundary of the Australian and Pacific tectonic plates. Time series prediction is an important topic of broad interest in research on earthquake precursors. This paper describes a new computational intelligence approach to detect unusual variations of the total electron content (TEC), i.e., seismo-ionospheric anomalies induced by the powerful Solomon earthquake, using a genetic algorithm (GA). The GA detected a considerable number of anomalous occurrences on the earthquake day, and also 7 and 8 days prior to the earthquake, during a period of high geomagnetic activity. The TEC anomalies detected by the proposed method are also compared with those obtained by applying the mean, median, wavelet, Kalman filter, ARIMA, neural network, and support vector machine methods. The agreement among the final results of all eight methods is a convincing indication of the efficiency of the GA method, and suggests that a GA can be an appropriate non-parametric tool for anomaly detection in nonlinear time series containing seismo-ionospheric precursor variations.
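
    Of the comparison methods listed, the median-based detector is the simplest to sketch; a common form flags samples outside median ± k·IQR within a sliding window (the window length and k below are illustrative choices, not the paper's settings):

      import numpy as np

      def median_iqr_anomalies(tec, window=73, k=1.5):
          # Flag TEC samples outside median +/- k*IQR of a sliding window,
          # a common baseline for ionospheric anomaly detection.
          half = window // 2
          flags = np.zeros(len(tec), dtype=bool)
          for i in range(len(tec)):
              seg = tec[max(0, i - half): i + half + 1]
              q1, med, q3 = np.percentile(seg, [25, 50, 75])
              flags[i] = abs(tec[i] - med) > k * (q3 - q1)
          return flags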

  11. Earthquakes for Kids

    MedlinePlus

    ... across a fault to learn about past earthquakes.

  12. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss the advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of applicability of the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and for their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict the future earthquake number distribution in regions where very large earthquakes have not yet occurred.
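
    A minimal moment-matching fit of the NBD to windowed event counts illustrates the role of the second (overdispersion) parameter; this is a sketch of the general idea, not the paper's estimation procedure:

      import numpy as np

      def fit_nbd_counts(counts):
          # Moment-matching negative binomial fit to earthquake counts in
          # equal time windows. Returns (n, p) in the scipy.stats.nbinom
          # convention; variance above the mean signals clustering.
          mean, var = np.mean(counts), np.var(counts, ddof=1)
          if var <= mean:
              raise ValueError("no overdispersion; a Poisson model is adequate")
          p = mean / var
          n = mean * p / (1.0 - p)
          return n, p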

  13. Characteristics of strong motions and damage implications of the MS 6.5 Ludian earthquake on August 3, 2014

    NASA Astrophysics Data System (ADS)

    Xu, Peibin; Wen, Ruizhi; Wang, Hongwei; Ji, Kun; Ren, Yefei

    2015-02-01

    Ludian County of Yunnan Province in southwestern China was struck by an MS 6.5 earthquake on August 3, 2014, another destructive event following the MS 8.0 Wenchuan earthquake in 2008, the MS 7.1 Yushu earthquake in 2010, and the MS 7.0 Lushan earthquake in 2013. The National Strong-Motion Observation Network System of China collected 74 strong-motion recordings, among which the maximum peak ground acceleration, recorded at station 053LLT in Longtoushan Town, was 949 cm/s2 in the E-W component. The observed PGAs and spectral ordinates were compared with a ground-motion prediction equation used in China and with the NGA-West2 models developed by the Pacific Earthquake Engineering Research Center. This earthquake is considered the first case for testing the applicability of NGA-West2 in China. Results indicate that the observed PGAs and the 5% damped pseudo-response spectral accelerations are significantly lower than the predicted ones. A field survey around some typical strong motion stations verified that the earthquake damage was consistent with the official isoseismal map issued by the China Earthquake Administration.

  14. A Comparison of Earthquake Back-Projection Imaging Methods for Dense Local Arrays, and Application to the 2011 Virginia Aftershock Sequence

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.

    2016-12-01

    Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes, from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real-data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal-to-noise ratios, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
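
    Of the four pre-processing choices compared, STA/LTA is the easiest to sketch; a causal windowed version on squared amplitudes (the window lengths are illustrative, not the study's settings):

      import numpy as np

      def sta_lta(x, dt, sta_win=0.05, lta_win=0.5):
          # Classic STA/LTA characteristic function: ratio of short- to
          # long-window mean energy, both windows ending at the current sample.
          ns, nl = int(sta_win / dt), int(lta_win / dt)
          e = np.concatenate(([0.0], np.cumsum(np.asarray(x, float) ** 2)))
          out = np.zeros(len(x))
          for i in range(nl, len(x)):
              sta = (e[i + 1] - e[i + 1 - ns]) / ns
              lta = (e[i + 1] - e[i + 1 - nl]) / nl
              out[i] = sta / max(lta, 1e-20)   # values >> 1 mark energy onsets
          return out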

  15. Global catalog of earthquake rupture velocities shows anticorrelation between stress drop and rupture velocity

    NASA Astrophysics Data System (ADS)

    Chounet, Agnès; Vallée, Martin; Causse, Mathieu; Courboulex, Françoise

    2018-05-01

    Application of the SCARDEC method provides the apparent source time functions together with the seismic moment, depth, and focal mechanism for most recent earthquakes with magnitude larger than 5.6-6. Using this large dataset, we have developed a method to systematically invert for the rupture direction and average rupture velocity Vr when unilateral rupture propagation dominates. The approach is applied to all the shallow (z < 120 km) earthquakes of the catalog over the 1992-2015 time period. After a careful validation process, rupture properties for a catalog of 96 earthquakes are obtained. The subsequent analysis of this catalog provides several insights into the seismic rupture process. We first report that up-dip ruptures are more abundant than down-dip ruptures for shallow subduction interface earthquakes, which can be understood as a consequence of the material contrast between the slab and the overriding crust. Rupture velocities, which are searched without any a priori constraint up to the maximal P-wave velocity (6000-8000 m/s), are found to lie between 1200 m/s and 4500 m/s. This observation indicates that no earthquakes propagate over long distances with rupture velocity approaching the P-wave velocity. Among the 23 ruptures faster than 3100 m/s, we observe both documented supershear ruptures (e.g. the 2001 Kunlun earthquake) and undocumented ruptures that very likely include a supershear phase. We also find that the correlation of Vr with the source duration scaled to the seismic moment (Ts) is very weak. This directly implies that both Ts and Vr are anticorrelated with the stress drop Δσ. This result has implications for the assessment of the variability of peak ground acceleration (PGA). As shown by Causse and Song (2015), an anticorrelation between Δσ and Vr significantly reduces the predicted PGA variability and brings it closer to the observed variability.

  16. Precise Relative Earthquake Magnitudes from Cross Correlation

    DOE PAGES

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
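
    The intercorrelation idea can be sketched as a least-squares problem (a minimal illustration, not the authors' implementation): each correlated event pair yields a log amplitude ratio, and all pairs are inverted jointly for per-event relative sizes:

      import numpy as np

      def relative_magnitudes(pairs, n_events):
          # pairs: list of (i, j, log10_amp_ratio) from cross-correlation
          # scale factors between events i and j. Solves least squares for
          # per-event relative log amplitudes, with the mean pinned to zero
          # to remove the undetermined absolute level.
          rows = len(pairs) + 1
          A, d = np.zeros((rows, n_events)), np.zeros(rows)
          for r, (i, j, ratio) in enumerate(pairs):
              A[r, i], A[r, j], d[r] = 1.0, -1.0, ratio
          A[-1, :] = 1.0                       # constraint row: mean = 0
          m, *_ = np.linalg.lstsq(A, d, rcond=None)
          return m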

  17. Finite Source Inversion for Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Parker, J. M.; Glaser, S. D.

    2017-12-01

    We produce finite source inversion results for laboratory earthquakes (LEQs) in PMMA, confirmed by video recording of the fault contact. The LEQs are generated under highly controlled laboratory conditions and recorded by an array of absolutely calibrated acoustic emission (AE) sensors. Following the method of Hartzell and Heaton (1983), we develop a solution using only the single-component AE sensors common in laboratory experiments. A set of calibration tests using glass capillary sources of varying size resolves the material characteristics and synthetic Green's functions such that uncertainty in source location is reduced to 3σ < 1 mm; typical source radii are 1 mm. Well-isolated events with corner frequencies on the order of 0.1 MHz (Mw -6) are recorded at 20 MHz and initially band-pass filtered from 0.1 to 1.0 MHz; in comparison, large earthquakes with corner frequencies around 0.1 Hz are commonly filtered from 0.1 to 1.0 Hz. We compare the results of the inversion and video recording to the slip distribution predicted by the Cattaneo partial-slip asperity model and by numerical modeling. Not all asperities are large enough to resolve individually, so some results must be interpreted as the smoothed effects of clusters of tiny contacts. For large asperities, partial slip is observed originating at the asperity edges and moving inward, as predicted by the theory. Furthermore, expanding shear rupture fronts are observed as they reach resistive patches of asperities and halt or continue, depending on the relative energies of rupture and resistance.

  18. Defining "Acceptable Risk" for Earthquakes Worldwide

    NASA Astrophysics Data System (ADS)

    Tucker, B.

    2001-05-01

    The greatest and most rapidly growing earthquake risk for mortality is in developing countries. Furthermore, the earthquake risk management actions of the last 50 years have reduced the average lethality of earthquakes in earthquake-threatened industrialized countries. (This is separate from the trend of increasing fiscal cost of earthquakes there.) Despite these clear trends, every new earthquake in developing countries is described in the media as a "wake-up" call announcing the risk these countries face. GeoHazards International (GHI) works at both the community and the policy levels to try to reduce earthquake risk. GHI reduces death and injury by helping vulnerable communities recognize their risk and the methods to manage it: raising awareness of the risk, building local institutions to manage that risk, and strengthening schools to protect and train the community's future generations. At the policy level, GHI, in collaboration with research partners, is examining whether "acceptance" of these large risks by people in these countries and by international aid and development organizations explains the lack of activity in reducing them. The goal of this pilot project, the Global Earthquake Safety Initiative (GESI), is to develop and evaluate a means of measuring the risk and the effectiveness of risk mitigation actions in the world's largest, most vulnerable cities: in short, to develop an earthquake risk index. One application of this index is to compare the risk and the risk mitigation effort of "comparable" cities. By this means, Lima, for example, can compare the risk of its citizens dying due to earthquakes with the risk of citizens in Santiago and Guayaquil. The authorities of Delhi and Islamabad can compare the relative risk from earthquakes of their school children. This index can be used to measure the effectiveness of alternate mitigation projects, to set goals for mitigation projects, and to plot progress meeting those goals. The preliminary

  19. A review on remotely sensed land surface temperature anomaly as an earthquake precursor

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anshuman; Singh, Shaktiman; Sam, Lydia; Joshi, P. K.; Bhardwaj, Akanksha; Martín-Torres, F. Javier; Kumar, Rajesh

    2017-12-01

    The low predictability of earthquakes and the high uncertainty associated with their forecasts make earthquakes one of the worst natural calamities, capable of causing instant loss of life and property. Here, we discuss the studies reporting the observed anomalies in the satellite-derived Land Surface Temperature (LST) before an earthquake. We compile the conclusions of these studies and evaluate the use of remotely sensed LST anomalies as precursors of earthquakes. The arrival times and the amplitudes of the anomalies vary widely, thus making it difficult to consider them as universal markers to issue earthquake warnings. Based on the randomness in the observations of these precursors, we support employing a global-scale monitoring system to detect statistically robust anomalous geophysical signals prior to earthquakes before considering them as definite precursors.

  20. Radon anomaly in soil gas as an earthquake precursor.

    PubMed

    Miklavcić, I; Radolić, V; Vuković, B; Poje, M; Varga, M; Stanić, D; Planinić, J

    2008-10-01

    The mechanical processes of earthquake preparation are always accompanied by deformations; afterwards, complex short- or long-term precursory phenomena can appear. Anomalies of radon concentration in soil gas are registered a few weeks or months before many earthquakes. Radon concentrations in soil gas were continuously measured with LR-115 nuclear track detectors at site A (Osijek) during a 4-year period, as well as with a Barasol semiconductor detector at site B (Kasina) during 2 years. We investigated the influence of meteorological parameters on the temporal radon variations, and we determined a multiple regression equation that enabled the reduction (deconvolution) of the radon variations caused by barometric pressure, rainfall and temperature. The pre-earthquake radon anomalies at site A indicated 46% of the seismic events under the criterion M≥3, R<200 km, and 21% at site B. Empirical equations between earthquake magnitude, epicenter distance and precursor time enabled the estimation or prediction of an earthquake that will occur at epicentral distance R from the monitoring site within the expected precursor time T.
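
    The meteorological correction described (multiple regression of radon on pressure, rainfall, and temperature, then subtraction of the fitted part) can be sketched in a few lines:

      import numpy as np

      def deconvolve_radon(radon, pressure, rain, temp):
          # Fit radon = b0 + b1*pressure + b2*rain + b3*temp by least squares
          # and return the residual series, in which meteorologically driven
          # variation is removed and precursory anomalies are easier to spot.
          X = np.column_stack([np.ones_like(radon), pressure, rain, temp])
          beta, *_ = np.linalg.lstsq(X, radon, rcond=None)
          return radon - X @ beta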

  1. Urban Earthquakes - Reducing Building Collapse Through Education

    NASA Astrophysics Data System (ADS)

    Bilham, R.

    2004-12-01

    Fatalities from earthquakes rose from 6000 to 9000 per year in the past decade, yet the ratio of the number of earthquake fatalities to instantaneous population continues to fall. Since 1950 the ratio has declined worldwide by a factor of three, but in some countries the ratio has changed little. In Iran, for example, 1 in 3000 people can expect to die in an earthquake, a ratio that has not changed significantly since 1890. Fatalities from earthquakes remain high in those countries that have traditionally suffered from frequent large earthquakes (Turkey, Iran, Japan, and China), suggesting that the exposure time of recently increased urban populations in other countries may be too short to have interacted with earthquakes with long recurrence intervals. This, in turn, suggests that disasters of unprecedented size (more than 1 million fatalities) will occur when future large earthquakes strike close to megacities. However, population growth is most rapid in cities of less than 1 million people in developing nations, where the financial ability to implement earthquake-resistant construction methods is limited. Because structural collapse can often be traced to ignorance about the forces at work in an earthquake, the future collapse of buildings presently under construction could be much reduced if contractors, builders, and occupants were educated in the principles of earthquake-resistant assembly. Education of builders who are tempted to cut assembly costs is likely to be more cost-effective than material aid.

  2. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations in seismicity in the space-time-magnitude domain encode some information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalog and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest-neighbors that allows us to consider the overall space-time-magnitude distribution of the k earthquakes that are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k-nearest-neighbors. In essence, the main goal of this study is to verify whether different classes of target event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
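
    The pairwise metric underlying this family of methods (Baiesi & Paczuski, 2004) rescales the interevent space-time distance by the earlier event's magnitude, η_ij = t_ij · r_ij^df · 10^(-b·m_i); a sketch with illustrative b-value and fractal dimension:

      import numpy as np

      def bp_distance(t_i, x_i, m_i, t_j, x_j, b=1.0, df=1.6):
          # Baiesi-Paczuski space-time-magnitude distance from earthquake i
          # to a later earthquake j; df is the fractal dimension of
          # epicenters, b the Gutenberg-Richter slope.
          dt = t_j - t_i
          if dt <= 0:
              return np.inf                    # only earlier events can be parents
          r = np.linalg.norm(np.asarray(x_j) - np.asarray(x_i))
          return dt * max(r, 1e-6) ** df * 10 ** (-b * m_i)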

  3. Comparative Analysis of Peak Ground Acceleration Before and After Padang Earthquake 2009 Using Mc. Guirre Method

    NASA Astrophysics Data System (ADS)

    Ayu Rahmalia, Diah; Nilamprasasti, Hesti

    2017-04-01

    We have analyzed earthquake data for West Sumatra province to determine peak ground acceleration values. The peak ground acceleration is a parameter that describes the strength of past ground shaking. This paper compares peak ground acceleration values, taking into account the b-value before and after the 2009 Padang earthquake. The research was carried out in stages, starting by taking earthquake data for West Sumatra province within the boundary coordinates 0.923° N - 2.811° S and 97.075° - 102.261° E, before and after the 2009 Padang earthquake, with magnitude ≥ 3 and depth ≤ 300 km, followed by calculation of the b-value, and ending with the creation of a peak ground acceleration map based on the Mc. Guirre empirical formula using Excel and Surfer software. Based on earthquake data from 2002 up to the 2009 Padang earthquake, the b-value is 0.874, while the b-value from after the 2009 Padang earthquake to 2016 is 0.891. Given these b-values, the peak ground acceleration before and after the 2009 Padang earthquake can be expected to differ. Based on the seismic data before 2009, the peak ground acceleration of West Sumatra province ranges from 7.002 to 308.875 gal. This is compared with the peak ground acceleration after the 2009 Padang earthquake, which ranges from 7.946 to 372.736 gal.
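
    The b-value step has a standard maximum-likelihood form (Aki, 1965), sketched below; the paper does not state its exact estimator, so this is an assumption:

      import numpy as np

      def b_value_mle(mags, mc):
          # Aki (1965) maximum-likelihood b-value for magnitudes at or above
          # the completeness magnitude mc: b = log10(e) / (mean(M) - mc).
          m = np.asarray(mags)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - mc)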

  4. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model

  5. Predicting casualties implied by TIPs

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Wyss, M.; Wyss, B. M.

    2009-12-01

    When an earthquake is predicted, forecast, or expected with higher than normal probability, losses are implied. We estimated the casualties (fatalities plus injured) that should be expected if earthquakes in TIPs (locations of Temporarily Increased Probability of earthquakes) defined by Kossobokov et al. (2009) were to occur. We classified the predicted losses into the categories red (more than 400 fatalities or more than 1,000 injured), yellow (between 100 and 400 fatalities), green (fewer than 100 fatalities), and gray (undetermined). TIPs in Central Chile, the Philippines, Papua, and Taiwan are in the red class; TIPs in Southern Sumatra, Nicaragua, Vanuatu, and Honshu are in the yellow class; and TIPs in Tonga, the Loyalty Islands, Vanuatu, the South Sandwich Islands, the Banda Sea, and the Kuriles are classified as green. TIPs where the losses depend moderately on the assumed point of major energy release were classified as yellow; TIPs such as those in the Talaud Islands and in Tonga, where the losses depend very strongly on the location of the epicenter, were classified as gray. The accuracy of loss estimates after earthquakes with known hypocenter and magnitude is affected by uncertainties in transmission and soil properties, the composition of the building stock, the population present, and the method by which the numbers of casualties are calculated. In the case of TIPs, uncertainties in magnitude and location are added, so we calculate losses for a range of these two parameters. Therefore, our calculations can only be considered order-of-magnitude estimates. Nevertheless, our predictions can come within a factor of two of the observed numbers, as in the case of the M7.6 earthquake of October 2005 in Pakistan that resulted in 85,000 fatalities (Wyss, 2005). In subduction zones, the geometrical relationship between the earthquake source capable of a great earthquake and the population is clear because there is only one major fault plane available, thus the epicentral
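
    The red/yellow/green/gray scheme is a simple thresholding of predicted losses. A minimal sketch follows; the red, yellow, and green thresholds come from the abstract, while the numeric test for the gray class (losses depending too strongly on the assumed epicenter) is our own stand-in, since the abstract gives no formula for it.

        def classify_tip(fatalities_low, fatalities_high, injured_high):
            """Classify a TIP from the low/high ends of its predicted losses.
            The spread-ratio test for 'gray' is an assumed encoding."""
            if fatalities_low > 0 and fatalities_high / fatalities_low > 100:
                return "gray"    # losses depend too strongly on epicenter
            if fatalities_high > 400 or injured_high > 1000:
                return "red"
            if fatalities_high >= 100:
                return "yellow"
            return "green"

        print(classify_tip(50, 900, 2500))   # red
        print(classify_tip(10, 80, 200))     # green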

  6. Evaluating earthquake hazards in the Los Angeles region; an earth-science perspective

    USGS Publications Warehouse

    Ziony, Joseph I.

    1985-01-01

    Potentially destructive earthquakes are inevitable in the Los Angeles region of California, but hazards prediction can provide a basis for reducing damage and loss. This volume identifies the principal geologically controlled earthquake hazards of the region (surface faulting, strong shaking, ground failure, and tsunamis), summarizes methods for characterizing their extent and severity, and suggests opportunities for their reduction. Two systems of active faults generate earthquakes in the Los Angeles region: northwest-trending, chiefly horizontal-slip faults, such as the San Andreas, and west-trending, chiefly vertical-slip faults, such as those of the Transverse Ranges. Faults in these two systems have produced more than 40 damaging earthquakes since 1800. Ninety-five faults have slipped in late Quaternary time (approximately the past 750,000 yr) and are judged capable of generating future moderate to large earthquakes and displacing the ground surface. Average rates of late Quaternary slip or separation along these faults provide an index of their relative activity. The San Andreas and San Jacinto faults have slip rates measured in tens of millimeters per year, but most other faults have rates of about 1 mm/yr or less. Intermediate rates of as much as 6 mm/yr characterize a belt of Transverse Ranges faults that extends from near Santa Barbara to near San Bernardino. The dimensions of late Quaternary faults provide a basis for estimating the maximum sizes of likely future earthquakes in the Los Angeles region: moment magnitude (M) 8 for the San Andreas, M 7 for the other northwest-trending elements of that fault system, and M 7.5 for the Transverse Ranges faults. Geologic and seismologic evidence along these faults, however, suggests that, for planning and designing noncritical facilities, appropriate sizes would be M 8 for the San Andreas, M 7 for the San Jacinto, M 6.5 for other northwest-trending faults, and M 6.5 to 7 for the Transverse Ranges faults. The

  7. SMSIM--Fortran programs for simulating ground motions from earthquakes: Version 2.0--a revision of OFR 96-80-A

    USGS Publications Warehouse

    Boore, David M.

    2000-01-01

    A simple and powerful method for simulating ground motions is based on the assumption that the amplitude of ground motion at a site can be specified in a deterministic way, with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers, and it is widely used to predict ground motions for regions of the world in which recordings of motion from damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms that can be used to predict ground motions. SMSIM is a set of programs for simulating ground motions based on the stochastic method. This Open-File Report is a revision of an earlier report (Boore, 1996) describing a set of programs for simulating ground motions from earthquakes. The programs are based on modifications I have made to the stochastic method first introduced by Hanks and McGuire (1981). The report contains source codes, written in Fortran, and executables that can be used on a PC. Programs are included both for time-domain and for random vibration simulations. In addition, programs are included to produce Fourier amplitude spectra for the models used in the simulations and to convert shear velocity vs. depth into frequency-dependent amplification. The revision to the previous report is needed because the input and output files have changed significantly, and a number of new programs have been included in the set.
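
    The core of the stochastic method is compact enough to sketch: shape the spectrum of windowed Gaussian noise (the random phase) to a deterministic target spectrum whose amplitude depends on magnitude and distance. The sketch below is a bare-bones illustration, not SMSIM itself; the omega-squared source spectrum and Brune corner frequency are standard, but the duration model and the omission of path attenuation and site filters are simplifying assumptions, and the output amplitudes are in arbitrary units.

        import numpy as np

        def stochastic_acceleration(mag, dist_km, dt=0.005, stress_bars=100.0,
                                    beta_kms=3.5, seed=0):
            """Synthetic acceleration trace, arbitrary units (constants omitted)."""
            rng = np.random.default_rng(seed)
            # Brune corner frequency; moment in dyn*cm (log10 M0 = 1.5*Mw + 16.05).
            m0 = 10.0 ** (1.5 * mag + 16.05)
            fc = 4.9e6 * beta_kms * (stress_bars / m0) ** (1.0 / 3.0)
            dur = 1.0 / fc + 0.05 * dist_km          # assumed duration model, s
            n = int(2 ** np.ceil(np.log2(dur / dt)))
            # Windowed Gaussian noise supplies the random phase spectrum.
            noise = rng.standard_normal(n) * np.hanning(n)
            spec = np.fft.rfft(noise)
            spec /= np.sqrt(np.mean(np.abs(spec) ** 2))   # unit mean-square amp
            f = np.fft.rfftfreq(n, dt)
            # Deterministic target: omega-squared source spectrum with simple
            # 1/R geometric spreading.
            target = (2 * np.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2) / dist_km
            return np.fft.irfft(spec * target, n)

        acc = stochastic_acceleration(mag=6.5, dist_km=20.0)
        print(acc.size, float(np.abs(acc).max()))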

  8. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8, and six earthquakes larger than Mw 8.5, since 2004 has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843 Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show that it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that failure to incorporate the best available catalogs of historical earthquakes will likely lead to significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  9. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source-imaging techniques based on conventional low-frequency finite-fault inversions. Seismic array processing is an alternative source-imaging technique that employs the higher-frequency content of earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection has provided key observations of previous large earthquakes, standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC (MUltiple SIgnal Classification) technique, a high-resolution array-processing method that aims to narrow the gap between seismic observations and earthquake simulations. MUSIC achieves high resolution by exploiting higher-order signal statistics, but it has not been widely used in seismology because of the nonstationary and incoherent nature of seismic signals. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow recent large earthquakes to be imaged in finer detail, giving rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors that relate to material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The
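
    The core MUSIC idea is to project candidate steering vectors onto the noise subspace of the array covariance matrix and look for minima. The sketch below shows the textbook narrowband algorithm on a synthetic linear array; the multitaper cross-spectrum and "reference window" refinements described in the abstract are omitted, and the array geometry and test signal are our own assumptions.

        import numpy as np

        def music_spectrum(data, positions_km, freq_hz, slowness_grid, n_sources=1):
            """data: (n_sensors, n_snapshots) complex narrowband snapshots."""
            r = data @ data.conj().T / data.shape[1]          # covariance matrix
            w, v = np.linalg.eigh(r)                          # ascending eigenvalues
            noise_sub = v[:, : data.shape[0] - n_sources]     # noise subspace
            p = []
            for s in slowness_grid:                           # slowness in s/km
                a = np.exp(-2j * np.pi * freq_hz * s * positions_km)
                a /= np.linalg.norm(a)                        # steering vector
                p.append(1.0 / np.real(a.conj() @ noise_sub
                                       @ noise_sub.conj().T @ a))
            return np.array(p)

        # Synthetic test: one plane wave at slowness 0.2 s/km, f = 1 Hz.
        rng = np.random.default_rng(1)
        x = np.arange(10) * 0.5                   # sensor positions on a line, km
        t = rng.uniform(size=50)                  # random snapshot phases
        snaps = (np.exp(2j * np.pi * 1.0 * (t[None, :] - 0.2 * x[:, None]))
                 + 0.1 * (rng.standard_normal((10, 50))
                          + 1j * rng.standard_normal((10, 50))))
        grid = np.linspace(-0.5, 0.5, 201)
        print(grid[np.argmax(music_spectrum(snaps, x, 1.0, grid))])   # ~0.2 s/km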

  10. Earthquakes drive focused denudation along a tectonically active mountain front

    NASA Astrophysics Data System (ADS)

    Li, Gen; West, A. Joshua; Densmore, Alexander L.; Jin, Zhangdong; Zhang, Fei; Wang, Jin; Clark, Marin; Hilton, Robert G.

    2017-08-01

    Earthquakes cause widespread landslides that can increase erosional fluxes observed over years to decades. However, the impact of earthquakes on denudation over the longer timescales relevant to orogenic evolution remains elusive. Here we assess erosion associated with earthquake-triggered landslides in the Longmen Shan range at the eastern margin of the Tibetan Plateau. We use the Mw 7.9 2008 Wenchuan and Mw 6.6 2013 Lushan earthquakes to evaluate how seismicity contributes to the erosional budget from short timescales (annual to decadal, as recorded by sediment fluxes) to long timescales (kyr to Myr, from cosmogenic nuclides and low-temperature thermochronology). Over this wide range of timescales, the highest rates of denudation in the Longmen Shan coincide spatially with the region of most intense landsliding during the Wenchuan earthquake. Across sixteen gauged river catchments, sediment-flux-derived denudation rates following the Wenchuan earthquake are closely correlated with seismic ground motion and the associated volume of Wenchuan-triggered landslides (r² > 0.6), and to a lesser extent with the frequency of high-intensity runoff events (r² = 0.36). To assess whether earthquake-induced landsliding can contribute importantly to denudation over longer timescales, we model the total volume of landslides triggered by earthquakes of various magnitudes over multiple earthquake cycles. We combine models that predict the volumes of landslides triggered by earthquakes, calibrated against the Wenchuan and Lushan events, with an earthquake magnitude-frequency distribution. The long-term, landslide-sustained "seismic erosion rate" is similar in magnitude to regional long-term denudation rates (∼0.5-1 mm yr⁻¹). The similar magnitude and spatial coincidence suggest that earthquake-triggered landslides are a primary mechanism of long-term denudation in the frontal Longmen Shan. We propose that the location and intensity of seismogenic faulting can contribute to
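
    The long-term "seismic erosion rate" calculation the abstract describes can be sketched by combining a magnitude-frequency law with a landslide-volume scaling. In the sketch below both pieces are assumptions: a Gutenberg-Richter recurrence law and a log-linear volume scaling log10 V = a*M + b with placeholder coefficients, not the paper's calibrated Wenchuan/Lushan model.

        import numpy as np

        def seismic_erosion_rate_mm_yr(rate_m5=1.0, b_gr=1.0, m_max=8.0,
                                       a_ls=1.4, b_ls=-11.3, area_km2=2000.0):
            """Long-term landslide erosion rate from assumed GR recurrence
            (rate_m5 events/yr with M >= 5) and volume scaling (V in km^3)."""
            m = np.arange(5.0, m_max, 0.1)                   # magnitude bins
            n_per_yr = rate_m5 * (10 ** (-b_gr * (m - 5.0))
                                  - 10 ** (-b_gr * (m + 0.1 - 5.0)))
            v_km3 = 10 ** (a_ls * (m + 0.05) + b_ls)         # volume per event
            return (n_per_yr * v_km3).sum() / area_km2 * 1e6  # km/yr -> mm/yr

        # With these illustrative inputs the result is of order 0.1-1 mm/yr.
        print(seismic_erosion_rate_mm_yr())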

  11. Children's emotional experience two years after an earthquake: An exploration of knowledge of earthquakes and associated emotions

    PubMed Central

    Burro, Roberto; Hall, Rob

    2017-01-01

    A major earthquake has a potentially highly traumatic impact on children’s psychological functioning. However, while many studies on children describe negative consequences in terms of mental health and psychiatric disorders, little is known about how the development of emotions can be affected following exposure to disasters. Objectives: We explored whether and how exposure to a natural disaster such as the 2012 Emilia Romagna earthquake affected the development of children’s emotional competence in terms of understanding, regulating, and expressing emotions, two years later, when compared with a control group not exposed to the earthquake. We also examined the role of class level and gender. Method: The sample included two groups of children (n = 127) attending primary school: the experimental group (n = 65) experienced the 2012 Emilia Romagna earthquake, while the control group (n = 62) did not. Data collection took place two years after the earthquake, when the children were seven or ten years old. Beyond assessing the children’s understanding of emotions and regulation abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of earthquakes and associated emotions, and a structured task on the intensity of some target emotions. Results: We applied Generalized Linear Mixed Models. Exposure to the earthquake did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of earthquakes, emotional language, and emotions associated with earthquakes were, respectively, more complex, frequent, and intense for children who had experienced the earthquake, and at increasing ages. Conclusions: Our data extend the generalizability of theoretical models of children’s psychological functioning following disasters, such as the dose-response model and the organizational-developmental model for child resilience, and

  12. Limiting the effects of earthquakes on gravitational-wave interferometers

    USGS Publications Warehouse

    Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew

    2017-01-01

    Ground-based gravitational-wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The down time can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system, with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early-warning system for modern gravitational-wave observatories. The system relies on near-real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low-latency hypocenter and magnitude information is generally available within 5 to 20 min of a significant earthquake, depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival-time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. Using a machine-learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that, by using detector control configuration changes, we could prevent interruption of operation from 40 to 100 earthquake events in a 6-month period.
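
    The alert-to-detector step the abstract describes can be sketched as: take a preliminary hypocenter and magnitude, compute the great-circle distance to the observatory, and predict a surface-wave arrival time and a rough ground-velocity amplitude. This is an illustration only; the 3.5 km/s group velocity and the power-law amplitude model below are our placeholder assumptions, not the fitted model of the paper.

        import math

        def great_circle_km(lat1, lon1, lat2, lon2):
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            return 6371.0 * math.acos(math.sin(p1) * math.sin(p2)
                                      + math.cos(p1) * math.cos(p2)
                                      * math.cos(dlon))

        def predict_at_site(alert, site_lat, site_lon, group_vel_kms=3.5):
            """Arrival time (s after origin) and a rough velocity amplitude."""
            d = great_circle_km(alert["lat"], alert["lon"], site_lat, site_lon)
            arrival_s = alert["origin_time_s"] + d / group_vel_kms
            # Hypothetical amplitude law: grows with magnitude, decays with
            # distance; exponents are placeholders.
            vel_m_s = 10.0 ** (alert["mag"] - 5.0) * 1e-6 / (d / 1000.0) ** 1.5
            return arrival_s, vel_m_s

        # e.g., an M7.8 alert near Japan evaluated at the LIGO Hanford site:
        alert = {"lat": 38.3, "lon": 142.4, "mag": 7.8, "origin_time_s": 0.0}
        print(predict_at_site(alert, 46.46, -119.41))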

  13. Limiting the effects of earthquakes on gravitational-wave interferometers

    NASA Astrophysics Data System (ADS)

    Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew

    2017-02-01

    Ground-based gravitational-wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The down time can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system, with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early-warning system for modern gravitational-wave observatories. The system relies on near-real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low-latency hypocenter and magnitude information is generally available within 5 to 20 min of a significant earthquake, depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival-time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. Using a machine-learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that, by using detector control configuration changes, we could prevent interruption of operation from 40 to 100 earthquake events in a 6-month period.

  14. Analog earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  15. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Determining earthquake source parameters is an essential problem in seismology. Accurate and timely determination of parameters such as moment, depth, and the strike, dip, and rake of the fault planes matters for both rupture dynamics and ground-motion prediction or simulation. Rupture-process studies, especially for moderate and large earthquakes, have become routine work for seismologists. Among these events, however, some behave unusually and intrigue seismologists. Such earthquakes typically consist of two similar-sized sub-events occurring within a very short time interval, such as the mb 4.5 earthquake of 9 December 2003 in Virginia. Studying these special events, including determining the source parameters of each sub-event, helps in understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed together, which makes the inversion difficult. For ordinary events, the Cut-and-Paste (CAP) method has proven effective for resolving source parameters: it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth by grid search. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously determine the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency. Thanks to the GPU's advantages in multidimensional storage and processing, the revised code achieves excellent performance on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared with classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events, and the inversion results are very consistent with the
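
    The grid-search core of a CAP-style inversion is easy to sketch: scan strike/dip/rake, score each candidate mechanism by a waveform misfit minimized over an independent time shift per window, and keep the best. The sketch below is ours, not MUL_CAP or HYBRID_CAP; Green's-function synthesis is stubbed out behind a hypothetical `synth` callable, and the toy demonstration at the end is purely illustrative.

        import itertools
        import numpy as np

        def best_shift_misfit(obs, syn, max_shift):
            """L2 misfit, minimized over integer sample shifts of the synthetic."""
            return min(float(np.sum((obs - np.roll(syn, k)) ** 2))
                       for k in range(-max_shift, max_shift + 1))

        def cap_grid_search(windows, synth, max_shift=10):
            """windows: list of (observed_trace, weight) pairs; synth(strike,
            dip, rake, i) returns the synthetic for window i (a hypothetical
            stand-in for real Green's-function synthesis)."""
            best_misfit, best_mech = np.inf, None
            for s, d, r in itertools.product(range(0, 360, 20),
                                             range(10, 91, 20),
                                             range(-90, 91, 20)):
                m = sum(w * best_shift_misfit(obs, synth(s, d, r, i), max_shift)
                        for i, (obs, w) in enumerate(windows))
                if m < best_misfit:
                    best_misfit, best_mech = m, (s, d, r)
            return best_misfit, best_mech

        # Toy check: "data" made from a known mechanism is recovered.
        rng = np.random.default_rng(0)
        base = [rng.standard_normal(64) for _ in range(3)]
        def synth(s, d, r, i):          # hypothetical toy synthetics
            amps = (np.sin(np.radians(s)), np.cos(np.radians(d)),
                    np.sin(np.radians(r)))
            return amps[i] * base[i]
        windows = [(synth(40, 50, -10, i), 1.0) for i in range(3)]
        print(cap_grid_search(windows, synth))   # -> (0.0, (40, 50, -10))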

  16. The global distribution of magnitude 9 earthquakes

    NASA Astrophysics Data System (ADS)

    McCaffrey, R.

    2011-12-01

    The 2011 Tohoku M9 earthquake once again caught some in the earthquake community by surprise. The expectation of these massive quakes has in the past been driven by over-reliance on our short, incomplete history of earthquakes and on causal relationships derived from it. The logic applied is that if a great earthquake has not happened in the past, so far as we know, one cannot happen in the future. Using the ~100-year global earthquake history, seismologists have promoted relationships between maximum earthquake size and other properties of subduction zones, leading to the notion that some subduction zones, like the Japan Trench, would never produce a magnitude ~9 event. The 2004 Andaman Mw = 9.2 earthquake, which occurred where old crust subducts slowly and where the historical record shows only moderate-sized earthquakes, seriously undermined such ideas. Given the multi-century return times of the greatest earthquakes, our ignorance of those return times, and our very limited observation span, I suggest that we cannot yet make such determinations. Alternatively, using the length of a subduction zone available for slip as the predominant factor determining maximum earthquake size, we cannot rule out that any subduction zone a few hundred kilometers or more in length may be capable of producing a magnitude 9 or larger earthquake. Based on this method, the expected maximum size for the Japan Trench was 9.0 (McCaffrey, Geology, p. 263, 2008). The same approach portends an M > 9 for Java, which has twice the population density of Honshu and much lower building standards. The Java Trench, and others where old crust subducts (Hikurangi, Marianas, Tonga, Kermadec), require increased awareness of the possibility of a great earthquake.

  17. Ionospheric anomalies detected by ionosonde and possibly related to crustal earthquakes in Greece

    NASA Astrophysics Data System (ADS)

    Perrone, Loredana; De Santis, Angelo; Abbattista, Cristoforo; Alfonsi, Lucilla; Amoruso, Leonardo; Carbone, Marianna; Cesaroni, Claudio; Cianchini, Gianfranco; De Franceschi, Giorgiana; De Santis, Anna; Di Giovambattista, Rita; Marchetti, Dedalo; Pavòn-Carrasco, Francisco J.; Piscini, Alessandro; Spogli, Luca; Santoro, Francesca

    2018-03-01

    Ionosonde data and crustal earthquakes with magnitude M ≥ 6.0 observed in Greece during the 2003-2015 period were examined to check whether the relationships obtained earlier between precursory ionospheric anomalies and earthquakes in Japan and central Italy are also valid for Greek earthquakes. The ionospheric anomalies are identified in the observed variations of the sporadic E-layer parameters (h'Es, foEs) and foF2 at the ionospheric station of Athens. The corresponding empirical relationships between the seismo-ionospheric disturbances and the earthquake magnitude and epicentral distance are obtained and found to be similar to those previously published for other case studies. The large lead times found for the occurrence of the ionospheric anomalies may confirm a rather long earthquake preparation period. Finally, the possibility of using the obtained relationships for earthquake prediction is discussed.
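
    The abstract reports empirical relations linking precursor occurrence to magnitude and epicentral distance but does not state them. As an illustration of this class of relation only, and not the authors' fitted equations, the classic Dobrovolsky et al. (1979) earthquake-preparation radius is often used to judge whether a station is close enough to an epicenter to register a precursor.

        def dobrovolsky_radius_km(mag):
            """Dobrovolsky et al. (1979) strain-radius estimate, rho = 10^(0.43 M) km."""
            return 10.0 ** (0.43 * mag)

        for m in (6.0, 6.5, 7.0):
            print(m, round(dobrovolsky_radius_km(m)))   # ~380, ~620, ~1020 km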

  18. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior—an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  19. Modeling the behavior of an earthquake base-isolated building.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coveney, V. A.; Jamil, S.; Johnson, D. E.

    1997-11-26

    Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.

  20. Citizen Seismology Provides Insights into Ground Motions and Hazard from Injection-Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Hough, S. E.

    2014-12-01

    The US Geological Survey "Did You Feel It?" (DYFI) system is a highly successful example of citizen seismology. Users around the world now routinely report felt earthquakes via the Web; this information is used to determine Community Decimal Intensity values. These data can be enormously valuable for addressing a key issue that has arisen recently: quantifying the shaking and hazard associated with injection-induced earthquakes. I consider the shaking from 11 moderate (Mw 3.9-5.7) earthquakes in the central and eastern United States that are believed to be induced by fluid injection. The distance decay of intensities for all events is consistent with that observed for regional tectonic earthquakes, but for all of the events the intensities are lower than values predicted from an intensity prediction equation derived using data from tectonic events. I introduce an effective intensity magnitude, MIE, defined as the magnitude that on average would generate a given intensity distribution. For all 11 events, MIE is lower than the event magnitude by 0.4-1.3 units, with an average difference of 0.8 units. This suggests that stress drops of injection-induced earthquakes are lower than those of tectonic earthquakes by a factor of 2-10. However, relatively limited data suggest that intensities at epicentral distances of less than 10 km are more commensurate with expectations for the event magnitude, which can be explained by the shallow focal depth of the events. The results suggest that damage from injection-induced earthquakes will be especially concentrated in the immediate epicentral region. These results further suggest a potential new discriminant for the identification of induced events. For example, while systematic analysis of California earthquakes remains to be done, DYFI data from the 2014 Mw 5.1 La Habra, California, earthquake reveal no evidence for unusually low intensities, adding to a growing volume of evidence that this was a natural tectonic event.
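
    The effective intensity magnitude idea can be sketched directly: invert an intensity prediction equation (IPE) for the magnitude that best reproduces the observed intensity-distance data. In the sketch below the IPE form and its coefficients are placeholders of our own, not the calibrated equation used in the study.

        import numpy as np

        def ipe(mag, r_km, c=(1.0, 1.5, 2.5, 0.002)):
            """Placeholder IPE: I = c0 + c1*M - c2*log10(R) - c3*R."""
            c0, c1, c2, c3 = c
            return c0 + c1 * mag - c2 * np.log10(r_km) - c3 * r_km

        def m_ie(intensities, r_km, c=(1.0, 1.5, 2.5, 0.002)):
            """Least-squares M_IE: invert the IPE for M at each report, average."""
            c0, c1, c2, c3 = c
            return np.mean((np.asarray(intensities) - c0
                            + c2 * np.log10(r_km) + c3 * np.asarray(r_km)) / c1)

        # Reports from a hypothetical induced event whose shaking falls off
        # as if the event were ~0.8 units smaller than its catalog magnitude.
        r = np.array([20.0, 50.0, 100.0, 200.0])
        obs = ipe(4.9, r)            # shaking consistent with M 4.9
        print(m_ie(obs, r))          # -> 4.9, vs. a catalog magnitude of 5.7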