Science.gov

Sample records for earthquake magnitude prediction

  1. A probabilistic neural network for earthquake magnitude prediction.

    PubMed

    Adeli, Hojjat; Panakkat, Ashif

    2009-09-01

    A probabilistic neural network (PNN) is presented for predicting the magnitude of the largest earthquake in a pre-defined future time period in a seismic region using eight mathematically computed parameters known as seismicity indicators. The indicators considered are the time elapsed during a particular number (n) of significant seismic events before the month in question, the slope of the Gutenberg-Richter inverse power law curve for the n events, the mean square deviation about the regression line based on the Gutenberg-Richter inverse power law for the n events, the average magnitude of the last n events, the difference between the observed maximum magnitude among the last n events and that expected through the Gutenberg-Richter relationship known as the magnitude deficit, the rate of square root of seismic energy released during the n events, the mean time or period between characteristic events, and the coefficient of variation of the mean time. Prediction accuracies of the model are evaluated using three different statistical measures: the probability of detection, the false alarm ratio, and the true skill score or R score. The PNN model is trained and tested using data for the Southern California region. The model yields good prediction accuracies for earthquakes of magnitude between 4.5 and 6.0. The PNN model presented in this paper complements the recurrent neural network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0.
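Two of the eight seismicity indicators named above, the Gutenberg-Richter slope (b-value) and the magnitude deficit, can be sketched in a few lines. This is a minimal illustration assuming the standard Aki/Utsu maximum-likelihood b-value estimator; the paper itself obtains the slope by regression, so these are not the authors' exact indicator definitions, and the sample catalog is invented.

```python
import math

def gutenberg_richter_b(mags, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood estimate of the G-R b-value for events
    at or above completeness magnitude m_min, binned at dm units."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2))

def magnitude_deficit(mags, m_min):
    """Observed maximum magnitude minus the maximum expected from the
    G-R relationship for a sample of n events (E[Mmax] ~= m_min + log10(n)/b)."""
    b = gutenberg_richter_b(mags, m_min)
    return max(mags) - (m_min + math.log10(len(mags)) / b)

# last n = 8 significant events in a hypothetical region
mags = [4.6, 4.8, 5.1, 4.5, 4.9, 5.3, 4.7, 5.0]
print(round(gutenberg_richter_b(mags, m_min=4.5), 2))  # ~1.05
print(round(magnitude_deficit(mags, m_min=4.5), 2))
```

A strongly negative deficit (observed maximum well below the G-R expectation) is one of the signals such models treat as potentially precursory.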

  2. Are Earthquake Magnitudes Clustered?

    SciTech Connect

    Davidsen, Joern; Green, Adam

    2011-03-11

    The question of earthquake predictability is a long-standing and important challenge. Recent results [Phys. Rev. Lett. 98, 098501 (2007); ibid. 100, 038501 (2008)] have suggested that earthquake magnitudes are clustered, thus indicating that they are not independent, in contrast to what is typically assumed. Here, we present evidence that the observed magnitude correlations are to a large extent, if not entirely, an artifact due to the incompleteness of earthquake catalogs and the well-known modified Omori law. The latter leads to variations in the frequency-magnitude distribution if the distribution is constrained to those earthquakes that are close in space and time to the directly following event.

  3. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlation between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to then compare the average magnitudes of aftershock sequences to those of their mainshocks. The results of earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, since fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analysed by the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we investigate the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
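The core comparison in this record, mean aftershock magnitude against mainshock magnitude, can be sketched with a deliberately naive fixed time window in place of the paper's stochastic ETAS declustering. The window length and the toy catalog below are assumptions for illustration only.

```python
import statistics

def mainshock_aftershock_pairs(events, window_days=30.0):
    """Pair each mainshock with the mean magnitude of its aftershocks.
    A mainshock is an event with no larger event within +/- window_days;
    its aftershocks are all later events within window_days.
    (The paper uses stochastic ETAS declustering; this fixed time
    window is a deliberately simplified stand-in.)"""
    events = sorted(events)  # (time_in_days, magnitude), time-ordered
    pairs = []
    for i, (t0, m0) in enumerate(events):
        if any(abs(t - t0) <= window_days and m > m0
               for j, (t, m) in enumerate(events) if j != i):
            continue  # a larger neighbour exists: not a mainshock
        after = [m for t, m in events[i + 1:] if 0 < t - t0 <= window_days]
        if after:
            pairs.append((m0, statistics.mean(after)))
    return pairs

# toy catalog: two clusters, each with one mainshock and two aftershocks
catalog = [(0.0, 5.0), (1.0, 3.0), (2.0, 3.5),
           (100.0, 6.0), (101.0, 4.0), (102.0, 4.5)]
print(mainshock_aftershock_pairs(catalog))  # [(5.0, 3.25), (6.0, 4.25)]
```

A positive correlation between the two columns of the returned pairs would point toward the magnitude correlations the abstract sets out to test.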

  4. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries.

    PubMed

    Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing, while the previous annual records are used for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
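The target construction and the AUC metric used here are easy to reproduce in outline. The sketch below is not the authors' M-IFN pipeline; the features and model are omitted, and only the label definition and a dependency-free rank-based AUC are shown.

```python
import statistics

def yearly_labels(yearly_max_mags):
    """Binary target from the paper: 1 if a year's maximum magnitude
    exceeds the region's median of yearly maxima, else 0."""
    med = statistics.median(yearly_max_mags)
    return [1 if m > med else 0 for m in yearly_max_mags]

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney): the probability that a randomly
    chosen positive year outscores a randomly chosen negative year."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this reading, the reported AUC = 0.698 means roughly a 70% chance that the model scores an exceedance year above a non-exceedance year.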

  5. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries

    PubMed Central

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006–2010) are kept for testing, while the previous annual records are used for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year. PMID:26812351

  6. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. (Bulletin of the Seismological Society of America, 2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly magnitude estimates from PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using
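The magnitude-from-GMPE idea described above can be sketched by inverting a generic attenuation form, log10(PGV) = c0 + c1·M - c2·log10(R), for M and averaging over stations. The coefficients below are placeholders for illustration, not the Eshaghi et al. values.

```python
import math

# hypothetical GMPE coefficients for illustration only
# (NOT the Eshaghi et al. values): log10(PGV) = C0 + C1*M - C2*log10(R)
C0, C1, C2 = -1.5, 0.8, 1.3

def gmpe_log_pgv(m, r_km):
    """Forward GMPE: predicted log10(PGV) at hypocentral distance r_km."""
    return C0 + C1 * m - C2 * math.log10(r_km)

def magnitude_from_pgv(pgv, r_km):
    """Invert the GMPE for magnitude given one station's observed PGV."""
    return (math.log10(pgv) + C2 * math.log10(r_km) - C0) / C1

def network_magnitude(observations):
    """Average the single-station estimates; in real time the estimate
    is refined as more (pgv, distance_km) observations arrive."""
    estimates = [magnitude_from_pgv(pgv, r) for pgv, r in observations]
    return sum(estimates) / len(estimates)
```

Because PGV keeps growing with moment release, an estimate of this form need not saturate the way short-window P-wave methods do, which is the behavior the abstract reports.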

  7. Neural network models for earthquake magnitude prediction using multiple seismicity indicators.

    PubMed

    Panakkat, Ashif; Adeli, Hojjat

    2007-02-01

    Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall, the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While earthquake prediction cannot at present be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.

  8. Near-Source Recordings of Small and Large Earthquakes: Magnitude Predictability only for Medium and Small Events

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2015-12-01

    The feasibility of Earthquake Early Warning (EEW) applications has revived the discussion on whether earthquake rupture development follows deterministic principles or not. If it does, it may be possible to predict final earthquake magnitudes while the rupture is still developing. EEW magnitude estimation schemes, most of which are based on 3-4 seconds of near-source P-wave data, have been shown to work well for small to moderate size earthquakes. In this magnitude range, the time window used is longer than the source durations of the events. Whether the magnitude estimation schemes also work for events in which the source duration exceeds the estimation time window, however, remains debated. In our study we have compiled an extensive high-quality data set of near-source seismic recordings. We search for waveform features that could be diagnostic of final event magnitudes in a predictive sense. We find that the onsets of large (M7+) events are statistically indistinguishable from those of medium sized events (M5.5-M7). Significant differences arise only once the medium size events terminate. This observation suggests that EEW-relevant magnitude estimates are largely observational, rather than predictive, and that whether a medium size event becomes a large one is not determined at the rupture onset. As a consequence, early magnitude estimates for large events are minimum estimates, a fact that has to be taken into account in EEW alert messaging and response design.

  9. Seismic Network Performance Estimation: Comparing Predictions of Magnitude of Completeness and Location Accuracy to Observations from an Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Greig, D. W.; Ackerley, N. J.

    2014-12-01

    The design of seismic networks for the monitoring of induced seismicity is of critical importance. The recent introduction of regulations in various locations around the world (with more upcoming) has created a need for a priori confirmation that certain performance standards are met. We develop a tool to assess two key measures of network performance without an earthquake catalogue: magnitude of completeness and location accuracy. Site noise measurements are taken at existing seismic stations or as part of a noise survey. We then interpolate between measured values to determine a noise map for the entire region. The site noise is then summed with the instrument noise to determine the effective station noise at each of the proposed station locations. Location accuracy is evaluated by generating a covariance matrix that represents the error ellipsoid from the travel time derivatives (Peters and Crosson, 1972). To determine the magnitude of completeness we assume isotropic radiation and mandate a minimum signal to noise ratio for detection. For every gridpoint, we compute the Brune spectra for synthetic events and iterate to determine the smallest magnitude event that can be detected by at least four stations. We apply this methodology to an example network. We predict the magnitude of completeness and the location accuracy and compare the predicted values to observed values generated from the existing earthquake catalogue for the network. We discuss the effects of hypothetical station additions and removals on network performance to simulate network expansions and station failures. The ability to predict hypothetical station performance allows for the optimization of seismic network design and enables prediction of network performance even for a purely hypothetical seismic network. This allows the operators of networks for induced seismicity monitoring to be confident that performance criteria are met from day one of operations.
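The detection criterion described here (a minimum SNR at no fewer than four stations, iterating over candidate magnitudes) can be sketched as a simple grid search. The amplitude scaling below is a generic local-magnitude-style model assumed for illustration; the study itself iterates over Brune source spectra, and the station list is invented.

```python
import math

def min_detectable_magnitude(stations, snr_min=3.0, n_required=4,
                             m_lo=-2.0, m_hi=5.0, dm=0.1):
    """Grid-search the smallest magnitude detected by at least
    n_required stations.  `stations` is a list of
    (distance_km, noise_amplitude) tuples.  The amplitude scaling is
    an assumed local-magnitude-style model, not the Brune spectral
    calculation used in the abstract."""
    m = m_lo
    while m <= m_hi:
        detected = 0
        for r_km, noise in stations:
            # assumed attenuation: log10 A = M - 1.11 log10 r - 0.00189 r - 2.09
            log_a = m - 1.11 * math.log10(r_km) - 0.00189 * r_km - 2.09
            if 10 ** log_a / noise >= snr_min:
                detected += 1
        if detected >= n_required:
            return m
        m = round(m + dm, 10)
    return None  # nothing detectable within the search range
```

Running this per gridpoint, with effective station noise taken from the interpolated noise map plus instrument noise, yields a magnitude-of-completeness map; hypothetical station additions or removals are simulated by editing the station list.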

  10. Earthquake prediction

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1991-01-01

    The state of the art in earthquake prediction is discussed. Short-term prediction based on seismic precursors, changes in the ratio of compressional velocity to shear velocity, tilt and strain precursors, electromagnetic precursors, hydrologic phenomena, chemical monitors, and animal behavior is examined. Seismic hazard assessment is addressed, and the applications of dynamical systems to earthquake prediction are discussed.

  12. Earthquake Prediction and Forecasting

    NASA Astrophysics Data System (ADS)

    Jackson, David D.

    Prospects for earthquake prediction and forecasting, and even their definitions, are actively debated. Here, "forecasting" means estimating the future earthquake rate as a function of location, time, and magnitude. Forecasting becomes "prediction" when we identify special conditions that make the immediate probability much higher than usual and high enough to justify exceptional action. Proposed precursors run from aeronomy to zoology, but no identified phenomenon consistently precedes earthquakes. The reported prediction of the 1975 Haicheng, China earthquake is often proclaimed as the most successful, but the success is questionable. An earthquake predicted to occur near Parkfield, California in 1988±5 years has not happened. Why is prediction so hard? Earthquakes start in a tiny volume deep within an opaque medium; we do not know their boundary conditions, initial conditions, or material properties well; and earthquake precursors, if any, hide amongst unrelated anomalies. Earthquakes cluster in space and time, and following a quake, earthquake probability spikes. Aftershocks illustrate this clustering, and later earthquakes may even surpass earlier ones in size. However, the main shock in a cluster usually comes first and causes the most damage. Specific models help reveal the physics and allow intelligent disaster response. Modeling stresses from past earthquakes may improve forecasts, but this approach has not yet been validated prospectively. Reliable prediction of individual quakes is not realistic in the foreseeable future, but probabilistic forecasting provides valuable information for reducing risk. Recent studies are also leading to exciting discoveries about earthquakes.

  13. Nucleation process of magnitude 2 repeating earthquakes on the San Andreas Fault predicted by rate-and-state fault models with SAFOD drill core data

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihiro; Carpenter, Brett M.; Nielsen, Stefan B.

    2017-01-01

    Recent laboratory shear-slip experiments conducted on a nominally flat frictional interface reported the intriguing details of a two-phase nucleation of stick-slip motion that precedes the dynamic rupture propagation. This behavior was subsequently reproduced by a physics-based model incorporating laboratory-derived rate-and-state friction laws. However, applying the laboratory and theoretical results to the nucleation of crustal earthquakes remains challenging due to poorly constrained physical and friction properties of fault zone rocks at seismogenic depths. Here we apply the same physics-based model to simulate the nucleation process of crustal earthquakes using unique data acquired during the San Andreas Fault Observatory at Depth (SAFOD) experiment and new and existing measurements of friction properties of SAFOD drill core samples. Using this well-constrained model, we predict what the nucleation phase will look like for magnitude ~2 repeating earthquakes on segments of the San Andreas Fault at a 2.8 km depth. We find that despite up to 3 orders of magnitude difference in the physical and friction parameters and stress conditions, the behavior of the modeled nucleation is qualitatively similar to that of laboratory earthquakes, with the nucleation consisting of two distinct phases. Our results further suggest that precursory slow slip associated with the earthquake nucleation phase may be observable in the hours before the occurrence of the magnitude ~2 earthquakes by strain measurements close (a few hundred meters) to the hypocenter, in a position reached by the existing borehole.

  14. Earthquake Prediction: Absence of a Precursive Change in Seismic Velocities before a Tremor of Magnitude 3¾.

    PubMed

    McGarr, A

    1974-09-20

    P-wave velocities in the region near the source of a tremor of magnitude 3¾ were constant to within 2 percent for 41 days before the event; no evidence of a precursive change in velocity was found. Observations of S-wave velocities and the ratio of P-wave to S-wave velocities also showed no precursive changes. In recent studies, premonitory changes in body-wave velocities of about 10 percent and having a duration of 2 to 3 weeks have been reported for crustal earthquakes of this size.

  15. Predictable earthquakes?

    NASA Astrophysics Data System (ADS)

    Martini, D.

    2002-12-01

    acceleration) and global number of earthquakes for this period from published literature, which gives us a clear picture of the dynamical geophysical phenomena. Methodology: Computing linear correlation coefficients gives us a chance to quantitatively characterise the relation among the data series, if we suppose a linear dependence in the first step. The correlation coefficients among the Earth's rotational acceleration, the Z-orbit acceleration (perpendicular to the ecliptic plane), and the global number of earthquakes were compared. The results clearly demonstrate the common feature of both the Earth's rotation and the Earth's Z-acceleration around the Sun, and also between the Earth's rotational acceleration and the earthquake number. This might indicate a strong relation among these phenomena. The mentioned rather strong correlation (r = 0.75) and the 29 year period (Saturn's synodic period) were clearly shown in the computed cross-correlation function, which gives the dynamical characteristic of the correlation between the Earth's orbital (Z-direction) and rotational acceleration. This basic period (29 years) was also obvious in the earthquake number data sets, with clear common features in time. Conclusion: The core, which is involved in the secular variation of the Earth's magnetic field, is the only sufficiently mobile part of the Earth with sufficient mass to modify the rotation, which probably affects the global time distribution of earthquakes. It might therefore mean that the secular variation of earthquakes is inseparable from the changes in the Earth's magnetic field, i.e. the interior processes of the Earth's core belong to the dynamical state of the solar system. Therefore, if this idea is correct, the global distribution of earthquakes in time is predictable.
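The methodology above rests on two standard tools, the Pearson correlation coefficient and a lagged cross-correlation function between annual series; both can be computed with the standard library alone. This is a generic sketch, not the author's code.

```python
def pearson(x, y):
    """Linear (Pearson) correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cross_correlation(x, y, max_lag):
    """Pearson r of x[t] against y[t + lag] for each lag, as a dict;
    a peak at a nonzero lag indicates a delayed relation between series."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = x[:len(x) - lag], y[lag:]
        else:
            xs, ys = x[-lag:], y[:len(y) + lag]
        out[lag] = pearson(xs, ys)
    return out
```

A periodic component such as the claimed 29-year cycle would show up as regularly spaced peaks in the cross-correlation function.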

  16. Magnitude Dependent Seismic Quiescence of 2008 Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sacks, S. I.; Takanami, T.; Smith, D. E.; Rydelek, P. A.

    2014-12-01

    The change in seismicity leading to the Wenchuan Earthquake in 2008 (Mw 7.9) has been studied by various authors based on statistics and/or pattern recognition (Huang, 2008; Yan et al., 2009; Chen and Wang, 2010; Yi et al., 2011). We show, in particular, that magnitude-dependent seismic quiescence is observed for the Wenchuan earthquake and that it adds to other similar observations. Such studies on seismic quiescence prior to major earthquakes include the 1982 Urakawa-Oki earthquake (M 7.1) (Taylor et al., 1992), the 1994 Hokkaido-Toho-Oki earthquake (Mw=8.2) (Takanami et al., 1996), and the 2011 Tohoku earthquake (Mw=9.0) (Katsumata, 2011). Smith and Sacks (2013) proposed magnitude-dependent quiescence based on a physical earthquake model (Rydelek and Sacks, 1995) and demonstrated that the quiescence can be reproduced by the introduction of "asperities" (dilatancy-hardened zones). Actual observations indicate the change occurs in a broader area than the eventual earthquake fault zone. In order to accept the explanation, we need to verify the model, as it predicts somewhat controversial features of earthquakes such as magnitude-dependent stress drop at the lower magnitude range and dynamically appearing asperities with repeating slips in some parts of the rupture zone. We show supportive observations. We will also need to verify that dilatancy diffusion is taking place. So far, we have only indirect evidence, which needs to be more quantitatively substantiated.

  17. BNL PREDICTION OF NUPEC'S FIELD MODEL TESTS OF NPP STRUCTURES SUBJECT TO SMALL TO MODERATE MAGNITUDE EARTHQUAKES.

    SciTech Connect

    XU,J.; COSTANTINO,C.; HOFMAYER,C.; MURPHY,A.; KITADA,Y.

    2003-08-17

    As part of a verification test program for seismic analysis codes for NPP structures, the Nuclear Power Engineering Corporation (NUPEC) of Japan has conducted a series of field model test programs to ensure the adequacy of methodologies employed for seismic analyses of NPP structures. A collaborative program between the United States and Japan was developed to study seismic issues related to NPP applications. The US Nuclear Regulatory Commission (NRC) and its contractor, Brookhaven National Laboratory (BNL), are participating in this program to apply common analysis procedures to predict both free-field and soil-structure interaction (SSI) responses to recorded earthquake events, including embedment and dynamic cross interaction (DCI) effects. This paper describes the BNL effort to predict seismic responses of the large-scale realistic model structures for reactor and turbine buildings at the NUPEC test facility in northern Japan. The NUPEC test program has collected a large amount of recorded earthquake response data (both free-field and in-structure) from these test model structures. The BNL free-field analyses were performed with the CARES program, while the SSI analyses were performed using the SASSI2000 computer code. The BNL analysis includes both embedded and excavated conditions, as well as the DCI effect. The BNL analysis results and their comparisons to the NUPEC recorded responses are presented in the paper.

  19. Earthquake prediction, societal implications

    NASA Astrophysics Data System (ADS)

    Aki, Keiiti

    1995-07-01

    "If I were a brilliant scientist, I would be working on earthquake prediction." This is a statement from a Los Angeles radio talk show I heard just after the Northridge earthquake of January 17, 1994. Five weeks later, at a monthly meeting of the Southern California Earthquake Center (SCEC), where more than two hundred scientists and engineers gathered to exchange notes on the earthquake, a distinguished French geologist who works on earthquake faults in China envied me for working now in southern California. This place is like northeastern China 20 years ago, when high seismicity and research activities led to the successful prediction of the Haicheng earthquake of February 4, 1975 with magnitude 7.3. A difficult question still haunting us [Aki, 1989] is whether the Haicheng prediction was founded on the physical reality of precursory phenomena or on the wishful thinking of observers subjected to the political pressure which encouraged precursor reporting. It is, however, true that a successful life-saving prediction like the Haicheng prediction can only be carried out by the coordinated efforts of decision makers and physical scientists.

  20. Strong motion duration and earthquake magnitude relationships

    SciTech Connect

    Salmon, M.W.; Short, S.A.; Kennedy, R.P.

    1992-06-01

    Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report concentrates on energy-based strong motion duration definitions.
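One widely used energy-based definition of the kind discussed here is the 5-95% significant duration: the time between the instants at which the cumulative squared acceleration (proportional to Arias intensity) reaches 5% and 95% of its final value. A minimal sketch, with a synthetic record in place of real strong-motion data:

```python
from itertools import accumulate

def significant_duration(accel, dt, lo=0.05, hi=0.95):
    """5-95% significant duration: the time between the instants at
    which the cumulative squared acceleration (proportional to Arias
    intensity) first reaches 5% and 95% of its final value."""
    energy = list(accumulate(a * a for a in accel))
    total = energy[-1]
    i_lo = next(i for i, e in enumerate(energy) if e >= lo * total)
    i_hi = next(i for i, e in enumerate(energy) if e >= hi * total)
    return (i_hi - i_lo) * dt

# weak coda surrounding a 0.5 s strong phase, sampled at 100 Hz
record = [0.01] * 100 + [1.0] * 50 + [0.01] * 100
print(significant_duration(record, dt=0.01))  # ~0.45 s
```

Because the cumulative energy is dominated by the high-amplitude portion of the record, this measure ignores the long low-level shaking that the abstract notes has little damage potential.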

  1. Maximum magnitude earthquakes induced by fluid injection

    NASA Astrophysics Data System (ADS)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
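The stated bound, maximum seismic moment equal to the modulus of rigidity times the total injected volume, converts directly to a moment magnitude via the Hanks-Kanamori relation. A short sketch, with the rigidity a typical crustal value assumed here rather than taken from the paper:

```python
import math

RIGIDITY = 3.0e10  # Pa; typical crustal shear modulus (assumed value)

def max_seismic_moment(injected_volume_m3):
    """Upper bound on seismic moment from the abstract's argument:
    M0_max = rigidity x total injected volume."""
    return RIGIDITY * injected_volume_m3

def moment_magnitude(m0_newton_metres):
    """Hanks-Kanamori moment magnitude for a seismic moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.05)

# e.g. 10,000 cubic metres of injected wastewater
m0 = max_seismic_moment(1.0e4)
print(round(moment_magnitude(m0), 2))  # ~3.62
```

Because the bound is linear in volume while magnitude is logarithmic in moment, each tenfold increase in injected volume raises the maximum magnitude by only about two-thirds of a unit.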

  2. Maximum magnitude earthquakes induced by fluid injection

    USGS Publications Warehouse

    McGarr, Arthur F.

    2014-01-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated and brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation, and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.

  3. Earthquake rate and magnitude distributions of great earthquakes for use in global forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2016-07-01

    We have obtained new results in the statistical analysis of global earthquake catalogues with special attention to the largest earthquakes, and we examined the statistical behaviour of earthquake rate variations. These results can serve as an input for updating our recent earthquake forecast, known as the `Global Earthquake Activity Rate 1' model (GEAR1), which is based on past earthquakes and geodetic strain rates. The GEAR1 forecast is expressed as the rate density of all earthquakes above magnitude 5.8 within 70 km of sea level everywhere on Earth at 0.1 × 0.1 degree resolution, and it is currently being tested by the Collaboratory for the Study of Earthquake Predictability. The seismic component of the present model is based on a smoothed version of the Global Centroid Moment Tensor (GCMT) catalogue from 1977 through 2013. The tectonic component is based on the Global Strain Rate Map, a `General Earthquake Model' (GEM) product. The forecast was optimized to fit the GCMT data from 2005 through 2012, and it also provides a good fit to the earthquake locations from 1918 to 1976 reported in the International Seismological Centre-Global Earthquake Model (ISC-GEM) global catalogue of instrumental and pre-instrumental magnitude determinations. We have improved the recent forecast by optimizing the treatment of larger magnitudes and including a longer duration (1918-2011) ISC-GEM catalogue of large earthquakes to estimate smoothed seismicity. We revised our estimates of upper magnitude limits, described as corner magnitudes, based on the massive earthquakes since 2004 and the seismic moment conservation principle. The new corner magnitude estimates are somewhat larger than but consistent with our previous estimates. For major subduction zones we find the best estimates of corner magnitude to be in the range 8.9 to 9.6 and consistent with a uniform average of 9.35. Statistical estimates tend to grow with time as larger earthquakes occur. However, by using the moment conservation
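The corner-magnitude idea can be illustrated with a tapered Gutenberg-Richter distribution, in which a pure power law in seismic moment is multiplied by an exponential taper near the corner moment. The β value and magnitude threshold below are illustrative choices, not the paper's fitted parameters:

```python
import math

def moment(mw):
    """Seismic moment (N*m) from moment magnitude."""
    return 10 ** (1.5 * mw + 9.1)

def tapered_gr_survival(mw, mw_threshold=5.8, beta=0.65, corner_mw=9.35):
    """Fraction of events above `mw_threshold` that exceed `mw`, under a
    tapered Gutenberg-Richter law: power law times exponential taper."""
    m0, mt, mc = moment(mw), moment(mw_threshold), moment(corner_mw)
    return (mt / m0) ** beta * math.exp((mt - m0) / mc)

frac_m8 = tapered_gr_survival(8.0)    # share of M>=5.8 events reaching M>=8
frac_m95 = tapered_gr_survival(9.5)   # above the corner, the taper dominates
```

Below the corner the taper factor is near 1 and the distribution behaves like a plain G-R law; above the corner, events become exponentially rarer than the power law alone would predict.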

  4. Precise Relative Earthquake Magnitudes from Cross Correlation

    DOE PAGES

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
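The joint use of all event pairs can be sketched as a least-squares problem: each cross-correlation measurement constrains a difference of magnitudes, and one extra equation pins the otherwise-undetermined baseline. The noise-free four-event example below is an assumption-laden sketch, not the authors' implementation:

```python
import numpy as np

# Hypothetical pairwise log10 amplitude ratios d[i,j] ~ m_i - m_j, as would
# be measured from cross-correlated waveforms. All pairs are used at once
# rather than chaining each event to a single reference event.
true_m = np.array([0.0, 0.4, -0.2, 0.7])   # relative sizes (to be recovered)
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]

rows, d = [], []
for i, j in pairs:
    row = np.zeros(4)
    row[i], row[j] = 1.0, -1.0
    rows.append(row)
    d.append(true_m[i] - true_m[j])        # noise-free demo measurements

# Pairwise differences leave the absolute level undetermined, so one extra
# equation pins the mean relative magnitude to zero.
A = np.vstack(rows + [np.ones(4)])
b = np.array(d + [0.0])
rel_m, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With real data each difference carries noise, and the least-squares solution averages over all intercorrelations, which is what makes the method robust when no reliable reference event exists.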

  5. Precise Relative Earthquake Magnitudes from Cross Correlation

    SciTech Connect

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.

  6. A new macroseismic intensity prediction equation and magnitude estimates of the 1811-1812 New Madrid and 1886 Charleston, South Carolina, earthquakes

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2013-12-01

    We develop an intensity prediction equation (IPE) for the Central and Eastern United States (CEUS), explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June, 2000 and August, 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2M + c3exp(λ) + c4λ, where M is moment magnitude and λ is mean log hypocentral distance. Previous IPEs use a limited dataset of MMI, do not differentiate between MMI and CII data in the CEUS, and do not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities, where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that the reports of a given intensity have hypocentral distances that are log-normally distributed, the distribution of which is modulated by population and the propensity of individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid search method to solve for the fraction of the population to report the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial
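The quoted functional form can be turned into a small attenuation calculator. With λ the natural log of hypocentral distance, the exp(λ) term is simply the distance itself, so the IPE combines linear and logarithmic distance decay. The coefficients below are placeholders chosen only to give a plausible decay; they are not the fitted CEUS values:

```python
import math

def predicted_mmi(mw, hypo_dist_km, c1=3.0, c2=1.3, c3=-0.0035, c4=-1.0):
    """MMI = c1 + c2*M + c3*exp(lam) + c4*lam, with lam = ln(hypocentral
    distance). Coefficients are illustrative placeholders only."""
    lam = math.log(hypo_dist_km)
    return c1 + c2 * mw + c3 * math.exp(lam) + c4 * lam

near = predicted_mmi(7.0, 20.0)    # strong shaking near a large event
far = predicted_mmi(7.0, 400.0)    # much weaker shaking at regional distance
```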

  7. An empirical evolutionary magnitude estimation for early warning of earthquakes

    NASA Astrophysics Data System (ADS)

    Chen, Da-Yi; Wu, Yih-Min; Chin, Tai-Lin

    2017-03-01

    An earthquake early warning (EEW) system has difficulty providing consistent magnitude estimates in the early stage of an earthquake because only a few stations have been triggered and few seismic signals have been recorded. One feasible way to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the recorded waveforms after the P-wave arrival. However, for a large-magnitude earthquake (Mw > 7.0), the time for the rupture of the corresponding fault to complete may be very long, and the magnitude may not be correctly predicted from the initial portion of the seismograms. To estimate the magnitude of a large earthquake in real time, the amplitude parameters should be updated with ongoing waveforms instead of being taken from a predefined fixed-length time window, which may underestimate the magnitude of large events. In this paper, we propose a fast, robust, and less-saturated approach to estimate earthquake magnitudes. The EEW system initially gives a lower bound of the magnitude in a time window of a few seconds and then updates the magnitude with less saturation by extending the time window. Here we compared two kinds of time windows for measuring amplitudes. One is the P-wave time window (PTW) after the P-wave arrival; the other is the whole-waveform time window (WTW) after the P-wave arrival, which may include both P and S waves. One- to ten-second time windows for both PTW and WTW are considered to measure the peak ground displacement from the vertical component of the waveforms. Linear regression analyses are run at each time step (1- to 10-s time interval) to find the empirical relationships among peak ground displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 in Taiwan with magnitude greater than 5.5 and focal depth less than 30 km. The result shows that using the WTW to estimate magnitudes yields a smaller standard deviation than the PTW. The
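The window-by-window regression step can be sketched as an ordinary least-squares fit of magnitude against log peak displacement and log hypocentral distance. The synthetic data and forward-model coefficients below are assumptions standing in for the Taiwan records:

```python
import numpy as np

# Sketch of the empirical fit M = a*log10(Pd) + b*log10(R) + c, which would
# be refit at each window length (1-10 s) in the proposed scheme.
rng = np.random.default_rng(1)
n = 200
mag = rng.uniform(5.5, 7.5, n)
dist = rng.uniform(20.0, 150.0, n)              # hypocentral distance, km
# assumed forward model for log10 peak displacement, with scatter
log_pd = 0.8 * mag - 1.4 * np.log10(dist) - 3.0 + rng.normal(0, 0.1, n)

A = np.column_stack([log_pd, np.log10(dist), np.ones(n)])
coef, *_ = np.linalg.lstsq(A, mag, rcond=None)  # [a, b, c]
pred = A @ coef
resid_std = (mag - pred).std()                  # the saturation/scatter metric
```

The abstract's PTW-vs-WTW comparison amounts to repeating this fit with amplitudes measured in the two window types and comparing the resulting residual standard deviations.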

  8. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
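A local magnitude scale of this kind combines the log amplitude with a distance correction anchored at a reference distance, plus a station term. The sketch below uses a Hutton-Boore-style functional form with placeholder coefficients, not the published Turkish calibration; only the vertical-to-horizontal factor of 1.8 comes from the abstract:

```python
import math

def local_magnitude(amp_nm, hypo_dist_km, a=1.0, b=0.0015, station_corr=0.0):
    """Hutton-Boore-style Ml: log10(amplitude) plus a distance correction
    anchored at 100 km, plus a station term. Coefficients a, b and the
    station term are illustrative placeholders."""
    return (math.log10(amp_nm)
            + a * math.log10(hypo_dist_km / 100.0)
            + b * (hypo_dist_km - 100.0)
            + 3.0
            + station_corr)

# vertical-channel amplitude scaled by the reported horizontal/vertical
# factor of 1.8 before computing Ml, as the abstract recommends
ml_v = local_magnitude(1.0e3 * 1.8, 60.0)
```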

  9. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    values for different β-values. For magnitudes larger than 8.5, the turbidite data are consistent with all three TGR models. For smaller magnitudes, the TGR models predict a higher rate than the paleoseismic data show. The discrepancy can be attributed to uncertainties in the paleoseismic magnitudes, the potential incompleteness of the paleoseismic record for smaller events, or temporal variations of the seismicity. Nevertheless, our results show that for this zone, earthquakes of m 8.8 ± 0.2 are expected over a 500-year period, m 9.0 ± 0.2 over a 1000-year period, and m 9.3 ± 0.2 over a 10,000-year period.

  10. The nature of earthquake prediction

    USGS Publications Warehouse

    Lindh, A.G.

    1991-01-01

    Earthquake prediction is inherently statistical. Although some people continue to think of earthquake prediction as the specification of the time, place, and magnitude of a future earthquake, it has been clear for at least a decade that this is an unrealistic and unreasonable definition. The reality is that earthquake prediction starts from long-term forecasts of place and magnitude, with very approximate time constraints, and progresses, at least in principle, to a gradual narrowing of the time window as data and understanding permit. Primitive long-term forecasts are clearly possible at this time on a few well-characterized fault systems. Tightly focused monitoring experiments aimed at short-term prediction are already underway in Parkfield, California, and in the Tokai region in Japan; only time will tell how much progress will be possible.

  11. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, S.E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: A moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" in not only polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  13. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
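Test (1), the scaling of the largest observed event with the log of the number induced, follows directly from Gutenberg-Richter sampling statistics: the expected maximum of n samples above a completeness magnitude is larger by roughly log10(n)/b. A minimal sketch:

```python
import math

def expected_max_magnitude(n_events, m_min, b=1.0):
    """Expected largest of n Gutenberg-Richter samples above m_min with
    no upper magnitude bound: m_min + log10(n)/b, to leading order."""
    return m_min + math.log10(n_events) / b

# with b = 1, tenfold more induced events raises the expected maximum
# magnitude by one unit
m_small = expected_max_magnitude(100, m_min=1.0)
m_large = expected_max_magnitude(1000, m_min=1.0)
```

This is the sense in which injection controls the number of nucleations (and hence the expected maximum) while tectonics controls the size distribution itself.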

  14. Induced earthquake magnitudes are as large as (statistically) expected

    USGS Publications Warehouse

    Van Der Elst, Nicholas; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas; Hosseini, S. Mehran

    2016-01-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.

  15. The Earthquake Frequency-Magnitude Distribution Functional Shape

    NASA Astrophysics Data System (ADS)

    Mignan, A.

    2012-04-01

    Knowledge of the completeness magnitude Mc, the magnitude above which all earthquakes are detected, is a prerequisite to most seismicity analyses. Although computation of Mc is done routinely, different techniques often result in different values. Since an incorrect estimate can lead to under-sampling or, worse, to an erroneous estimate of the parameters of the Gutenberg-Richter (G-R) law, a better assessment of the deviation from the G-R law and thus of the earthquake detectability is of paramount importance to correctly estimate Mc. This is especially true for refined mapping of seismicity parameters such as in earthquake forecast models. The capacity of a seismic network to detect small earthquakes can be evaluated by investigating the functional shape of the earthquake Frequency-Magnitude Distribution (FMD). The non-cumulative FMD takes the form N(m) ∝ exp(-βm)q(m), where N(m) is the number of events of magnitude m, exp(-βm) the G-R law, and q(m) a probability function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature often observed in bulk FMDs. Recent results, however, show that this gradual curvature is potentially due to spatial heterogeneities in Mc, meaning that the functional shape of the elemental (local) FMD still has to be described. Based on preliminary observations, we propose an exponential detection function of the form q(m) = exp(κ(m-Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models (gradually curved and angular) are compared in Southern California and Nevada. We show that the angular shaped FMD model better describes the elemental FMD and that the sum of elemental FMDs with different Mc(x,y) leads to the gradually curved FMD at the regional scale. We show that the proposed model (1) provides more robust estimates of Mc, (2) better estimates local b-values, and (3) gives an insight into earthquake detectability properties by using seismicity as a proxy
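The proposed angular FMD can be written down directly: below Mc the detection function suppresses the G-R rate exponentially, while above Mc all events are detected. The parameter values below are illustrative, not the Southern California or Nevada fits:

```python
import math

def fmd_rate(m, beta, mc, kappa):
    """Non-cumulative FMD N(m) proportional to exp(-beta*m) * q(m), with
    the angular detection function q(m) = exp(kappa*(m - mc)) for m < Mc
    and q(m) = 1 for m >= Mc."""
    q = math.exp(kappa * (m - mc)) if m < mc else 1.0
    return math.exp(-beta * m) * q

beta, mc, kappa = 2.3, 2.0, 3.0    # illustrative values (beta ~ b*ln(10))
mags = [1.0, 1.5, 2.0, 2.5, 3.0]
rates = [fmd_rate(m, beta, mc, kappa) for m in mags]
peak = max(range(len(mags)), key=lambda i: rates[i])
```

Because κ > β here, the rate rises up to Mc and then falls off along the G-R slope, producing the angular peak at Mc that distinguishes this model from the gradually curved one.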

  16. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  18. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.

  19. The Parkfield, California, earthquake prediction experiment

    PubMed

    Bakun, W H; Lindh, A G

    1985-08-16

    Five moderate (magnitude 6) earthquakes with similar features have occurred on the Parkfield section of the San Andreas fault in central California since 1857. The next moderate Parkfield earthquake is expected to occur before 1993. The Parkfield prediction experiment is designed to monitor the details of the final stages of the earthquake preparation process; observations and reports of seismicity and aseismic slip associated with the last moderate Parkfield earthquake in 1966 constitute much of the basis of the design of the experiment.

  20. Characteristic magnitude of subduction earthquake and upper plate stiffness

    NASA Astrophysics Data System (ADS)

    Sakaguchi, A.; Yamamoto, Y.; Hashimoto, Y.; Harris, R. N.; Vannucchi, P.; Petronotis, K. E.

    2013-12-01

    recurrence interval and event displacement vary with the stiffness of the system. We propose that an important factor influencing the characteristic magnitude of large subduction earthquakes and recurrence intervals is the stiffness of the upper plate. This model predicts that event displacement, and thus magnitude, is smaller at Costa Rica than at Nankai because the upper plate is stiffer at Costa Rica. This hypothesis can be tested based on elastic parameters estimated from seismic data and physical properties of core samples obtained from deep drilling.

  1. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed-form solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
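For the homoscedastic case with a known error-variance ratio, the closed-form errors-in-variables slope is the classical Deming estimator. The sketch below applies it to synthetic magnitude pairs; the generating line, noise levels, and sample size are assumptions, not data from the paper:

```python
import math
import numpy as np

def deming_fit(x_obs, y_obs, delta=1.0):
    """Errors-in-variables fit of y = a*x + b when both magnitudes carry
    error; `delta` is the known ratio of error variances var(Y)/var(X)."""
    mx, my = x_obs.mean(), y_obs.mean()
    sxx = ((x_obs - mx) ** 2).mean()
    syy = ((y_obs - my) ** 2).mean()
    sxy = ((x_obs - mx) * (y_obs - my)).mean()
    a = (syy - delta * sxx
         + math.sqrt((syy - delta * sxx) ** 2
                     + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    return a, my - a * mx

# synthetic mb-like / Mw-like pairs around a known line y = 1.1*x - 0.5,
# with equal error variances on both magnitudes (delta = 1)
rng = np.random.default_rng(2)
x_true = rng.uniform(4.0, 7.0, 500)
x_obs = x_true + rng.normal(0, 0.1, 500)
y_obs = 1.1 * x_true - 0.5 + rng.normal(0, 0.1, 500)
a, b = deming_fit(x_obs, y_obs)
```

Unlike ordinary least squares of y on X, the Deming slope is not attenuated by the error on X, which is the central concern of this line of work.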

  2. Threshold magnitude for Ionospheric TEC response to earthquakes

    NASA Astrophysics Data System (ADS)

    Perevalova, N. P.; Sankov, V. A.; Astafyeva, E. I.; Zhupityaeva, A. S.

    2014-02-01

    We have analyzed ionospheric response to earthquakes with magnitudes of 4.1-8.8 which occurred under quiet geomagnetic conditions in different regions of the world (the Baikal region, Kuril Islands, Japan, Greece, Indonesia, China, New Zealand, Salvador, and Chile). This investigation relied on measurements of total electron content (TEC) variations made by ground-based dual-frequency GPS receivers. To perform the analysis, we selected earthquakes with permanent GPS stations installed close by. Data processing has revealed that after 4.1-6.3-magnitude earthquakes wave disturbances in TEC variations are undetectable. We have thoroughly analyzed publications over the period of 1965-2013 which reported on registration of wave TIDs after earthquakes. This analysis demonstrated that the magnitude of the earthquakes having a wave response in the ionosphere was no less than 6.5. Based on our results and on the data from other researchers, we can conclude that there is a threshold magnitude (near 6.5) below which there are no pronounced earthquake-induced wave TEC disturbances. The probability of detection of post-earthquake TIDs with a magnitude close to the threshold depends strongly on geophysical conditions. In addition, reliable identification of the source of such TIDs generally requires many GPS stations in an earthquake zone. At low magnitudes, seismic energy is likely to be insufficient to generate waves in the neutral atmosphere which are able to induce TEC disturbances observable at the level of background fluctuations.

  3. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663
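Test (i), comparing the number of actual earthquakes to the number predicted, is commonly implemented as a Poisson N-test on the forecast rate. A minimal sketch (the forecast means and observed counts are illustrative):

```python
import math

def poisson_n_test(n_observed, n_forecast):
    """Two-sided N-test: probability of a count at least as extreme as
    n_observed under a Poisson forecast with mean n_forecast."""
    def cdf(k, lam):
        return sum(math.exp(-lam) * lam ** i / math.factorial(i)
                   for i in range(k + 1))
    lower = cdf(n_observed, n_forecast)             # P(N <= n_obs)
    upper = 1.0 - cdf(n_observed - 1, n_forecast)   # P(N >= n_obs)
    return min(1.0, 2.0 * min(lower, upper))

consistent = poisson_n_test(12, 10.0)   # near the forecast: large p-value
failing = poisson_n_test(25, 10.0)      # far above the forecast: tiny p-value
```

A small p-value rejects the forecast's rate; this is a pure self-consistency check, as the abstract notes, and says nothing about a competing hypothesis.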

  4. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.

  5. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high-magnitude seismic events. In fact, since 1950, all large-magnitude earthquakes have been followed by volcanic eruptions in the following year - 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7, and 2011 Japan M9.0 - while at a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005, following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, is hampered by uneven coverage, with <50% of all volcanoes monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has therefore provided researchers with an opportunity to quantify the timing, magnitude and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following earthquakes between February 2000 and December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable

  6. Prototype operational earthquake prediction system

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce, into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

  7. Source time function properties indicate a strain drop independent of earthquake depth and magnitude.

    PubMed

    Vallée, Martin

    2013-01-01

    The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes.

  8. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, it is difficult to estimate earthquake magnitude accurately in the early nucleation stage of an earthquake, because only a few stations have triggered and the recorded seismic waveforms are short. One feasible way to measure the size of an earthquake is to extract amplitude parameters within the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the whole rupture of the causative fault. Instead of adopting amplitude contents in a fixed-length time window, which may underestimate magnitude for large-magnitude events, we propose a fast, robust and unsaturated approach to estimate earthquake magnitudes. In this new method, the EEW system can initially give a lower-bound magnitude in a time window of a few seconds and then update the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for adopting amplitudes: one is the pure P-wave time window (PTW); the other is the whole-wave time window after the P-wave arrival (WTW). Peak displacement amplitudes in the vertical component were adopted from 1- to 10-s PTW and WTW, respectively. Linear regression analyses were implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation. In addition, large uncertainties exist in the 1-s time window. Therefore, for magnitude estimation we suggest the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTW.
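
A minimal sketch of the kind of empirical relation the abstract describes, assuming a hypothetical form log10(Pd) = a + b·M + c·log10(R) with made-up coefficients; the real coefficients come from the 1993-2012 regression, and the evolving-window update is reduced here to taking a running maximum of the estimate as the window grows:

```python
import math

# Hypothetical attenuation-relation coefficients (illustrative only):
#   log10(Pd) = A + B*M + C*log10(R),  Pd in cm, R = hypocentral distance in km
A, B, C = -3.5, 0.8, -1.4

def magnitude_from_pd(pd_cm, hypo_dist_km):
    """Invert the peak-displacement relation for magnitude."""
    return (math.log10(pd_cm) - A - C * math.log10(hypo_dist_km)) / B

def evolving_magnitude(pd_series_cm, hypo_dist_km):
    """Evolutionary estimate: as the time window extends, Pd can only grow,
    so the running maximum serves as the current lower-bound magnitude."""
    estimates, best = [], float("-inf")
    for pd in pd_series_cm:
        best = max(best, magnitude_from_pd(pd, hypo_dist_km))
        estimates.append(best)
    return estimates
```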

  9. Multiscale mapping of completeness magnitude of earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Vorobieva, Inessa; Narteau, Clement; Shebalin, Peter; Beauducel, François; Nercessian, Alexandre; Clouard, Valérie; Bouin, Marie-Paule

    2013-04-01

    We propose a multiscale method to map spatial variations in the completeness magnitude Mc of earthquake catalogs. The Mc value may vary significantly in space owing to changes in seismic network density. Here we suggest a way to use earthquake catalogs alone to separate small areas of higher network density (lower Mc) from larger areas of lower network density (higher Mc). We restrict the analysis of the frequency-magnitude distributions (FMDs) to limited magnitude ranges, thus allowing the FMD to deviate from log-linearity outside each range. We associate ranges of larger magnitudes with increasingly large areas for data selection, based on a constant average number of completely recorded earthquakes. Then, for each point in space, we document the earthquake frequency-magnitude distribution at all length scales within the corresponding earthquake magnitude ranges. High resolution of the Mc value is achieved through the determination of the smallest space-magnitude scale at which the Gutenberg-Richter law (i.e. an exponential decay) is verified. The multiscale procedure isolates the magnitude range that best matches local seismicity and local recording capacity. Using artificial catalogs and earthquake catalogs of the Lesser Antilles arc, this Mc mapping method is shown to be efficient in regions with mixed types of seismicity, a variable density of epicenters and various levels of recording completeness.
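
The two quantities the method maps, completeness magnitude and the Gutenberg-Richter decay rate, can be estimated from a catalog with standard textbook estimators. The sketch below uses the simple maximum-curvature proxy for Mc and the Aki (1965) maximum-likelihood b-value, not the authors' multiscale procedure itself:

```python
import math
from collections import Counter

def mc_maxc(mags, bin_width=0.1):
    """Completeness magnitude via the maximum-curvature proxy: the most
    populated magnitude bin (often corrected upward by ~0.2 in practice)."""
    bins = Counter(round(m / bin_width) * bin_width for m in mags)
    return max(bins, key=bins.get)

def b_value(mags, mc, bin_width=0.1):
    """Aki maximum-likelihood b-value for events at or above Mc,
    with the usual half-bin correction for binned magnitudes."""
    sel = [m for m in mags if m >= mc]
    mean = sum(sel) / len(sel)
    return math.log10(math.e) / (mean - (mc - bin_width / 2))
```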

  10. Spatial Seismicity Rates and Maximum Magnitudes for Background Earthquakes

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Frankel, Arthur D.; Zeng, Yuehua

    2008-01-01

    The background seismicity model is included to account for M 5.0 - 6.5 earthquakes on faults and for random M 5.0 - 7.0 earthquakes that do not occur on faults included in the model (as in earlier models of Frankel et al., 1996, 2002 and Petersen et al., 1996). We include four different classes of earthquake sources in the California background seismicity model: (1) gridded (smoothed) seismicity, (2) regional background zones, (3) special fault zone models, and (4) shear zones (also referred to as C zones). The gridded (smoothed) seismicity model, the regional background zone model, and the special fault zones use a declustered earthquake catalog for calculation of earthquake rates. Earthquake rates in shear zones are estimated from the geodetically determined rate of deformation across an area of high strain rate. We use a truncated exponential (Gutenberg-Richter, 1944) magnitude-frequency distribution to account for earthquakes in the background models.
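
The truncated exponential (Gutenberg-Richter) magnitude-frequency distribution used for the background models has a simple closed form for the cumulative rate; a minimal sketch:

```python
def truncated_gr_cumulative(m, rate_mmin, b, m_min, m_max):
    """Cumulative annual rate of events with magnitude >= m under a truncated
    exponential (Gutenberg-Richter) distribution between m_min and m_max,
    normalized so the rate at m_min equals rate_mmin."""
    if m >= m_max:
        return 0.0
    tail = 10 ** (-b * (m_max - m_min))
    return rate_mmin * (10 ** (-b * (m - m_min)) - tail) / (1 - tail)
```

For example, with b = 1 and a rate of 2 events/yr above M 5.0, the rate above M 6.0 is about a tenth of that, and the rate goes to zero at the truncation magnitude.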

  11. Intermediate-term earthquake prediction.

    PubMed Central

    Keilis-Borok, V I

    1996-01-01

    An earthquake of magnitude M and linear source dimension L(M) is preceded within a few years by certain patterns of seismicity in the magnitude range down to about (M - 3) in an area of linear dimension about 5L-10L. Prediction algorithms based on such patterns may allow one to predict approximately 80% of strong earthquakes with alarms occupying altogether 20-30% of the time-space considered. An area of alarm can be narrowed down to 2L-3L when observations include lower magnitudes, down to about (M - 4). In spite of their limited accuracy, such predictions open a possibility to prevent considerable damage. The following findings may provide for further development of prediction methods: (i) long-range correlations in fault system dynamics and accordingly large size of the areas over which different observed fields could be averaged and analyzed jointly, (ii) specific symptoms of an approaching strong earthquake, (iii) the partial similarity of these symptoms worldwide, (iv) the fact that some of them are not Earth specific: we probably encountered in seismicity the symptoms of instability common for a wide class of nonlinear systems. PMID:11607660

  12. Intermediate-term earthquake prediction.

    PubMed

    Keilis-Borok, V I

    1996-04-30

    An earthquake of magnitude M and linear source dimension L(M) is preceded within a few years by certain patterns of seismicity in the magnitude range down to about (M - 3) in an area of linear dimension about 5L-10L. Prediction algorithms based on such patterns may allow one to predict approximately 80% of strong earthquakes with alarms occupying altogether 20-30% of the time-space considered. An area of alarm can be narrowed down to 2L-3L when observations include lower magnitudes, down to about (M - 4). In spite of their limited accuracy, such predictions open a possibility to prevent considerable damage. The following findings may provide for further development of prediction methods: (i) long-range correlations in fault system dynamics and accordingly large size of the areas over which different observed fields could be averaged and analyzed jointly, (ii) specific symptoms of an approaching strong earthquake, (iii) the partial similarity of these symptoms worldwide, (iv) the fact that some of them are not Earth specific: we probably encountered in seismicity the symptoms of instability common for a wide class of nonlinear systems.
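
The quoted performance figures (about 80% of strong earthquakes predicted, with alarms occupying 20-30% of the time-space considered) are the two axes of a standard error diagram for alarm-based prediction. The sketch below scores a set of alarm windows against event times; all numbers are illustrative, not drawn from the algorithms described above:

```python
def alarm_scores(alarms, events, t_total):
    """Fraction of events falling inside any alarm window, and the fraction
    of total time occupied by alarms (windows assumed non-overlapping)."""
    hits = sum(any(t0 <= e <= t1 for t0, t1 in alarms) for e in events)
    hit_rate = hits / len(events)
    alarm_fraction = sum(t1 - t0 for t0, t1 in alarms) / t_total
    return hit_rate, alarm_fraction

# Illustrative record: 4 of 5 events fall inside alarms (80%), with alarms
# covering 25% of the observation period, matching the rates quoted above.
alarms = [(10, 15), (40, 50), (70, 80)]
events = [12, 44, 48, 75, 90]
hit_rate, alarm_fraction = alarm_scores(alarms, events, 100.0)
```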

  13. Do magnitudes of great subduction earthquakes depend on strength of mechanical coupling between the plates?

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2017-04-01

    suggest that the largest earthquakes should occur in subduction zones with neutral (most frequently) or moderately compressive deformation regimes of the upper plate. This is a consequence of the low dipping angles and low static friction coefficients in the subduction zones with largest earthquakes, rather than a reason for the largest earthquakes. Our models predict maximum earthquake magnitudes for the subduction zones of different geometries and these predictions are consistent with the observed magnitudes for all observed events and estimated historical events.

  14. On Earthquake Prediction in Japan

    PubMed Central

    UYEDA, Seiya

    2013-01-01

    Japan’s National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author’s view, are mainly interested in securing funds for seismology — on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision has been further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with the private sector. PMID:24213204

  15. On earthquake prediction in Japan.

    PubMed

    Uyeda, Seiya

    2013-01-01

    Japan's National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author's view, are mainly interested in securing funds for seismology - on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision has been further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with the private sector.

  16. Correlating precursory declines in groundwater radon with earthquake magnitude.

    PubMed

    Kuo, T

    2014-01-01

    Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to the five earthquakes of magnitude (Mw) 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the longitudinal valley fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. This correlation has proven useful for early warning of large local earthquakes. In southern Taiwan, anomalous radon declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, a correlation between the observed radon minima and earthquake magnitude has not yet been established. © 2013, National Ground Water Association.

  17. Rapid Earthquake Magnitude Estimation for Early Warning Applications

    NASA Astrophysics Data System (ADS)

    Goldberg, Dara; Bock, Yehuda; Melgar, Diego

    2017-04-01

    Earthquake magnitude is a concise metric that provides invaluable information about the destructive potential of a seismic event. Rapid estimation of magnitude for earthquake and tsunami early warning purposes requires reliance on near-field instrumentation. For large magnitude events, ground motions can exceed the dynamic range of near-field broadband seismic instrumentation (clipping). Strong motion accelerometers are designed with low gains to better capture strong shaking. Estimating earthquake magnitude rapidly from near-source strong-motion data requires integration of acceleration waveforms to displacement. However, integration amplifies small errors, creating unphysical drift that must be eliminated with a high pass filter. The loss of the long period information due to filtering is an impediment to magnitude estimation in real-time; the relation between ground motion measured with strong-motion instrumentation and magnitude saturates, leading to underestimation of earthquake magnitude. Using station displacements from Global Navigation Satellite System (GNSS) observations, we can supplement the high frequency information recorded by traditional seismic systems with long-period observations to better inform rapid response. Unlike seismic-only instrumentation, ground motions measured with GNSS scale with magnitude without saturation [Crowell et al., 2013; Melgar et al., 2015]. We refine the current magnitude scaling relations using peak ground displacement (PGD) by adding a large GNSS dataset of earthquakes in Japan. Because it does not suffer from saturation, GNSS alone has significant advantages over seismic-only instrumentation for rapid magnitude estimation of large events. The earthquake's magnitude can be estimated within 2-3 minutes of earthquake onset time [Melgar et al., 2013]. We demonstrate that seismogeodesy, the optimal combination of GNSS and seismic data at collocated stations, provides the added benefit of improving the sensitivity of
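
The unsaturated PGD scaling relation referenced above has the form log10(PGD) = A + B·Mw + C·Mw·log10(R). The coefficients below are placeholders of the kind published by Crowell et al. (2013) and Melgar et al. (2015), not the refined values derived in this work; a minimal sketch of the inversion:

```python
import math

# Placeholder coefficients for log10(PGD) = A + B*Mw + C*Mw*log10(R),
# with PGD in cm and R = hypocentral distance in km (illustrative only).
A, B, C = -4.434, 1.047, -0.138

def mw_from_pgd(pgd_cm, hypo_dist_km):
    """Invert the GNSS peak-ground-displacement scaling law for moment
    magnitude; unlike filtered strong-motion proxies, it does not saturate."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(hypo_dist_km))
```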

  18. The October 1992 Parkfield, California, earthquake prediction

    USGS Publications Warehouse

    Langbein, J.

    1992-01-01

    A magnitude 4.7 earthquake occurred near Parkfield, California, on October 20, 1992, at 05:28 UTC (October 19 at 10:28 p.m. local, or Pacific Daylight, Time). This moderate shock, interpreted as a potential foreshock of a damaging earthquake on the San Andreas fault, triggered long-standing federal, state and local government plans to issue a public warning of an imminent magnitude 6 earthquake near Parkfield. Although the predicted earthquake did not take place, sophisticated suites of instruments deployed as part of the Parkfield Earthquake Prediction Experiment recorded valuable data associated with an unusual series of events. This article describes the geological aspects of these events, which occurred near Parkfield in October 1992. The accompanying article, an edited version of a press conference by Richard Andrews, the Director of the California Office of Emergency Services (OES), describes the governmental response to the prediction.

  19. Predicting Strong Ground-Motion Seismograms for Magnitude 9 Cascadia Earthquakes Using 3D Simulations with High Stress Drop Sub-Events

    NASA Astrophysics Data System (ADS)

    Frankel, A. D.; Wirth, E. A.; Stephenson, W. J.; Moschetti, M. P.; Ramirez-Guzman, L.

    2015-12-01

    We have produced broadband (0-10 Hz) synthetic seismograms for magnitude 9.0 earthquakes on the Cascadia subduction zone by combining synthetics from simulations with a 3D velocity model at low frequencies (≤ 1 Hz) with stochastic synthetics at high frequencies (≥ 1 Hz). We use a compound rupture model consisting of a set of M8 high stress drop sub-events superimposed on a background slip distribution of up to 20m that builds relatively slowly. The 3D simulations were conducted using a finite difference program and the finite element program Hercules. The high-frequency (≥ 1 Hz) energy in this rupture model is primarily generated in the portion of the rupture with the M8 sub-events. In our initial runs, we included four M7.9-8.2 sub-events similar to those that we used to successfully model the strong ground motions recorded from the 2010 M8.8 Maule, Chile earthquake. At periods of 2-10 s, the 3D synthetics exhibit substantial amplification (about a factor of 2) for sites in the Puget Lowland and even more amplification (up to a factor of 5) for sites in the Seattle and Tacoma sedimentary basins, compared to rock sites outside of the Puget Lowland. This regional and more localized basin amplification found from the simulations is supported by observations from local earthquakes. There are substantial variations in the simulated M9 time histories and response spectra caused by differences in the hypocenter location, slip distribution, down-dip extent of rupture, coherence of the rupture front, and location of sub-events. We examined the sensitivity of the 3D synthetics to the velocity model of the Seattle basin. We found significant differences in S-wave focusing and surface wave conversions between a 3D model of the basin from a spatially-smoothed tomographic inversion of Rayleigh-wave phase velocities and a model that has an abrupt southern edge of the Seattle basin, as observed in seismic reflection profiles.

  20. Earthquake Prediction is Coming

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Describes (1) several methods used in earthquake research, including P:S velocity ratio studies and dilatancy models; and (2) techniques for gathering baseline data for prediction using seismographs, tiltmeters, laser beams, magnetic field changes, folklore, and animal behavior. The mysterious Palmdale (California) bulge is discussed. (CS)

  2. Estimation of the magnitudes and epicenters of Philippine historical earthquakes

    NASA Astrophysics Data System (ADS)

    Bautista, Maria Leonila P.; Oike, Kazuo

    2000-02-01

    The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. Data used were the earthquake reports of 86, recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by Intensities III to IX [ A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 15th to 19th centuries. More than 3000 events are catalogued, interpreted and their intensities determined by considering the possible effects of local site conditions, type of construction and the number and locations of existing towns to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts or with at least a minimum report of damage are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps augmented by information on recent seismicity and location of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few, however, are found to lie along structures that show not much activity during the instrumented period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of
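
The calibration step described above, regressing instrumental magnitude against the logarithm of the felt area and then applying the fit to historical isoseismal areas, can be sketched with ordinary least squares. All data pairs below are invented for illustration and are not the 86-event calibration set:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x, of the kind used to calibrate
    magnitude against log felt area for recent, well-recorded events."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical calibration pairs: (log10 of area enclosed by intensity VI
# in km^2, instrumental magnitude). The fitted line is then applied to
# felt areas measured from historical isoseismal maps.
data = [(3.2, 5.5), (3.8, 6.0), (4.3, 6.5), (4.9, 7.1), (5.4, 7.6)]
a, b = fit_line([x for x, _ in data], [y for _, y in data])
estimate = a + b * 4.0   # historical event with A(VI) = 10**4 km^2
```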

  3. Calibration of magnitude scales for earthquakes of the Mediterranean

    NASA Astrophysics Data System (ADS)

    Gardini, Domenico; di Donato, Maria; Boschi, Enzo

    In order to provide the tools for uniform size determination for Mediterranean earthquakes over the last 50-year period of instrumental seismology, we have regressed the magnitude determinations for 220 earthquakes of the European-Mediterranean region over the 1977-1991 period, reported by three international centres, 11 national and regional networks and 101 individual stations and observatories, using seismic moments from the Harvard CMTs. We calibrate M(M0) regression curves for the magnitude scales commonly used for Mediterranean earthquakes (ML, MWA, mb, MS, MLH, MLV, MD, M); we also calibrate static corrections or specific regressions for individual observatories and we verify the reliability of the reports of different organizations and observatories. Our analysis shows that the teleseismic magnitudes (mb, MS) computed by international centers (ISC, NEIC) provide good measures of earthquake size, with low standard deviations (0.17-0.23), allowing one to regress stable regional calibrations with respect to the seismic moment and to correct systematic biases such as the hypocentral depth for MS and the radiation pattern for mb; while mb is commonly reputed to be an inadequate measure of earthquake size, we find that the ISC mb is still today the most precise measure to use to regress MW and M0 for earthquakes of the European-Mediterranean region; few individual observatories report teleseismic magnitudes requiring specific dynamic calibrations (BJI, MOS). Regional surface-wave magnitudes (MLV, MLH) reported in Eastern Europe generally provide reliable measures of earthquake size, with standard deviations often in the 0.25-0.35 range; the introduction of a small (±0.1-0.2) static station correction is sometimes required. While the Richter magnitude ML is the measure of earthquake size most commonly reported in the press whenever an earthquake strikes, we find that ML has not been computed in the European-Mediterranean in the last 15 years; the reported local
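
The anchor for all such M(M0) regressions is the standard moment-magnitude relation (the IASPEI form of Kanamori's definition), which converts seismic moments such as the Harvard CMT values into Mw:

```python
import math

def mw_from_m0(m0_newton_m):
    """Moment magnitude from seismic moment (M0 in newton-metres):
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

def m0_from_mw(mw):
    """Inverse relation, useful when regressing catalogue magnitudes
    against seismic moments."""
    return 10 ** (1.5 * mw + 9.1)
```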

  4. Magnitude 8.1 Earthquake off the Solomon Islands

    NASA Technical Reports Server (NTRS)

    2007-01-01

    On April 1, 2007, a magnitude 8.1 earthquake rattled the Solomon Islands, 2,145 kilometers (1,330 miles) northeast of Brisbane, Australia. Centered less than ten kilometers beneath the Earth's surface, the earthquake displaced enough water in the ocean above to trigger a small tsunami. Though officials were still assessing damage to remote island communities on April 3, Reuters reported that the earthquake and the tsunami killed an estimated 22 people and left as many as 5,409 homeless. The most serious damage occurred on the island of Gizo, northwest of the earthquake epicenter, where the tsunami damaged the hospital, schools, and hundreds of houses, said Reuters. This image, captured by the Landsat-7 satellite, shows the location of the earthquake epicenter in relation to the nearest islands in the Solomon Island group. Gizo is beyond the left edge of the image, but its triangular fringing coral reefs are shown in the upper left corner. Though dense rain forest hides volcanic features from view, the very shape of the islands testifies to the geologic activity of the region. The circular Kolombangara Island is the tip of a dormant volcano, and other circular volcanic peaks are visible in the image. The image also shows that the Solomon Islands run on a northwest-southeast axis parallel to the edge of the Pacific plate, the section of the Earth's crust that carries the Pacific Ocean and its islands. The earthquake occurred along the plate boundary, where the Australia/Woodlark/Solomon Sea plates slide beneath the denser Pacific plate. Friction between the sinking (subducting) plates and the overriding Pacific plate led to the large earthquake on April 1, said the United States Geological Survey (USGS) summary of the earthquake. Large earthquakes are common in the region, though the section of the plate that produced the April 1 earthquake had not caused any quakes of magnitude 7 or larger since the early 20th century, said the USGS.

  6. Maximum Earthquake Magnitude Assessments by Japanese Government Committees (Invited)

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2013-12-01

    earthquakes. The Nuclear Regulation Authority, established in 2012, makes independent decisions based on the latest scientific knowledge. They assigned a maximum credible earthquake magnitude of 9.6 for the Nankai and Ryukyu troughs, 9.6 for the Kuril-Japan trench, and 9.2 for the Izu-Bonin trench.

  7. Earthquake prediction; new studies yield promising results

    USGS Publications Warehouse

    Robinson, R.

    1974-01-01

On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.

  8. Analysis of earthquake body wave spectra for potency and magnitude values: implications for magnitude scaling relations

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda; White, Malcolm C.; Vernon, Frank L.

    2016-11-01

We develop a simple methodology for reliable automated estimation of the low-frequency asymptote in seismic body wave spectra of small to moderate local earthquakes. The procedure corrects individual P- and S-wave spectra for propagation and site effects and estimates the seismic potency from a stacked spectrum. The method is applied to >11 000 earthquakes with local magnitudes 0 < ML < 4 that occurred in the Southern California plate-boundary region around the San Jacinto fault zone during 2013. Moment magnitude Mw values, derived from the spectra and the scaling relation of Hanks & Kanamori, follow a Gutenberg-Richter distribution with a larger b-value (1.22) than that associated with the ML values (0.93) for the same earthquakes. The completeness magnitude for the Mw values is 1.6, while for ML it is 1.0. The quantity (Mw - ML) increases linearly as ML decreases in the analysed magnitude range. An average earthquake with ML = 0 in the study area has an Mw of about 0.9. The developed methodology and results have important implications for earthquake source studies and statistical seismology.
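The Mw values above come from seismic moment via the Hanks & Kanamori scaling relation, Mw = (2/3)·log10(M0) - 10.7 with M0 in dyne-cm. A minimal sketch of that conversion; the moment value below is illustrative, back-computed from the abstract's ML = 0 ↔ Mw ≈ 0.9 result:

```python
import math

def moment_to_mw(m0_dyne_cm: float) -> float:
    """Hanks & Kanamori (1979): Mw = (2/3)*log10(M0) - 10.7, M0 in dyne-cm."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# Illustrative: the moment corresponding to Mw ~ 0.9, the paper's
# average for an ML = 0 event in the study area.
m0 = 10 ** (1.5 * (0.9 + 10.7))
print(round(moment_to_mw(m0), 2))  # → 0.9
```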

  9. Earthquake Early Warning with Seismogeodesy: Detection, Location, and Magnitude Estimation

    NASA Astrophysics Data System (ADS)

    Goldberg, D.; Bock, Y.; Melgar, D.

    2016-12-01

    Earthquake early warning is critical to reducing injuries and casualties in case of a large magnitude earthquake. The system must rely on near-source data to minimize the time between event onset and issuance of a warning. Early warning systems typically use seismic instruments (seismometers and accelerometers), but these instruments experience difficulty maintaining reliable data in the near-source region and undergo magnitude saturation for large events. Global Navigation Satellite System (GNSS) instruments capture the long period motions and have been shown to produce robust estimates of the true size of the earthquake source. However, GNSS is often overlooked in this context in part because it is not precise enough to record the first seismic wave arrivals (P-wave detection), an important consideration for issuing an early warning. GNSS instruments are becoming integrated into early warning, but are not yet fully exploited. Our approach involves the combination of direct measurements from collocated GNSS and accelerometer stations to estimate broadband coseismic displacement and velocity waveforms [Bock et al., 2011], a method known as seismogeodesy. We present the prototype seismogeodetic early warning system developed at Scripps and demonstrate that the seismogeodetic dataset can be used for P-wave detection, hypocenter location, and shaking onset determination. We discuss uncertainties in each of these estimates and include discussion of the sensitivity of our estimates as a function of the azimuthal distribution of monitoring stations. The seismogeodetic combination has previously been shown to be immune to magnitude saturation [Crowell et al., 2013; Melgar et al., 2015]. Rapid magnitude estimation is an important product in earthquake early warning, and is the critical metric in current tsunami hazard warnings. Using the seismogeodetic approach, we refine earthquake magnitude scaling using P-wave amplitudes (Pd) and peak ground displacements (PGD) for a
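The abstract mentions refining magnitude scaling from peak ground displacement (PGD). A common functional form in the seismogeodetic literature is log10(PGD) = A + B·Mw + C·Mw·log10(R); the sketch below inverts it for Mw using purely hypothetical coefficients, since the paper's regressed values are not given here (calibrated coefficients come from regressions such as Crowell et al. [2013]):

```python
import math

# Purely hypothetical coefficients for
# log10(PGD_cm) = A + B*Mw + C*Mw*log10(R_km).
A, B, C = -5.0, 1.0, -0.13

def mw_from_pgd(pgd_cm: float, r_km: float) -> float:
    """Invert the PGD scaling law for Mw at hypocentral distance r_km."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

# Round trip: a synthetic Mw 9.0 event at 100 km reproduces its magnitude.
pgd = 10 ** (A + B * 9.0 + C * 9.0 * math.log10(100.0))
print(round(mw_from_pgd(pgd, 100.0), 1))  # → 9.0
```

Because PGD grows with magnitude without saturating, this kind of inversion is what makes the seismogeodetic estimate robust for the largest events.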

  10. Prediction of earthquake response spectra

    USGS Publications Warehouse

    Joyner, W.B.; Boore, David M.

    1982-01-01

We have developed empirical equations for predicting earthquake response spectra in terms of magnitude, distance, and site conditions, using a two-stage regression method similar to the one we used previously for peak horizontal acceleration and velocity. We analyzed horizontal pseudo-velocity response at 5 percent damping for 64 records of 12 shallow earthquakes in Western North America, including the recent Coyote Lake and Imperial Valley, California, earthquakes. We developed predictive equations for 12 different periods between 0.1 and 4.0 s, both for the larger of two horizontal components and for the random horizontal component. The resulting spectra show amplification at soil sites compared to rock sites for periods greater than or equal to 0.3 s, with maximum amplification exceeding a factor of 2 at 2.0 s. For periods less than 0.3 s there is slight deamplification at the soil sites. These results are generally consistent with those of several earlier studies. A particularly significant aspect of the predicted spectra is the change of shape with magnitude (confirming earlier results by McGuire and by Trifunac and Anderson). This result indicates that the conventional practice of scaling a constant spectral shape by peak acceleration will not give accurate answers. The Newmark and Hall method of spectral scaling, using both peak acceleration and peak velocity, largely avoids this error. Comparison of our spectra with the Nuclear Regulatory Commission's Regulatory Guide 1.60 spectrum anchored at the same value at 0.1 s shows that the Regulatory Guide 1.60 spectrum is exceeded at soil sites for a magnitude of 7.5 at all distances for periods greater than about 0.5 s. Comparison of our spectra for soil sites with the corresponding ATC-3 curve of lateral design force coefficient for the highest seismic zone indicates that the ATC-3 curve is exceeded within about 7 km of a magnitude 6.5 earthquake and within about 15 km of a magnitude 7.5 event. The amount by

  11. Automated Determination of Magnitude and Source Extent of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun

    2017-04-01

Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating magnitude for great earthquakes within minutes of origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for fast estimation of earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude by considering P-wave displacement and source duration. We introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus

  12. Moment Magnitude (MW) and Local Magnitude (ML) Relationship for Earthquakes in Northeast India

    NASA Astrophysics Data System (ADS)

    Baruah, Santanu; Baruah, Saurabh; Bora, P. K.; Duarah, R.; Kalita, Aditya; Biswas, Rajib; Gogoi, N.; Kayal, J. R.

    2012-11-01

An attempt has been made to examine an empirical relationship between moment magnitude (MW) and local magnitude (ML) for earthquakes in the northeast Indian region. Some 364 earthquakes recorded during 1950-2009 are used in this study. Focal mechanism solutions of these earthquakes include 189 Harvard-CMT solutions (MW ≥ 4.0) for the period 1976-2009, 61 published solutions, and 114 solutions obtained for local earthquakes (2.0 ≤ ML ≤ 5.0) recorded by a 27-station permanent broadband network during 2001-2009 in the region. The MW-ML relationships in seven selected zones of the region are determined by linear regression analysis. A significant variation in the MW-ML relationship and its zone-specific dependence are reported here. It is found that MW is equivalent to ML with an average uncertainty of about 0.13 magnitude units. A single relationship is, however, not adequate to scale the entire northeast Indian region because of the heterogeneous geologic and geotectonic environments where earthquakes occur due to collisions, subduction and complex intra-plate tectonics.
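A sketch of the kind of zone-specific magnitude regression the study performs, here using ordinary least squares on hypothetical (ML, Mw) pairs; the paper fits its own catalog data, and the values below are invented purely for illustration:

```python
import numpy as np

# Hypothetical (ML, Mw) pairs; the study fits one such relation per zone
# from its 364-event catalog (these numbers are invented).
ml = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
mw = np.array([2.1, 2.6, 3.0, 3.4, 4.1, 4.5, 4.9])

# Ordinary least squares Mw = a*ML + b; an orthogonal regression, often
# preferred when both magnitude scales are noisy, would be similar here.
a, b = np.polyfit(ml, mw, 1)
print(f"Mw = {a:.2f} * ML + {b:.2f}")
```

With a slope near 1 and a small intercept, Mw and ML agree to within roughly a tenth of a magnitude unit over this range, consistent with the paper's reported 0.13-unit average uncertainty.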

  13. The effects of earthquake measurement concepts and magnitude anchoring on individuals' perceptions of earthquake risk

    USGS Publications Warehouse

    Celsi, R.; Wolfinbarger, M.; Wald, D.

    2005-01-01

The purpose of this research is to explore earthquake risk perceptions in California. Specifically, we examine the risk beliefs, feelings, and experiences of lay, professional, and expert individuals to explore how risk is perceived and how risk perceptions are formed relative to earthquakes. Our results indicate that individuals tend to perceptually underestimate the degree that earthquake (EQ) events may affect them. This occurs in large part because individuals' personal felt experience of EQ events is generally overestimated relative to experienced magnitudes. An important finding is that individuals engage in a process of "cognitive anchoring" of their felt EQ experience towards the reported earthquake magnitude size. The anchoring effect is moderated by the degree that individuals comprehend EQ magnitude measurement and EQ attenuation. Overall, the results of this research provide us with a deeper understanding of EQ risk perceptions, especially as they relate to individuals' understanding of EQ measurement and attenuation concepts. © 2005, Earthquake Engineering Research Institute.

  14. Localization of intermediate-term earthquake prediction

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.; Keilis-Borok, V. I.; Smith, S. W.

    1990-11-01

Relative seismic quiescence within a region which has already been diagnosed as having entered a "Time of Increased Probability" (TIP) for the occurrence of a strong earthquake can be used to refine the locality in which the earthquake may be expected to occur. A simple algorithm with parameters fitted from the data in Northern California preceding the 1980 magnitude 7.0 earthquake offshore from Eureka depicts relative quiescence within the region of a TIP. The procedure was tested, without readaptation of parameters, on 17 other strong earthquake occurrences in North America, Japan, and Eurasia, most of which were in regions for which a TIP had been previously diagnosed. The localization algorithm successfully outlined a region within which the subsequent earthquake occurred for 16 of these 17 strong earthquakes. The area of prediction in each case was reduced significantly, ranging between 7% and 25% of the total area covered by the TIP.

  15. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  16. Estimating earthquake location and magnitude from seismic intensity data

    USGS Publications Warehouse

    Bakun, W.H.; Wentworth, C.M.

    1997-01-01

Analysis of Modified Mercalli intensity (MMI) observations for a training set of 22 California earthquakes suggests a strategy for bounding the epicentral region and moment magnitude M from MMI observations only. We define an intensity magnitude MI that is calibrated to be equal in the mean to M. MI = mean(Mi), where Mi = (MMIi + 3.29 + 0.0206 Δi)/1.68 and Δi is the epicentral distance (km) of observation MMIi. The epicentral region is bounded by contours of rms[MI] = rms(MI - Mi) - rms0(MI - Mi), where rms is the root mean square and rms0(MI - Mi) is the minimum rms over a grid of assumed epicenters; empirical site corrections and a distance weighting function are used. Empirical contour values for bounding the epicenter location and empirical bounds for M estimated from MI appropriate for different levels of confidence and different quantities of intensity observations are tabulated. The epicentral region bounds and MI obtained for an independent test set of western California earthquakes are consistent with the instrumental epicenters and moment magnitudes of these earthquakes. The analysis strategy is particularly appropriate for the evaluation of pre-1900 earthquakes for which the only available data are a sparse set of intensity observations.
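The MI estimator follows directly from the formula quoted above. A minimal sketch, with a hypothetical sparse set of MMI observations standing in for a pre-1900 earthquake:

```python
import numpy as np

def intensity_magnitude(mmi, dist_km):
    """Bakun & Wentworth (1997): Mi = (MMIi + 3.29 + 0.0206*Di)/1.68,
    with MI the mean of Mi over all intensity observations."""
    mmi = np.asarray(mmi, dtype=float)
    dist_km = np.asarray(dist_km, dtype=float)
    mi = (mmi + 3.29 + 0.0206 * dist_km) / 1.68
    return mi.mean()

# Hypothetical sparse intensity set (MMI values, epicentral distances in km):
print(round(intensity_magnitude([7, 6, 5, 4], [10, 40, 90, 160]), 2))  # → 6.15
```

In the full method this mean is recomputed over a grid of trial epicenters, and the rms misfit contours bound the epicentral region.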

  17. Radiocarbon test of earthquake magnitude at the Cascadia subduction zone

    USGS Publications Warehouse

    Atwater, B.F.; Stuiver, M.; Yamaguchi, D.K.

    1991-01-01

The Cascadia subduction zone, which extends along the northern Pacific coast of North America, might produce earthquakes of magnitude 8 or 9 ('great' earthquakes) even though it has not done so during the past 200 years of European observation [1-7]. Much of the evidence for past Cascadia earthquakes comes from former meadows and forests that became tidal mudflats owing to abrupt tectonic subsidence in the past 5,000 years [2,3,6,7]. If due to a great earthquake, such subsidence should have extended along more than 100 km of the coast [2]. Here we investigate the extent of coastal subsidence that might have been caused by a single earthquake, through high-precision radiocarbon dating of coastal trees that abruptly subsided into the intertidal zone. The ages leave the great-earthquake hypothesis intact by limiting to a few decades the discordance, if any, in the most recent subsidence of two areas 55 km apart along the Washington coast. This subsidence probably occurred about 300 years ago.

  18. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion.

  19. A Strategy to Rapidly Determine the Magnitude of Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Menke, William; Levin, Vadim

    2005-05-01

    In the initial hours following the origin of the Sumatra-Andaman Islands earthquake at 0058:53 GMT on 26 December 2004, the event was widely reported as having a magnitude of about 8. Thus, its potential for generating a damaging teletsunami (ocean-crossing tsunami) was considered minimal. The event's size later was shown to be approximately 10 times larger, but only after more than four and a half hours had passed, when a moment estimate based on 2.5 hours of data became available from Harvard University's Centroid-Moment Tensor (CMT) Project (M. Nettles and G. Ekstrom, Quick CMT of the 2004 Sumatra-Andaman Islands earthquake, Seismoware FID: BR345, e-mailed announcement, 26 December 2004). This estimate placed its magnitude at Mw ~ 9.0, in the range capable of generating a damaging teletsunami. Actually, the earthquake had caused a teletsunami, one that by that time had already killed more than a hundred thousand people. The magnitude estimate has been subsequently revised to at least 9.3 (Stein and Okal, http://www.earth.northwestern.edu/people/~seth/research/sumatra.html), with the exact magnitude of the event likely to be a subject of further research in the coming years.

  20. Nonlinear site response in medium magnitude earthquakes near Parkfield, California

    USGS Publications Warehouse

    Rubinstein, Justin L.

    2011-01-01

Careful analysis of strong-motion recordings of 13 medium magnitude earthquakes (3.7 ≤ M ≤ 6.5) in the Parkfield, California, area shows that very modest levels of shaking (approximately 3.5% of the acceleration of gravity) can produce observable changes in site response. Specifically, I observe a drop and subsequent recovery of the resonant frequency at sites that are part of the USGS Parkfield dense seismograph array (UPSAR) and Turkey Flat array. While further work is necessary to fully eliminate other models, given that these frequency shifts correlate with the strength of shaking at the Turkey Flat array and only appear for the strongest shaking levels at UPSAR, the most plausible explanation for them is that they are a result of nonlinear site response. Assuming this to be true, the observation of nonlinear site response in these medium magnitude events suggests that it also occurred in the region's larger earthquakes (the 2003 M 6.5 San Simeon earthquake and the 2004 M 6 Parkfield earthquake).

  1. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

The mega-thrust Mw 9.0 2011 Tohoku earthquake has re-opened the discussion among the scientific community about the effectiveness of Earthquake Early Warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on real-time measurement of ground motion parameters in a few-second window after the P-wave arrival. Currently, we are using the initial peak displacement (Pd) and the predominant period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well known problem in real-time magnitude estimation is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the east coast of Honshu, Japan, on the subduction boundary between the Pacific and Okhotsk plates. The high-quality K-NET and KiK-net strong-motion networks provided a large quantity of records of the mainshock, with wide azimuthal coverage both along the Japan coast and inland. More than 300 three-component accelerograms have been available, with epicentral distances ranging from about 100 km to more than 500 km. This earthquake thus presents an optimal case study for testing the physical bases of early warning and for investigating the feasibility of a real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the mainshock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing time window up to 30 seconds. The aim is to verify the correlation of these parameters with Peak Ground Velocity and Magnitude

  2. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
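The random-assignment null hypothesis can be illustrated with a small Monte Carlo sketch. The alarm coverage fraction below is hypothetical (the abstract does not quote it); the point is the shape of the test, not reproducing the exact 2.87% figure:

```python
import random

def null_success_rate(n_quakes=10, hits_needed=8, coverage=0.45,
                      trials=100_000, seed=1):
    """Monte Carlo null hypothesis: each earthquake independently falls
    inside a randomly assigned alarm with probability `coverage`
    (a hypothetical space-time alarm fraction, not the paper's value)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        hits = sum(rng.random() < coverage for _ in range(n_quakes))
        if hits >= hits_needed:
            wins += 1
    return wins / trials

# With ~45% alarm coverage, predicting 8 of 10 quakes at random
# happens in only a few percent of trials.
print(f"{null_success_rate():.2%}")
```

A low null success rate is what lets the retroactive 8-of-10 result be read as evidence for skill, while the 53% figure for the forward test shows that 5 of 9 was unremarkable.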

  3. Space geodesy and earthquake prediction

    NASA Technical Reports Server (NTRS)

    Bilham, Roger

    1987-01-01

    Earthquake prediction is discussed from the point of view of a new development in geodesy known as space geodesy, which involves the use of extraterrestrial sources or reflectors to measure earth-based distances. Space geodesy is explained, and its relation to terrestrial geodesy is examined. The characteristics of earthquakes are reviewed, and the ways that they can be exploited by space geodesy to predict earthquakes is demonstrated.

  5. Estimating The Magnitude Ms of Historical Earthquakes From Macroseismic Observations

    NASA Astrophysics Data System (ADS)

    Kaiser, D.; Gutdeutsch, R.; Jentzsch, G.

Magnitudes of earthquakes earlier than 1900 are derived from macroseismic observations, i.e. the maximum intensity I0, or isoseismal radii RI of different intensities I, and the focal depth h. The purpose of our study is to compare the importance of I0 and RI as input parameters for the estimation of the surface wave magnitude MS of historical earthquakes and to derive appropriate empirical relationships. We use carefully selected instrumental parts (since 1900) of two earthquake catalogues: Kárník 1996 (Europe and the Mediterranean) and Shebalin et al. 1998 (Central and Eastern Europe). In order to establish relationships we use orthogonal regression, because we presume that all parameters are in error and because it has the advantage of providing reversible regression equations. Estimation of MS from I0 and h: as correlation analysis of Kárník's catalogue shows no significant influence of h on the relation between MS and I0, we obtain MS = 0.55 I0 + 1.26, with equivalent standard errors of ±0.44 in MS and ±0.86 in I0. The practical use of this relationship is limited due to rather large errors. In addition we observe systematic regional variations which need further investigation. We were able to apply much more stringent selection criteria to the Shebalin catalogue and found a substantial improvement of the correlation when considering the influence of h [km], in contrast to Kárník's catalogue. We obtain MS = 0.65 I0 + 1.90 log(h) - 1.62, with a standard error of ±0.21 in MS. We recommend this equation for application. Estimation of MS from average isoseismal radii RI: in order to establish a relationship between MS and RI we apply a theoretically based model which takes into account both exponential decay and a geometrical spreading factor. We find MS = 0.695 I + 2.14 log(RI) + 0.00329 RI - 1.93, with a standard error of ±0.32 in MS. Here I is the macroseismic intensity (I = 3 ... 9) of the isoseismal RI [km]. With this equation it is possible to reliably estimate MS and we recommend
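The two relations quoted in the abstract translate directly into code. A minimal sketch; the input intensity, depth, and radius values below are illustrative, not taken from the catalogues:

```python
import math

def ms_from_i0(i0: float, depth_km: float) -> float:
    """Shebalin-catalogue relation: MS = 0.65*I0 + 1.90*log10(h) - 1.62."""
    return 0.65 * i0 + 1.90 * math.log10(depth_km) - 1.62

def ms_from_isoseismal(intensity: float, radius_km: float) -> float:
    """MS = 0.695*I + 2.14*log10(RI) + 0.00329*RI - 1.93."""
    return (0.695 * intensity + 2.14 * math.log10(radius_km)
            + 0.00329 * radius_km - 1.93)

# Illustrative inputs: I0 = 8 at 10 km depth; isoseismal I = 5, RI = 60 km.
print(round(ms_from_i0(8, 10), 2))          # → 5.48
print(round(ms_from_isoseismal(5, 60), 2))  # → 5.55
```

That the two independent estimators land within about a tenth of a magnitude unit of each other for a consistent scenario is the kind of agreement the study relies on.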

  6. Exaggerated Claims About Earthquake Predictions

    NASA Astrophysics Data System (ADS)

    Kafka, Alan L.; Ebel, John E.

    2007-01-01

    The perennial promise of successful earthquake prediction captures the imagination of a public hungry for certainty in an uncertain world. Yet, given the lack of any reliable method of predicting earthquakes [e.g., Geller, 1997; Kagan and Jackson, 1996; Evans, 1997], seismologists regularly have to explain news stories of a supposedly successful earthquake prediction when it is far from clear just how successful that prediction actually was. When journalists and public relations offices report the latest `great discovery' regarding the prediction of earthquakes, seismologists are left with the much less glamorous task of explaining to the public the gap between the claimed success and the sober reality that there is no scientifically proven method of predicting earthquakes.

  7. Great earthquakes of variable magnitude at the Cascadia subduction zone

    USGS Publications Warehouse

    Nelson, A.R.; Kelsey, H.M.; Witter, R.C.

    2006-01-01

Comparison of histories of great earthquakes and accompanying tsunamis at eight coastal sites suggests plate-boundary ruptures of varying length, implying great earthquakes of variable magnitude at the Cascadia subduction zone. Inference of rupture length relies on degree of overlap of radiocarbon age ranges for earthquakes and tsunamis, and relative amounts of coseismic subsidence and heights of tsunamis. Written records of a tsunami in Japan provide the most conclusive evidence for rupture of much of the plate boundary during the earthquake of 26 January 1700. Cascadia stratigraphic evidence dating from about 1600 cal yr B.P., similar to that for the 1700 earthquake, implies a similarly long rupture with substantial subsidence and a high tsunami. Correlations are consistent with other long ruptures about 1350 cal yr B.P., 2500 cal yr B.P., 3400 cal yr B.P., 3800 cal yr B.P., 4400 cal yr B.P., and 4900 cal yr B.P. A rupture about 700-1100 cal yr B.P. was limited to the northern and central parts of the subduction zone, and a northern rupture about 2900 cal yr B.P. may have been similarly limited. Times of probable short ruptures in southern Cascadia include about 1100 cal yr B.P., 1700 cal yr B.P., 3200 cal yr B.P., 4200 cal yr B.P., 4600 cal yr B.P., and 4700 cal yr B.P. Rupture patterns suggest that the plate boundary in northern Cascadia usually breaks in long ruptures during the greatest earthquakes. Ruptures in southernmost Cascadia vary in length and recurrence intervals more than ruptures in northern Cascadia.

  8. Earthquake prediction; fact and fallacy

    USGS Publications Warehouse

    Hunter, R.N.

    1976-01-01

    Earthquake prediction is a young and growing area in the field of seismology. Only a few years ago, experts in seismology were declaring flatly that it was impossible. Now, some successes have been achieved and more are expected. Within a few years, earthquakes may be predicted as routinely as the weather, and possibly with greater accuracy. 

  9. Can We Predict Earthquakes?

    ScienceCinema

    Johnson, Paul

    2016-09-09

    The only thing we know for sure about earthquakes is that one will happen again very soon. Earthquakes pose a vital yet puzzling set of research questions that have confounded scientists for decades, but new ways of looking at seismic information and innovative laboratory experiments are offering tantalizing clues to what triggers earthquakes — and when.

  10. Can We Predict Earthquakes?

    SciTech Connect

    Johnson, Paul

    2016-08-31

    The only thing we know for sure about earthquakes is that one will happen again very soon. Earthquakes pose a vital yet puzzling set of research questions that have confounded scientists for decades, but new ways of looking at seismic information and innovative laboratory experiments are offering tantalizing clues to what triggers earthquakes — and when.

  11. Radon Precursory Signals for Some Earthquakes of Magnitude > 5 Occurred in N-W Himalaya: An Overview

    NASA Astrophysics Data System (ADS)

    Walia, Vivek; Virk, Hardev Singh; Bajwa, Bikramjit Singh

    2006-04-01

The N-W Himalaya has been rocked by a few major and many minor earthquakes. Two major earthquakes in the Garhwal Himalaya, the Uttarkashi earthquake of magnitude Ms = 7.0 (mb = 6.6) on October 20, 1991 in the Bhagirathi valley and the Chamoli earthquake of Ms = 6.5 (mb = 6.8) on March 29, 1999 in the Alaknanda valley, and one in the Himachal Himalaya, the Chamba earthquake of magnitude 5.1 on March 24, 1995 in the Chamba region, were recorded during the last decade and correlated with radon anomalies. The helium anomaly for the Chamoli earthquake was also recorded and the Helium/Radon ratio model was tested on it. The precursory nature of radon and helium anomalies is a strong indicator in favor of geochemical precursors for earthquake prediction and a preliminary test for the Helium/Radon ratio model.

  12. The earthquake prediction experiment at Parkfield, California

    USGS Publications Warehouse

    Roeloffs, E.; Langbein, J.

    1994-01-01

    Since 1985, a focused earthquake prediction experiment has been in progress along the San Andreas fault near the town of Parkfield in central California. Parkfield has experienced six moderate earthquakes since 1857 at average intervals of 22 years, the most recent a magnitude 6 event in 1966. The probability of another moderate earthquake soon appears high, but studies assigning it a 95% chance of occurring before 1993 now appear to have been oversimplified. The identification of a Parkfield fault "segment" was initially based on geometric features in the surface trace of the San Andreas fault, but more recent microearthquake studies have demonstrated that those features do not extend to seismogenic depths. On the other hand, geodetic measurements are consistent with the existence of a "locked" patch on the fault beneath Parkfield that has presently accumulated a slip deficit equal to the slip in the 1966 earthquake. A magnitude 4.7 earthquake in October 1992 brought the Parkfield experiment to its highest level of alert, with a 72-hour public warning that there was a 37% chance of a magnitude 6 event. However, this warning proved to be a false alarm. Most data collected at Parkfield indicate that strain is accumulating at a constant rate on this part of the San Andreas fault, but some interesting departures from this behavior have been recorded. Here we outline the scientific arguments bearing on when the next Parkfield earthquake is likely to occur and summarize geophysical observations to date.

  13. Does low magnitude earthquake ground shaking cause landslides?

    NASA Astrophysics Data System (ADS)

    Brain, Matthew; Rosser, Nick; Vann Jones, Emma; Tunstall, Neil

    2015-04-01

    Estimating the magnitude of coseismic landslide strain accumulation at both local and regional scales is a key goal in understanding earthquake-triggered landslide distributions and landscape evolution, and in undertaking seismic risk assessment. Research in this field has primarily been carried out using the 'Newmark sliding block method' to model landslide behaviour; downslope movement of the landslide mass occurs when seismic ground accelerations are sufficient to overcome shear resistance at the landslide shear surface. The Newmark method has the advantage of simplicity, requiring only limited information on material strength properties, landslide geometry and coseismic ground motion. However, the underlying conceptual model assumes that shear strength characteristics (friction angle and cohesion) calculated using conventional strain-controlled monotonic shear tests are valid under dynamic conditions, and that values describing shear strength do not change as landslide shear strain accumulates. Recent experimental work has begun to question these assumptions, highlighting, for example, the importance of shear strain rate and changes in shear strength properties following seismic loading. However, such studies typically focus on a single earthquake event that is of sufficient magnitude to cause permanent strain accumulation; by doing so, they do not consider the potential effects that multiple low-magnitude ground shaking events can have on material strength. Since such events are more common in nature relative to high-magnitude shaking events, it is important to constrain their geomorphic effectiveness. Using an experimental laboratory approach, we present results that address this key question. We used a bespoke geotechnical testing apparatus, the Dynamic Back-Pressured Shear Box (DynBPS), that uniquely permits more realistic simulation of earthquake ground-shaking conditions within a hillslope. We tested both cohesive and granular materials, both of which

  14. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  15. Predicting Predictable: Accuracy and Reliability of Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2014-12-01

    Earthquake forecast/prediction is an uncertain profession. The famous Gutenberg-Richter relationship limits the magnitude range of prediction to about one unit. Otherwise, the statistics of outcomes would be related to the smallest earthquakes and may be misleading when attributed to the largest earthquakes. Moreover, the intrinsic uncertainty of earthquake sizing allows self-deceptive picking of justification "just from below" the targeted magnitude range. This might be important encouraging evidence but, by no means, can be a "helpful" additive to the statistics of a rigid testing that determines the reliability and efficiency of a forecast/prediction method. Usually, earthquake prediction is classified with respect to expectation time, while overlooking term-less identification of earthquake-prone areas as well as spatial accuracy. The forecasts are often made for a "cell" or "seismic region" whose area is not linked to the size of the target earthquakes. This might be another source of a wrong choice in the parameterization of a forecast/prediction method and, eventually, of unsatisfactory performance in a real-time application. Summing up, prediction of the time and location of an earthquake of a certain magnitude range can be classified into the categories listed in the table below.

    Classification of earthquake prediction accuracy
        Temporal, in years              Spatial, in source zone size (L)
        Long-term           10          Long-range      up to 100
        Intermediate-term    1          Middle-range    5-10
        Short-term    0.01-0.1          Narrow-range    2-3
        Immediate        0.001          Exact           1

    Note that the variety of possible combinations is much larger than the usually considered "short-term exact" one. In principle, such an accurate statement about an anticipated seismic extreme might be futile due to the complexities of the Earth's lithosphere, its blocks-and-faults structure, and the evidently nonlinear dynamics of the seismic process. The observed scaling of source size and preparation zone with earthquake magnitude implies exponential scales for

  16. Radon in earthquake prediction research.

    PubMed

    Friedmann, H

    2012-04-01

    The observation of anomalies in the radon concentration in soil gas and ground water before earthquakes initiated systematic investigations of earthquake precursor phenomena. The questions of what is needed for a meaningful earthquake prediction, and of what types of precursory effects can be expected, are briefly discussed. The basic ideas of the dilatancy theory, which in principle can explain the occurrence of earthquake forerunners, are presented. The reasons for radon anomalies in soil gas and in ground water are clarified and a possible classification of radon anomalies is given.

  17. Functional shape of the earthquake frequency-magnitude distribution and completeness magnitude

    NASA Astrophysics Data System (ADS)

    Mignan, A.

    2012-08-01

    We investigated the functional shape of the earthquake frequency-magnitude distribution (FMD) to identify its dependence on the completeness magnitude Mc. The FMD takes the form N(m) ∝ exp(-βm)q(m) where N(m) is the event number, m the magnitude, exp(-βm) the Gutenberg-Richter law and q(m) a detection function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature of bulk FMDs. Recent results however suggest that this gradual curvature is due to Mc heterogeneities, meaning that the functional shape of the elemental FMD has yet to be described. We propose a detection function of the form q(m) = exp(κ(m - Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models are compared in earthquake catalogs from Southern California and Nevada and in synthetic catalogs. We show that the angular FMD model better describes the elemental FMD and that the sum of elemental angular FMDs leads to the gradually curved bulk FMD. We propose an FMD shape ontology consisting of 5 categories depending on the Mc spatial distribution, from Mc constant to Mc highly heterogeneous: (I) Angular FMD, (II) Intermediary FMD, (III) Intermediary FMD with multiple maxima, (IV) Gradually curved FMD and (V) Gradually curved FMD with multiple maxima. We also demonstrate that the gradually curved FMD model overestimates Mc. This study provides new insights into earthquake detectability properties by using seismicity as a proxy and the means to accurately estimate Mc in any given volume.
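    The two detection-function models compared above can be sketched in a few lines of Python. This is a minimal illustration only: the β, κ, σ, and Mc values below are arbitrary placeholders, not parameters fitted in the study.

```python
import math

def q_curved(m, mc, sigma):
    """Gradually curved detection function: cumulative Normal distribution."""
    return 0.5 * (1.0 + math.erf((m - mc) / (sigma * math.sqrt(2.0))))

def q_angular(m, mc, kappa):
    """Angular detection function: exponential below Mc, complete above."""
    return math.exp(kappa * (m - mc)) if m < mc else 1.0

def fmd_rate(m, beta, mc, kappa):
    """Non-cumulative FMD N(m) proportional to exp(-beta*m) * q(m), angular model."""
    return math.exp(-beta * m) * q_angular(m, mc, kappa)

# A b-value of 1 corresponds to beta = ln(10)
beta = math.log(10.0)
rates = [fmd_rate(0.1 * i, beta, mc=2.0, kappa=3.0) for i in range(51)]
```

    The angular model peaks exactly at m = Mc (provided κ > β), whereas the curved model spreads the loss of detectability over a range of magnitudes; summing angular FMDs with spatially varying Mc reproduces the gradually curved bulk shape described in the abstract.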

  18. Maximum magnitude estimations of induced earthquakes at Paradox Valley, Colorado, from cumulative injection volume and geometry of seismicity clusters

    NASA Astrophysics Data System (ADS)

    Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.

    2015-01-01

    The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods
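    A back-of-the-envelope version of the injected-volume bound (in the spirit of McGarr, 2014, which the abstract cites) can be sketched as follows; the shear modulus and cumulative volume below are hypothetical placeholder values, not actual PVU figures.

```python
import math

def mcgarr_max_magnitude(shear_modulus_pa, injected_volume_m3):
    """Upper bound on induced earthquake size: M0_max = G * dV (McGarr, 2014),
    converted to moment magnitude via Mw = (2/3)*(log10(M0) - 9.1)."""
    m0_max = shear_modulus_pa * injected_volume_m3  # seismic moment bound, N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# Hypothetical example: G = 30 GPa, 8 million cubic meters injected
print(round(mcgarr_max_magnitude(3.0e10, 8.0e6), 2))  # prints 5.52
```

    With these placeholder inputs the bound lands near the roughly MW 5 potential quoted for the region around the well, which is the kind of consistency check the volume-based method provides.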

  19. Sociological aspects of earthquake prediction

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Henry Spall talked recently with Denis Mileti who is in the Department of Sociology, Colorado State University, Fort Collins, Colo. Dr. Mileti is a sociologst involved with research programs that study the socioeconomic impact of earthquake prediction

  20. Magnitude and location of historical earthquakes in Japan and implications for the 1855 Ansei Edo earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    2005-01-01

    Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42MJMA - 0.00887Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^1/2, and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting plate intensity attenuation model where intensity is equal to -8.33 + 2.19MJMA - 0.00550Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting plate model. Using the subducting plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or a Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
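    Reading Δ and h as epicentral distance and focal depth in km, with slant distance Δh = (Δ² + h²)^1/2, the two attenuation relations can be evaluated directly. A small sketch, assuming "log" means log10 and using arbitrary example event parameters:

```python
import math

def intensity_crustal(mjma, delta_km, h_km):
    """Predicted IJMA for shallow crustal Honshu earthquakes (abstract's model)."""
    dh = math.hypot(delta_km, h_km)  # slant distance Delta_h in km
    return -1.89 + 1.42 * mjma - 0.00887 * dh - 1.66 * math.log10(dh)

def intensity_subducting(mjma, delta_km, h_km):
    """Predicted IJMA for subducting-plate earthquakes (abstract's model)."""
    dh = math.hypot(delta_km, h_km)
    return -8.33 + 2.19 * mjma - 0.00550 * dh - 1.14 * math.log10(dh)

# Example: an MJMA 7.9 interface event observed 100 km away at 30 km depth
print(round(intensity_subducting(7.9, 100.0, 30.0), 2))  # prints 6.1
```

    Inverting such relations over many intensity assignments is what locates the intensity center and yields the intensity magnitude Mjma for a historical event.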

  1. Intermediate-term earthquake prediction

    USGS Publications Warehouse

    Knopoff, L.

    1990-01-01

    The problems in predicting earthquakes have been attacked by phenomenological methods from prehistoric times to the present. The associations of presumed precursors with large earthquakes have often been remarked upon. The difficulty in identifying whether such correlations are due to chance coincidence or are real precursors is that one usually notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is also found in the rather long intervals that follow large earthquakes, or is in fact absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.

  2. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M=6.65 (A=537 km²).
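    A bilinear magnitude-area relation of the Hanks and Bakun type can be sketched by pinning it to the crossover quoted in the abstract (M = 6.65 at A = 537 km²). The slopes used here (1 below and 4/3 above the crossover) are an assumption of this sketch, not constants taken from the report.

```python
import math

# Crossover quoted in the abstract; slopes below are assumed for illustration.
A_STAR_KM2, M_STAR = 537.0, 6.65

def magnitude_from_area(area_km2):
    """Bilinear M(A): slope 1 in log10(A) below the crossover, 4/3 above."""
    if area_km2 <= A_STAR_KM2:
        return M_STAR + math.log10(area_km2 / A_STAR_KM2)
    return M_STAR + (4.0 / 3.0) * math.log10(area_km2 / A_STAR_KM2)

def max_magnitude(length_km, width_km):
    """Maximum rupture magnitude from fault length L and down-dip width W."""
    return magnitude_from_area(length_km * width_km)

print(round(max_magnitude(100.0, 10.0), 2))  # prints 7.01
```

    Pinning both branches to the same crossover point keeps the relation continuous, which is the property the weighted-average treatment in the rate model relies on.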

  3. Multiple Spectral Ratio Analyses Reveal Earthquake Source Spectra of Small Earthquakes and Moment Magnitudes of Microearthquakes

    NASA Astrophysics Data System (ADS)

    Uchide, T.; Imanishi, K.

    2016-12-01

    Spectral studies of macroscopic earthquake source parameters are helpful for characterizing the earthquake rupture process and hence for understanding earthquake source physics and fault properties. Those studies require us to mute wave propagation path and site effects in the spectra of seismograms in order to accentuate the source effect. We have recently developed the multiple spectral ratio method [Uchide and Imanishi, BSSA, 2016], employing many empirical Green's function (EGF) events to reduce errors from the choice of EGF events. This method helps us estimate source spectra more accurately, as well as moment ratios among reference and EGF events, which are useful for constraining the seismic moments of microearthquakes. First, we focus on earthquake source spectra. Source spectra have generally been thought to obey the omega-square model with a single corner frequency. However, recent studies imply the existence of another corner frequency for some earthquakes. We analyzed small shallow inland earthquakes (3.5 < Mw < 4.5; depth < 20 km) across Japan and found both single- and double-corner-frequency source spectra, not only in the Fukushima Hamadori and northern Ibaraki prefecture area, as reported in Uchide and Imanishi [2016], but also in other regions such as the Wakayama and Kumamoto areas. Therefore, a model for earthquake source spectra that includes all these observations is needed. Next, we focus on seismic moments of microearthquakes in Japan, inferred from those of small earthquakes in the NIED F-net moment tensor catalog together with moment ratios from the multiple spectral ratio analyses. For 20000 microearthquakes in the Fukushima Hamadori and northern Ibaraki prefecture area, we found that the JMA magnitudes (Mj), based on displacement or velocity amplitude, are systematically below Mw. The slope of the Mj-Mw relation is 0.5 for Mj < 3 and 1 for Mj > 5. We propose a fitting curve for the obtained relationship as Mw = (1/2)Mj + (1/2)(Mj^γ + Mcor^γ)^(1/γ) + c, where Mcor is a corner magnitude and γ determines
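    The proposed fitting curve interpolates smoothly between slope 1/2 at small Mj and slope 1 at large Mj. A sketch with placeholder parameter values (Mcor, γ, and c are not given in the abstract; the numbers below are illustrative only):

```python
def mw_from_mj(mj, mcor=4.0, gamma=4.0, c=0.0):
    """Mw = (1/2)*Mj + (1/2)*(Mj^gamma + Mcor^gamma)^(1/gamma) + c.
    mcor, gamma and c are illustrative placeholders, not fitted values."""
    return 0.5 * mj + 0.5 * (mj ** gamma + mcor ** gamma) ** (1.0 / gamma) + c
```

    For Mj well below Mcor the curve reduces to Mw ≈ Mj/2 + Mcor/2 + c (slope 1/2); well above Mcor it tends to Mw ≈ Mj + c (slope 1), matching the reported Mj-Mw slopes for Mj < 3 and Mj > 5.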

  4. Geochemical challenge to earthquake prediction.

    PubMed

    Wakita, H

    1996-04-30

    The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented.

  5. Geochemical challenge to earthquake prediction.

    PubMed Central

    Wakita, H

    1996-01-01

    The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented. PMID:11607665

  6. Short-Term Foreshocks and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Papadopoulos, G. A.; Minadakis, G.; Orfanogiannaki, K.

    2016-12-01

    Foreshock recognition before main shocks depends on various factors, e.g. geophysical factors, catalogue completeness, foreshock definition, and spatiotemporal windows. Foreshocks move towards the main shock epicenter, their number increases with the inverse of time, and their b-value drops. However, only in very few single foreshock sequences have these 3-D patterns been recognized at the same time, e.g. before the 2009 L'Aquila (Italy) earthquake (Mw6.3) and the 2010, 2014 and 2015 major earthquakes (Mw8+) that ruptured the subduction zone of Chile. For the first time we found statistically significant 3-D foreshock patterns before small-to-moderate earthquakes. We present two good examples of earthquakes occurring on 4 March 2012 (Mw5.2) and 3 July 2013 (Mw4.8) in Athos and Polyphyto, both in North Greece. The great similarity with the patterns found before strong and major earthquakes indicates that the foreshock process is scale invariant over a wide magnitude range. It is likely that the process is independent of the faulting type, at least for dip-slip faulting. There is also a trend of the main shock magnitude to scale with the foreshock area. These findings imply that foreshock activity is likely governed by pattern universality, which may also reflect universality in the deformation process, thus opening new ways for utilizing foreshocks in the prediction of the main shock.

  7. The Magnitude Distribution of Earthquakes Near Southern California Faults

    DTIC Science & Technology

    2011-12-16

    Lindh, 1985; Jackson and Kagan, 2006]. We do not consider time dependence in this study, but focus instead on the magnitude distribution for this fault... Bakun, W. H., and A. G. Lindh (1985), The Parkfield, California, earthquake prediction experiment, Science, 229(4714), 619-624.

  8. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  9. Using an extended historical record to assess the temporal behavior of high magnitude earthquakes

    NASA Astrophysics Data System (ADS)

    Bellone, E.; Muir-Wood, R.

    2012-04-01

    Oscillations in the number of worldwide high magnitude earthquakes since 1900 have triggered the question of whether the underlying activity rate can be considered constant. Between 1950 and 1965 there were seven earthquakes of magnitude 8.6 or higher in the space of 15 years, followed by a period of 39 years in which there were no earthquakes at or above this size. Including the Mw9.2 2004 Indian Ocean earthquake, there have now been four earthquakes at or above this threshold in seven years, including the 2010 Mw8.8 Maule earthquake in Chile and the Mw9 Tohoku earthquake in Japan. Previous studies, using the earthquake catalogue from 1900 onwards, came to different conclusions on whether these data support a change in the underlying worldwide rate of large magnitude earthquakes. To assist in addressing this issue, we have set out to explore an extended catalogue of extreme magnitude earthquakes spanning at least 300 years. The presentation will report the results of statistical analyses to determine the strength of evidence for temporal clustering of extreme global earthquakes. If we are currently in a period of elevated activity for the largest magnitude earthquakes, what are the implications for assessing subduction zone earthquake risk, as along the Cascadia coastline of Oregon, Washington State and Vancouver Island, or along the coasts of northern Chile and Peru?

  10. Local magnitude determinations for intermountain seismic belt earthquakes from broadband digital data

    USGS Publications Warehouse

    Pechmann, J.C.; Nava, S.J.; Terra, F.M.; Bernier, J.C.

    2007-01-01

    The University of Utah Seismograph Stations (UUSS) earthquake catalogs for the Utah and Yellowstone National Park regions contain two types of size measurements: local magnitude (ML) and coda magnitude (MC), which is calibrated against ML. From 1962 through 1993, UUSS calculated ML values for southern and central Intermountain Seismic Belt earthquakes using maximum peak-to-peak (p-p) amplitudes on paper records from one to five Wood-Anderson (W-A) seismographs in Utah. For ML determinations of earthquakes since 1994, UUSS has utilized synthetic W-A seismograms from U.S. National Seismic Network and UUSS broadband digital telemetry stations in the region, which numbered 23 by the end of our study period on 30 June 2002. This change has greatly increased the percentage of earthquakes for which ML can be determined. It is now possible to determine ML for all M ≥ 3 earthquakes in the Utah and Yellowstone regions and for earthquakes as small as M < 1 in some areas. To maintain continuity in the magnitudes in the UUSS earthquake catalogs, we determined empirical ML station corrections that minimize differences between MLs calculated from paper and synthetic W-A records. Application of these station corrections, in combination with distance corrections from Richter (1958), which have been in use at UUSS since 1962, produces ML values that do not show any significant distance dependence. ML determinations for the Utah and Yellowstone regions for 1981-2002 using our station corrections and Richter's distance corrections have provided a reliable data set for recalibrating the MC scales for these regions. Our revised ML values are consistent with available moment magnitude determinations for Intermountain Seismic Belt earthquakes. To facilitate automatic ML measurements, we analyzed the distribution of the times of maximum p-p amplitudes in synthetic W-A records. A 30-sec time window for maximum amplitudes, beginning 5 sec before the predicted Sg time, encompasses 95% of the

  11. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window

    PubMed Central

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-01-01

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude. PMID:25346344

  12. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window.

    PubMed

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-10-27

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude.
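    Pd-based magnitude estimation of the kind described above typically rests on a regression of the generic form log10(Pd) = a + b·M + c·log10(R). The coefficients below are placeholders for illustration only (real EEW systems calibrate them regionally), as is the simple inversion for M:

```python
import math

# Placeholder regression coefficients (hypothetical, illustration only)
A, B, C = -3.46, 0.73, -1.37

def predict_pd(magnitude, hypo_dist_km):
    """Forward model: peak initial displacement Pd (cm) in the P-wave window."""
    return 10.0 ** (A + B * magnitude + C * math.log10(hypo_dist_km))

def magnitude_from_pd(pd_cm, hypo_dist_km):
    """Invert the regression for an early-warning magnitude estimate."""
    return (math.log10(pd_cm) - A - C * math.log10(hypo_dist_km)) / B

# Round trip: a magnitude-6 event at 50 km hypocentral distance
print(round(magnitude_from_pd(predict_pd(6.0, 50.0), 50.0), 2))  # prints 6.0
```

    Expanding the PTW, as the paper does, amounts to re-measuring Pd over longer windows so that the regression sees more of the source before saturating.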

  13. Earthquakes clustering based on the magnitude and the depths in Molluca Province

    SciTech Connect

    Wattimanela, H. J.; Pasaribu, U. S.; Indratno, S. W.; Puspito, A. N. T.

    2015-12-22

    In this paper, we present a model to classify the earthquakes that occurred in Molluca Province. We use the K-Means clustering method to classify the earthquakes based on their magnitude and depth. The result can be used for disaster mitigation and for designing evacuation routes in Molluca Province.
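    A minimal, dependency-free K-Means over (magnitude, depth) pairs illustrates the classification step. The catalog values are invented, initialization is naively taken as the first k points, and in practice the two features should be standardized so that depth does not dominate the Euclidean distance:

```python
def kmeans(points, k, iters=50):
    """Lloyd's algorithm on 2-D (magnitude, depth_km) points.
    Naive deterministic initialization: the first k points."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance)
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            clusters[j].append(p)
        # Recompute centers as cluster means (keep old center if empty)
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[j] for j, c in enumerate(clusters)]
    return centers, clusters

# Invented catalog: (magnitude, depth_km)
catalog = [(4.1, 10.0), (4.3, 12.0), (6.2, 150.0), (6.0, 160.0),
           (5.1, 60.0), (5.0, 55.0)]
centers, clusters = kmeans(catalog, k=3)
```

    With this toy catalog the algorithm separates shallow, intermediate, and deep groups; a production analysis would instead use a library implementation (e.g. scikit-learn's KMeans) with scaled features and multiple restarts.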

  14. Statistical short-term earthquake prediction.

    PubMed

    Kagan, Y Y; Knopoff, L

    1987-06-19

    A statistical procedure, derived from a theoretical model of fracture growth, is used to identify a foreshock sequence while it is in progress. As a predictor, the procedure reduces the average uncertainty in the rate of occurrence for a future strong earthquake by a factor of more than 1000 when compared with the Poisson rate of occurrence. About one-third of all main shocks with local magnitude greater than or equal to 4.0 in central California can be predicted in this way, starting from a 7-year database that has a lower magnitude cutoff of 1.5. The time scale of such predictions is of the order of a few hours to a few days for foreshocks in the magnitude range from 2.0 to 5.0.

  15. Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes

    NASA Astrophysics Data System (ADS)

    Itaba, S.

    2016-12-01

    The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (10^-7), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. On the other hand, based on the seismic waves, the magnitude in the prompt report that the Japan Meteorological Agency (JMA) announced just after the earthquake occurred was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. The validity of this method still has to be verified in other cases. For this earthquake's largest aftershock, which occurred 29 minutes after the mainshock, the prompt report issued by JMA assigned a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, much closer to the actual moment magnitude of 7.7. Several methods are now being proposed to grasp the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using static strain changes is one of the strongest candidates for rapid estimation of the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.
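    Converting a geodetically derived fault model to a moment magnitude follows the standard definition Mw = (2/3)(log10 M0 − 9.1), with M0 = μAD in N·m. A sketch with illustrative (not Tohoku-specific) fault parameters:

```python
import math

def moment_magnitude(rigidity_pa, area_m2, slip_m):
    """Mw from seismic moment M0 = mu * A * D (in N*m)."""
    m0 = rigidity_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative megathrust rupture: mu = 30 GPa, 500 km x 200 km, 20 m slip
print(round(moment_magnitude(3.0e10, 500e3 * 200e3, 20.0), 2))  # prints 9.12
```

    Because the fault model is inverted from static strain offsets rather than from seismic waveforms, this conversion does not saturate the way early waveform-based magnitudes can.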

  16. Modeling Time Dependent Earthquake Magnitude Distributions Associated with Injection-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Maurer, J.; Segall, P.

    2015-12-01

    Understanding and predicting earthquake magnitudes from injection-induced seismicity is critically important for estimating hazard due to injection operations. A particular problem has been that the largest event often occurs post shut-in. A rigorous analysis would require modeling all stages of earthquake nucleation, propagation, and arrest, and not just initiation. We present a simple conceptual model for predicting the distribution of earthquake magnitudes during and following injection, building on the analysis of Segall & Lu (2015). The analysis requires several assumptions: (1) the distribution of source dimensions follows a Gutenberg-Richter distribution; (2) in environments where the background ratio of shear to effective normal stress is low, the size of induced events is limited by the volume perturbed by injection (e.g., Shapiro et al., 2013; McGarr, 2014), and (3) the perturbed volume can be approximated by diffusion in a homogeneous medium. Evidence for the second assumption comes from numerical studies that indicate the background ratio of shear to normal stress controls how far an earthquake rupture, once initiated, can grow (Dunham et al., 2011; Schmitt et al., submitted). We derive analytical expressions that give the rate of events of a given magnitude as the product of three terms: the time-dependent rate of nucleations, the probability of nucleating on a source of given size (from the Gutenberg-Richter distribution), and a time-dependent geometrical factor. We verify our results using simulations and demonstrate characteristics observed in real induced sequences, such as time-dependent b-values and the occurrence of the largest event post injection. We compare results to Segall & Lu (2015) as well as example datasets. Future work includes using 2D numerical simulations to test our results and assumptions; in particular, investigating how background shear stress and fault roughness control rupture extent.
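    Assumptions (2) and (3) above imply a time-dependent cap on event size: if the pressure-perturbed volume grows by diffusion and rupture cannot outgrow it, the largest possible magnitude rises with injection time. A minimal numerical sketch, using an assumed Shapiro-type triggering-front radius r = sqrt(4*pi*D*t) and a circular-crack moment, with illustrative parameter values not taken from the abstract:

```python
import numpy as np

# Assumed illustrative values (not from the abstract)
D = 0.1             # hydraulic diffusivity, m^2/s
t = 30 * 86400.0    # 30 days of injection, in seconds
dsigma = 3e6        # stress drop, Pa

# Triggering-front radius of the pressure-perturbed region (assumed Shapiro form)
r = np.sqrt(4.0 * np.pi * D * t)

# Seismic moment of a circular crack of radius r, and the moment magnitude
M0 = (16.0 / 7.0) * dsigma * r**3
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
```

With these values r is roughly 1.8 km and the bound comes out near magnitude 5; longer injection raises it, consistent with the perturbed volume limiting event size.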

  17. The politics of earthquake prediction

    SciTech Connect

    Olson, R.S.

    1989-01-01

    This book gives an account of the politics, scientific and public, generated from the Brady-Spence prediction of a massive earthquake to take place within several years in central Peru. Though the disaster did not happen, this examination of the events serves to highlight American scientific processes and the results of scientific interaction with the media and political bureaucracy.

  18. Brief communication "The magnitude 7.2 Bohol earthquake, Philippines"

    NASA Astrophysics Data System (ADS)

    Lagmay, A. M. F.; Eco, R.

    2014-03-01

    A devastating earthquake struck Bohol, Philippines on 15 October 2013. The earthquake originated at 12 km depth on an unmapped reverse fault, which ruptured the surface for several kilometers with a maximum vertical displacement of 3 m. The earthquake resulted in 222 fatalities, with damage to infrastructure estimated at US$52.06 million. Widespread landslides and sinkholes formed in the predominantly limestone region during the earthquake. These remain a significant threat to communities, as destabilized hillside slopes, landslide-dammed rivers and incipient sinkholes are still vulnerable to collapse, triggered possibly by aftershocks and heavy rains in the upcoming months of November and December.

  19. An Exponential Detection Function to Describe Earthquake Frequency-Magnitude Distributions Below Completeness

    NASA Astrophysics Data System (ADS)

    Mignan, A.

    2011-12-01

    The capacity of a seismic network to detect small earthquakes can be evaluated by investigating the shape of the Frequency-Magnitude Distribution (FMD) of the resultant earthquake catalogue. The non-cumulative FMD takes the form N(m) ∝ exp(-βm)q(m), where N(m) is the number of events of magnitude m, exp(-βm) the Gutenberg-Richter law and q(m) a probability function. I propose an exponential detection function of the form q(m) = exp(κ(m-Mc)) for m < Mc, with Mc the magnitude of completeness, i.e. the magnitude at which N(m) is maximal. With Mc varying in space due to the heterogeneous distribution of seismic stations in a network, the bulk FMD of an earthquake catalogue corresponds to the sum of local FMDs with respective Mc(x,y), which leads to the gradual curvature of the bulk FMD below max(Mc(x,y)). More complicated FMD shapes are expected if the catalogue is derived from multiple network configurations. The model predictions are verified in the case of Southern California and Nevada. Only slight variations of the detection parameter k = κ/ln(10) are observed within a given region, with k = 3.84 ± 0.66 for Southern California and k = 2.84 ± 0.77 for Nevada, assuming Mc constant in 2° by 2° cells. Synthetic catalogues that follow the exponential model can reproduce reasonably well the FMDs observed for Southern California and Nevada using only c. 15% of the total number of observed events. The proposed model has important implications for Mc mapping procedures and allows use of the full magnitude range for subsequent seismicity analyses.
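    The FMD model above is simple enough to sketch directly. A minimal example, assuming illustrative values b = 1 (β = b·ln 10), k = 3 (κ = k·ln 10) and Mc = 2.0; with these choices the peak of N(m) falls at Mc:

```python
import numpy as np

# Assumed illustrative parameters: b = 1, k = 3, completeness magnitude Mc = 2.0
beta = 1.0 * np.log(10)    # beta = b * ln(10)
kappa = 3.0 * np.log(10)   # kappa = k * ln(10)
Mc = 2.0

def q(m):
    """Detection probability: exponential below Mc, complete (q = 1) above."""
    return np.where(m < Mc, np.exp(kappa * (m - Mc)), 1.0)

def fmd(m, A=1e4):
    """Non-cumulative FMD: N(m) = A * exp(-beta*m) * q(m)."""
    return A * np.exp(-beta * m) * q(m)

mags = np.arange(0.0, 6.01, 0.1)
counts = fmd(mags)
m_peak = mags[np.argmax(counts)]  # the peak sits at Mc by construction
```

Because κ > β below Mc, N(m) rises toward Mc and decays above it, which is exactly the gradual roll-off below completeness that the abstract describes.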

  20. Earthquake prediction: Simple methods for complex phenomena

    NASA Astrophysics Data System (ADS)

    Luen, Bradley

    2010-09-01

    Earthquake predictions are often either based on stochastic models, or tested using stochastic models. Tests of predictions often tacitly assume predictions do not depend on past seismicity, which is false. We construct a naive predictor that, following each large earthquake, predicts another large earthquake will occur nearby soon. Because this "automatic alarm" strategy exploits clustering, it succeeds beyond "chance" according to a test that holds the predictions fixed. Some researchers try to remove clustering from earthquake catalogs and model the remaining events. There have been claims that the declustered catalogs are Poisson on the basis of statistical tests we show to be weak. Better tests show that declustered catalogs are not Poisson. In fact, there is evidence that events in declustered catalogs do not have exchangeable times given the locations, a necessary condition for the Poisson. If seismicity followed a stochastic process, an optimal predictor would turn on an alarm when the conditional intensity is high. The Epidemic-Type Aftershock (ETAS) model is a popular point process model that includes clustering. It has many parameters, but is still a simplification of seismicity. Estimating the model is difficult, and estimated parameters often give a non-stationary model. Even if the model is ETAS, temporal predictions based on the ETAS conditional intensity are not much better than those of magnitude-dependent automatic (MDA) alarms, a much simpler strategy with only one parameter instead of five. For a catalog of Southern Californian seismicity, ETAS predictions again offer only slight improvement over MDA alarms.
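    The magnitude-dependent automatic (MDA) alarm strategy can be sketched as follows. The alarm-duration rule used here (duration growing as a power of ten in magnitude, with parameters tau, a and m_thresh) is an assumed illustrative form, not the calibrated one from the dissertation:

```python
def mda_alarm_fraction(times, mags, tau=1.0, a=0.5, m_thresh=4.0, horizon=365.0):
    """Fraction of the horizon covered by alarms turned on after each event."""
    intervals = []
    for t, m in zip(times, mags):
        if m >= m_thresh:
            # Assumed rule: alarm length grows exponentially with magnitude
            length = tau * 10 ** (a * (m - m_thresh))
            intervals.append((t, min(t + length, horizon)))
    intervals.sort()
    covered = 0.0
    cur_start = cur_end = None
    for s, e in intervals:
        if cur_end is None or s > cur_end:   # disjoint window: close previous
            if cur_end is not None:
                covered += cur_end - cur_start
            cur_start, cur_end = s, e
        else:                                # overlapping window: merge
            cur_end = max(cur_end, e)
    if cur_end is not None:
        covered += cur_end - cur_start
    return covered / horizon

# Three hypothetical events: two clustered in time, one isolated
frac = mda_alarm_fraction([10.0, 11.0, 200.0], [5.0, 4.2, 6.0])
```

The fraction of time covered by alarms is the natural cost measure for comparing MDA alarms against ETAS-based alarms at equal alarm load; with the toy catalog above only a few percent of the one-year horizon ends up covered.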

  1. Microearthquake networks and earthquake prediction

    USGS Publications Warehouse

    Lee, W.H.K.; Steward, S. W.

    1979-01-01

    A microearthquake network is a group of highly sensitive seismographic stations designed primarily to record local earthquakes of magnitudes less than 3. Depending on the application, a microearthquake network will consist of several stations or as many as a few hundred. They are usually classified as either permanent or temporary. In a permanent network, the seismic signal from each station is telemetered to a central recording site to cut down on operating costs and to allow more efficient and up-to-date processing of the data. However, telemetering can restrict the choice of station sites because of the line-of-sight requirement for radio transmission or the need for telephone lines. Temporary networks are designed to be extremely portable and completely self-contained so that they can be deployed very quickly. They are most valuable for recording aftershocks of a major earthquake or for studies in remote areas.

  2. Earthquake predictions using seismic velocity ratios

    USGS Publications Warehouse

    Sherburne, R. W.

    1979-01-01

    Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions in reducing human and economic losses, and the value of long-range earthquake prediction for planning, is obvious. Less clear are the long-range economic and social impacts of earthquake prediction on a specific area. The general consensus among scientists and government officials, however, is that the quest for earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.

  3. Bayesian Predictive Distribution for the Magnitude of the Largest Aftershock

    NASA Astrophysics Data System (ADS)

    Shcherbakov, R.

    2014-12-01

    Aftershock sequences, which follow large earthquakes, can last hundreds of days and are characterized by well-defined frequency-magnitude and spatio-temporal distributions. The largest aftershocks in a sequence constitute a significant hazard and can inflict additional damage to infrastructure. Therefore, the estimation of the magnitude of the largest possible aftershocks in a sequence is of high importance. In this work, we propose a statistical model based on Bayesian analysis and extreme value statistics to describe the distribution of magnitudes of the largest aftershocks in a sequence. We derive an analytical expression for a Bayesian predictive distribution function for the magnitude of the largest expected aftershock and compute the corresponding confidence intervals. We assume that the occurrence of aftershocks can be modeled, to a good approximation, by a non-homogeneous Poisson process with a temporal event rate given by the modified Omori law. We also assume that the frequency-magnitude statistics of aftershocks can be approximated by Gutenberg-Richter scaling. We apply our analysis to 19 prominent aftershock sequences, which occurred in the last 30 years, in order to compute the Bayesian predictive distributions and the corresponding confidence intervals. In the analysis, we use information from the early aftershocks in the sequences (in the first 1, 10, and 30 days after the main shock) to estimate retrospectively the confidence intervals for the magnitude of the expected largest aftershocks. We demonstrate by analysing 19 past sequences that in many cases we are able to constrain the magnitudes of the largest aftershocks. For example, this includes the analysis of the Darfield (Christchurch) aftershock sequence. The proposed analysis can be used for earthquake hazard assessment and forecasting associated with the occurrence of large aftershocks. The improvement in instrumental data associated with early aftershocks can greatly enhance the analysis and
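    The model ingredients named above (non-homogeneous Poisson occurrence with a modified Omori rate, Gutenberg-Richter magnitudes, intervals for the largest aftershock) can be illustrated by a forward Monte Carlo simulation. All parameter values are assumed for illustration, and the simulation stands in for the paper's analytical Bayesian predictive distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed illustrative parameters (not the paper's fitted values)
K, c, p = 200.0, 0.1, 1.1  # modified Omori law: rate(t) = K / (c + t)^p
b, m_min = 1.0, 3.0        # Gutenberg-Richter: P(M > m) = 10**(-b*(m - m_min))
T = 30.0                   # days of observation

def expected_count(T):
    """Integral of the Omori rate from 0 to T (valid for p != 1)."""
    return K * ((c + T)**(1 - p) - c**(1 - p)) / (1 - p)

def simulate_largest(n_sims=2000):
    """Monte Carlo distribution of the largest aftershock magnitude in [0, T]."""
    mu = expected_count(T)
    largest = []
    for _ in range(n_sims):
        n = rng.poisson(mu)                  # event count for this realization
        if n == 0:
            largest.append(m_min)
            continue
        # G-R magnitudes: m_min plus exponential with rate b*ln(10)
        mags = m_min + rng.exponential(1.0 / (b * np.log(10)), size=n)
        largest.append(mags.max())
    return np.array(largest)

dist = simulate_largest()
lo, hi = np.percentile(dist, [2.5, 97.5])  # 95% interval for the largest aftershock
```

Conditioning such a simulation on the events observed in the first 1, 10, or 30 days is the Monte Carlo analogue of the retrospective interval estimates described in the abstract.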

  4. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles, and the discovery of periodicity. The reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset date of significant earthquakes, the underlying assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (<6.5R) can be predicted with a very good accuracy window (±1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We have developed the FDL numbers for each of those seeds, and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) and for <6.5R. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake.
We observe no improvement only when a planetary trigger coincided with
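    As a rough sketch of the date-generation step described above, candidate dates can be produced by adding Fibonacci and Lucas numbers to a seed date. Treating the numbers as day offsets is an assumption here; the abstract does not spell out the units or the exact construction:

```python
from datetime import date, timedelta

def fib_lucas_numbers(n):
    """First n Fibonacci numbers (1, 1, 2, ...) and Lucas numbers (2, 1, 3, ...)."""
    fibs, a, b = [], 1, 1
    for _ in range(n):
        fibs.append(a)
        a, b = b, a + b
    lucas, a, b = [], 2, 1
    for _ in range(n):
        lucas.append(a)
        a, b = b, a + b
    return fibs, lucas

def fdl_dates(seed, n=10):
    """Candidate dates: seed plus Fibonacci/Lucas offsets, assumed to be in days."""
    fibs, lucas = fib_lucas_numbers(n)
    return sorted({seed + timedelta(days=k) for k in fibs + lucas})

# Seed on the onset date of a significant earthquake (hypothetical choice)
cands = fdl_dates(date(2004, 12, 26))
```

A hit rate would then be scored by checking whether a subsequent earthquake falls within ±1 day of any candidate date, as in the windowed test described above.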

  5. Earthquake prediction with electromagnetic phenomena

    SciTech Connect

    Hayakawa, Masashi

    2016-02-01

    Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, and is considered one of the most important and urgent topics for humankind. If short-term prediction were realized, casualties would be drastically reduced. Unlike conventional seismic measurement, we have proposed the use of electromagnetic phenomena as precursors to EQs, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth of the impossibility of EQ prediction by seismometers, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, ionospheric perturbation as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

  6. Dim prospects for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Geller, Robert J.

    I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].

  7. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short-period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  8. Anomalous pre-seismic transmission of VHF-band radio waves resulting from large earthquakes, and its statistical relationship to magnitude of impending earthquakes

    NASA Astrophysics Data System (ADS)

    Moriya, T.; Mogi, T.; Takada, M.

    2010-02-01

    To confirm the relationship between anomalous transmission of VHF-band radio waves and impending earthquakes, we designed a new data-collection system and have documented the anomalous VHF-band radio-wave propagation beyond the line of sight prior to earthquakes since 2002 December in Hokkaido, northern Japan. Anomalous VHF-band radio waves were recorded before two large earthquakes, the Tokachi-oki earthquake (Mj = 8.0, Mj: magnitude defined by the Japan Meteorological Agency) on 2003 September 26 and the southern Rumoi sub-prefecture earthquake (Mj = 6.1) on 2004 December 14. Radio waves transmitted from a given FM radio station are considered to be scattered, such that they could be received by an observation station beyond the line of sight. A linear relationship was established between the logarithm of the total duration time of anomalous transmissions (Te) and the magnitude (M) or maximum seismic intensity (I) of the impending earthquake, for M4-M5 class earthquakes that occurred at depths of 48-54 km beneath the Hidaka Mountains in Hokkaido in 2004 June and 2005 August. Similar linear relationships are also valid for earthquakes that occurred at different depths. The relationship was shifted to longer Te for shallower earthquakes and to shorter Te for deeper ones. Numerous parameters seem to affect Te, including hypocenter depths and surface conditions of epicentral area (i.e. sea or land). This relationship is important because it means that pre-seismic anomalous transmission of VHF-band waves may be useful in predicting the size of an impending earthquake.
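    The reported linear relationship between log(Te) and magnitude can be illustrated with a least-squares fit. The data below are synthetic stand-ins (the actual M4-M5 Hidaka-Mountains measurements are not reproduced in the abstract), and the relation's coefficients are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (M, log10 Te) pairs built from an assumed relation log10(Te) = 0.8*M - 1.5
M = np.array([4.1, 4.3, 4.6, 4.8, 5.0])
log_Te = 0.8 * M - 1.5 + rng.normal(0.0, 0.05, M.size)

a, b = np.polyfit(M, log_Te, 1)        # recover slope and intercept by least squares
M_pred = (np.log10(120.0) - b) / a     # invert: magnitude implied by Te = 120 min
```

Inverting the fitted line, as in the last step, is what turns an observed anomaly duration into a size estimate for the impending earthquake; separate fits would be needed for different hypocentral depths, as the abstract notes.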

  9. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez, Capera A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental
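    The bootstrap summary described here (the preferred magnitude as the median of bootstrap magnitudes, with percentile bounds as the 68% uncertainty) can be sketched in a few lines. The sample values are synthetic stand-ins, not bootstrap magnitudes from real intensity data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for bootstrap magnitude estimates from resampled intensity data
boot_mags = rng.normal(5.8, 0.25, size=1000)

m_pref = np.median(boot_mags)                # preferred magnitude: the median
lo, hi = np.percentile(boot_mags, [16, 84])  # 68% uncertainty bounds
```

The same percentile logic extends to any coverage level (e.g. [2.5, 97.5] for 95%), which is how the nested uncertainty contours in the abstract are constructed.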

  10. Recent earthquake prediction research in Japan.

    PubMed

    Mogi, K

    1986-07-18

    Japan has experienced many major earthquake disasters in the past. Early in this century research began that was aimed at predicting the occurrence of earthquakes, and in 1965 an earthquake prediction program was started as a national project. In 1978 a program for constant monitoring and assessment was formally inaugurated with the goal of forecasting the major earthquake that is expected to occur in the near future in the Tokai district of central Honshu Island. The issue of predicting the anticipated Tokai earthquake is discussed in this article as well as the results of research on major recent earthquakes in Japan-the Izu earthquakes (1978 and 1980) and the Japan Sea earthquake (1983).

  11. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  12. Source time function properties indicate a strain drop independent of earthquake depth and magnitude

    NASA Astrophysics Data System (ADS)

    Vallee, Martin

    2014-05-01

    Movement of the tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other one. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, i.e. the ratio of seismic slip over the dimension of the ruptured fault. SCARDEC, a recently developed method, gives access to this information through the systematic determination of earthquake source time functions (STFs). STFs describe the integrated spatio-temporal history of the earthquake process, and their maximum value can be related to the amount of stress or strain released during the earthquake. Here I analyse all earthquakes with magnitudes greater than 6 occurring in the last 20 years, and thus provide a catalogue of 1700 STFs which sample all possible seismic depths. Analysis of this new database reveals that the strain drop remains on average the same for all earthquakes, independent of magnitude and depth. In other words, it is shown that, independent of the earthquake depth, magnitude 6 and larger earthquakes keep on average a similar ratio between seismic slip and dimension of the main slip patch. This invariance implies that deep earthquakes are even more similar than previously thought to their shallow counterparts, a puzzling finding as shallow and deep earthquakes should originate from different physical mechanisms. Concretely, the ratio between slip and patch dimension is on the order of 10^-5 to 10^-4, with extreme values only 8 times lower or larger at the 95% confidence interval. Besides the implications for mechanisms of deep earthquake generation, this limited variability has practical implications for realistic earthquake scenarios.

  13. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    Seismically induced landslides are a major environmental effect of earthquakes, which may significantly contribute to related losses. Moreover, in paleoseismology landslide event sizes are an important proxy for the estimation of the intensity and magnitude of past earthquakes, and thus allow us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake event occurs. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970) the combination and relative weight of the factors was calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked.
One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important

  14. The Magnitude 6.7 Northridge, California, Earthquake of January 17, 1994

    NASA Technical Reports Server (NTRS)

    Donnellan, A.

    1994-01-01

    The most damaging earthquake in the United States since 1906 struck northern Los Angeles on January 17, 1994. The magnitude 6.7 Northridge earthquake produced a maximum of more than 3 meters of reverse (up-dip) slip on a south-dipping thrust fault rooted under the San Fernando Valley and projecting north under the Santa Susana Mountains.

  15. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolating its trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  16. Soviet prediction of a major earthquake

    USGS Publications Warehouse

    Simpson, D.W.

    1979-01-01

    On November 1, 1978, a magnitude 7 earthquake occurred north of the Pamir Mountains near the Tadjiskistan-Kirghizia border, 150 kilometers east of Garm in Soviet Central Asia. Although the earthquake was felt in Tashkent, Dushanbe, and the Fergana Valley, the epicentral area was uninhabited at that time of year, and no damage was reported. 

  17. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  18. The 2002 Denali fault earthquake, Alaska: a large magnitude, slip-partitioned event.

    PubMed

    Eberhart-Phillips, Donna; Haeussler, Peter J; Freymueller, Jeffrey T; Frankel, Arthur D; Rubin, Charles M; Craw, Patricia; Ratchkovski, Natalia A; Anderson, Greg; Carver, Gary A; Crone, Anthony J; Dawson, Timothy E; Fletcher, Hilary; Hansen, Roger; Harp, Edwin L; Harris, Ruth A; Hill, David P; Hreinsdóttir, Sigrun; Jibson, Randall W; Jones, Lucile M; Kayen, Robert; Keefer, David K; Larsen, Christopher F; Moran, Seth C; Personius, Stephen F; Plafker, George; Sherrod, Brian; Sieh, Kerry; Sitar, Nicholas; Wallace, Wesley K

    2003-05-16

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  19. Prediction of Future Great Earthquake Locations from Cumulative Stresses Released by Prior Earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, J.; Hong, T. K.

    2014-12-01

    There have been 17 great earthquakes with magnitude greater than or equal to 8.5 in the world since 1900. These great events cause significant damage to humanity, so the prediction of the potential maximum magnitudes of earthquakes is important for seismic hazard mitigation. In this study, we calculate the Coulomb stress changes around the active plate margins for 507 events with magnitudes greater than 7.0 during 1976-2013 to estimate the cumulative stress releases. We investigate the spatio-temporal variations of the ambient stress field from the cumulative Coulomb stress changes as a function of plate motion speed, plate age and dipping angle. The largest stress drops occur where plate velocity is relatively high, in the convergent margins between the Nazca and South American plates, between the Pacific and North American plates, between the Philippine Sea and Eurasian plates, and between the Pacific and Australian plates. It is intriguing to note that great earthquakes such as the Tohoku-Oki and Maule earthquakes occur where plate velocity is highest. Large stress drops also occur in margins with relatively slow plate speeds, such as the boundaries between the Cocos and North American plates and between the Indo-Australian and Eurasian plates. Earthquakes occur dominantly in regions with positive Coulomb stress changes, suggesting that subsequent earthquakes are controlled by the stresses released by prior earthquakes. We find strong positive correlations between Coulomb stress changes and plate speeds. This observation suggests that large stress drops are controlled by high plate speeds, indicating a possible route to predicting the potential maximum magnitudes of events.
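The quantity tallied in the study above, the Coulomb stress change, is conventionally written ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change in the slip direction, Δσn the normal stress change (positive when unclamping the fault), and μ′ an effective friction coefficient. A minimal sketch of that bookkeeping (the function name, the sample values, and μ′ = 0.4 are illustrative assumptions, not values from the paper):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change (same units as the inputs, e.g. MPa).

    d_shear : shear stress change resolved in the slip direction
    d_normal: normal stress change (positive = unclamping)
    mu_eff  : effective friction coefficient (0.4 is a common assumption)
    """
    return d_shear + mu_eff * d_normal

# Positive values bring a receiver fault closer to failure,
# which is why events cluster in regions of positive change.
delta = coulomb_stress_change(0.10, 0.05)
```

Mapping such changes over many source events, as the study does for 507 M > 7.0 earthquakes, amounts to summing this quantity on each receiver fault geometry.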

  20. Spatiotemporal evolution of the completeness magnitude of the Icelandic earthquake catalogue from 1991 to 2013

    NASA Astrophysics Data System (ADS)

    Panzera, Francesco; Mignan, Arnaud; Vogfjörð, Kristin S.

    2016-11-01

    In 1991, a digital seismic monitoring network was installed in Iceland with a digital seismic system and automatic operation. After 20 years of operation, we explore for the first time its nationwide performance by analysing the spatiotemporal variations of the completeness magnitude. We use the Bayesian magnitude of completeness (BMC) method that combines local completeness magnitude observations with prior information based on the density of seismic stations. Additionally, we test the impact of earthquake location uncertainties on the BMC results, by filtering the catalogue using a multivariate analysis that identifies outliers in the hypocentre error distribution. We find that the entire North-to-South active rift zone shows a relatively low magnitude of completeness Mc in the range 0.5-1.0, highlighting the ability of the Icelandic network to detect small earthquakes. This work also demonstrates the influence of earthquake location uncertainties on the spatiotemporal magnitude of completeness analysis.

  1. Spatiotemporal evolution of the completeness magnitude of the Icelandic earthquake catalogue from 1991 to 2013

    NASA Astrophysics Data System (ADS)

    Panzera, Francesco; Mignan, Arnaud; Vogfjörð, Kristin S.

    2017-07-01

    In 1991, a digital seismic monitoring network was installed in Iceland with a digital seismic system and automatic operation. After 20 years of operation, we explore for the first time its nationwide performance by analysing the spatiotemporal variations of the completeness magnitude. We use the Bayesian magnitude of completeness (BMC) method that combines local completeness magnitude observations with prior information based on the density of seismic stations. Additionally, we test the impact of earthquake location uncertainties on the BMC results, by filtering the catalogue using a multivariate analysis that identifies outliers in the hypocentre error distribution. We find that the entire North-to-South active rift zone shows a relatively low magnitude of completeness Mc in the range 0.5-1.0, highlighting the ability of the Icelandic network to detect small earthquakes. This work also demonstrates the influence of earthquake location uncertainties on the spatiotemporal magnitude of completeness analysis.

  2. Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities

    USGS Publications Warehouse

    Duross, Christopher; Olig, Susan; Schwartz, David

    2015-01-01

    Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
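The regressions the WGUEP evaluated have the generic form M = a + b·log10(SRL). As an illustration of the epistemic spread they report, here is a sketch using the commonly quoted Wells and Coppersmith (1994) strike-slip coefficients; treating these defaults as an assumption, since the report itself weighs 19 different regressions:

```python
import math

def magnitude_from_srl(srl_km, a=5.16, b=1.12):
    """Magnitude as a function of surface-rupture length, M = a + b*log10(SRL).

    Defaults are the often-cited Wells & Coppersmith (1994) strike-slip
    coefficients (an assumption here). Swapping in another regression's
    (a, b) shifts the estimate, producing the ~0.3-0.4 unit spread the
    WGUEP observed across regressions.
    """
    return a + b * math.log10(srl_km)

m100 = magnitude_from_srl(100.0)  # roughly M 7.4 for a 100-km rupture
```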

  3. Maximum earthquake magnitudes along different sections of the North Anatolian fault zone

    NASA Astrophysics Data System (ADS)

    Bohnhoff, Marco; Martínez-Garzón, Patricia; Bulut, Fatih; Stierle, Eva; Ben-Zion, Yehuda

    2016-04-01

    Constraining the maximum likely magnitude of future earthquakes on continental transform faults has fundamental consequences for the expected seismic hazard. Since the recurrence time for those earthquakes is typically longer than a century, such estimates rely primarily on well-documented historical earthquake catalogs, when available. Here we discuss the maximum observed earthquake magnitudes along different sections of the North Anatolian Fault Zone (NAFZ) in relation to the age of the fault activity, cumulative offset, slip rate and maximum length of coherent fault segments. The findings are based on a newly compiled catalog of historical earthquakes in the region, using the extensive literary sources that exist owing to the long civilization record. We find that the largest M7.8-8.0 earthquakes are exclusively observed along the older eastern part of the NAFZ that also has longer coherent fault segments. In contrast, the maximum observed events on the younger western part where the fault branches into two or more strands are smaller. No first-order relations between maximum magnitudes and fault offset or slip rates are found. The results suggest that the maximum expected earthquake magnitude in the densely populated Marmara-Istanbul region would probably not exceed M7.5. The findings are consistent with available knowledge for the San Andreas Fault and Dead Sea Transform, and can help in estimating hazard potential associated with different sections of large transform faults.

  4. Anomalous ULF signals and their possibility to estimate the earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Armansyah, Ahadi, Suadi

    2017-07-01

    Ultra-low-frequency (ULF) geomagnetic data were observed for several days prior to the occurrence of an earthquake. The earthquakes investigated were located within Indonesian territory, Jayapura Regency, Papua Province, with epicentral distance and depth less than 50 km and magnitudes of about 4 and above. The geomagnetic data were processed using the polarization power ratio Z/H method to detect ULF anomalies as earthquake precursors. The processing and analysis show a strong correlation, with a coefficient of 0.852, between earthquake magnitude and ULF amplitude anomalies. This correlation suggests the possibility of estimating the magnitude of an impending earthquake from the power ratio Z/H amplitude anomalies detected.

  5. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief in the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bound by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and the anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  6. Scaling relation between earthquake magnitude and the departure time from P-wave similar growth: Application to Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Noda, S.; Ellsworth, W. L.

    2016-12-01

    The magnitude of an earthquake (M) for earthquake early warning (EEW) is typically estimated from the P-wave displacement using a relation between M and displacement amplitude (that is, a ground motion prediction equation). In the conventional approach the final M cannot be estimated until the peak amplitude is observed. To overcome this technical limitation, we introduce a new scaling relation between M and a characteristic of the initial P-wave displacement. We use Japanese K-NET data from 150 events with 4.5 ≤ Mw ≤ 9.0 and hypocentral distance (R) less than 200 km. The data are binned by 0.1 magnitude units and by 25 km in hypocentral distance. We retain bins with at least 5 observations and measure the average of absolute displacement (AAD) in each bin as a function of time. We find that there is no statistical difference in AAD between smaller and larger earthquakes for early times (< 0.2 s), suggesting that the observed P wave begins in a similar way. However, AAD for smaller events departs from this similarity sooner than for large events. Consequently, we define the departure time (Tdp) as the first decrease in absolute displacement after the P onset. For the K-NET data the relation between Mw and Tdp is Mw = 2.29 × logTdp + 5.95 in the magnitude range 4.5 ≤ Mw ≤ 7. Note that Tdp is much shorter than the typical source duration. This suggests that it is unnecessary to wait for the arrival of the peak amplitude to estimate the final M, because the displacement scales with the final M after Tdp. Based on this observation, we derive a new estimator for M based on AAD measurements made up to time Tdp(M). Retrospective application of this equation provides faster determination of M than the conventional approach without loss of accuracy. We conclude that the proposed approach is useful for reducing the blind zone for EEW.
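The scaling relation quoted in the abstract, Mw = 2.29 × log10(Tdp) + 5.95, can be applied directly once Tdp is measured. A minimal sketch (function names are illustrative; the relation is calibrated only for 4.5 ≤ Mw ≤ 7):

```python
import math

def mw_from_departure_time(tdp_s):
    """Mw from the P-wave departure time Tdp (seconds), using the K-NET
    scaling quoted in the abstract: Mw = 2.29 * log10(Tdp) + 5.95."""
    return 2.29 * math.log10(tdp_s) + 5.95

def departure_time_for(mw):
    """Inverse relation: the Tdp (seconds) expected for a given Mw."""
    return 10.0 ** ((mw - 5.95) / 2.29)

mw_1s = mw_from_departure_time(1.0)  # a 1-second departure time maps to Mw 5.95
```

Because Tdp arrives well before the peak amplitude, an EEW system evaluating this relation can issue a magnitude estimate earlier than the conventional peak-displacement approach.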

  7. An Updated Catalog of Taiwan Earthquakes (1900-2011) with Homogenized Mw Magnitudes

    NASA Astrophysics Data System (ADS)

    Chen, K.; Tsai, Y.; Chang, W.

    2012-12-01

    A complete and consistent earthquake catalog provides good data for studying the distribution of earthquakes in a region as a function of space, time and magnitude; it is therefore a basic tool for studying and mitigating seismic hazard, from which the seismicity at or above a given Mw can be extracted. For completeness and consistency, we take as a base a catalog of earthquakes from 1900 to 2006 with homogenized magnitudes (Mw) (Chen and Tsai, 2008), and we incorporate from Hsu (1989) the available supplementary data (188 events) for the period 1900-1935. The supplementary data lower the cutoff threshold magnitude from Mw 5.5 to 5.0, enriching the catalog's content above magnitude 5.0. For this study, the catalog has been updated to include earthquakes up to 2011, and it is complete for Mw > 5.0, which increases its reliability for seismic hazard studies. We find that the original catalog of Taiwan earthquakes is saturated for magnitudes > 6.5 when compared with the Harvard Mw or USGS M; converting the original catalog to seismic moment magnitude Mw does not overcome this drawback. For Mw < 6.5, however, our unified Mw values are mostly greater than the Harvard Mw or USGS M, and they fill gaps in the original catalog above magnitude 6.0, and in places above magnitude 5.5, during the period 1973-1991. It is therefore better to report earthquake magnitudes in Mw.

  8. Location and magnitudes of earthquakes in Central Asia from seismic intensity data: model calibration and validation

    NASA Astrophysics Data System (ADS)

    Bindi, Dino; Capera, Augusto A. Gómez; Parolai, Stefano; Abdrakhmatov, Kanatbek; Stucchi, Massimiliano; Zschau, Jochen

    2013-02-01

    In this study, we estimate the locations and magnitudes of Central Asian earthquakes from macroseismic intensity data. A set of 2373 intensity observations from 15 earthquakes is analysed to calibrate non-parametric models for the source and attenuation with distance, the distance being computed from the instrumental epicentres located according to the International Seismological Centre (ISC) catalogue. In a second step, the non-parametric source model is regressed against different magnitude values (e.g. MLH, mb, MS, Mw) as listed in various instrumental catalogues. The reliability of the calibrated model is then assessed by applying the methodology to macroseismic intensity data from 29 validation earthquakes for which both MLH and mb are available from the Central Asian Seismic Risk Initiative (CASRI) project and the ISC catalogue. An overall agreement is found for both the location and magnitude of these events, with the distribution of the differences between instrumental and intensity-based magnitudes having almost a zero mean, and standard deviations equal to 0.30 and 0.44 for mb and MLH, respectively. The largest discrepancies are observed for the location of the 1985, MLH = 7.0 southern Xinjiang earthquake, whose location is outside the area covered by the intensity assignments, and for the magnitude of the 1974, mb = 6.2 Markansu earthquake, which shows a difference in magnitude greater than one unit in terms of MLH. Finally, the relationships calibrated for the non-parametric source model are applied to assign different magnitude-scale values to earthquakes that lack instrumental information. In particular, an intensity-based moment magnitude is assigned to all of the validation earthquakes.

  9. Influence of fault heterogeneity on the frequency-magnitude statistics of earthquake cycle simulations

    NASA Astrophysics Data System (ADS)

    Norbeck, Jack; Horne, Roland

    2017-04-01

    Numerical models are useful tools for investigating how natural geologic conditions affect seismicity, but it can often be difficult to generate realistic earthquake sequences using physics-based earthquake rupture models. Rate-and-state earthquake cycle simulations on planar faults with homogeneous frictional properties and stress conditions typically yield event sequences with a single earthquake magnitude characteristic of the size of the fault. In reality, earthquake sequences have been observed to follow a Gutenberg-Richter-type frequency-magnitude distribution characterized by a power-law scaling relationship. The purpose of this study was to determine how fault heterogeneity affects the frequency-magnitude distribution of simulated earthquake events. We considered the effects of fault heterogeneity at two different length scales by performing numerical earthquake rupture simulations within a rate-and-state friction framework. In our first study, we investigated how heterogeneous, fractal distributions of shear and normal stress resolved along a two-dimensional fault surface influence the earthquake nucleation, rupture, and arrest processes. We generated a catalog of earthquake events by performing earthquake cycle simulations for 90 random realizations of fractal stress distributions. Typical realizations produced between 4 and 6 individual earthquakes, with event magnitudes ranging between those characteristic of the minimum patch size for nucleation and those characteristic of the size of the model fault. The resulting aggregate frequency-magnitude distributions were well characterized by power-law scaling. In our second study, we performed simulations of injection-induced seismicity using a coupled fluid-flow and rate-and-state earthquake model. Fluid flow in a two-dimensional reservoir was modeled, and the fault mechanics were modeled under a plane-strain assumption (i.e., one-dimensional faults). 
We generated a set of faults with an average strike of

  10. The ethics of earthquake prediction.

    PubMed

    Sol, Ayhan; Turan, Halil

    2004-10-01

    Scientists' responsibility to inform the public about their results may conflict with their responsibility not to cause social disturbance by the communication of these results. A study of the well-known Brady-Spence and Iben Browning earthquake predictions illustrates this conflict in the publication of scientifically unwarranted predictions. Furthermore, a public policy that considers public sensitivity caused by such publications as an opportunity to promote public awareness is ethically problematic from (i) a refined consequentialist point of view that any means cannot be justified by any ends, and (ii) a rights view according to which individuals should never be treated as a mere means to ends. The Parkfield experiment, the so-called paradigm case of cooperation between natural and social scientists and the political authorities in hazard management and risk communication, is also open to similar ethical criticism. For the people in the Parkfield area were not informed that the whole experiment was based on a contested seismological paradigm.

  11. The magnitude 6.7 Northridge, California, earthquake of 17 January 1994

    USGS Publications Warehouse

    Jones, L.; Aki, K.; Boore, D.; Celebi, M.; Donnellan, A.; Hall, J.; Harris, R.; Hauksson, E.; Heaton, T.; Hough, S.; Hudnut, K.; Hutton, K.; Johnston, M.; Joyner, W.; Kanamori, H.; Marshall, G.; Michael, A.; Mori, J.; Murray, M.; Ponti, D.; Reasenberg, P.; Schwartz, D.; Seeber, L.; Shakal, A.; Simpson, R.; Thio, H.; Tinsley, J.; Todorovska, M.; Trifunac, M.; Wald, D.; Zoback, M.L.

    1994-01-01

    The most costly American earthquake since 1906 struck Los Angeles on 17 January 1994. The magnitude 6.7 Northridge earthquake resulted from more than 3 meters of reverse slip on a 15-kilometer-long south-dipping thrust fault that raised the Santa Susana mountains by as much as 70 centimeters. The fault appears to be truncated by the fault that broke in the 1971 San Fernando earthquake at a depth of 8 kilometers. Of these two events, the Northridge earthquake caused many times more damage, primarily because its causative fault is directly under the city. Many types of structures were damaged, but the fracture of welds in steel-frame buildings was the greatest surprise. The Northridge earthquake emphasizes the hazard posed to Los Angeles by concealed thrust faults and the potential for strong ground shaking in moderate earthquakes.

  12. Seismic Safety Margins Research Program. Regional relationships among earthquake magnitude scales

    SciTech Connect

    Chung, D. H.; Bernreuter, D. L.

    1980-05-01

    The seismic body-wave magnitude mb of an earthquake is strongly affected by regional variations in the Q structure, composition, and physical state within the earth. Therefore, because of differences in the attenuation of P-waves between the western and eastern United States, a problem arises when comparing mb values for the two regions. A regional mb magnitude bias exists which, depending on where the earthquake occurs and where the P-waves are recorded, can lead to magnitude errors as large as one-third of a unit. There is also a significant difference between mb and ML values for earthquakes in the western United States. An empirical link between the mb of an eastern US earthquake and the ML of an equivalent western earthquake is given by ML = 0.57 + 0.92(mb)East. This result is important when comparing ground motion between the two regions and for choosing a set of real western US earthquake records to represent eastern earthquakes. 48 refs., 5 figs., 2 tabs.
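The empirical link quoted in the abstract is a one-line conversion; a minimal sketch (the function name is illustrative):

```python
def ml_equivalent(mb_east):
    """Western-US local magnitude (ML) equivalent of an eastern-US
    body-wave magnitude, from the report's empirical link
    ML = 0.57 + 0.92 * mb(East)."""
    return 0.57 + 0.92 * mb_east

# An eastern-US mb 5.0 event corresponds to a western-US ML of about 5.17
ml = ml_equivalent(5.0)
```

Such a conversion is what allows western US strong-motion records to stand in for eastern earthquakes of a given mb when assembling ground-motion data sets.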

  13. Earthquake Prediction: Is It Better Not to Know?

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Discusses economic, social and political consequences of earthquake prediction. Reviews impact of prediction on China's recent (February, 1975) earthquake. Diagrams a chain of likely economic consequences from predicting an earthquake. (CS)


  15. The Magnitude Frequency Distribution of Induced Earthquakes and Its Implications for Crustal Heterogeneity and Hazard

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.

    2015-12-01

    Earthquake activity in the central United States has increased dramatically since 2009, principally driven by injection of wastewater coproduced with oil and gas. The elevation of pore pressure from the collective influence of many disposal wells has created an unintended experiment that probes both the state of stress and architecture of the fluid plumbing and fault systems through the earthquakes it induces. These earthquakes primarily release tectonic stress rather than accommodation stresses from injection. Results to date suggest that the aggregated magnitude-frequency distribution (MFD) of these earthquakes differs from natural tectonic earthquakes in the same region for which the b-value is ~1.0. In Kansas, Oklahoma and Texas alone, more than 1100 earthquakes Mw ≥3 occurred between January 2014 and June 2015 but only 32 were Mw ≥ 4 and none were as large as Mw 5. Why is this so? Either the b-value is high (> 1.5) or the magnitude-frequency distribution (MFD) deviates from log-linear form at large magnitude. Where catalogs from local networks are available, such as in southern Kansas, b-values are normal (~1.0) for small magnitude events (M < 3). The deficit in larger-magnitude events could be an artifact of a short observation period, or could reflect a decreased potential for large earthquakes. According to the prevailing paradigm, injection will induce an earthquake when (1) the pressure change encounters a preexisting fault favorably oriented in the tectonic stress field; and (2) the pore-pressure perturbation at the hypocenter is sufficient to overcome the frictional strength of the fault. Most induced earthquakes occur where the injection pressure has attenuated to a small fraction of the seismic stress drop implying that the nucleation point was highly stressed. The population statistics of faults satisfying (1) could be the cause of this MFD if there are many small faults (dimension < 1 km) and few large ones in a critically stressed crust
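The arithmetic behind "either the b-value is high (> 1.5) or the MFD deviates from log-linear form" follows directly from the counts quoted above: the Gutenberg-Richter relation log10 N(≥M) = a − b·M gives b from cumulative counts at two magnitude thresholds. A back-of-envelope sketch (not the paper's analysis):

```python
import math

def implied_b(n_low, n_high, m_low, m_high):
    """b-value implied by cumulative event counts at two magnitude
    thresholds, from the Gutenberg-Richter relation
    log10 N(>=M) = a - b*M."""
    return math.log10(n_low / n_high) / (m_high - m_low)

# Counts quoted in the abstract: >1100 events with Mw >= 3, 32 with Mw >= 4
b_hat = implied_b(1100, 32, 3.0, 4.0)  # about 1.54, i.e. greater than 1.5
```

Under a normal b of ~1.0, 1100 events of Mw ≥ 3 would imply roughly 110 of Mw ≥ 4, so the observed 32 is the deficit the abstract asks about.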

  16. A Probabilistic Estimate of the Most Perceptible Earthquake Magnitudes in the NW Himalaya and Adjoining Regions

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Koravos, G. Ch.; Tsapanos, T. M.; Vougiouka, G. E.

    2015-02-01

    NW Himalaya and its neighboring region (25°-40°N and 65°-85°E) is one of the most seismically hazardous regions in the Indian subcontinent, a region that has historically experienced large to great damaging earthquakes. In the present study, the most perceptible earthquake magnitudes, Mp, are estimated for intensity I = VII, horizontal peak ground acceleration a = 300 cm/s2 and horizontal peak ground velocity v = 10 cm/s in 28 seismogenic zones using the two earthquake recurrence models of Kijko and Sellevoll (Bulletin of the Seismological Society of America 82(1):120-134, 1992) and Gumbel's third asymptotic distribution of extremes (GIII). Both methods deal with maximum magnitudes. The earthquake perceptibility is calculated by combining earthquake recurrence models with ground motion attenuation relations at a particular level of intensity, acceleration and velocity. The estimated results reveal that the values of Mp for velocity v = 10 cm/s are higher than the corresponding values for intensity I = VII and acceleration a = 300 cm/s2. It is also observed that the differences between perceptible magnitudes calculated by the Kijko-Sellevoll method and by GIII statistics are significantly high, up to 0.7, 0.6 and 1.7 for intensity, acceleration and velocity, respectively, revealing the importance of earthquake recurrence model selection. The estimated most perceptible earthquake magnitudes, Mp, in the present study vary from MW 5.1 to 7.7 in the entire study area. Results for perceptible magnitudes are also represented as spatial maps over the 28 seismogenic zones for the aforementioned threshold levels of intensity, acceleration and velocity, estimated from the two recurrence models. The spatial maps show that the Quetta region of Pakistan, the Hindukush-Pamir Himalaya, the Caucasus mountain belt and the Himalayan frontal thrust belt (Kashmir-Kangra-Uttarkashi-Chamoli regions) exhibit higher values of the most perceptible earthquake magnitudes (M

  17. Model parameter estimation bias induced by earthquake magnitude cut-off

    NASA Astrophysics Data System (ADS)

    Harte, D. S.

    2016-02-01

    We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models. First, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude. Secondly, the assumption in the Gutenberg-Richter relationship that numbers of events increase exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.
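The truncation effect described above can be illustrated with a much simpler estimator than a full ETAS fit: draw magnitudes from a Gutenberg-Richter law and apply the Aki-Utsu maximum-likelihood b-value. A schematic sketch only (the paper's analysis concerns the ETAS parameters themselves, not b):

```python
import math
import random

def sample_gr(n, b=1.0, m_c=3.0, seed=42):
    """Draw n magnitudes above completeness m_c from the Gutenberg-Richter
    (exponential) law by inverse-CDF sampling."""
    rng = random.Random(seed)
    return [m_c - math.log10(1.0 - rng.random()) / b for _ in range(n)]

def b_aki_utsu(mags, m_c):
    """Maximum-likelihood b-value estimator: b = log10(e) / (mean(M) - m_c)."""
    mean = sum(mags) / len(mags)
    return math.log10(math.e) / (mean - m_c)

full = sample_gr(200_000)
b_full = b_aki_utsu(full, 3.0)         # close to the true b = 1.0
trunc = [m for m in full if m <= 4.5]  # magnitude-truncated catalogue
b_trunc = b_aki_utsu(trunc, 3.0)       # biased high by the upper cut-off
```

Discarding the largest events lowers the sample mean, so the estimator returns a b-value that is too high; an analogous mechanism biases self-exciting model parameters fit to a truncated catalogue.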

  18. Tectonic summaries of magnitude 7 and greater earthquakes from 2000 to 2015

    USGS Publications Warehouse

    Hayes, Gavin P.; Meyers, Emma K.; Dewey, James W.; Briggs, Richard W.; Earle, Paul S.; Benz, Harley M.; Smoczyk, Gregory M.; Flamme, Hanna E.; Barnhart, William D.; Gold, Ryan D.; Furlong, Kevin P.

    2017-01-11

    This paper describes the tectonic summaries for all magnitude 7 and larger earthquakes in the period 2000–2015, as produced by the U.S. Geological Survey National Earthquake Information Center during their routine response operations to global earthquakes. The goal of such summaries is to provide important event-specific information to the public rapidly and concisely, such that recent earthquakes can be understood within a global and regional seismotectonic framework. We compile these summaries here to provide a long-term archive for this information, and so that the variability in tectonic setting and earthquake history from region to region, and sometimes within a given region, can be more clearly understood.

  19. Research in earthquake prediction - the Parkfield prediction experiment

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    The 15-mile-long Parkfield, California, section of the San Andreas fault is the best understood earthquake source region in the world. Moderate-sized earthquakes of local magnitude 5 3/4 occurred at Parkfield in 1881, 1901, 1922, 1934, and 1966.

  20. How to assess magnitudes of paleo-earthquakes from multiple observations

    NASA Astrophysics Data System (ADS)

    Hintersberger, Esther; Decker, Kurt

    2016-04-01

    An important aspect of fault characterisation for seismic hazard assessment is the estimation of paleo-earthquake magnitudes. Especially in regions of low or moderate seismicity, paleo-magnitudes are normally much larger than those of historical earthquakes and therefore provide essential information about the seismic potential and expected maximum magnitudes of a region. In general, these paleo-earthquake magnitudes are based either on surface rupture length or on surface displacements observed at trenching sites, and several well-established correlations link observed surface displacement to magnitude. The combination of more than one observation, however, is still rare and not well established. We present a method, based on the probabilistic approach of Biasi and Weldon (2006), for combining several observations to better constrain the possible magnitude range of a paleo-earthquake. Extrapolating the approach of Biasi and Weldon (2006), the single-observation probability density functions (PDFs) are assumed to be independent of each other; the common PDF for all observed surface displacements generated by one earthquake is then the product of the single-displacement PDFs. To test our method, we used surface displacement data for modern earthquakes whose magnitudes have been determined from instrumental records. For randomly selected "observations", we calculated the associated PDF at each observation point and then combined the PDFs into one common PDF for an increasing number of "observations". Plotting the most probable magnitudes against the number of combined "observations", the resulting range of most probable magnitudes is very close to the instrumentally derived magnitude. To test our method on real trenching observations, we used the results of a paleoseismological investigation within the Vienna Pull-Apart Basin (Austria), where three trenches were opened along the normal
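The product rule described above can be sketched numerically. Here each observation is summarized as a Gaussian PDF over magnitude; the (mu, sigma) pairs are purely illustrative stand-ins for the PDFs a displacement-magnitude correlation would yield, not values from the study:

```python
import math

def gaussian(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def combined_magnitude_pdf(observations, grid):
    """Multiply independent single-observation magnitude PDFs
    (Biasi & Weldon 2006 style) and renormalize over a magnitude grid."""
    pdf = [1.0] * len(grid)
    for mu, sigma in observations:
        pdf = [p * gaussian(m, mu, sigma) for p, m in zip(pdf, grid)]
    total = sum(pdf)
    return [p / total for p in pdf]

grid = [5.0 + 0.01 * i for i in range(301)]  # Mw 5.0 to 8.0
obs = [(6.8, 0.4), (7.0, 0.5), (6.9, 0.3)]   # hypothetical per-trench PDFs
pdf = combined_magnitude_pdf(obs, grid)
best = grid[pdf.index(max(pdf))]             # most probable magnitude
```

Each added observation narrows the product PDF, which is why the most probable magnitude converges toward the instrumental value as observations accumulate.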

  1. Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.

    2011-12-01

    It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience that may not be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using "snapshot" static images to represent earthquake data is becoming obsolete; the favored medium for explaining complex wave propagation inside the solid earth and interactions among earthquakes is now visualization that includes auditory information. Here, we convert seismic data into visualizations that include sounds, a process known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, and firecrackers. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc.). 
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not

  2. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one motivated by social, political, economic, and scientific concerns. It is a pursuit that can neither rely on empirical observations alone nor be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth. 

  3. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587-1996)

    NASA Astrophysics Data System (ADS)

    Beauval, Céline; Yepes, Hugo; Bakun, William H.; Egred, José; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-06-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower-magnitude but shallower, and potentially more destructive, earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (~2.5 million inhabitants). A total population of ~6 million currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory are not available for periods earlier than 1990 (the starting date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587-1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). The intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity >=VI) and 117 (Riobamba, 1797, Intensity >=III). The bootstrap resampling technique is coupled to the B&W method to derive geographical confidence contours for the intensity centre, depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extent of the area delineating the intensity centre location at the 67 per cent confidence level (+/-1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching degrees IX, X and XI. Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding
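
    The coupling of bootstrap resampling to an intensity-magnitude inversion can be sketched as follows. This is an illustrative reconstruction, not the study's implementation: the attenuation form I = a*M - b*log10(r) - c and its coefficients are hypothetical placeholders, not the equation derived from the four Ecuadorian calibration events.

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_from_intensities(intensities, dists_km, a=1.0, b=1.5, c=3.0):
    """Invert a hypothetical intensity-attenuation relation
    I = a*M - b*log10(r) - c for magnitude, averaged over sites."""
    m_per_site = (intensities + b * np.log10(dists_km) + c) / a
    return m_per_site.mean()

def bootstrap_magnitude(intensities, dists_km, n_boot=2000):
    """Resample the intensity observations with replacement to obtain
    a confidence interval on the intensity-derived magnitude."""
    n = len(intensities)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        est[i] = magnitude_from_intensities(intensities[idx], dists_km[idx])
    return np.percentile(est, [16, 50, 84])   # roughly +/- 1 sigma

# Synthetic example: 40 intensity observations consistent with M ~ 6.
r = rng.uniform(10.0, 200.0, 40)
I = 1.0 * 6.0 - 1.5 * np.log10(r) - 3.0 + rng.normal(0.0, 0.3, 40)
lo, med, hi = bootstrap_magnitude(I, r)
```

    The spread (hi - lo) shrinks as the number and coherence of intensity observations grow, which is the behaviour the abstract describes for the confidence contours.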

  4. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated from an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; considering such a variation contributes significantly to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced a statistical model of the magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering the region in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
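
    The Ogata & Katsura (1993) style model, the Gutenberg-Richter law thinned by a cumulative-normal detection-rate function, can be sketched as follows. The parameter values (mu, sigma, and the 99.9% detection convention for Mc) are illustrative assumptions, not estimates from the JMA catalog.

```python
from math import erf, sqrt

def detection_rate(m, mu, sigma):
    """Probability that an event of magnitude m is detected, modeled as a
    cumulative normal; mu is the magnitude at which 50% are detected."""
    return 0.5 * (1.0 + erf((m - mu) / (sigma * sqrt(2.0))))

def observed_frequency(m, a, b, mu, sigma):
    """Observed magnitude-frequency density: the Gutenberg-Richter law
    10**(a - b*m) thinned by the detection-rate function."""
    return 10.0 ** (a - b * m) * detection_rate(m, mu, sigma)

def completeness_magnitude(mu, sigma, q=0.999):
    """One convention: Mc is the magnitude above which the detection rate
    exceeds q. Inverted here by bisection to stay stdlib-only."""
    lo_m, hi_m = mu, mu + 10.0 * sigma
    for _ in range(100):
        mid = 0.5 * (lo_m + hi_m)
        if detection_rate(mid, mu, sigma) < q:
            lo_m = mid
        else:
            hi_m = mid
    return 0.5 * (lo_m + hi_m)

# Noisier weekdays raise the 50%-detection magnitude mu, and hence Mc.
mc_weekday = completeness_magnitude(mu=1.2, sigma=0.25)
mc_weekend = completeness_magnitude(mu=1.0, sigma=0.25)
rate_at_m1 = observed_frequency(1.0, a=4.0, b=1.0, mu=1.2, sigma=0.25)
```

    Estimating mu separately per weekday then recovers the weekly cycle in Mc that the abstract reports.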

  5. Intensity, magnitude, location and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, Walter; Hough, Susan; Martin, Stacey; Bilham, Roger

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly
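
    Calibrating an intensity-attenuation relation against instrumentally recorded events reduces, in its simplest form, to a linear least-squares problem. The sketch below uses a simplified form I = a + b*M - c*log10(r) with synthetic data; the coefficients are placeholders and not the published Indian regressions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: site intensities for events of known
# magnitude, generated from an assumed attenuation form plus noise.
n = 300
M = rng.uniform(5.0, 7.5, n)            # instrumental magnitudes
r = rng.uniform(20.0, 500.0, n)         # hypocentral distance, km
a_true, b_true, c_true = 1.7, 1.5, 3.0  # illustrative coefficients
I = a_true + b_true * M - c_true * np.log10(r) + rng.normal(0.0, 0.4, n)

# Least-squares fit of I = a + b*M - c*log10(r).
G = np.column_stack([np.ones(n), M, -np.log10(r)])
(a_fit, b_fit, c_fit), *_ = np.linalg.lstsq(G, I, rcond=None)
```

    With the relation fitted, the same equation can be inverted per site to locate and size historical events from felt intensities alone, as in the Bakun and Wentworth (1997) grid-search procedure.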

  6. Intensity, magnitude, location, and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, W.; Hough, S.; Martin, S.; Bilham, R.

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly

  7. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional M0 or regional Mw. Conceptually, the method combines an earthquake source model with an attenuation model and geometrical spreading, which account for propagation, so that regional amplitudes of any phase and frequency can be utilized. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined from sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional M0 to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller ones. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find the magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields the more meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
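
    Once a seismic moment has been estimated from corrected amplitudes, recasting it as Mw uses the standard moment-magnitude definition Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m. A minimal sketch; averaging multiple per-phase moment estimates in the log-moment domain is an illustrative choice, not necessarily the paper's exact averaging scheme.

```python
from math import log10

def moment_to_mw(m0_newton_meters):
    """Moment magnitude from seismic moment (N*m) via the standard
    definition Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (log10(m0_newton_meters) - 9.1)

def average_mw(m0_estimates):
    """Average moment estimates from many phase/frequency amplitude
    measurements in the log-moment domain, then convert to Mw."""
    mean_log_m0 = sum(log10(m0) for m0 in m0_estimates) / len(m0_estimates)
    return (2.0 / 3.0) * (mean_log_m0 - 9.1)

mw_single = moment_to_mw(1.0e18)                 # one phase's estimate
mw_robust = average_mw([8.0e17, 1.0e18, 1.3e18]) # several phases combined
```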

  8. Automated determination of magnitude and source length of large earthquakes using backprojection and P wave amplitudes

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Kawakatsu, Hitoshi; Zhuang, Jiancang; Mori, Jim; Maeda, Takuto; Tsuruoka, Hiroshi; Zhao, Xu

    2017-06-01

    Fast estimates of magnitude and source extent of large earthquakes are fundamental for disaster mitigation. However, resolving these estimates within 10-20 min after origin time remains challenging. Here we propose a robust algorithm to resolve magnitude and source length of large earthquakes using seismic data recorded by regional arrays and global stations. We estimate source length and source duration by backprojecting seismic array data. Then the source duration and the maximum amplitude of the teleseismic P wave displacement waveforms are used jointly to estimate magnitude. We apply this method to 74 shallow earthquakes that occurred within epicentral distances of 30-85° to Hi-net (2004-2014). The estimated magnitudes are similar to moment magnitudes estimated from W-phase inversions (U.S. Geological Survey), with standard deviations of 0.14-0.19 depending on the global station distributions. Application of this method to multiple regional seismic arrays could benefit tsunami warning systems and emergency response to large global earthquakes.

  9. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes in earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666
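
    The Omori aftershock decay law that the model reproduces is usually written in its modified form n(t) = K/(t + c)^p. A minimal numerical sketch; the parameter values K, c, and p below are illustrative placeholders, not values fitted to any sequence.

```python
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate (events/day) at time t (days)
    after the mainshock; c regularizes the rate at t = 0."""
    return K / (t_days + c) ** p

def expected_count(t1, t2, K=100.0, c=0.1, p=1.1, n_steps=10000):
    """Expected number of aftershocks between t1 and t2 days, by
    trapezoidal integration of the rate."""
    dt = (t2 - t1) / n_steps
    total = 0.0
    for i in range(n_steps):
        a = t1 + i * dt
        total += 0.5 * (omori_rate(a, K, c, p) + omori_rate(a + dt, K, c, p)) * dt
    return total

first_day = expected_count(0.0, 1.0)   # aftershocks expected on day 1
```

    In the Dieterich formulation, K, c, and p acquire physical interpretations in terms of the stress change and the rate-state nucleation parameters rather than being purely empirical constants.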

  10. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D(c) apply to faults in nature. However, scaling of D(c) is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes in earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  11. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models, which were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from the tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, indicating that the authors of SHARE were aware of potentially higher seismic activity in these zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from the tectonic moment rate, but lower than what the historical data show. For the other two

  12. Effect of slip-area scaling on the earthquake frequency-magnitude relationship

    NASA Astrophysics Data System (ADS)

    Senatorski, Piotr

    2017-06-01

    The earthquake frequency-magnitude relationship is considered from the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area vs. slip scaling exponent is derived. This relationship enables us to explain the observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and for induced, hydraulic-fracturing seismicity are explained in terms of their different triggering mechanisms: applied stress increase and fault strength reduction, respectively.
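
    The von Neumann acceptance-rejection scheme, with the b-value controlling the asymmetry between small and large events, can be sketched for a truncated Gutenberg-Richter distribution. This is a generic illustration, not the paper's constrained-sampling model; b, m_min, and m_max are assumed values.

```python
import random

random.seed(42)

def sample_gr_magnitudes(n, b=1.0, m_min=2.0, m_max=8.0):
    """Sample magnitudes from a truncated Gutenberg-Richter distribution
    by von Neumann acceptance-rejection: propose uniform magnitudes and
    accept with probability 10**(-b*(m - m_min)), so larger proposed
    events are accepted exponentially less often."""
    samples = []
    while len(samples) < n:
        m = random.uniform(m_min, m_max)
        if random.random() < 10.0 ** (-b * (m - m_min)):
            samples.append(m)
    return samples

mags = sample_gr_magnitudes(5000)
mean_mag = sum(mags) / len(mags)   # ~ m_min + 1/(b*ln 10) for this range
```

    Raising b rejects large proposals more aggressively, reproducing the b-value's role as the symmetry-breaking parameter between small and large earthquakes.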

  13. Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake

    USGS Publications Warehouse

    Hill, D.P.; Reasenberg, P.A.; Michael, A.; Arabaz, W.J.; Beroza, G.; Brumbaugh, D.; Brune, J.N.; Castro, R.; Davis, S.; Depolo, D.; Ellsworth, W.L.; Gomberg, J.; Harmsen, S.; House, L.; Jackson, S.M.; Johnston, M.J.S.; Jones, L.; Keller, Rebecca Hylton; Malone, S.; Munguia, L.; Nava, S.; Pechmann, J.C.; Sanford, A.; Simpson, R.W.; Smith, R.B.; Stark, M.; Stickney, M.; Vidal, A.; Walter, S.; Wong, V.; Zollweg, J.

    1993-01-01

    The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma).

  14. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  15. Reconstructing the magnitude for Earth's greatest earthquakes with microfossil measures of sudden coastal subsidence

    NASA Astrophysics Data System (ADS)

    Engelhart, S. E.; Horton, B. P.; Nelson, A. R.; Wang, K.; Wang, P.; Witter, R. C.; Hawkes, A.

    2012-12-01

    Tidal marsh sediments in estuaries along the Cascadia coast archive stratigraphic evidence of Holocene great earthquakes (magnitude 8-9) that record abrupt relative sea-level (RSL) changes. Quantitative microfossil-based RSL reconstructions produce precise estimates of sudden coastal subsidence or uplift during great earthquakes because of the strong relationship between species distributions and elevation within the intertidal zone. We have developed a regional foraminiferal-based transfer function that is validated against simulated coseismic subsidence from a marsh transplant experiment, demonstrating accuracy to within 5 cm. Two case studies demonstrate the utility of high-precision microfossil-based RSL reconstructions at the Cascadia subduction zone. One approach in early Cascadia paleoseismic research was to describe the stratigraphic evidence of the great AD 1700 earthquake and then assume that earlier earthquakes were of similar magnitude. All but the most recent (transfer function) estimates of the amount of coseismic subsidence at Cascadia are too imprecise (errors of >±0.5 m) to distinguish, for example, coseismic from postseismic land-level movements, or to infer differences in amounts of subsidence or uplift from one earthquake cycle to the next. Reconstructions of RSL rise from stratigraphic records at multiple locations for the four most recent earthquake cycles show variability in the amount of coseismic subsidence. The penultimate earthquake at Siletz Bay around 800 to 900 years ago produced one-third of the coseismic subsidence produced in AD 1700. Most earthquake rupture models used a uniform-slip distribution along the megathrust to explain poorly constrained paleoseismic estimates of coastal subsidence during the AD 1700 Cascadia earthquake. Here, we test models of heterogeneous slip for the AD 1700 Cascadia earthquake that are similar to slip distribution inferred for instrumentally recorded great subduction earthquakes worldwide. We use

  16. Characteristics of Gyeongju earthquake, moment magnitude 5.5 and relative relocations of aftershocks

    NASA Astrophysics Data System (ADS)

    Cho, ChangSoo; Son, Minkyung

    2017-04-01

    Seismicity in the Korean Peninsula is low. However, according to historical records, several strong earthquakes have struck the peninsula. In particular, in Gyeongju, the capital city of the Silla dynasty, a few strong earthquakes about 1,300 years ago caused several hundred fatalities, damaged houses, and collapsed castle walls. A moderately strong earthquake of moment magnitude 5.5 hit the city on September 12, 2016, and over 1000 aftershocks were detected. The number of aftershock occurrences over time follows Omori's law well. The relative relocations of 561 events, obtained by clustering aftershocks via cross-correlation of the events' P and S waveforms, show a causative fault plane with strike NNE 25-30° and dip 68-74°, in good agreement with the fault-plane solution from moment tensor inversion. Event depths range from 11 km to 16 km, and the distribution of event locations is about 5 km wide. The maximum horizontal stress direction obtained by stress inversion of the moment solutions of the main event and large aftershocks is similar to the known maximum horizontal stress direction of the Korean Peninsula. The relation between the moment magnitude and local magnitude of aftershocks shows that the moment magnitude increases slightly more steeply for events smaller than magnitude 2.0

  17. Magnitude and Surface Rupture Length of Prehistoric Upper Crustal Earthquakes in the Puget Lowland, Washington State

    NASA Astrophysics Data System (ADS)

    Sherrod, B. L.; Styron, R. H.

    2016-12-01

    Paleoseismic studies documented prehistoric earthquakes after the last glaciation ended 15 ka on 13 upper-crustal fault zones in the Cascadia fore arc. These fault zones are a consequence of north-directed fore arc block migration manifesting as a series of bedrock uplifts and intervening structural basins in the southern Salish Sea lowland between Vancouver, B.C. to the north and Olympia, WA to the south, bounded on the east and west by the Cascade Mountains and Olympic Mountains, respectively. Our dataset uses published information and includes 27 earthquakes tabulated from observations of postglacial deformation at 63 sites. Stratigraphic offsets along faults consist of two types of measurements: 1) vertical separation of strata along faults observed in fault scarp excavations, and 2) estimates from coastal uplift and subsidence. We used probabilistic methods to estimate past rupture magnitudes and surface rupture length (SRL), applying empirical observations from modern earthquakes and point measurements from paleoseismic sites (Biasi and Weldon, 2006). Estimates of paleoearthquake magnitude ranged between M 6.5 and M 7.5, and SRL estimates varied between 20 and 90 km. Paleoearthquakes on the Seattle fault zone and Saddle Mountain West fault about 1100 years ago were outliers in our analysis. The large offsets observed for these two earthquakes imply M 7.8 and 200 km SRL, given the average observed ratio of slip to SRL in modern earthquakes. The actual mapped traces of these faults are less than 200 km, implying these earthquakes had an unusually high static stress drop or, in the case of the Seattle fault, that splay faults may have accentuated uplift in the hanging wall. Refined calculations incorporating fault area may change these magnitude and SRL estimates. Biasi, G.P., and Weldon, R.J., 2006, Estimating Surface Rupture Length and Magnitude of Paleoearthquakes from Point Measurements of Rupture Displacement: B. Seismol. Soc. Am., 96, 1612-1623.
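
    Point estimates of magnitude from rupture extent typically rest on empirical regressions such as Wells and Coppersmith (1994). A minimal sketch using the commonly quoted all-slip-type surface-rupture-length coefficients (this is not the Biasi and Weldon probabilistic treatment, which propagates full PDFs rather than point estimates; verify the constants against the original paper before use):

```python
from math import log10

def magnitude_from_srl(srl_km, a=5.08, b=1.16):
    """Empirical moment magnitude from surface rupture length (km),
    M = a + b*log10(SRL); a, b as commonly quoted from Wells &
    Coppersmith (1994), all slip types."""
    return a + b * log10(srl_km)

def srl_from_magnitude(m, a=5.08, b=1.16):
    """Invert the same regression for the expected rupture length (km)."""
    return 10.0 ** ((m - a) / b)

m_est = magnitude_from_srl(90.0)      # upper end of the SRL range above
srl_est = srl_from_magnitude(7.8)     # expected SRL for the outlier case
```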

  18. Towards Estimating the Magnitude of Earthquakes from EM Data Collected from the Subduction Zone

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.

    2016-12-01

    During the past three years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone. Such evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. The process has been extended in time: only pulses associated with the occurrence of earthquakes have been used, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated from those parameters. The results shown, including an animated data video, are a first approximation towards estimating the magnitude of an earthquake about to occur, based on electromagnetic pulses originating at the subduction zone.

  19. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M≥6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power-law relation, which fits the newly expanded Hanks and Bakun (2007) data best and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
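The two equally weighted magnitude-area relations can be written out directly. A minimal sketch with coefficients as commonly quoted for Ellsworth-B (M = 4.2 + log10 A) and the Hanks and Bakun bilinear form; note these constants give continuity near M 6.7, close to but not exactly the M 6.65 quoted above, so treat them as assumptions.

```python
import math

def m_ellsworth_b(area_km2):
    """Ellsworth-B (WGCEP, 2003): M = 4.2 + log10(A)."""
    return 4.2 + math.log10(area_km2)

def m_hanks_bakun(area_km2):
    """Hanks & Bakun bilinear relation with a slope change near A = 537 km^2."""
    if area_km2 <= 537.0:
        return 3.98 + math.log10(area_km2)
    return 3.07 + (4.0 / 3.0) * math.log10(area_km2)

def m_weighted(area_km2):
    """Equal weighting of the two relations, as adopted by the Working Group."""
    return 0.5 * (m_ellsworth_b(area_km2) + m_hanks_bakun(area_km2))

print(m_ellsworth_b(1000.0), m_hanks_bakun(1000.0), m_weighted(1000.0))
```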

  20. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed Central

    Tullis, T E

    1996-01-01

    The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact, the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668

  1. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed

    Tullis, T E

    1996-04-30

    The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks.

  2. Foreshocks Are Not Predictive of Future Earthquake Size

    NASA Astrophysics Data System (ADS)

    Page, M. T.; Felzer, K. R.; Michael, A. J.

    2014-12-01

    The standard model for the origin of foreshocks is that they are earthquakes that trigger aftershocks larger than themselves (Reasenberg and Jones, 1989). This can be formally expressed in terms of a cascade model. In this model, aftershock magnitudes follow the Gutenberg-Richter magnitude-frequency distribution, regardless of the size of the triggering earthquake, and aftershock timing and productivity follow Omori-Utsu scaling. An alternative hypothesis is that foreshocks are triggered incidentally by a nucleation process, such as pre-slip, that scales with mainshock size. If this were the case, foreshocks would potentially have predictive power of the mainshock magnitude. A number of predictions can be made from the cascade model, including the fraction of earthquakes that are foreshocks to larger events, the distribution of differences between foreshock and mainshock magnitudes, and the distribution of time lags between foreshocks and mainshocks. The last should follow the inverse Omori law, which will cause the appearance of an accelerating seismicity rate if multiple foreshock sequences are stacked (Helmstetter and Sornette, 2003). All of these predictions are consistent with observations (Helmstetter and Sornette, 2003; Felzer et al. 2004). If foreshocks were to scale with mainshock size, this would be strong evidence against the cascade model. Recently, Bouchon et al. (2013) claimed that the expected acceleration in stacked foreshock sequences before interplate earthquakes is higher prior to M≥6.5 mainshocks than smaller mainshocks. Our re-analysis fails to support the statistical significance of their results. In particular, we find that their catalogs are not complete to the level assumed, and their ETAS model underestimates inverse Omori behavior. To conclude, seismicity data to date is consistent with the hypothesis that the nucleation process is the same for earthquakes of all sizes.
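The cascade model's core assumption, that aftershock magnitudes follow the Gutenberg-Richter distribution regardless of the trigger's size, can be checked in a few lines of simulation. A sketch under assumed values b = 1 and completeness magnitude 2.0; all numbers are illustrative.

```python
import math
import random

random.seed(42)
B, M_MIN = 1.0, 2.0   # assumed b-value and catalog completeness magnitude

def gr_magnitude():
    # Gutenberg-Richter: magnitudes above M_MIN are exponentially
    # distributed with rate b * ln(10), independent of the trigger.
    return M_MIN + random.expovariate(B * math.log(10))

# Cascade-model assumption: an aftershock's magnitude is an independent GR
# draw, so P(aftershock exceeds a trigger of magnitude m) = 10**(-B*(m - M_MIN)),
# with no dependence on any nucleation-scale precursor.
for m_trig in (3.0, 4.0):
    n = 200_000
    frac = sum(gr_magnitude() > m_trig for _ in range(n)) / n
    print(f"trigger M{m_trig}: simulated {frac:.4f} "
          f"vs GR prediction {10 ** (-B * (m_trig - M_MIN)):.4f}")
```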

  3. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use both the original catalog, to which no declustering method was applied, and a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required; in this case, no information is gained from the data. Therefore, we elaborate on the settings for which finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals calculated for the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, the results for Kopet Dagh, Alborz, Azerbaijan, and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone at reasonable levels of confidence is almost impossible.
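The bounded-versus-unbounded behavior of the confidence interval can be sketched directly from the doubly truncated Gutenberg-Richter model. This is a simplified illustration in the spirit of Holschneider et al. (2011), not their exact construction; the parameter values (b = 1, m0 = 4, alpha = 0.05) are assumptions.

```python
import math

def p_all_below(mu, n, b, m0, mmax):
    """P(maximum of n doubly-truncated-GR magnitudes <= mu) for upper bound mmax."""
    f = (1 - 10 ** (-b * (mu - m0))) / (1 - 10 ** (-b * (mmax - m0)))
    return f ** n

def mmax_upper_ci(mu, n, b=1.0, m0=4.0, alpha=0.05, m_hi=12.0):
    """Upper confidence limit for m_max given observed maximum mu and n events.
    Returns None when the interval is unbounded (no information gained)."""
    # Unbounded case: even m_max -> infinity leaves P(all n events <= mu) > alpha.
    if (1 - 10 ** (-b * (mu - m0))) ** n > alpha:
        return None
    lo, hi = mu, m_hi                 # bisect for P(all <= mu) = alpha
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if p_all_below(mu, n, b, m0, mid) > alpha:
            lo = mid                  # P still too high: crossing lies at larger m_max
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(mmax_upper_ci(6.0, 100))   # sparse catalog: unbounded interval
print(mmax_upper_ci(6.0, 500))   # richer catalog: finite upper limit
```

The first call illustrates the "no information gained" case described in the abstract: with too few events, even an infinite m_max cannot be excluded at the 95% level.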

  4. Real-time magnitude estimation and rapid fault characterization with GPS data for Earthquake Early Warning applications

    NASA Astrophysics Data System (ADS)

    Colombelli, S.; Allen, R. M.; Zollo, A.

    2012-12-01

    The combined use of seismic and geodetic observations is now common practice for finite-fault modeling and seismic source parametrization. With the advent of high-rate (1 Hz) GPS stations, the seismological community has begun to look at ways to include GPS data in Earthquake Early Warning (EEW) algorithms. GPS stations record ground displacement without any risk of saturation and without the need for baseline or other corrections. Thus, geodetic displacement time series complement the high-frequency information provided by seismic data. In the standard approaches to early warning, the initial portion of the P-wave signal is used to rapidly characterize the earthquake magnitude and to predict the expected ground shaking at a target site before damaging waves arrive. Whether the final magnitude of an earthquake can be predicted while the rupture process is still underway remains controversial; the limitations of the standard approaches when applied to giant earthquakes became evident after the experience of the Mw 9.0, 2011 Tohoku-Oki earthquake. Here we explore the application of GPS data to EEW and investigate whether the co-seismic ground deformation can be used to provide fast and reliable magnitude estimations. We implemented an algorithm to extract the permanent static offset from GPS displacement time series; the static displacement is then used to invert for the slip distribution on the fault plane, using a constant-slip, rectangular source embedded in a homogeneous half-space. We developed an efficient real-time static slip inversion scheme for both the rapid determination of the event size and the near-real-time estimation of the rupture area. This allows for a correct evaluation of the expected ground shaking at the target sites, which represents, without doubt, the most important aspect of the practical implementation of an early warning system and the most relevant information to be provided to non-expert end-users.
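The constant-slip rectangular source described above ultimately yields a seismic moment, from which the magnitude follows. A minimal sketch of that last step, with an assumed crustal shear modulus and made-up fault dimensions; the inversion for slip itself is not shown.

```python
import math

MU = 3.0e10   # assumed shear modulus, Pa (typical crustal value)

def moment_magnitude(length_km, width_km, slip_m, mu=MU):
    """Mw for a uniform-slip rectangular source: M0 = mu * A * D,
    Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = mu * (length_km * 1e3) * (width_km * 1e3) * slip_m   # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative Tohoku-scale numbers: a 400 km x 150 km fault, 15 m average slip
print(f"Mw = {moment_magnitude(400, 150, 15):.1f}")
```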

  5. Statistical relations among earthquake magnitude, surface rupture length, and surface fault displacement

    USGS Publications Warehouse

    Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.

    1984-01-01

    In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating M with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of MS on rupture area did not result in a marked improvement.
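An ordinary-least-squares fit of magnitude on log rupture length, of the kind described above, looks like this in miniature. The (L, M) pairs are made up for illustration; they are not the paper's 58-event dataset.

```python
import math

# Illustrative (rupture length km, magnitude) pairs -- NOT the paper's data.
data = [(10, 6.0), (30, 6.6), (60, 7.0), (120, 7.4), (300, 7.9)]

x = [math.log10(L) for L, _ in data]
y = [M for _, M in data]
n = len(data)
xbar, ybar = sum(x) / n, sum(y) / n

# Ordinary least squares for M = a + b * log10(L)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
print(f"M = {a:.2f} + {b:.2f} * log10(L)")
```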

  6. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, the historical record reveals that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate potential seismic hazard in the Korean Peninsula, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of a broadband seismogram shows less scatter than its peak velocity, while the scatters of the peak displacement and the peak velocity of accelerograms are similar to each other. The peak displacement of a seismogram differs from that of an accelerogram, which means that separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave is estimated after applying low-pass filters at 3 Hz and 10 Hz; the 10 Hz filter yields better estimates than the 3 Hz filter. It is found that most of the peak amplitudes and maximum predominant periods are estimated within 1 sec after triggering.

  7. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body‐wave onset and the arrival time of the peak high‐frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2logTop for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high‐frequency (>2  Hz) data, the root mean square (rms) residual between Mw and MTop(M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high‐frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower‐frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200  km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high‐frequency (>2  Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
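The proportionality Mw ∝ 2 log10(Top) leaves only an intercept to calibrate. A sketch with hypothetical (Top, Mw) calibration pairs; the intercept value is illustrative, not the paper's.

```python
import math

# Hypothetical (Top in seconds, Mw) calibration pairs -- illustrative only.
pairs = [(3.0, 5.1), (10.0, 6.0), (30.0, 7.1)]

# The paper's scaling: Mw is proportional to 2*log10(Top);
# only the intercept c is fitted (least squares reduces to the mean residual).
c = sum(mw - 2 * math.log10(top) for top, mw in pairs) / len(pairs)

def m_top(top_seconds):
    """Magnitude estimate from the arrival time of the peak amplitude, Top."""
    return 2 * math.log10(top_seconds) + c

print(f"intercept c = {c:.2f}; Top = 100 s -> M {m_top(100):.1f}")
```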

  8. Earthquake Declustering via a Nearest-Neighbor Approach in Space-Time-Magnitude Domain

    NASA Astrophysics Data System (ADS)

    Zaliapin, I. V.; Ben-Zion, Y.

    2016-12-01

    We propose a new method for earthquake declustering based on nearest-neighbor analysis of earthquakes in the space-time-magnitude domain. The nearest-neighbor approach was recently applied to a variety of seismological problems that validate the general utility of the technique and reveal the existence of several different robust types of earthquake clusters. Notably, it was demonstrated that the clustering associated with the largest earthquakes is statistically different from that of small-to-medium events. In particular, the characteristic bimodality of the nearest-neighbor distances that helps separate clustered and background events is often violated after the largest earthquakes in their vicinity, which is dominated by triggered events. This prevents the use of a simple threshold between the two modes of the nearest-neighbor distance distribution for declustering. The current study resolves this problem, extending the nearest-neighbor approach to earthquake declustering. The proposed technique is applied to the seismicity of different areas in California (San Jacinto, Coso, Salton Sea, Parkfield, Ventura, Mojave, etc.), as well as to global seismicity, to demonstrate its stability and efficiency in treating various clustering types. The results are compared with those of alternative declustering methods.
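The space-time-magnitude proximity underlying this family of methods is commonly written η_ij = t_ij · r_ij^df · 10^(−b·m_i), where event i is the earlier (potential parent) event. A minimal sketch with an assumed b-value, fractal dimension, and a toy three-event catalog; the distance floor is an implementation convenience, not part of the method.

```python
import math

B, DF = 1.0, 1.6   # assumed b-value and fractal dimension of epicenters

def nn_distance(catalog, j):
    """Nearest-neighbor proximity of event j to all earlier events.
    catalog: list of (t_days, x_km, y_km, magnitude) tuples."""
    tj, xj, yj, _ = catalog[j]
    best = math.inf
    for ti, xi, yi, mi in catalog[:j]:
        dt = tj - ti
        if dt <= 0:
            continue
        r = math.hypot(xj - xi, yj - yi)
        # eta = time gap * (epicentral distance)^df * 10^(-b * parent magnitude)
        eta = dt * max(r, 0.1) ** DF * 10 ** (-B * mi)   # floor r to avoid eta = 0
        best = min(best, eta)
    return best

catalog = [(0.0, 0.0, 0.0, 5.5),      # mainshock
           (0.5, 1.0, 0.5, 3.0),      # close in space-time: small eta (clustered)
           (200.0, 80.0, 60.0, 3.1)]  # far away in space-time: large eta (background)
print(nn_distance(catalog, 1), nn_distance(catalog, 2))
```

The bimodality discussed in the abstract arises because clustered events (like event 1 here) produce η values orders of magnitude smaller than background events (like event 2).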

  9. Numerical simulation of the Kamaishi repeating earthquake sequence: Change in magnitude due to the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Yoshida, Shingo; Kato, Naoyuki; Fukuda, Jun'ichi

    2015-05-01

    We conducted numerical simulations of a repeating earthquake sequence on the plate boundary off the shore of Kamaishi, northeastern Japan, assuming rate- and state-dependent friction laws. Uchida et al. (2014) reported that the Kamaishi repeating earthquakes showed an increase in magnitude and a decrease in recurrence interval after the 2011 Tohoku-oki earthquake (M9), but an approximately constant magnitude (M ~ 4.9) and a regular recurrence interval (~ 5.5 years) before the Tohoku-oki earthquake. A M5.9 event occurred just after the M9 event and was followed by a M5.5 event. We considered a fault patch of velocity-weakening friction, with frictional parameters chosen so that seismic slip is confined to the central part of the patch. Afterslip due to the M9 event was included in the model to increase the loading rate on the patch. The simulation successfully reproduced the increase in magnitude and decrease in recurrence time caused by the afterslip. A M6-class event, in which seismic slip spread over the entire area of the patch, occurred just after the M9 event for both the aging law and the Nagata law. When we assumed the aging law with frictional parameters near the boundary between slip on the entire patch and slip confined to its central part, the M6-class event was followed by a M5.5-class event. Furthermore, we examined a conditionally stable large patch containing a small unstable patch; this model also reproduced a M6-class event after the M9 event. In these models, stress outside the confined area of the patch is released before a dynamic event under a constant low loading rate, whereas the stress perturbation due to afterslip within the seismic cycle induces a dynamic event on the entire patch.
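The velocity-weakening condition that makes the patch seismic can be stated compactly at steady state. The abstract's simulations use the full aging and Nagata evolution laws; this sketch shows only the steady-state rate-and-state friction law, with illustrative parameter values (a < b gives weakening, hence potential instability).

```python
import math

# Illustrative rate-and-state parameters: a < b => velocity weakening
MU0, A, B_RS, V0 = 0.6, 0.010, 0.015, 1e-6   # reference friction, a, b, V0 (m/s)

def mu_steady_state(v):
    """Steady-state friction mu_ss = mu0 + (a - b) * ln(v / V0).
    A negative (a - b) means friction drops as slip speeds up."""
    return MU0 + (A - B_RS) * math.log(v / V0)

for v in (1e-6, 1e-3):
    print(f"v = {v:g} m/s -> mu_ss = {mu_steady_state(v):.4f}")
```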

  10. The role of the Federal government in the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Filson, J.R.

    1988-01-01

    Earthquake prediction research in the United States is carried out under the aegis of the National Earthquake Hazards Reduction Act of 1977. One of the objectives of the act is "the implementation in all areas of high or moderate seismic risk, of a system (including personnel and procedures) for predicting damaging earthquakes and for identifying, evaluating, and accurately characterizing seismic hazards." Among the four Federal agencies working under the 1977 act, the U.S. Geological Survey (USGS) is responsible for earthquake prediction research and technological implementation. The USGS has adopted a goal that is stated quite simply: predict the time, place, and magnitude of damaging earthquakes. The Parkfield earthquake prediction experiment represents the most concentrated and visible effort to date to test progress toward this goal.

  11. Seismomagnetic observation during the 8 July 1986 magnitude 5.9 North Palm Springs earthquake

    USGS Publications Warehouse

    Johnston, M.J.S.; Mueller, R.J.

    1987-01-01

    A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period.

  12. HYPOELLIPSE; a computer program for determining local earthquake hypocentral parameters, magnitude, and first-motion pattern

    USGS Publications Warehouse

    Lahr, John C.

    1999-01-01

    This report provides Fortran source code and program manuals for HYPOELLIPSE, a computer program for determining hypocenters and magnitudes of near regional earthquakes and the ellipsoids that enclose the 68-percent confidence volumes of the computed hypocenters. HYPOELLIPSE was developed to meet the needs of U.S. Geological Survey (USGS) scientists studying crustal and sub-crustal earthquakes recorded by a sparse regional seismograph network. The program was extended to locate hypocenters of volcanic earthquakes recorded by seismographs distributed on and around the volcanic edifice, at elevations above and below the hypocenter. HYPOELLIPSE was used to locate events recorded by the USGS southern Alaska seismograph network from October 1971 to the early 1990s. Both UNIX and PC/DOS versions of the source code of the program are provided along with sample runs.

  13. Seismomagnetic observation during the 8 july 1986 magnitude 5.9 north palm springs earthquake.

    PubMed

    Johnston, M J; Mueller, R J

    1987-09-04

    A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period.

  14. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  15. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    NASA Astrophysics Data System (ADS)

    Noda, Shunta; Ellsworth, William L.

    2016-09-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  16. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium, and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit the observations well. Hence, the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split the TED of a seismic region into TEDs of subregions with equal parameters except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects rather than natural units, and it also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness; the ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
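The mixing weakness described above can be made concrete: averaging two TEDs that differ only in their upper bound does not yield a TED. A sketch with assumed parameters (b = 1, m0 = 4, upper bounds 7 and 8).

```python
import math

def ted_cdf(m, b, m0, mmax):
    """CDF of the doubly truncated exponential (Gutenberg-Richter) distribution."""
    beta = b * math.log(10)
    return (1 - math.exp(-beta * (m - m0))) / (1 - math.exp(-beta * (mmax - m0)))

def mixture_cdf(m, b=1.0, m0=4.0):
    """Equal-weight mix of two TEDs differing only in the upper bound magnitude."""
    return 0.5 * ted_cdf(min(m, 7.0), b, m0, 7.0) + 0.5 * ted_cdf(min(m, 8.0), b, m0, 8.0)

# For m0 < m < 8 the mixture lies strictly between the two component CDFs,
# so no single (b, m0, mmax) triple reproduces it: the mixture is not a TED.
print(mixture_cdf(6.5), mixture_cdf(7.0), mixture_cdf(8.0))
```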

  17. Earthquakes Magnitude Prediction Using Artificial Neural Network in Northern Red Sea Area

    NASA Astrophysics Data System (ADS)

    Alarifi, A. S.; Alarifi, N. S.

    2009-12-01

    Earthquakes are natural hazards that do not happen very often, yet they may cause huge losses in life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16,000 events cover a time span of 1970 to 2009; magnitudes range from greater than 0 to less than 7.2, and depths range from greater than 0 to less than 100 km. We propose a new artificial intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a new feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for other areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as moving averages over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods: it achieves an average absolute error of 2.6%, compared with 3.8%, 7.3%, and 6.17% for the moving average, linear regression, and cubic regression, respectively. In this work, we show an analysis

  18. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first; then the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal- or reverse-faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
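Alarm-based predictions like M8/MSc are typically scored with contingency-table measures such as the probability of detection, false alarm ratio, and true skill (R) score. A sketch using one common set of definitions; the counts are hypothetical, not the paper's bookkeeping.

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Standard forecast-verification scores for alarm-based prediction."""
    pod = hits / (hits + misses)                       # probability of detection
    far = false_alarms / (hits + false_alarms)         # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    r_score = pod - pofd                               # true skill (Hanssen-Kuipers)
    return pod, far, r_score

# Hypothetical bookkeeping: 5 target events all predicted (as in the M8+ case),
# with a few false alarms among many quiet space-time cells.
print(skill_scores(hits=5, misses=0, false_alarms=3, correct_negatives=42))
```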

  19. Analytical Conditions for Compact Earthquake Prediction Approaches

    NASA Astrophysics Data System (ADS)

    Sengor, T.

    2009-04-01

    The atmosphere and ionosphere contain non-uniform electric charge and current distributions during earthquake activity. These charges and currents move irregularly in the period preceding a future earthquake. The electromagnetic characteristics of the region above the Earth change in the domains where irregular transport of non-uniform electric charges is observed; therefore, the electromagnetism of plasma that moves irregularly and contains non-uniform charge distributions is studied. Such plasmas are called irregular and non-uniform plasmas; an irregular and non-uniform plasma is called a seismo-plasma if it corresponds to a real earthquake activity that will actually occur. Some signals involving the above-mentioned coupling effects generate analytical conditions giving the predictability of seismic processes [1]-[5]. These conditions are discussed in this paper. References: [1] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes," IUGG Perugia 2007. [2] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes for Marmara Sea earthquakes," EGU 2008. [3] T. Sengor, "On the exact interaction mechanism of electromagnetically generated phenomena with significant earthquakes and the observations related the exact predictions before the significant earthquakes at July 1999-May 2000 period," Helsinki Univ. Tech. Electrom. Lab. Rept. 368, May 2001. [4] T. Sengor, "The Observational Findings Before The Great Earthquakes Of December 2004 And The Mechanism Extraction From Associated Electromagnetic Phenomena," Book of XXVIIIth URSI GA 2005, pp. 191, EGH.9 (01443) and Proceedings 2005 CD, New Delhi, India, Oct. 23-29, 2005. [5] T. Sengor, "The interaction mechanism among electromagnetic phenomena and geophysical-seismic-ionospheric phenomena with extraction for exact earthquake prediction genetics," 10

  20. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors. Journal compilation © 2006 RAS.

  1. Earthquake frequency-magnitude distribution and fractal dimension in mainland Southeast Asia

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi; Choowong, Montri

    2014-12-01

    The 2004 Sumatra and 2011 Tohoku earthquakes highlighted the need for a more accurate understanding of earthquake characteristics in both regions. In this study, both the a and b values of the frequency-magnitude distribution (FMD) and the fractal dimension (D_C) were investigated simultaneously for 13 seismic source zones recognized in mainland Southeast Asia (MLSEA). Using the part of the earthquake dataset above the magnitude of completeness, the calculated values of b and D_C were found to imply variations in seismotectonic stress. The relationships D_C-b and D_C-(a/b) were investigated to categorize the level of earthquake hazard of individual seismic source zones; the calibration curves illustrate a negative correlation between the D_C and b values (D_C = 2.80 - 1.22b) and a positive correlation between the D_C values and a/b ratios (D_C = 0.27(a/b) - 0.01), with similar regression coefficients (R² = 0.65 to 0.68) for both regressions. According to the obtained relationships, the Hsenwi-Nanting and Red River fault zones revealed low stress accumulations. Conversely, the Sumatra-Andaman interplate and intraslab zones, the Andaman Basin, and the Sumatra fault zone were identified as high-tectonic-stress regions that may pose risks of generating large earthquakes in the future.
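    The b value underlying relations such as D_C = 2.80 - 1.22b is commonly estimated by maximum likelihood. A minimal sketch, assuming the standard Aki (1965) estimator without a magnitude-binning correction; the magnitude list is illustrative:

```python
import math

def b_value_mle(mags, mc):
    """Aki (1965) maximum-likelihood b-value for events at or above
    the completeness magnitude mc (no binning correction applied)."""
    above = [m for m in mags if m >= mc]
    return math.log10(math.e) / (sum(above) / len(above) - mc)

def fractal_dim_from_b(b):
    """Empirical D_C-b calibration reported for mainland Southeast
    Asia in the abstract above."""
    return 2.80 - 1.22 * b

# Hypothetical magnitude sample, complete above mc = 2.5
mags = [2.5, 2.6, 2.8, 3.0, 3.3, 3.7, 4.2, 5.1]
b = b_value_mle(mags, mc=2.5)
print(round(b, 2), round(fractal_dim_from_b(b), 2))
```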

  2. A moment-tensor catalog for intermediate magnitude earthquakes in Mexico

    NASA Astrophysics Data System (ADS)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Martínez-Peláez, Liliana; Franco, Sara; Iglesias Mendoza, Arturo

    2016-04-01

    Located among five tectonic plates, Mexico is one of the world's most seismically active regions. Earthquake focal mechanisms provide important information on the active tectonics. A widespread technique for estimating the earthquake magnitude and focal mechanism is the inversion for the moment tensor, obtained by minimizing a misfit function that estimates the difference between synthetic and observed seismograms. An important element in the estimation of the moment tensor is an appropriate velocity model, which allows for the calculation of accurate Green's functions so that the differences between observed and synthetic seismograms are due to the source of the earthquake rather than to the velocity model. However, calculating accurate synthetic seismograms gets progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes (M>5.0) excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these events, using 1D velocity models to compute Green's functions works well, and they are well characterized by seismic moment tensors reported in global catalogs (e.g., USGS fast moment tensor solutions and GCMT). The opposite occurs for small and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle. Accurately modeling the Green's functions for the smaller events in a large heterogeneous area requires 3D or regionalized 1D models. To obtain a rapid estimate of earthquake magnitude, the National Seismological Survey in Mexico (Servicio Sismológico Nacional, SSN) automatically calculates seismic moment tensors for events in the Mexican territory (Franco et al., 2002; Nolasco-Carteño, 2006). However, for intermediate-magnitude and small earthquakes the signal-to-noise ratio can be low at many of the seismic stations, and without careful selection and filtering of the data, obtaining a stable focal mechanism

  3. Intermediate- and long-term earthquake prediction.

    PubMed

    Sykes, L R

    1996-04-30

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study.

  4. Intermediate- and long-term earthquake prediction.

    PubMed Central

    Sykes, L R

    1996-01-01

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study. PMID: 11607658

  5. Frequency-magnitude statistics and spatial correlation dimensions of earthquakes at Long Valley caldera, California

    USGS Publications Warehouse

    Barton, D.J.; Foulger, G.R.; Henderson, J.R.; Julian, B.R.

    1999-01-01

    Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.

  6. Estimating locations and magnitudes of earthquakes in southern California from modified Mercalli intensities

    USGS Publications Warehouse

    Bakun, W.H.

    2006-01-01

    Modified Mercalli intensity (MMI) assignments, instrumental moment magnitudes M, and epicenter locations of thirteen 5.6 ≤ M ≤ 7.1 "training-set" events in southern California were used to obtain the attenuation relation MMI = 1.64 + 1.41M - 0.00526Δh - 2.63 log Δh, where Δh is the hypocentral distance in kilometers and M is moment magnitude. Intensity magnitudes MI and locations for five 5.9 ≤ M ≤ 7.3 independent test events were consistent with the instrumental source parameters. Fourteen "historical" earthquakes between 1890 and 1927 were then analyzed. Of particular interest are the MI 7.2 9 February 1890 and MI 6.6 28 May 1892 earthquakes, which were previously assumed to have occurred near the southern San Jacinto fault; a more likely location is in the Eastern California Shear Zone (ECSZ). These events, and the 1992 M 7.3 Landers and 1999 M 7.1 Hector Mine events, suggest that the ECSZ has been seismically active since at least the end of the nineteenth century. The earthquake catalog completeness level in the ECSZ is ≥ M 6.5 at least until the early twentieth century.
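    Given an attenuation relation of this form, an intensity magnitude can be sketched by inverting MMI at each observation site and averaging. This is a bare-bones illustration of the idea, omitting the Bakun-Wentworth grid search over trial epicenters; the MMI/distance pairs are hypothetical:

```python
import math

# Attenuation model quoted in the abstract above:
# MMI = 1.64 + 1.41*M - 0.00526*dh - 2.63*log10(dh), dh in km.
def intensity_magnitude(observations):
    """Average the magnitude implied by each (MMI, hypocentral
    distance) pair -- a simplified sketch of an intensity
    magnitude M_I, without the epicenter grid search."""
    def m_from_mmi(mmi, dh):
        return (mmi - 1.64 + 0.00526 * dh + 2.63 * math.log10(dh)) / 1.41
    return sum(m_from_mmi(mmi, dh) for mmi, dh in observations) / len(observations)

# Hypothetical MMI assignments at several hypocentral distances (km)
obs = [(7.0, 15.0), (6.0, 40.0), (5.0, 90.0), (4.0, 160.0)]
print(round(intensity_magnitude(obs), 2))
```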

  7. Reported geomagnetic and ionospheric precursors to earthquakes: Summary, reanalysis, and implications for short-term prediction

    NASA Astrophysics Data System (ADS)

    Thomas, J. N.; Masci, F.; Love, J. J.; Johnston, M. J.

    2012-12-01

    Earthquakes are one of the most devastating natural phenomena on Earth, causing high death tolls and large financial losses each year. If precursory signals could be regularly and reliably identified, then the hazardous effects of earthquakes might be mitigated. Unfortunately, it is not at all clear that short-term earthquake prediction is either possible or practical, and the entire subject remains controversial. Still, many claims of successful earthquake precursor observations have been published, and among these are reports of geomagnetic and ionospheric anomalies prior to earthquake occurrence. Given the importance of earthquake prediction, reports of earthquake precursors need to be analyzed and checked for reliability and reproducibility. We have done this for numerous such reports, including the Loma Prieta, Guam, Hector Mine, Tohoku, and L'Aquila earthquakes. We have found that these reported earthquake precursors: 1) often lack time series observations from long before and long after the earthquakes and near and far from the earthquakes, 2) are not statistically correlated with the earthquakes and do not relate to the earthquake source mechanisms, 3) are not followed by similar, but much larger, signals during the subsequent earthquake when the primary energy release occurs, 4) are nonuniform in that they occur at different spatial and temporal regimes relative to the earthquakes and with different magnitudes and frequencies, and 5) can often be explained by other non-earthquake related mechanisms or normal geomagnetic activity. Thus we conclude that these reported precursors could not be used to predict the time or location of the earthquakes. Based on our findings, we suggest a protocol for examining precursory reports, something that will help guide future research in this area.

  8. Prospects for earthquake prediction and control

    USGS Publications Warehouse

    Healy, J.H.; Lee, W.H.K.; Pakiser, L.C.; Raleigh, C.B.; Wood, M.D.

    1972-01-01

    The San Andreas fault is viewed, according to the concepts of seafloor spreading and plate tectonics, as a transform fault that separates the Pacific and North American plates and along which relative movements of 2 to 6 cm/year have been taking place. The resulting strain can be released by creep, by earthquakes of moderate size, or (as near San Francisco and Los Angeles) by great earthquakes. Microearthquakes, as mapped by a dense seismograph network in central California, generally coincide with zones of the San Andreas fault system that are creeping. Microearthquakes are few and scattered in zones where elastic energy is being stored. Changes in the rate of strain, as recorded by tiltmeter arrays, have been observed before several earthquakes of about magnitude 4. Changes in fluid pressure may control timing of seismic activity and make it possible to control natural earthquakes by controlling variations in fluid pressure in fault zones. An experiment in earthquake control is underway at the Rangely oil field in Colorado, where the rates of fluid injection and withdrawal in experimental wells are being controlled. © 1972.

  9. Ground Motion Prediction Equation for Earthquakes in Oklahoma

    NASA Astrophysics Data System (ADS)

    Yenier, E.; Atkinson, G. M.; Baturan, D.

    2016-12-01

    A significant increase in seismic activity has been observed in Oklahoma since 2010. Although it is difficult to categorize these earthquakes as natural or induced on an individual basis, most of them are believed to be related to changes in stress conditions due to large-scale wastewater injection in the region. The growing seismic activity has prompted reassessment of the earthquake hazard in Oklahoma and southern Kansas. Prediction of ground motions that may be produced by potential future events constitutes one of the key components in seismic hazard assessment. In this study, we develop a ground motion prediction equation (GMPE), using a rich earthquake dataset distributed over a wide area of Oklahoma. To this end, we use a "plug-and-play" generic GMPE that can be adjusted for use in any region by modifying a few key model parameters. We investigate the region-specific source and attenuation properties using recorded peak ground motions and response spectra. We determine stress parameters based on the observed spectral shape, and compare to those from naturally occurring earthquakes in central and eastern North America. We also examine the spatial and temporal variation of stress parameters to gain insights into the source characteristics of induced events in the region. We adjust the generic GMPE using the source and attenuation parameters and a calibration factor calculated from empirical data. The derived model can be used for prediction of ground motions in Oklahoma for a wide range of magnitudes and distances.
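    A "plug-and-play" GMPE of the kind described can be sketched as separable source, path, and calibration terms in log space. All coefficients below are placeholders for illustration, not the model derived in the study:

```python
import math

def generic_gmpe_ln_psa(mag, dist_km, stress_adj=0.0, cal=0.0,
                        c0=-2.0, c1=1.2, c2=-1.4, c3=-0.005):
    """Hedged sketch of a generic GMPE: a magnitude (source) term, a
    geometrical-spreading plus anelastic-attenuation (path) term, and
    additive adjustments for a regional stress parameter and an
    empirical calibration factor. All coefficients are hypothetical."""
    source = c0 + c1 * mag + stress_adj     # source scaling with magnitude
    path = c2 * math.log(dist_km) + c3 * dist_km  # spreading + attenuation
    return source + path + cal

# Larger magnitude or shorter distance should raise predicted motion.
print(generic_gmpe_ln_psa(5.0, 10.0), generic_gmpe_ln_psa(5.0, 50.0))
```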

  10. Microseismic Network Performance Estimation: Comparing Predictions to an Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Greig, Wesley; Ackerley, Nick

    2014-05-01

    The design of networks for monitoring induced seismicity is of critical importance as specific standards of performance are necessary. One of the difficulties involved in designing networks for monitoring induced seismicity is that it is difficult to determine whether or not the network meets these standards without first developing an earthquake catalog. We develop a tool that can assess two key measures of network performance without an earthquake catalog: location accuracy and magnitude of completeness. Site noise is measured either at existing seismic stations or as part of a noise survey. We then interpolate measured values to determine a noise map for the entire region. This information is combined with instrument noise for each station to accurately assess total ambient noise at each station. Location accuracy is evaluated according to the approach of Peters and Crosson (1972). Magnitude of completeness is computed by assuming isotropic radiation and mandating a threshold signal to noise ratio (similar to Stabile et al. 2013). We apply this tool to a seismic network in the central United States. We predict the magnitude of completeness and the location accuracy and compare predicted values with observed values generated from the existing earthquake catalog for the network. We investigate the effects of hypothetical station additions and removals to a network to simulate network expansions and station failures. We find that the addition of stations to areas of low noise results in significantly larger improvements in network performance than station additions to areas of elevated noise, particularly with respect to magnitude of completeness. Our results highlight the importance of site noise considerations in the design of a seismic network. The ability to predict hypothetical station performance allows for the optimization of seismic network design and enables the prediction of performance for a purely hypothetical seismic network. If near real
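    The magnitude-of-completeness side of such a performance estimate can be sketched as follows: assume isotropic radiation and a generic amplitude model, and find the smallest magnitude seen above a signal-to-noise threshold by a required number of stations. The amplitude model, its coefficients, and the station values are all hypothetical:

```python
import math

def min_detectable_mag(dists_km, noise_levels, snr=3.0, n_required=4,
                       q=1.5, c=0.0):
    """Sketch of a completeness-magnitude estimate at one grid point.
    Assumes isotropic radiation and a generic amplitude model
    log10(A) = M - q*log10(r) - c (q and c are placeholder values, not
    calibrated constants), and requires that at least n_required
    stations record the event above snr times their ambient noise."""
    thresholds = sorted(
        math.log10(snr * noise) + q * math.log10(d) + c
        for d, noise in zip(dists_km, noise_levels))
    # The n-th smallest threshold is the smallest magnitude recorded
    # above the SNR criterion by at least n stations.
    return thresholds[n_required - 1]

# Hypothetical station distances (km) and ambient noise (arbitrary units)
dists = [5.0, 12.0, 20.0, 35.0, 60.0]
noise = [2.0, 1.5, 4.0, 2.5, 3.0]
print(round(min_detectable_mag(dists, noise), 2))
```

    Consistent with the abstract's finding, lowering a station's noise value here directly lowers its detection threshold, and hence can lower the completeness magnitude at the grid point.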

  11. Artificial neural network model for earthquake prediction with radon monitoring.

    PubMed

    Külahci, Fatih; Inceöz, Murat; Doğru, Mahmut; Aksoy, Ercan; Baykara, Oktay

    2009-01-01

    Apart from linear monitoring studies concerning the relationship between radon and earthquakes, an artificial neural network (ANN) model approach is presented, starting out from the non-linear changes of eight different parameters during earthquake occurrence. A three-layer Levenberg-Marquardt feedforward learning algorithm is used to model the earthquake prediction process in the East Anatolian Fault System (EAFS). The proposed ANN system employs an individual training strategy with fixed-weight and supervised models leading to estimates. The average relative error between the magnitudes of the earthquakes obtained by the ANN and the measured data is about 2.3%. The relative error between the test and earthquake data varies between 0% and 12%. In addition, factor analysis was applied to all data and the model output values to examine the statistical variation. A total variance of 80.18% was explained by four factors in this analysis. Consequently, it can be concluded that the ANN approach is a potential alternative to other models involving complex mathematical operations.

  12. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  14. Stress drop in the sources of intermediate-magnitude earthquakes in northern Tien Shan

    NASA Astrophysics Data System (ADS)

    Sycheva, N. A.; Bogomolov, L. M.

    2014-05-01

    The paper is devoted to estimating the dynamic parameters of 14 earthquakes of intermediate magnitude (energy class 11 to 14) that occurred in the northern Tien Shan. To obtain estimates of these parameters, including the stress drop, which can then be applied in crustal stress reconstruction by the technique suggested by Yu.L. Rebetsky (Schmidt Institute of Physics of the Earth, Russian Academy of Sciences), we improved the algorithms and programs for calculating the spectra of the seismograms. The updated codes account for site responses and for spectral transformations during the propagation of seismic waves through the medium (the effect of a finite Q-factor). By applying the new approach to the analysis of seismograms recorded by the KNET seismic network, we calculated the source radii (Brune radius), scalar seismic moments, and stress drops (releases) for the 14 studied earthquakes. The analysis revealed a scatter in the source radii and stress drops even among earthquakes of almost identical energy class. The stress drop for different earthquakes ranges from 1 to 75 bar. We also determined the focal mechanisms and the stress regime of the Earth's crust. It is worth noting that during the considered period, strong seismic events with energy class above 14 were absent within the segment covered by the KNET stations.
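    The Brune radius and stress drop mentioned above follow from the standard Brune (1970) relations, r = 2.34*beta/(2*pi*fc) and delta_sigma = 7*M0/(16*r^3). A minimal sketch with an assumed shear-wave speed and a hypothetical event:

```python
import math

def brune_radius_m(fc_hz, beta_m_s=3500.0):
    """Brune (1970) source radius from corner frequency fc, with
    shear-wave speed beta (the value used here is an assumption)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop_bar(m0_nm, r_m):
    """Static stress drop, delta_sigma = 7*M0/(16*r^3), converted
    from Pa to bar (1 bar = 1e5 Pa)."""
    return 7.0 * m0_nm / (16.0 * r_m**3) / 1e5

# Hypothetical event: M0 = 1e15 N*m (roughly Mw 4), corner frequency 2 Hz
r = brune_radius_m(2.0)
print(round(r, 1), round(stress_drop_bar(1e15, r), 1))
```

    For these hypothetical values the stress drop comes out on the order of tens of bars, i.e. inside the 1-75 bar range reported above.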

  15. Strong nonlinear dependence of the spectral amplification factors on deep Vrancea earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Marmureanu, Gheorghe; Ortanza Cioflan, Carmen; Marmureanu, Alexandru

    2010-05-01

    Nonlinear effects in ground motion during large earthquakes have long been a controversial issue between seismologists and geotechnical engineers. Aki wrote in 1993: "Nonlinear amplification at sediment sites appears to be more pervasive than seismologists used to think… Any attempt at seismic zonation must take into account the local site condition and this nonlinear amplification" (Local site effects on weak and strong ground motion, Tectonophysics, 218, 93-111). In other words, the seismological detection of nonlinear site effects requires a simultaneous understanding of the effects of the earthquake source, the propagation path and the local geological site conditions. The difficulty for seismologists in demonstrating nonlinear site effects has been that the effect is overshadowed by the overall patterns of shock generation and path propagation. In order to provide quantitative evidence of large nonlinear effects, researchers from the National Institute for Earth Physics introduced the spectral amplification factor (SAF), defined as the ratio between the maximum spectral absolute acceleration (Sa), relative velocity (Sv) or relative displacement (Sd) from response spectra at a given fraction of critical damping at the fundamental period, and the peak values of acceleration (a-max), velocity (v-max) and displacement (d-max), respectively, from the processed strong-motion record; they pointed out that there is a strong nonlinear dependence on earthquake magnitude and site conditions. The spectral amplification factors (SAF) are computed for absolute accelerations at a 5% fraction of critical damping (β = 5%) at five seismic stations: Bucharest-INCERC (soft soils, Quaternary layers with a total thickness of 800 m); Bucharest-Magurele (dense sand and loess on 350 m); the Cernavoda Nuclear Power Plant site (marl, loess, limestone on 270 m); Bacau (gravel and loess on 20 m); and Iassy (loess, sand, clay, gravel on 60 m) for the last strong and deep Vrancea earthquakes: March 4, 1977 (MGR = 7.2 and h = 95 km); August 30
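    For acceleration, the SAF definition in the abstract reduces to the peak of the absolute-acceleration response spectrum divided by the peak ground acceleration. A minimal sketch with hypothetical spectrum ordinates:

```python
def spectral_amplification_factor(sa_spectrum, a_max):
    """SAF for acceleration: ratio of the maximum absolute-acceleration
    response-spectrum ordinate (e.g. at beta = 5% damping) to the peak
    ground acceleration of the processed record."""
    return max(sa_spectrum) / a_max

# Hypothetical response-spectrum ordinates (in g) and PGA (in g)
sa = [0.21, 0.35, 0.48, 0.40, 0.27]
print(spectral_amplification_factor(sa, a_max=0.16))  # ~3.0
```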

  16. Adjusting the M8 algorithm to earthquake prediction in the Iranian plateau

    NASA Astrophysics Data System (ADS)

    Mojarab, Masoud; Memarian, Hossein; Zare, Mehdi; Kossobokov, Vladimir

    2017-07-01

    Earthquake prediction is one of the challenging problems of seismology. The present study intended to set up a routine prediction of major earthquakes in the Iranian plateau using a modification of the intermediate-term middle-range algorithm M8, whose original version has demonstrated high performance in a real-time Global Test over the last two decades. An investigation of the earthquake catalog covering the entire Iranian plateau through 2012 has shown that a modification of the M8 algorithm, adjusted for the rather low level of earthquake occurrence reported in the region, is capable of targeting magnitude 7.5+ events. The occurrence of the April 16, 2013, M7.7 Saravan and the September 24, 2013, M7.7 Awaran earthquakes at the time of writing this paper (begun 14 months before the Saravan earthquake) confirmed the results of the investigation and demonstrated the need for further studies in this region. Earlier tests, applying M8 all over Iran, showed that the 2013 Saravan and Awaran earthquakes may precede a great earthquake of magnitude 8+ in the Makran region. To verify this statement, the M8 algorithm was applied once again to a catalog updated to September 2013. The result indicated that although the study region recently experienced two magnitude 7.5+ earthquakes, it remains prone to a major earthquake. The present study confirms the applicability of the M8 algorithm for predicting earthquakes in the Iranian plateau and establishes an opportunity for routine monitoring of seismic activity aimed at prediction of the largest earthquakes, which can play a significant role in mitigation of damage due to natural hazards.

  17. Adjusting the M8 algorithm to earthquake prediction in the Iranian plateau

    NASA Astrophysics Data System (ADS)

    Mojarab, Masoud; Memarian, Hossein; Zare, Mehdi; Kossobokov, Vladimir

    2017-03-01

    Earthquake prediction is one of the challenging problems of seismology. The present study intended to set up a routine prediction of major earthquakes in the Iranian plateau using a modification of the intermediate-term middle-range algorithm M8, whose original version has demonstrated high performance in a real-time Global Test over the last two decades. An investigation of the earthquake catalog covering the entire Iranian plateau through 2012 has shown that a modification of the M8 algorithm, adjusted for the rather low level of earthquake occurrence reported in the region, is capable of targeting magnitude 7.5+ events. The occurrence of the April 16, 2013, M7.7 Saravan and the September 24, 2013, M7.7 Awaran earthquakes at the time of writing this paper (begun 14 months before the Saravan earthquake) confirmed the results of the investigation and demonstrated the need for further studies in this region. Earlier tests, applying M8 all over Iran, showed that the 2013 Saravan and Awaran earthquakes may precede a great earthquake of magnitude 8+ in the Makran region. To verify this statement, the M8 algorithm was applied once again to a catalog updated to September 2013. The result indicated that although the study region recently experienced two magnitude 7.5+ earthquakes, it remains prone to a major earthquake. The present study confirms the applicability of the M8 algorithm for predicting earthquakes in the Iranian plateau and establishes an opportunity for routine monitoring of seismic activity aimed at prediction of the largest earthquakes, which can play a significant role in mitigation of damage due to natural hazards.

  18. An application of earthquake prediction algorithm M8 in eastern Anatolia at the approach of the 2011 Van earthquake

    NASA Astrophysics Data System (ADS)

    Mojarab, Masoud; Kossobokov, Vladimir; Memarian, Hossein; Zare, Mehdi

    2015-07-01

    On 23 October 2011, an M7.3 earthquake near the Turkish city of Van killed more than 600 people, injured over 4000, and left about 60,000 homeless. It demolished hundreds of buildings and caused great damage to thousands of others in Van, Ercis, Muradiye, and Çaldıran. The earthquake's epicenter is located about 70 km from that of a preceding M7.3 earthquake that occurred in November 1976, destroyed several villages near the Turkey-Iran border, and killed thousands of people. This study, by means of a retrospective application of the M8 algorithm, checks whether the 2011 Van earthquake could have been predicted. The algorithm is based on pattern recognition of Times of Increased Probability (TIP) of a target earthquake from the transient seismic sequence at lower magnitude ranges in a Circle of Investigation (CI). Specifically, we applied a modified M8 algorithm, adjusted to a rather low level of earthquake detection in the region, following three different approaches to determining seismic transients. In the first approach, CI centers are distributed on intersections of morphostructural lineaments recognized as prone to magnitude 7+ earthquakes. In the second approach, centers of CIs are distributed on local extremes of the seismic density distribution, and in the third approach, CI centers are distributed uniformly on the nodes of a 1°×1° grid. According to the results of the M8 algorithm application, the 2011 Van earthquake could have been predicted by any of the three approaches. We note that it is possible to consider the intersection of TIPs instead of their union to improve the certainty of the prediction results. Our study confirms the applicability of a modified version of the M8 algorithm for predicting earthquakes in the Iranian-Turkish plateau, as well as for mitigation of damage in seismic events, in which pattern recognition algorithms may play an important role.
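    Combining alarms from the three approaches by union versus intersection, as discussed above, is set algebra over space-time alarm cells. A toy sketch with hypothetical TIP sets (cell names and years invented for illustration):

```python
# Each approach yields a set of space-time cells currently in a state
# of alarm (TIP). Cells and years below are invented for illustration.
tips_lineaments = {("cell_A", 2010), ("cell_B", 2010), ("cell_B", 2011)}
tips_density    = {("cell_B", 2010), ("cell_B", 2011), ("cell_C", 2011)}
tips_grid       = {("cell_B", 2011), ("cell_D", 2010)}

# Union: alarm wherever any approach raises a TIP (more hits, larger
# alarm volume). Intersection: alarm only where all approaches agree
# (smaller alarm volume, fewer false alarms).
union = tips_lineaments | tips_density | tips_grid
intersection = tips_lineaments & tips_density & tips_grid
print(len(union), intersection)
```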

  19. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing the macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values had been assigned based on accounts of rock falls, soil failure, or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw 7.6. Assuming this magnitude is correct, which is supported independently by documented rupture parameters under standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results showing that Lg attenuation in the Basin and Range province is comparable to that in California.

  20. Empirical relations to convert magnitudes of the earthquake catalogue for the north western of Algeria

    NASA Astrophysics Data System (ADS)

    Belayadi, Ilyes; Bezzeghoud, Mourad; Fontiela, João; Nadji, Amansour

    2017-04-01

    North Algeria is one of the most seismically active regions of the western Mediterranean basin, and its seismicity is related to the boundary between the Eurasian and Nubian plates. We compiled an earthquake catalogue for northwestern Algeria, within the area 2°W-1°E and 34°N-37°N, for the time span 1790-2016. To compile the earthquake catalogue we merged all available national and international catalogues, then removed all duplicates and spurious earthquakes. The lower magnitude level of the catalogue entries is set at M = 2.5. The magnitudes reported in the catalogue are ML, Ms, Mb, Mw, and macroseismic intensity. We therefore developed new empirical relations to calculate Mw from the different magnitude types and from intensity, suited to the seismic hazard and geodynamic context of North Algeria. Acknowledgements: Ilyes Belayadi is funded entirely by the University of Oran 2 Mohamed Ben Ahmed (Algeria). This work is co-financed by the European Union through the European Regional Development Fund under COMPETE 2020 (Operational Program for Competitiveness and Internationalization) through the ICT project (UID/GEO/04683/2013) under the reference POCI-01-0145-FEDER-007690.
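    As an aside on the method above: empirical conversions such as ML→Mw are typically obtained by regressing paired magnitude estimates. A minimal sketch follows, with made-up magnitude pairs standing in for the catalogue data (the values and fitted coefficients are illustrative, not the relations derived in this work):

```python
import numpy as np

# Hypothetical sketch of fitting a linear ML -> Mw conversion, Mw = a*ML + b,
# by least squares. The paired magnitudes are illustrative placeholders,
# not values from the Algerian catalogue.
ml = np.array([3.1, 3.8, 4.2, 4.9, 5.5, 6.0])   # local magnitudes (example)
mw = np.array([3.4, 4.0, 4.3, 4.9, 5.4, 5.8])   # moment magnitudes (example)

a, b = np.polyfit(ml, mw, 1)                     # slope a, intercept b

def ml_to_mw(m):
    """Convert ML to Mw with the fitted linear relation."""
    return a * m + b

print(round(ml_to_mw(4.2), 2))                   # close to the observed 4.3
```

The fitted slope and intercept would then be applied to homogenize the catalogue to a single Mw scale.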

  1. Earthquake Magnitude: A Teaching Module for the Spreadsheets Across the Curriculum Initiative

    NASA Astrophysics Data System (ADS)

    Wetzel, L. R.; Vacher, H. L.

    2006-12-01

    Spreadsheets Across the Curriculum (SSAC) is a library of computer-based activities designed to reinforce or teach quantitative-literacy or mathematics concepts and skills in context. Each activity (called a "module" in the SSAC project) consists of a PowerPoint presentation with embedded Excel spreadsheets. Each module focuses on one or more problems for students to solve. Each student works through a presentation, thinks about the in-context problem, figures out how to solve it mathematically, and builds the spreadsheets to calculate and examine answers. The emphasis is on mathematical problem solving. The intention is for the in-context problems to span the entire range of subjects where quantitative thinking, number sense, and math non-anxiety are relevant. The self-contained modules aim to teach quantitative concepts and skills in a wide variety of disciplines (e.g., health care, finance, biology, and geology). For example, in the Earthquake Magnitude module students create spreadsheets and graphs to explore earthquake magnitude scales, wave amplitude, and energy release. In particular, students realize that earthquake magnitude scales are logarithmic. Because each step in magnitude represents a 10-fold increase in wave amplitude and approximately a 30-fold increase in energy release, large earthquakes are much more powerful than small earthquakes. The module has been used as laboratory and take-home exercises in small structural geology and solid earth geophysics courses with upper-level undergraduates. Anonymous pre- and post-tests assessed students' familiarity with Excel as well as other quantitative skills. The SSAC library consists of 27 modules created by a community of educators who met for one-week "module-making workshops" in Olympia, Washington, in July of 2005 and 2006. The educators designed the modules at the workshops both to use in their own classrooms and to make available for others to adopt and adapt at other locations and in other classes.
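    The module's central arithmetic can be sketched in a few lines of code (a hedged illustration of the standard scale relations, not part of the SSAC module itself):

```python
# Sketch of the logarithmic-scale arithmetic at the heart of the module: one
# magnitude unit is a 10-fold amplitude increase and, via the energy relation
# log10(E) = 1.5*M + const, a 10**1.5 (~31.6)-fold energy increase.
def amplitude_ratio(m1, m2):
    """Ratio of wave amplitudes between events of magnitude m2 and m1."""
    return 10.0 ** (m2 - m1)

def energy_ratio(m1, m2):
    """Ratio of radiated energies between events of magnitude m2 and m1."""
    return 10.0 ** (1.5 * (m2 - m1))

print(amplitude_ratio(5.0, 7.0))        # an M7 has 100x the amplitude of an M5
print(round(energy_ratio(5.0, 7.0)))    # and ~1000x the energy
```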

  2. Intermediate-term prediction in advance of the Loma Prieta earthquake

    SciTech Connect

    Keilis-Borok, V.I.; Kossobokov, V.; Rotvain, I.; Knopoff, L.

    1990-08-01

    The Loma Prieta earthquake of October 17, 1989 was predicted by the use of two pattern recognition algorithms, CN and M8. The prediction with algorithm CN was that an earthquake with magnitude greater than or equal to 6.4 was expected to occur in a roughly four-year interval starting in midsummer 1986, in a polygonal spatial window of approximate average dimensions 600 × 450 km encompassing Northern California and Northern Nevada. The prediction with algorithm M8 was that an earthquake with magnitude greater than or equal to 7.0 was expected to occur within 5 to 7 years after 1985, in a spatial window of approximate average dimensions 800 × 560 km. The predictions were communicated in advance of the earthquake. In previous, mainly retrospective applications of these algorithms, successful predictions occurred in about 80% of the cases.

  3. On some methods for assessing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.; Peresan, A.

    2017-09-01

    A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of the assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game-theoretic approach, the so-called parimutuel gambling (PG) method, is examined. The PG method, originally proposed for evaluating probabilistic earthquake forecasts, has recently been adapted to the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; it therefore deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

  4. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. The high spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification, and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone, and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than by exponential distributions, although they exhibit very high b values (≳5). We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (MW < 1.5, MW ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by

  5. Physically based prediction of earthquake induced landsliding

    NASA Astrophysics Data System (ADS)

    Marc, Odin; Meunier, Patrick; Hovius, Niels; Gorum, Tolga; Uchida, Taro

    2015-04-01

    Earthquakes are an important trigger of landslides and can contribute significantly to sedimentary or organic matter fluxes. We present a new physically based expression for the prediction of total area and volume of populations of earthquake-induced landslides. This model implements essential seismic processes, linking key parameters such as ground acceleration, fault size, earthquake source depth and seismic moment. To assess the model we have compiled and normalized a database of landslide inventories for 40 earthquakes. We have found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict within a factor of 2 the landslide areas and associated volumes for about two thirds of the cases in our database. This is a significant improvement on a previously published empirical expression based only on earthquake moment, even though the prediction of total landslide area is more difficult than that of volume because it is affected by additional parameters such as the depth and continuity of soil cover. Some outliers in terms of observed landslide intensity are likely to be associated with exceptional rock mass properties in the epicentral area. Others may be related to seismic source complexities ignored by the model. However, most cases in our catalogue seem to be relatively unaffected by these two effects despite the variety of lithologies and tectonic settings they cover. This makes the model suitable for integration into landscape evolution models, and application to the assessment of secondary hazards and risks associated with earthquakes.

  6. Real-Time Estimation of Earthquake Location, Magnitude and Rapid Shake map Computation for the Campania Region, Southern Italy

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Convertito, V.; de Matteis, R.; Iannaccone, G.; Lancieri, M.; Lomax, A.; Satriano, C.

    2005-12-01

    introducing an evolutionary strategy aimed at obtaining a progressively refined estimate of the maximum probability volume as time goes on. The real-time magnitude estimate will take advantage of the high spatial density of the network in the source region and the wide dynamic range of the installed instruments. Based on the offline analysis of high-quality strong-motion databases recorded in Italy and worldwide, several methods will be checked and validated, using different observed quantities (peak amplitude, dominant frequency, squared velocity integral, ...) to be measured on seismograms as a function of time. Following the ElarmS methodology (Allen, 2004), peak ground attenuation relations can be used to predict the distribution of maximum ground shaking, as updated estimates of earthquake location and magnitude become progressively available from the Early Warning system starting from the time of first P-wave detection. As measurements of peak ground quantities for the current earthquake become available from the network, these values are progressively used to adjust an "ad hoc" attenuation relation determined for the Campania region using the stochastic approach proposed by Boore (1993).
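    The attenuation-relation step described above can be sketched generically as follows; the log-linear functional form is standard, but the coefficients here are placeholders, not the Campania-specific calibration:

```python
import math

# Illustrative sketch of predicting peak ground acceleration from magnitude
# and distance with a generic attenuation relation of the form
# log10(PGA) = A + B*M - C*log10(R). Coefficients are hypothetical.
A, B, C = -3.5, 0.5, 1.0   # placeholder regression coefficients

def predicted_pga(magnitude, epicentral_distance_km):
    """Toy attenuation relation returning peak ground acceleration (g)."""
    return 10.0 ** (A + B * magnitude - C * math.log10(epicentral_distance_km))

print(round(predicted_pga(6.0, 10.0), 3))   # ~0.032 g at 10 km from an M6
```

As the magnitude estimate is refined in real time, the predicted shaking map would be recomputed from the same relation.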

  7. Triggered slip on the Calaveras Fault during the magnitude 7.1 Loma Prieta, California, Earthquake

    NASA Astrophysics Data System (ADS)

    McClellan, P. H.; Hay, E. A.

    After the magnitude (M) 7.1 Loma Prieta earthquake on the San Andreas fault we inspected selected sites along the Calaveras fault for evidence of recent surface displacement. In two areas along the Calaveras fault we documented recent right-lateral offsets of cultural features by at least 5 mm within zones of recognized historical creep. The areas are in the city of Hollister and at Highway 152 near San Felipe Lake, located approximately 25 km southeast and 18 km northeast, respectively, of the nearest part of the San Andreas rupture zone. On the basis of geologic evidence the times of the displacement events are constrained to within days or hours of the Loma Prieta mainshock. We conclude that this earthquake on the San Andreas fault triggered surface rupture along at least a 17-km-long segment of the Calaveras fault. These geologic observations extend evidence of triggered slip from instrument stations within this zone of Calaveras fault rupture.

  8. Oklahoma Area Struck By Magnitude 5.0 Earthquake Imaged by NASA Satellite

    NASA Image and Video Library

    2016-11-08

    On Sunday, Nov. 6, 2016, at 7:44 p.m. local time, a magnitude 5.0 earthquake struck near the town of Cushing, Oklahoma. Numerous buildings were damaged by the temblor, but only a few minor injuries were reported. Cushing is home to one of the world's largest oil storage terminals; no damage was reported to the petroleum facilities. A star marks the epicenter of the earthquake, which occurred at a depth of 3.1 miles (5 kilometers). The image was acquired April 28, 2011, covers an area of 7 by 9 miles (11.4 by 14.5 kilometers), and is located at 36 degrees north, 96.8 degrees west. http://photojournal.jpl.nasa.gov/catalog/PIA21099

  9. 76 FR 19123 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ....S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S... Earthquake Prediction Evaluation Council (NEPEC) will hold a 1-day meeting on April 16, 2011. The meeting... the Director of the U.S. Geological Survey on proposed earthquake predictions, on the completeness and...

  10. A radon detector for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Dacey, James

    2010-04-01

    Recent events in Haiti and Chile remind us of the devastation that can be wrought by an earthquake, especially when it strikes without warning. For centuries, people living in seismically active regions have reported a number of strange occurrences immediately prior to a quake, including unexpected weather phenomena and even unusual behaviour among animals. In more recent times, some scientists have suggested other precursors, such as sporadic bursts of electromagnetic radiation from the fault zone. Unfortunately, none of these suggestions has led to a robust, scientific method for earthquake prediction. Now, however, a group of physicists, led by physics Nobel laureate Georges Charpak, has developed a new detector that could measure one of the more testable earthquake precursors - the suggestion that radon gas is released from fault zones prior to earth slipping, writes James Dacey.

  11. Detecting precursory patterns to enhance earthquake prediction in Chile

    NASA Astrophysics Data System (ADS)

    Florido, E.; Martínez-Álvarez, F.; Morales-Esteban, A.; Reyes, J.; Aznarte-Mellado, J. L.

    2015-03-01

    The prediction of earthquakes is a task of utmost difficulty that has been widely addressed using many different strategies, with no particularly good results thus far. Seismic time series of the four most active zones of Chile, the country with the largest seismic activity, are analyzed in this study in order to discover precursory patterns for large earthquakes. First, the raw data are transformed by removing aftershocks and foreshocks, since the goal is to predict only main shocks. New attributes, based on the well-known b-value, are also generated. These data are then labeled, and consequently discretized, by the application of a clustering algorithm, following suggestions found in the recent literature. Earthquakes with magnitude larger than 4.4 are identified in the time series. Finally, the sequences of labels acting as precursory patterns for such earthquakes are searched for within the datasets. Results verging on 70% on average are reported, leading to the conclusion that the proposed methodology is suitable for application in other zones with similar seismicity.
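    The b-value attribute mentioned above is commonly estimated with the Aki-Utsu maximum-likelihood formula; a minimal sketch on made-up magnitudes (not the Chilean data) is:

```python
import math

# Sketch of the Aki-Utsu maximum-likelihood b-value estimate, the kind of
# b-value-based attribute described above. Magnitudes are illustrative,
# not the Chilean catalogue.
def b_value(magnitudes, m_c, bin_width=0.1):
    """Maximum-likelihood b-value for events at or above completeness m_c."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    # Utsu correction shifts m_c by half a magnitude bin.
    return math.log10(math.e) / (mean_m - (m_c - bin_width / 2.0))

catalog = [4.5, 4.6, 4.5, 4.8, 5.1, 4.7, 4.5, 5.4, 4.9, 4.6]
print(round(b_value(catalog, m_c=4.5), 2))   # ~1.4 for this toy catalogue
```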

  12. Multiscale multifractal detrended fluctuation analysis of earthquake magnitude series of Southern California

    NASA Astrophysics Data System (ADS)

    Fan, Xingxing; Lin, Min

    2017-08-01

    The multifractal characteristics of the magnitude time series of earthquakes that occurred in Southern California from 1990 to 2010 are studied in this work. A method for the scale division of the magnitudes of these earthquakes, based on empirical mode decomposition (EMD) and multifractal analysis, is proposed. This method gives new insight into measuring the multifractal properties of the magnitude time series at multiple scales, and it reveals further information about dynamic seismic behavior. Using EMD, a time series can be decomposed into mode time series that represent different time-frequency components. Using R/S analysis, we find that the time-frequency components show long-range correlation with different Hurst exponents. Based on the different fractal structures of the components, we consider three different scale series: Micro-, Mid- and Macro-scale subsequences, which are superposed and reconstructed from the components. The multifractal properties of the three scale subsequences are analyzed using multifractal detrended fluctuation analysis (MF-DFA). The results show that the three subsequences have differently shaped multifractal spectra with correspondingly distinct properties. The singularity spectrum of the Micro-scale subsequence is left-skewed, indicating a relative dominance of the lower Hurst exponents; the Mid-scale subsequence has a right-skewed singularity spectrum; and the Macro-scale subsequence exhibits the most significant persistence and the strongest multifractality.
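    The R/S (rescaled-range) analysis used above to estimate Hurst exponents can be sketched as follows on synthetic data (an illustrative toy, not the authors' implementation):

```python
import numpy as np

# Toy rescaled-range (R/S) estimate of the Hurst exponent: compute the mean
# R/S statistic over non-overlapping windows of several sizes, then take the
# slope of log(R/S) against log(window size).
def rs_statistic(x):
    """R/S statistic of a single segment."""
    y = np.cumsum(x - x.mean())            # cumulative deviation profile
    return (y.max() - y.min()) / x.std()   # range rescaled by std

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate H from the slope of log(R/S) vs log(window size)."""
    log_rs, log_n = [], []
    for n in window_sizes:
        segments = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        log_rs.append(np.log(np.mean([rs_statistic(s) for s in segments])))
        log_n.append(np.log(n))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

white = np.random.default_rng(0).normal(size=4096)
print(round(hurst_rs(white), 2))           # uncorrelated noise gives H near 0.5
```

Persistent series would give H above 0.5, anti-persistent series below it.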

  13. Signals of ENPEMF Used in Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Hao, G.; Dong, H.; Zeng, Z.; Wu, G.; Zabrodin, S. M.

    2012-12-01

    The signals of the Earth's natural pulse electromagnetic field (ENPEMF) are a combination of abnormal crustal magnetic field pulses affected by earthquakes, the induced field of the Earth's endogenous magnetic field, the induced magnetic field of the exogenous variation magnetic field, geomagnetic pulsation disturbance, and other energy-coupling processes between the Sun and the Earth. As an instantaneous disturbance of the variation field of natural geomagnetism, ENPEMF can be used to predict earthquakes. This theory was introduced by A. A. Vorobyov, who hypothesized that pulses can arise not only in the atmosphere but also within the Earth's crust due to processes of tectonic-to-electric energy conversion (Vorobyov, 1970; Vorobyov, 1979). The global field time scale of ENPEMF signals has specific stability. Although the wave curves may not overlap completely in different regions, the smoothed diurnal ENPEMF patterns always exhibit the same trend per month. This feature is a good reference for observing abnormalities of the Earth's natural magnetic field in a specific region. The frequencies of ENPEMF signals generally lie in the kilohertz range, and frequencies within the 5-25 kHz range can be applied to monitor earthquakes. In Wuhan, the best observation frequency is 14.5 kHz. Two dedicated devices are placed in alignment with the N-S and E-W directions. Dramatic variation between the pulse waveforms obtained from the instruments and the normal reference envelope diagram indicates a high possibility of an earthquake. The proposed ENPEMF-based earthquake detection method can improve geodynamic monitoring and enrich earthquake prediction methods. We suggest that promising directions for further research include the exact source composition of ENPEMF signals, the distinction between noise and useful signals, and the effects of the Earth's gravity tide and solid tidal waves. This method may also provide a promising application in

  14. New, low magnitude earthquake detections in Ireland and neighbouring offshore basins by waveform template matching

    NASA Astrophysics Data System (ADS)

    Arroucau, Pierre; Grannell, James; Lebedev, Sergei; Bean, Chris J.; Möllhoff, Martin; Blake, Tom; Horan, Clare

    2017-04-01

    Earthquake monitoring in intraplate continental interiors requires the detection of low magnitude events in regions that are sometimes poorly instrumented due to low estimated hazard and risk. According to existing catalogues, the seismic activity of Ireland is characterized by low magnitude, infrequent earthquakes. This is expected as Ireland is located several hundred kilometers away from the closest plate boundaries. However, the lack of seismic activity is still surprising in comparison with that of Great Britain, its closest neighbour. Since Ireland's historical seismic station coverage was significantly sparser than that of Great Britain, a possible instrumental bias has been invoked, but recent results obtained from the analysis of waveforms recorded at dense temporary arrays and new permanent stations tend to confirm the relatively quiet seismogenic behaviour of Ireland's crust. However, classical detection methods are known to fail if site conditions are too noisy, hence very low magnitude events can still be missed. Such events are of primary importance for seismotectonic studies, so in this work we investigate the possibility of producing new detections by cross-correlating the available continuous waveform data with waveform templates from catalogue earthquakes. Preliminary results show that more than 200 new events can be identified over the past 5 years, which is particularly significant considering the 120 events present in the catalogue for the period 1980-2016. Despite the limitation of the technique to events whose location and source characteristics are close to previously known ones, these results demonstrate that waveform template cross-correlation can successfully be used to lower detection thresholds in a seismically quiet region such as Ireland.
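    The template-matching idea described above reduces to sliding normalized cross-correlation; a self-contained sketch on synthetic data (not the Irish network waveforms) is:

```python
import numpy as np

# Sketch of waveform template matching: slide a known event waveform along
# continuous data and flag windows whose normalized cross-correlation
# (Pearson coefficient) exceeds a threshold. Data are synthetic.
def match_template(trace, template, threshold=0.7):
    """Return sample offsets where the correlation coefficient >= threshold."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        if w.std() == 0:
            continue
        cc = np.dot(t, (w - w.mean()) / w.std()) / n   # Pearson correlation
        if cc >= threshold:
            hits.append(i)
    return hits

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 8 * np.pi, 100)) * np.exp(-np.linspace(0, 5, 100))
trace = 0.1 * rng.normal(size=2000)
trace[700:800] += 0.8 * template        # bury a scaled copy of the template
hits = match_template(trace, template)
print(hits)                             # detections cluster near sample 700
```

Production codes use FFT-based correlation for speed, but the detection criterion is the same.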

  15. A local earthquake coda magnitude and its relation to duration, moment M0, and local Richter magnitude ML

    NASA Technical Reports Server (NTRS)

    Suteau, A. M.; Whitcomb, J. H.

    1977-01-01

    A relationship was found between the seismic moment, M0, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming that the end of the coda is composed of backscattered surface waves due to lateral heterogeneity in the shallow crust, following Aki. Using the linear relationship between the logarithm of M0 and the local Richter magnitude ML, a relationship between ML and t was found. This relationship was used to calculate a coda magnitude MC, which was compared to ML for Southern California earthquakes that occurred during the period from 1972 to 1975.
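    A coda-duration magnitude of this kind has the generic form Mc = a + b·log10(t); a hedged sketch with placeholder coefficients (not those derived in the paper) is:

```python
import math

# Hedged sketch of a coda-duration magnitude, Mc = a + b*log10(t), where t is
# the total signal duration in seconds. The coefficients below are
# illustrative placeholders, not the values fitted in this study.
A_COEF, B_COEF = -0.9, 2.0

def coda_magnitude(duration_s):
    """Coda magnitude from total signal duration in seconds."""
    return A_COEF + B_COEF * math.log10(duration_s)

print(round(coda_magnitude(100.0), 1))   # a 100 s coda maps to Mc = 3.1 here
```

The logarithmic form captures the key observation: magnitude grows with the logarithm of coda duration, so longer codas imply larger events.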

  16. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus earthquakes follow a period of controlled, constant rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface is induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ˜25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability, which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998) and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets. The gradually accelerating precursory aseismic slip observed in the region of

  17. The 2009 earthquake, magnitude mb 4.8, in the Pantanal Wetlands, west-central Brazil.

    PubMed

    Dias, Fábio L; Assumpção, Marcelo; Facincani, Edna M; França, George S; Assine, Mario L; Paranhos, Antônio C; Gamarra, Roberto M

    2016-09-01

    The main goal of this paper is to characterize the Coxim earthquake that occurred on 15 June 2009 in the Pantanal Basin and to discuss the relationship of its faulting mechanism with the Transbrasiliano Lineament. The earthquake had a maximum intensity of MM V, causing damage to farm houses, and was felt in several cities around, including Campo Grande and Goiânia. The event had magnitude mb 4.8 and a depth of 6 km, i.e., it occurred in the upper crust, within the basement and 5 km below the Cenozoic sedimentary cover. The mechanism, thrust faulting with lateral motion, was obtained from P-wave first-motion polarities and confirmed by regional waveform modelling. The two nodal planes have orientations (strike/dip) of 300°/55° and 180°/55°, and the orientation of the P-axis is approximately NE-SW. The results are similar to the Pantanal earthquake of 1964, with mb 5.4 and a NE-SW compressional axis. Both events show that the Pantanal Basin is a seismically active area under compressional stress. The focal mechanisms of the 1964 and 2009 events have no nodal plane that could be directly associated with the main SW-NE trending Transbrasiliano system, indicating that a direct link of the Transbrasiliano with the seismicity in the Pantanal Basin is improbable.

  18. Is It Possible to Predict Strong Earthquakes?

    NASA Astrophysics Data System (ADS)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using the groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to the local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of the recent scientific reports confirming that rodents sense imminent earthquakes and the population-genetic model of Kirschvink (Bull Seismol Soc Am 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over the period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal animal behavior observations, enables one to apply the standard "input-sensor-response" approach to determine what input signals trigger specific seismic escape brain activity responses.

  19. Collective properties of injection-induced earthquake sequences: 2. Spatiotemporal evolution and magnitude frequency distributions

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Suckale, Jenny; Huang, Yihe

    2016-05-01

    Probabilistic seismic hazard assessment for induced seismicity depends on reliable estimates of the locations, rate, and magnitude frequency properties of earthquake sequences. The purpose of this paper is to investigate how variations in these properties emerge from interactions between an evolving fluid pressure distribution and the mechanics of rupture on heterogeneous faults. We use an earthquake sequence model, developed in the first part of this two-part series, that computes pore pressure evolution, hypocenter locations, and rupture lengths for earthquakes triggered on 1-D faults with spatially correlated shear stress. We first consider characteristic features that emerge from a range of generic injection scenarios and then focus on the 2010-2011 sequence of earthquakes linked to wastewater disposal into two wells near the towns of Guy and Greenbrier, Arkansas. Simulations indicate that one reason for an increase of the Gutenberg-Richter b value for induced earthquakes is the different rates of reduction of static and residual strength as fluid pressure rises. This promotes fault rupture at lower stress than equivalent tectonic events. Further, the b value is shown to decrease with time (the induced seismicity analog of b value reduction toward the end of the seismic cycle) and to be higher on faults with lower initial shear stress. This suggests that faults in the same stress field that have different orientations, and therefore different levels of resolved shear stress, should exhibit seismicity with different b values. A deficit of large-magnitude events is noted when injection occurs directly onto a fault, and this is shown to depend on the geometry of the pressure plume. Finally, we develop models of the Guy-Greenbrier sequence that capture approximately the onset, rise and fall, and southwest migration of seismicity on the Guy-Greenbrier fault. Constrained by the migration rate, we estimate the permeability of a 10 m thick critically stressed basement

  20. On the earthquake predictability of fault interaction models

    PubMed Central

    Marzocchi, W; Melini, D

    2014-01-01

    Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex-post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex-ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems rooted in the physics of fault interaction through Coulomb stress changes, this kind of modeling often does not significantly increase earthquake predictability. The predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643

  1. On the earthquake predictability of fault interaction models

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Melini, D.

    2014-12-01

    Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex-post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex-ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems rooted in the physics of fault interaction through Coulomb stress changes, this kind of modeling often does not significantly increase earthquake predictability. The predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process.

  2. The local magnitude of the 18 October 1989 Santa Cruz Mountains earthquake is M sub L =6. 9

    SciTech Connect

    McNally, K.C.; Yellin, J.; Protti-Quesada, M.; Malavassi, E.; Schillinger, W.; Terdiman, R.; Zhang, Z.; Simila, G.

    1990-09-01

    It is critical that local magnitudes, M_L (Richter, 1935), be carefully determined for large earthquakes. M_L is the calibration standard for many catalogs of historic earthquakes upon which other magnitude scales and measures of strong ground shaking are based. Also, M_L is measured in the period range of 1-10 Hz, the most relevant for engineering and emergency response applications. The earthquake catalogs constitute the basis for both pure and applied research on statistical properties of earthquakes and earthquake processes. Although large earthquakes are the most important in terms of energy release, only a few are contained in the catalogs because they are relatively rare. The authors find that the local magnitude, M_L, of the 18 October 1989 (U.T.) earthquake is 6.9, not 7.0-7.1 as has been reported. This value agrees with the moment magnitude, M_w = 6.9, found by Kanamori and Satake (1990).

  3. Active fault slip and potential large magnitude earthquakes within the stable Kazakh Platform (Central Kazakhstan)

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J.; Walker, R. T.; Abdrakhmatov, K.; Campbell, G.; Mukambayev, A.; Rhodes, E.; Rood, D. H.

    2016-12-01

    The Tien Shan mountains of Central Asia are characterized at the present day by abundant range-bounding E-W thrust faults and several major NW-SE striking right-lateral faults, which cut across the stable Kazakh Platform, terminating at (or within) the Tien Shan. The various E-W thrust faults are associated with significant seismicity over the last few hundred years. In sharp contrast, the NW-SE right-lateral faults are not associated with any major historical earthquakes, and thus it remains unclear whether these Paleozoic structures have been reactivated during the Late Cenozoic. The Dzhalair-Naiman fault (DNF) is one such fault, comprising several fault segments striking NW-SE across the Central Kazakh Platform over a distance of 600+ km. Unlike similar NW-SE right-lateral faults in the region (e.g. the Talas-Fergana and Dzhungarian faults), the DNF is confined to the Kazakh Platform and does not penetrate into the Tien Shan. Regional GPS velocities indicate slow (<2 mm/yr) deformation rates north of the Tien Shan, and rare, deep earthquakes in the Platform suggest that Platform-interior faults, such as the DNF, may have the potential to generate infrequent very large magnitude earthquakes. We investigate the Chokpar segment of the DNF (60+ km long), which lies 60 km north of Bishkek. We use Quaternary dating techniques (IRSL and 10Be exposure dating) to date several abandoned and incised alluvial fans that are now right-laterally displaced across the fault. Stream channels are offset by 30+ m (measured from a stereo Pleiades DEM and GPS survey data), while the terraces through which they cut were abandoned in the Mid-to-Late Holocene, suggesting a relatively high slip rate over the Late Quaternary (higher than expected from regional GPS velocities). However, given the potential for the DNF to slip in very large infrequent earthquakes (with 10+ m coseismic displacements), our slip-rate calculations may also be subject to additional errors related to the low

  4. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    NASA Astrophysics Data System (ADS)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-07-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled records of interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (i.e., significant and bracketed) as functions of earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites), using data compiled from the intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion dataset consists of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with both the magnitude and the hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the

  5. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    NASA Astrophysics Data System (ADS)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-02-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled records of interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (i.e., significant and bracketed) as functions of earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites), using data compiled from the intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion dataset consists of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with both the magnitude and the hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the

  6. Spatial variations in the frequency-magnitude distribution of earthquakes at Mount Pinatubo volcano

    USGS Publications Warehouse

    Sanchez, J.J.; McNutt, S.R.; Power, J.A.; Wyss, M.

    2004-01-01

    The frequency-magnitude distribution of earthquakes measured by the b-value is mapped in two and three dimensions at Mount Pinatubo, Philippines, to a depth of 14 km below the summit. We analyzed 1406 well-located earthquakes with magnitudes MD ≥ 0.73, recorded from late June through August 1991, using the maximum likelihood method. We found that b-values are higher than normal (b = 1.0) and range between b = 1.0 and b = 1.8. The computed b-values are lower in the areas adjacent to and west-southwest of the vent, whereas two prominent regions of anomalously high b-values (b ≥ 1.7) are resolved, one located 2 km northeast of the vent between 0 and 4 km depth and a second located 5 km southeast of the vent below 8 km depth. The statistical differences between selected regions of low and high b-values are established at the 99% confidence level. The high b-value anomalies are spatially well correlated with low-velocity anomalies derived from earlier P-wave travel-time tomography studies. Our dataset was not suitable for analyzing changes in b-values as a function of time. We infer that the high b-value anomalies around Mount Pinatubo are regions of increased crack density, and/or high pore pressure, related to the presence of nearby magma bodies.
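
    The maximum likelihood method referred to here is conventionally the Aki estimator with Utsu's correction for binned magnitudes. A minimal sketch (the synthetic catalog, cutoff magnitude, and bin width below are illustrative assumptions, not values from this study):

```python
import math
import random

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value estimate for magnitudes >= mc.

    b = log10(e) / (mean(M) - (mc - dm/2)), where dm is the magnitude
    binning width (dm = 0 for continuous magnitudes).
    """
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with true b = 1.5 (hypothetical):
# magnitudes above mc are exponentially distributed with rate b*ln(10).
random.seed(0)
beta = 1.5 * math.log(10.0)
mags = [0.7 + random.expovariate(beta) for _ in range(5000)]
b_est = b_value_mle(mags, mc=0.7, dm=0.0)
print(round(b_est, 2))
```

With 5000 events the estimate should fall close to the true b = 1.5; the standard error of this estimator scales roughly as b divided by the square root of the sample size.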

  7. An Equivalent Moment Magnitude Earthquake Catalogue for Western Turkey and its Quantitative Properties

    NASA Astrophysics Data System (ADS)

    Leptokaropoulos, Konstantinos; Karakostas, Vasilios; Papadimitriou, Eleftheria; Adamaki, Aggeliki; Tan, Onur; Pabuçcu, Zumer

    2013-04-01

    Earthquake catalogues constitute a basic product of seismology, resulting from complex procedures and suffering from natural and man-made errors. The accumulation of these problems over space and time leads to inhomogeneous catalogues, which in turn lead to significant uncertainties in many kinds of analyses, such as seismicity rate evaluation and seismic hazard assessment. A major source of catalogue inhomogeneity is the variety of magnitude scales (i.e. Mw, mb, MS, ML, Md) reported by different institutions and sources. Therefore, an effort is made in this study to compile a catalogue as homogeneous as possible with respect to magnitude scale for the region of Western Turkey (26°E-32°E longitude, 35°N-43°N latitude), one of the most rapidly deforming regions worldwide, with intense seismic activity, complex fault systems, and frequent strong earthquakes. For this purpose we established new relationships to transform as many of the available magnitudes as possible into an equivalent moment magnitude scale, M*w. These relations were obtained by applying the General Orthogonal Regression method, and the statistical significance of the results was quantified. The final equivalent moment magnitude was evaluated by taking into consideration all the available magnitudes for which a relation was obtained, with each weighted inversely proportional to its standard deviation. Once the catalogue was compiled, the magnitude of completeness, Mc, was investigated in both space and time. The b-values and their accuracy were also calculated by the maximum likelihood estimate. The spatial and temporal constraints were selected with respect to the seismicity recording level, since the state and evolution of the local and regional seismic networks are unknown. We modified and applied the Goodness-of-Fit test of Wiemer and Wyss (2000) to make it more effective for datasets characterized by smaller sample sizes and higher Mc thresholds. The compiled catalogue and the Mc evaluation
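
    General Orthogonal Regression of the kind used for such magnitude conversions can be sketched as Deming regression, which requires the ratio of the error variances of the two magnitude scales as an input; the sample magnitude pairs and variance ratio below are hypothetical illustrations, not data from this study:

```python
def deming_fit(x, y, eta=1.0):
    """General orthogonal (Deming) regression of y = a + b*x.

    eta is the assumed ratio var(error in y) / var(error in x);
    eta = 1 gives classical orthogonal regression.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / n
    syy = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    # Closed-form Deming slope and intercept
    b = (syy - eta * sxx + ((syy - eta * sxx) ** 2
         + 4.0 * eta * sxy ** 2) ** 0.5) / (2.0 * sxy)
    a = my - b * mx
    return a, b

# Hypothetical ML values and the equivalent Mw lying exactly on a line,
# to show that the fit recovers intercept 0.5 and slope 0.9.
ml = [4.0, 4.5, 5.0, 5.5, 6.0]
mw = [0.5 + 0.9 * m for m in ml]
a, b = deming_fit(ml, mw, eta=1.0)
print(round(a, 3), round(b, 3))
```

Unlike ordinary least squares, this treats both magnitude scales as error-prone, which is why it is preferred for scale-to-scale conversions.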

  8. Magnitude Based Discrimination of Manmade Seismic Events From Naturally Occurring Earthquakes in Utah, USA

    NASA Astrophysics Data System (ADS)

    Koper, K. D.; Pechmann, J. C.; Burlacu, R.; Pankow, K. L.; Stein, J. R.; Hale, J. M.; Roberson, P.; McCarter, M. K.

    2016-12-01

    We investigate the feasibility of using the difference between local (ML) and coda duration (MC) magnitude as a means of discriminating manmade seismic events from naturally occurring tectonic earthquakes in and around Utah. Using a dataset of nearly 7,000 well-located earthquakes in the Utah region, we find that ML-MC is on average 0.44 magnitude units smaller for mining induced seismicity (MIS) than for tectonic seismicity (TS). MIS occurs within near-surface low-velocity layers that act as a waveguide and preferentially increase coda duration relative to peak amplitude, while the vast majority of TS occurs beneath the near-surface waveguide. A second dataset of more than 3,700 probable explosions in the Utah region also has significantly lower ML-MC values than TS, likely for the same reason as the MIS. These observations suggest that ML-MC, or related measures of peak amplitude versus signal duration, may be useful for discriminating small explosions from earthquakes at local-to-regional distances. ML and MC can be determined for small events with relatively few observations, hence an ML-MC discriminant can be effective in cases where moment tensor inversion is not possible because of low data quality or poorly known Green's functions. Furthermore, an ML-MC discriminant does not rely on the existence of the fast attenuating Rg phase at regional distances. ML-MC may provide a local-to-regional distance extension of the mb-MS discriminant that has traditionally been effective at identifying large nuclear explosions with teleseismic data. This topic is of growing interest in forensic seismology, in part because the Comprehensive Nuclear Test Ban Treaty (CTBT) is a zero tolerance treaty that prohibits all nuclear explosions, no matter how small. If the CTBT were to come into force, source discrimination at local distances would be required to verify compliance.
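
    A discriminant of this form reduces to a simple threshold on the ML - MC difference. The sketch below is a hypothetical illustration: the threshold of -0.22 (half the 0.44-unit average separation reported above) is an assumption for demonstration, not a value calibrated in the study.

```python
def classify_event(ml, mc, threshold=-0.22):
    """Classify a seismic event from its local (ml) and coda duration
    (mc) magnitudes.

    Events whose ml - mc falls below the threshold are flagged as likely
    shallow manmade sources (mining seismicity or explosions), since the
    near-surface waveguide inflates coda duration relative to peak
    amplitude. The threshold is hypothetical, for illustration only.
    """
    diff = ml - mc
    label = "likely manmade" if diff < threshold else "likely tectonic"
    return label, diff

# Hypothetical magnitude pairs
print(classify_event(2.0, 2.6))   # large negative ML - MC
print(classify_event(3.0, 2.9))   # ML - MC near zero
```

In practice the threshold would be tuned on labeled catalogs, trading detection probability against false alarms, much as the mb-MS discriminant is tuned at teleseismic distances.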

  9. Improving the Level of Seismic Hazard Parameters in Saudi Arabia Using Earthquake Location and Magnitude Calibration

    NASA Astrophysics Data System (ADS)

    Al-Amri, A. M.; Rodgers, A. J.

    2004-05-01

    Saudi Arabia is a region that is very poorly characterized seismically and for which little existing data are available. While the Arabian Shield and Arabian Platform are for the most part aseismic, the area is ringed by regional seismic sources: the tectonically active areas of Iran and Turkey to the northeast, the Red Sea Rift bordering the Shield to the southwest, and the Dead Sea Transform fault zone to the north. Therefore, this paper aims to improve seismic hazard parameters by improving earthquake location and magnitude estimates with the Saudi Arabian National Digital Seismic Network (SANDSN). We analyzed earthquake data, travel times, and seismic waveform data from the SANDSN. KACST operates the 38-station SANDSN, consisting of 27 broadband and 11 short-period stations. The SANDSN has good signal detection capabilities because the sites are relatively quiet. Noise surveys at a few stations indicate that seismic noise levels at SANDSN stations are quite low for frequencies between 0.1 and 1.0 Hz; however, cultural noise appears to affect some stations at frequencies above 1.0 Hz. Locations of regional earthquakes estimated by KACST were compared with locations from global bulletins. Large differences between KACST and global catalog locations are likely the result of inadequacies of the global average earth model (iasp91) used by the KACST system. While this model is probably adequate for locating distant (teleseismic) events in continental regions, it leads to large location errors, as much as 50-100 km, for regional events. We present detailed analysis of some events and Dead Sea explosions for which we found gross errors in estimated locations. Velocity models are presented that should improve estimated locations of regional events in three specific regions: (1) the Gulf of Aqabah-Dead Sea region, (2) the Arabian Shield, and (3) the Arabian Platform. These models have recently been applied to the SANDSN to improve local and teleseismic event locations.

  10. 77 FR 53225 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: Department of the... National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1\\1/2\\ day meeting on September 17 and 18, 2012, at the U.S. Geological Survey National Earthquake Information Center (NEIC), 1711...

  11. A physical interpretation of the Haicheng earthquake prediction.

    PubMed

    Scholz, C H

    1977-05-12

    A possible explanation for the successful prediction of the 1975 Haicheng earthquake is that the earthquake was triggered by a deformation front that propagated 1,000 km through NE China at a velocity of about 110 km/yr. The various phenomena that were used to predict the earthquake can be explained by the deformation front.

  12. Local geodetic and seismic energy balance for shallow earthquake prediction

    NASA Astrophysics Data System (ADS)

    Cannavó, Flavio; Arena, Alessandra; Monaco, Carmelo

    2015-01-01

    Earthquake analysis for prediction purposes is a delicate and still open problem, much debated among scientists. In this work we show that a successful time-predictable model is possible if it is based on large instrumental datasets from dense monitoring networks. To this end, we propose a simple, data-driven, quantitative methodology that takes into account the accumulated geodetic strain and the seismically released strain to calculate an energy balance. The proposed index quantifies the state of energy of the selected area and allows us to better evaluate the ongoing potential seismic risk, providing a new tool to read the recurrence of small-scale, shallow earthquakes. Despite its intrinsically simple formulation, the application of the methodology has been successfully simulated on the eastern flank of Mt. Etna (Italy), by tuning it over the period 2007-2011 and testing it over the period 2012-2013, allowing us to predict, within days, the earthquakes with the highest magnitudes.

  13. A Comprehensive Mathematical Model for the Correlation of Earthquake Magnitude with Geochemical Measurements. A Case Study: the Nisyros Volcano in Greece

    SciTech Connect

    Verros, G. D.; Latsos, T.; Liolios, C.; Anagnostou, K. E.

    2009-08-13

    A comprehensive mathematical model for the correlation of geological phenomena such as earthquake magnitude with geochemical measurements is presented in this work. This model is validated against measurements, well established in the literature, of ²²⁰Rn/²²²Rn in the fumarolic gases of Nisyros Island, Aegean Sea, Greece. It is believed that this model may be further used to develop a generalized methodology for the prediction of geological phenomena such as earthquakes and volcanic eruptions in the vicinity of Nisyros Island.

  14. Maximum Magnitude and Probabilities of Induced Earthquakes in California Geothermal Fields: Applications for a Science-Based Decision Framework

    NASA Astrophysics Data System (ADS)

    Weiser, Deborah Anne

    Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess if there are induced earthquakes in California geothermal fields; there are three sites with clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and also to address uncertainties through simulations. I test if an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during pumping time is consistent with the past earthquake record, or if injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.

  15. Magnitudes and Moment-Duration Scaling of Low-Frequency Earthquakes Beneath Southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A.; Rubin, A. M.; Savard, G.; Chuang, L. Y.

    2015-12-01

    We employ 130 low-frequency-earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 300,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P- and S-waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. The high spatio-temporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification, and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatio-temporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 hours of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs, and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b-values ≥ 6. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges MW < 1.5 and MW ≥ 2.0. LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in dimension and that moment variation is dominated by slip. This behaviour implies

  16. Estimating Seismic Hazards from the Catalog of Taiwan Earthquakes from 1900 to 2014 in Terms of Maximum Magnitude

    NASA Astrophysics Data System (ADS)

    Chen, Kuei-Pao; Chang, Wen-Yen

    2017-02-01

    Maximum expected earthquake magnitude is an important parameter when designing mitigation measures for seismic hazards. This study calculated the maximum magnitude of potential earthquakes for each cell in a 0.1° × 0.1° grid of Taiwan. Two zones vulnerable to maximum magnitudes of Mw ≥ 6.0, which will cause extensive building damage, were identified: one extends from Hsinchu southward to Taichung, Nantou, Chiayi, and Tainan in western Taiwan; the other extends from Ilan southward to Hualian and Taitung in eastern Taiwan. These zones are also characterized by low b values, which are consistent with high peak ground shaking. We also employed an innovative method to calculate (at intervals of Mw 0.5) the bounds and median of recurrence time for earthquakes of magnitude Mw 6.0-8.0 in Taiwan.

  17. Rapid magnitude estimation using τC method for earthquake early warning system (Case study in Sumatra)

    NASA Astrophysics Data System (ADS)

    Rahman, Aditya; Marsono, Agus; Rudyanto, Ariska

    2017-07-01

    Sumatra has three sources of earthquakes: the subduction zone, the Sumatran fault system, and the outer-arc faults, which lie very close to settlements and therefore pose a serious threat to human lives and property. An earthquake early warning system should be developed for mitigation. This study aims to develop an earthquake early warning system by estimating the magnitude before the arrival of the S waves. The magnitude is estimated from the relationship between the τc parameter and magnitude. Strong ground motion records were integrated twice to obtain displacement records, with a high-pass Butterworth filter applied. τc was determined from the ratio of the displacement and velocity of the vertical-component record; it reflects the size of an earthquake from the initial portion of the P waves. The τc method generated magnitude estimates with a deviation of 0.71 magnitude units from the actual size of the impending earthquake, using a corner frequency of 0.5 Hz, and required 21.1 s before the arrival of the S waves. The method was validated against the existing earthquake catalog.
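
    The τc parameter is conventionally computed as 2π·sqrt(∫u² dt / ∫u̇² dt) over the initial P-wave window, where u is displacement and u̇ velocity (Kanamori's τc approach). A minimal sketch with a synthetic monochromatic signal, for which τc should recover the signal period; the window length and test frequency are illustrative assumptions:

```python
import math

def tau_c(disp, vel):
    """tau_c = 2*pi*sqrt( sum(u^2) / sum(udot^2) ) over the initial
    P-wave window. The sampling interval cancels in the ratio, so raw
    sample lists suffice as long as disp and vel share the same grid."""
    ratio = sum(u * u for u in disp) / sum(v * v for v in vel)
    return 2.0 * math.pi * math.sqrt(ratio)

# Synthetic monochromatic "P wave" with a 1 s period, sampled at 1 kHz
# over a 3 s window (hypothetical numbers); tau_c should return ~1.0 s.
dt, f = 0.001, 1.0
t = [i * dt for i in range(3000)]
u = [math.sin(2.0 * math.pi * f * ti) for ti in t]                 # displacement
v = [2.0 * math.pi * f * math.cos(2.0 * math.pi * f * ti) for ti in t]  # velocity
print(round(tau_c(u, v), 3))
```

For real records, the displacement and velocity would come from doubly and singly integrated accelerograms after high-pass filtering, as described in the abstract, and the window would start at the P-wave pick.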

  18. τp^max magnitude estimation, the case of the April 6, 2009 L'Aquila earthquake

    NASA Astrophysics Data System (ADS)

    Olivieri, Marco

    2013-04-01

    Rapid magnitude estimation procedures represent a crucial part of proposed earthquake early warning systems. Most of these estimates are focused on the first part of the P-wave train, the earlier and less destructive part of the ground motion that follows an earthquake. Allen and Kanamori (Science 300:786-789, 2003) proposed using the predominant period of the P-wave to determine the magnitude of a large earthquake at local distance, and Olivieri et al. (Bull Seismol Soc Am 185:74-81, 2008) calibrated a specific relation for the Italian region. The Mw 6.3 earthquake hit Central Italy on April 6, 2009, and the largest aftershocks provide a useful dataset to validate the proposed relation and discuss the risks connected with extrapolating magnitude relations from a poor dataset of large-earthquake waveforms. A large discrepancy between the local magnitude (ML) estimated by means of τp^max evaluation and the standard ML (6.8 ± 1.5 vs. 5.9 ± 0.4) suggests using caution when ML vs. τp^max calibrations do not include a relevant dataset of large earthquakes. Effects from large residuals could be mitigated or removed by introducing selection rules on the τp function, by regionalizing the ML vs. τp^max relation in the presence of significant tectonic or geological heterogeneity, and by using probabilistic and evolutionary methods.

  19. Persistency of rupture directivity in moderate-magnitude earthquakes in Italy: Implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Rovelli, A.; Calderoni, G.

    2012-12-01

    A simple method based on EGF deconvolution in the frequency domain is applied to detect the occurrence of unilateral ruptures in recent damaging earthquakes in Italy. The spectral ratio between event pairs with different magnitudes at individual stations shows large azimuthal variations above the corner frequency when the target event is affected by source directivity and the EGF is not, or vice versa. The analysis is applied to seismograms and accelerograms recorded during the seismic sequence following the 20 May 2012, Mw 5.6 main shock in Emilia, northern Italy; the 6 April 2009, Mw 6.1 earthquake of L'Aquila, central Italy; and the 26 September 1997, Mw 5.7 and 6.0 shocks in Umbria-Marche, central Italy. Events of each seismic sequence are selected as having consistent focal mechanisms, and the station selection obeys the constraint of a similar source-to-receiver path for the event pairs. The analyzed dataset for L'Aquila consists of 962 broad-band seismograms from 69 normal-faulting earthquakes (3.3 ≤ MW ≤ 6.1, according to Herrmann et al., 2011); stations are selected in the distance range 100 to 250 km to minimize differences in propagation paths. The seismogram analysis reveals that strong along-strike (toward SE) source directivity characterized all three Mw > 5.0 shocks. Source directivity was also persistent down to the smallest magnitudes: 65% of the earthquakes under study showed evidence of directivity toward the SE, whereas only one (Mw 3.7) event showed directivity in the opposite direction. The Mw 5.6 main shock of 20 May 2012 in Emilia also shows large azimuthal spectral variations indicating unilateral rupture propagation toward the SE. According to the reconstructed geometry of the thrust-fault plane, the inferred directivity direction suggests top-down rupture propagation. The analysis of the Emilia aftershock sequence is in progress. The third seismic sequence, dated 1997-1998, occurred in the northern Apennines and, similarly

  20. Spatial variations in the frequency-magnitude distribution of earthquakes in the southwestern Okinawa Trough

    NASA Astrophysics Data System (ADS)

    Lin, J.-Y.; Sibuet, J.-C.; Lee, C.-S.; Hsu, S.-K.; Klingelhoefer, F.

    2007-04-01

    The relations between the frequency of occurrence and the magnitude of earthquakes are established in the southern Okinawa Trough for 2823 relocated earthquakes recorded during a passive ocean bottom seismometer experiment. Three high b-value areas are identified: (1) an area offshore of the Ilan Plain, south of the andesitic Kueishantao Island, from a depth of 50 km to the surface, thereby confirming the subduction component of the island andesites; (2) a body lying along the 123.3°E meridian at depths ranging from 0 to 50 km that may reflect high-temperature inflow rising from a slab tear; (3) a third, cylindrical body about 15 km in diameter beneath the Cross Backarc Volcanic Trail, at depths ranging from 0 to 15 km. This anomaly might be related to the presence of a magma chamber at the base of the crust, already evidenced by tomographic and geochemical results. The high b-values are generally linked to magmatic and geothermal activities, although most of the seismicity is linked to normal faulting processes in the southern Okinawa Trough.

  1. Predicting Ground Motion from Induced Earthquakes in Geothermal Areas

    NASA Astrophysics Data System (ADS)

    Douglas, J.; Edwards, B.; Convertito, V.; Sharma, N.; Tramelli, A.; Kraaijpoel, D.; Cabrera, B. M.; Maercklin, N.; Troise, C.

    2013-06-01

    Induced seismicity from anthropogenic sources can be a significant nuisance to a local population and in extreme cases lead to damage to vulnerable structures. One type of induced seismicity of particular recent concern, which, in some cases, can limit development of a potentially important clean energy source, is that associated with geothermal power production. A key requirement for the accurate assessment of seismic hazard (and risk) is a ground-motion prediction equation (GMPE) that predicts the level of earthquake shaking (in terms of, for example, peak ground acceleration) of an earthquake of a certain magnitude at a particular distance. Few such models currently exist for geothermal-related seismicity, and consequently the evaluation of seismic hazard in the vicinity of geothermal power plants is associated with high uncertainty. Various ground-motion datasets of induced and natural seismicity (from Basel, The Geysers, Hengill, Roswinkel, Soultz, and Voerendaal) were compiled and processed, and moment magnitudes for all events were recomputed homogeneously. These data are used to show that ground motions from induced and natural earthquakes cannot be statistically distinguished. Empirical GMPEs are derived from these data and, although they have similar characteristics to recent GMPEs for natural and mining-related seismicity, their standard deviations are higher. To account for epistemic uncertainties, stochastic models are subsequently developed, based on a single corner frequency and with parameters constrained by the available data. Predicted ground motions from these models are fitted with functional forms to obtain easy-to-use GMPEs, which are associated with standard deviations derived from the empirical data to characterize aleatory variability. As an example, we demonstrate the potential use of these models using data from Campi Flegrei.
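    A GMPE of the general kind described predicts a median log ground motion plus an aleatory standard deviation from magnitude and distance. The sketch below uses an assumed functional form with placeholder coefficients, not the coefficients derived in the study:

```python
import math

def gmpe_log10_pga(mw, r_km, a=-1.5, b=0.5, c=-1.3, h=5.0, sigma=0.35):
    """Toy GMPE: median log10(PGA) = a + b*Mw + c*log10(sqrt(R^2 + h^2)).

    The coefficients a, b, c, the depth term h and the aleatory sigma
    are illustrative placeholders, not values from any published model.
    Returns (median log10 PGA, sigma) so a hazard calculation can
    sample the aleatory variability around the median.
    """
    r_eff = math.sqrt(r_km**2 + h**2)  # avoids a singularity at R = 0
    return a + b * mw + c * math.log10(r_eff), sigma

# Median peak ground acceleration for an Mw 3.5 induced event at 5 km:
median, sigma = gmpe_log10_pga(mw=3.5, r_km=5.0)
pga_median_g = 10 ** median
```

The magnitude scaling, distance decay, and sigma terms map directly onto the empirical-versus-stochastic comparison the abstract describes.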

  2. Earthquake Prediction in a Big Data World

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2016-12-01

    The digital revolution that started about 15 years ago has already pushed global information storage capacity beyond 5000 exabytes (in optimally compressed bytes) per year. Open data in a Big Data World provide unprecedented opportunities for enhancing studies of the Earth System. However, they also open wide avenues for deceptive associations in inter- and transdisciplinary data and for misleading predictions based on so-called "precursors". Earthquake prediction is not an easy task and implies a delicate application of statistics. So far, none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Regretfully, in many cases of seismic hazard assessment (SHA), from term-less to time-dependent (probabilistic PSHA or deterministic DSHA), and of short-term earthquake forecasting (StEF), claims of a high potential of the method are based on a flawed application of statistics and are therefore hardly suitable for communication to decision makers. Self-testing must be done before claiming prediction of hazardous areas and/or times. The necessity and possibility of applying simple tools of earthquake prediction strategies, in particular the Error Diagram introduced by G.M. Molchan in the early 1990s and the Seismic Roulette null hypothesis as a metric of the alerted space, is evident. The set of errors, i.e., the rates of failure and of the alerted space-time volume, can easily be compared to random guessing; this comparison permits evaluating the effectiveness of an SHA method and determining the optimal choice of parameters with regard to a given cost-benefit function. This and other information obtained in such simple testing may supply us with realistic estimates of the confidence and accuracy of SHA predictions and, if reliable though not necessarily perfect, with related recommendations on the level of risk for decision making in engineering design, insurance
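    The Molchan error diagram mentioned above compares the rate of failures-to-predict (ν) with the alerted fraction of space-time (τ); random guessing satisfies ν + τ = 1, so predictions falling below that diagonal show skill. A minimal sketch on synthetic alarms (the cell count, alarm function, and event placement are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setting: 1000 space-time cells and a hypothetical alarm
# function; 50 target earthquakes are placed preferentially in cells
# where the alarm function is high, so the "prediction" has real skill.
n_cells = 1000
alarm_score = rng.random(n_cells)
p = alarm_score / alarm_score.sum()
quake_cells = rng.choice(n_cells, size=50, replace=False, p=p)

# Sweep the alarm threshold: tau = alerted fraction of the space-time
# volume, nu = rate of failures-to-predict (missed earthquakes).
taus, nus = [], []
for thresh in np.linspace(0.0, 1.0, 101):
    alerted = alarm_score >= thresh
    taus.append(alerted.mean())
    nus.append(1.0 - alerted[quake_cells].mean())

# Random guessing satisfies nu + tau = 1; the average distance below
# that diagonal is a simple summary of skill relative to random guessing.
skill = float(np.mean([1.0 - (n + t) for n, t in zip(nus, taus)]))
```

Plotting ν against τ over the threshold sweep reproduces the error diagram itself; the single `skill` number is only a crude summary of the cost-benefit trade-off the abstract refers to.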

  3. Reexamination of the magnitudes for the 1906 and 1922 Chilean earthquakes using Japanese tsunami amplitudes: Implications for source depth constraints

    NASA Astrophysics Data System (ADS)

    Carvajal, M.; Cisternas, M.; Gubler, A.; Catalán, P. A.; Winckler, P.; Wesson, R. L.

    2017-01-01

    Far-field tsunami records from the Japanese tide gauge network allow the reexamination of the moment magnitudes (Mw) of the 1906 and 1922 Chilean earthquakes, which to date rely on limited information, mainly from seismological observations alone. Tide gauges along the Japanese coast provide extensive records of tsunamis triggered by six great (Mw > 8) Chilean earthquakes with instrumentally determined moment magnitudes. These tsunami records are used to explore the dependence of tsunami amplitudes in Japan on the magnitude of the parent Chilean earthquake. Using the resulting regression parameters together with tide gauge amplitudes measured in Japan, we estimate apparent moment magnitudes of Mw 8.0-8.2 for the 1906 central Chile earthquake and Mw 8.5-8.6 for the 1922 north-central Chile earthquake. The large discrepancy between the 1906 magnitude estimated from the tsunami observed in Japan and that previously determined from seismic waves (Ms 8.4) suggests a deeper-than-average source with reduced tsunami excitation. A deep dislocation along the Chilean megathrust would favor uplift of the coast rather than of the seafloor, giving rise to a smaller tsunami and producing effects consistent with those observed in 1906. The 1922 magnitude inferred from far-field tsunami amplitudes appears to better explain the large extent of damage and the destructive tsunami observed locally after the earthquake than the lower seismic magnitude (Ms 8.3), which was likely affected by the well-known saturation effects. Thus, a repeat of the large 1922 earthquake poses seismic and tsunami hazards in a region identified as a mature seismic gap.
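    The regression-based magnitude estimate described can be sketched as a log-linear fit of far-field amplitude against Mw for calibration events, then inverted for a historical amplitude. The calibration data, coefficients, and observed amplitude below are synthetic placeholders, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration events: instrumentally determined Mw and far-field tide
# gauge amplitudes in Japan. Both the assumed relation
# log10(A) = c0 + c1*Mw and the scatter are synthetic, purely to
# illustrate the method.
c0_true, c1_true = -9.0, 1.0
mw_cal = np.array([8.0, 8.3, 8.5, 8.8, 9.0, 9.5])
log_amp = c0_true + c1_true * mw_cal + rng.normal(0, 0.05, mw_cal.size)

# Fit log10(amplitude) against Mw for the calibration events...
c1, c0 = np.polyfit(mw_cal, log_amp, 1)

# ...then invert the regression for a historical event whose far-field
# amplitude (here 0.3 m, hypothetical) is known but whose Mw is not.
obs_amp = 0.3
mw_apparent = (np.log10(obs_amp) - c0) / c1
```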

  4. 76 FR 69761 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ....S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey. ACTION: Notice of Meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake... Government. The Council shall advise the Director of the U.S. Geological Survey on proposed earthquake...

  5. An evaluation of the seismic-window theory for earthquake prediction.

    USGS Publications Warehouse

    McNutt, M.; Heaton, T.H.

    1981-01-01

    Reports studies designed to determine whether earthquakes in the San Francisco Bay area respond to a fortnightly fluctuation in tidal amplitude. It does not appear that the tide is capable of triggering earthquakes, and in particular the seismic window theory fails as a relevant method of earthquake prediction. -J.Clayton

  6. Moment magnitude, local magnitude and corner frequency of small earthquakes nucleating along a low angle normal fault in the Upper Tiber valley (Italy)

    NASA Astrophysics Data System (ADS)

    Munafo, I.; Malagnini, L.; Chiaraluce, L.; Valoroso, L.

    2015-12-01

    The relation between moment magnitude (MW) and local magnitude (ML) is still a debated issue (Bath, 1966, 1981; Ristau et al., 2003, 2005). Theoretical considerations and empirical observations show that, in the magnitude range between 3 and 5, MW and ML scale 1:1, whilst for smaller magnitudes this 1:1 scaling breaks down (Bethmann et al., 2011). To investigate this issue we analyzed the source parameters of about 1500 well-located small earthquakes (30,000 waveforms) that occurred in the Upper Tiber Valley (Northern Apennines) in the range -1.5 ≤ ML ≤ 3.8. Among these earthquakes are 300 events that repeatedly ruptured the same fault patch, generally twice within a short time interval (less than 24 hours; Chiaraluce et al., 2007). We use high-resolution short-period and broadband recordings acquired between 2010 and 2014 by 50 permanent seismic stations deployed to monitor the activity of a regional low angle normal fault (named the Alto Tiberina fault, ATF) in the framework of the Alto Tiberina Near Fault Observatory project (TABOO; Chiaraluce et al., 2014). Direct determination of MW is essential for this study, but the computation of MW for small earthquakes (MW < 3) is unfortunately not a routine procedure in seismology. We apply the contributions of source, site, and crustal attenuation computed for this area in order to obtain precise spectral corrections to be used in the calculation of small-earthquake spectral plateaus. The aim of this analysis is to obtain moment magnitudes of small events through a procedure that uses our previously calibrated crustal attenuation parameters (geometrical spreading g(r), quality factor Q(f), and the residual parameter k) to correct for path effects. We determine the MW-ML relationships in two selected fault zones (on-fault and fault-hanging-wall) of the ATF by an orthogonal regression analysis, providing a semi-automatic and robust procedure for moment magnitude determination within a
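    The orthogonal regression used for the MW-ML relationship treats errors in both magnitudes symmetrically, unlike ordinary least squares. A minimal sketch with the closed-form Deming estimator (equal error variances assumed; the 0.7/0.9 coefficients and noise levels are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic magnitude pairs: assume MW = 0.7*ML + 0.9 (illustrative
# coefficients, not the study's) with noise on BOTH magnitude scales,
# the situation in which orthogonal regression is preferable to
# ordinary least squares.
ml_true = rng.uniform(-1.5, 3.8, 400)
ml = ml_true + rng.normal(0, 0.1, 400)
mw = 0.7 * ml_true + 0.9 + rng.normal(0, 0.1, 400)

# Deming/orthogonal regression with equal error variances, closed form:
sxx = np.var(ml)
syy = np.var(mw)
sxy = np.cov(ml, mw, bias=True)[0, 1]
slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy**2)) / (2 * sxy)
intercept = mw.mean() - slope * ml.mean()
```

Ordinary least squares of MW on ML would bias the slope downward whenever ML itself carries measurement error, which is why the record's authors use the orthogonal variant.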

  7. Field survey of earthquake effects from the magnitude 4.0 southern Maine earthquake of October 16, 2012

    USGS Publications Warehouse

    Amy L. Radakovich,; Alex J. Fergusen,; Boatwright, John

    2016-06-02

    The magnitude 4.0 earthquake that occurred on October 16, 2012, near Hollis Center and Waterboro in southwestern Maine surprised and startled local residents but caused only minor damage. A two-person U.S. Geological Survey (USGS) team was sent to Maine to conduct an intensity survey and document the damage. The only damage we observed was the failure of a chimney and plaster cracks in two buildings in East and North Waterboro, 6 kilometers (km) west of the epicenter. We photographed the damage and interviewed residents to determine the intensity distribution in the epicentral area. The damage and shaking reports are consistent with a maximum Modified Mercalli Intensity (MMI) of 5–6 for an area 1–8 km west of the epicenter, slightly higher than the maximum Community Decimal Intensity (CDI) of 5 determined by the USGS “Did You Feel It?” Web site. The area of strong shaking in East Waterboro corresponds to updip rupture on a fault plane that dips steeply east. 

  8. Prediction of Earthquakes by Lunar Cycles

    NASA Astrophysics Data System (ADS)

    Rodriguez, G.

    2007-05-01

    Prediction of Earthquakes by Lunar Cycles. Author: Guillermo Rodriguez Rodriguez. Affiliation: geophysicist and astrophysicist, retired. I have presented this idea at many meetings of the EGS, UGS and IUGG (1995), from 1980 and 1982-83 onward, and at AGU 2002 (Washington) and 2003 (Nice). I use three levels of approximation in time. First, earthquakes happen on the same day of the year every 18 or 19 years (the Saros cycle), sometimes in the same place and sometimes in another very distant one; at other times of the year the cycle can be 14, 26 or 32 years, or multiples of 18.61 years, especially 55, 93, 150, 224 and 300. This gives the day of the year. Second, over the cycle of one lunation (counting days from the date of the new moon), great earthquakes happen at different intervals of days in successive lunations (approximately one month), as can be seen in the enclosed graphic. This gives the day of the month. Third, for each day I have found that approximately every 28 days the same hour and minute, the same longitude and the same latitude repeat for all earthquakes, including the small ones. This is very important because we could then propose the simple precaution of waiting in the streets or squares. Sometimes the cycles can be longer or shorter; this is my particular way of scientific method. As a consequence of the first and second principles we can look for correlations between years separated by cycles of the first type, for example 1984 and 2002 or 2003 and consecutive years, including 2007. For 30 years I have examined the dates; I sense the pattern but have not yet been able to express it in formal scientific terms.

  9. Generalized multidimensional earthquake frequency distributions consistent with Non-Extensive Statistical Physics: An appraisal of the universality in the interdependence of magnitude, interevent time and interevent distance

    NASA Astrophysics Data System (ADS)

    Tzanis, Andreas; Vallianatos, Philippos; Efstathiou, Angeliki

    2013-04-01

    It is well known that earthquake frequency is related to earthquake magnitude via a simple linear relationship of the form logN = a - bM, where N is the number of earthquakes in a specified time interval; this is the famous Gutenberg-Richter (G-R) law. The generally accepted interpretation of the G-R law is that it expresses the statistical behaviour of a fractal active tectonic grain (active faulting). The relationship between the constant b and the fractal dimension of the tectonic grain has been demonstrated in various ways. The story told by the G-R law is, nevertheless, incomplete. It is now accepted that the active tectonic grain comprises a critical complex system, although it has not yet been established whether it is stationary (Self-Organized Critical), evolutionary (Self-Organizing Critical), or a time-varying blend of both. At any rate, critical systems are characterized by complexity and strong interactions between near and distant neighbours. This, in turn, implies that the self-organization of earthquake occurrence should be manifested in certain statistical behaviour of its temporal and spatial dependence. A strong line of evidence suggests that the G-R law is a limiting case of a more general frequency-magnitude distribution, which is properly expressed in terms of Non-Extensive Statistical Physics (NESP) on the basis of the Tsallis entropy; this is a natural context, particularly suitable for the description of complex systems. A measure of temporal dependence in earthquake occurrence is the time lapsed between consecutive events above a magnitude threshold over a given area (interevent time). A corresponding measure of spatial dependence is the hypocentral distance between consecutive events above a magnitude threshold over a given area (interevent distance). The statistics of earthquake frequency vs. interevent time have been studied by several researchers and have been shown to comply with the predictions of the NESP formalism. There's also
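    The NESP generalization rests on the Tsallis q-exponential, which recovers the ordinary exponential, and hence the G-R law's exponential decay in magnitude, in the limit q → 1. A minimal numerical sketch of that limiting behaviour:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]_+^(1/(1-q)); exp(x) at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

# As q -> 1 the q-exponential reverts to the pure exponential decay of
# the G-R law (log10 N(>M) = a - b*M); for q > 1 it develops the
# heavier tail that NESP uses to generalize the distribution.
b = 1.0
M = np.linspace(0.0, 5.0, 6)
gr = np.exp(-b * np.log(10) * M)                  # pure G-R decay, a = 0
nesp_near1 = q_exp(-b * np.log(10) * M, q=1.0001)
max_rel_err = float(np.max(np.abs(nesp_near1 - gr) / gr))
```

For the same argument, q = 2 gives a visibly heavier tail than the exponential, which is the qualitative feature the generalized frequency-magnitude distribution exploits.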

  10. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.

    2016-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe, with 442 models under evaluation. The California testing center, started by SCEC on September 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. 
We describe the open-source CSEP software that is available to researchers as

  11. Material contrast does not predict earthquake rupture propagation direction

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    2005-01-01

    Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.

  12. A regional surface wave magnitude scale for the earthquakes of Russia's Far East

    NASA Astrophysics Data System (ADS)

    Chubarova, O. S.; Gusev, A. A.

    2017-01-01

    The modified scale Ms(20R) is developed for the magnitude classification of the earthquakes of Russia's Far East based on surface wave amplitudes at regional distances. It extends the applicability of the classical Gutenberg scale Ms(20) towards small epicentral distances (0.7°-20°). The magnitude is determined from the amplitude of the signal after it is bandpassed to extract the components with periods close to 20 s. The amplitude is measured either for the surface waves or, at fairly short distances of 0.7°-3°, for the inseparable wave group of surface and shear waves. The main difference of the Ms(20R) scale from the traditional Ms(BB) Soloviev-Vanek scale is its firm spectral anchoring. This approach practically eliminates the problem of the significant (up to -0.5) regional and station anomalies characteristic of the Ms(BB) scale in the conditions of the Far East. The absence of significant station and regional anomalies, as well as the strict spectral anchoring, makes the Ms(20R) scale advantageous for prompt decision making in tsunami warnings for the coasts of Russia's Far East.

  13. Magnitude Uncertainty and Ground Motion Simulations of the 1811-1812 New Madrid Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Graves, R. W.; Olsen, K. B.; Boyd, O. S.; Hartzell, S.; Ni, S.; Somerville, P. G.; Williams, R. A.; Zhong, J.

    2011-12-01

    We present a study of a set of three-dimensional earthquake simulation scenarios in the New Madrid Seismic Zone (NMSZ). This is a collaboration among three simulation groups with different numerical modeling approaches and computational capabilities. The study area covers a portion of the Central United States (~400,000 km²) centered on the New Madrid seismic zone, which includes several metropolitan areas such as Memphis, TN, and St. Louis, MO. We computed synthetic seismograms to a frequency of 1 Hz using a regional 3D velocity model (Ramirez-Guzman et al., 2010), two different kinematic source generation approaches (Graves et al., 2010; Liu et al., 2006), and one methodology in which sources were generated using dynamic rupture simulations (Olsen et al., 2009). The set of 21 hypothetical earthquakes included different magnitudes (Mw 7, 7.6 and 7.7) and epicenters for two faults associated with the seismicity trends in the NMSZ: the Axial (Cottonwood Grove) and Reelfoot faults. Broadband synthetic seismograms were generated by combining high-frequency synthetics computed in a one-dimensional velocity model with the low-frequency motions at a crossover frequency of 1 Hz. Our analysis indicates that about 3 to 6 million people living near the fault ruptures would experience Mercalli intensities from VI to VIII if events similar to those of the early nineteenth century occurred today. In addition, the analysis demonstrates the importance of 3D geologic structures, such as the Reelfoot Rift and the Mississippi Embayment, which can channel and focus the radiated wave energy, and of rupture directivity effects, which can strongly amplify motions in the forward direction of the ruptures. Both of these effects have a significant impact on the pattern and level of the simulated intensities, which suggests an increased uncertainty in the magnitude estimates of the 1811-1812 sequence based only on historic intensity reports. We conclude that additional constraints such as

  14. Global correlations between maximum magnitudes of subduction zone interface thrust earthquakes and physical parameters of subduction zones

    NASA Astrophysics Data System (ADS)

    Schellart, W. P.; Rawlinson, N.

    2013-12-01

    The maximum earthquake magnitude recorded for subduction zone plate boundaries varies considerably on Earth, with some subduction zone segments producing giant subduction zone thrust earthquakes (e.g. Chile, Alaska, Sumatra-Andaman, Japan) and others producing relatively small earthquakes (e.g. Mariana, Scotia). Here we show how such variability might depend on various subduction zone parameters. We present 24 physical parameters that characterize these subduction zones in terms of their geometry, kinematics, geology and dynamics. We have investigated correlations between these parameters and the maximum recorded moment magnitude (MW) for subduction zone segments in the period 1900-June 2012. The investigations were done for one dataset using a geological subduction zone segmentation (44 segments) and for two datasets (rupture zone dataset and epicenter dataset) using a 200 km segmentation (241 segments). All linear correlations for the rupture zone dataset and the epicenter dataset (|R| = 0.00-0.30) and for the geological dataset (|R| = 0.02-0.51) are negligible-low, indicating that even for the highest correlation the best-fit regression line can only explain 26% of the variance. A comparative investigation of the observed ranges of the physical parameters for subduction segments with MW > 8.5 and the observed ranges for all subduction segments gives more useful insight into the spatial distribution of giant subduction thrust earthquakes. For segments with MW > 8.5 distinct (narrow) ranges are observed for several parameters, most notably the trench-normal overriding plate deformation rate (vOPD⊥, i.e. the relative velocity between forearc and stable far-field backarc), trench-normal absolute trench rollback velocity (vT⊥), subduction partitioning ratio (vSP⊥/vS⊥, the fraction of the subduction velocity that is accommodated by subducting plate motion), subduction thrust dip angle (δST), subduction thrust curvature (CST), and trench curvature angle (

  15. Dependence of b-value on Depth, Co-Seismic Slip, and Time for Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Aiken, J. M.; Uchide, T.; Schorlemmer, D.

    2016-12-01

    Spatial and temporal variations in earthquake source parameters and seismicity will be key to understanding the state of faults, such as the applied stress and fault strength, and hence the potential for earthquake occurrence. Recent studies have shown that the b-value of the Gutenberg-Richter relationship decreases before large magnitude events over timescales of five years or more. In addition, large co-seismic slip values have been found to occur in extremely low b-value regions, e.g., for the 2011 Tohoku-oki earthquake [Nanjo et al., 2012]. A comprehensive understanding requires us to explore the relationship between depth, co-seismic slip of a large earthquake, time, and b-value for large magnitude events occurring in different tectonic environments (oceanic and inland). Here we present the relationship between co-seismic slip and b-values prior to several large magnitude events in Japan, e.g., the 2003 M8.3 Tokachi-oki earthquake near Hokkaido, using the Japan Meteorological Agency (JMA) earthquake catalog. We calculated the b-values on three-dimensionally distributed grids using the maximum likelihood estimation method and the reported JMA magnitude of completeness. We examined cross sections of depth, co-seismic slip, and time against the calculated b-values to investigate whether a common behavior exists prior to large events, comparing our results to the case of the 2011 Tohoku-oki earthquake, and consequently to study what causes the change in b-value.

  16. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2. Laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.

    2012-02-01

    The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.

  17. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Centre (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
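    The homogenization step these tools support can be sketched as fitting an empirical conversion on events common to two catalogues and applying it to the rest. The conversion coefficients and scatter below are synthetic placeholders, not values from any published model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Events present in both catalogues: native Ms and reference Mw.
# Synthetic data following an assumed linear conversion
# Mw = 0.67*Ms + 2.1 (illustrative, not a published relation)
# plus scatter.
ms_common = rng.uniform(4.0, 8.0, 300)
mw_common = 0.67 * ms_common + 2.1 + rng.normal(0, 0.15, 300)

# Build the empirical conversion from the common events...
slope, intercept = np.polyfit(ms_common, mw_common, 1)

# ...then apply it to events that carry only Ms, keeping the residual
# scatter as the uncertainty of each converted (proxy) Mw.
ms_only = np.array([5.2, 6.6, 7.4])
mw_proxy = slope * ms_only + intercept
sigma = float(np.std(mw_common - (slope * ms_common + intercept)))
```

In practice a general orthogonal regression is often preferred here, since both magnitude scales carry measurement error; the ordinary fit above is the simplest possible version.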

  18. A Study of Low-Frequency Earthquake Magnitudes in Northern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Chuang, L. Y.; Bostock, M. G.

    2015-12-01

    Tectonic tremor and low frequency earthquakes (LFE) have been extensively studied in recent years in northern Washington and southern Vancouver Island (VI). However, far less attention has been directed to northern VI, where the behavior of tremor and LFEs is less well documented. We investigate LFE properties in this latter region by assembling templates using data from the POLARIS-NVI and Sea-JADE experiments. The POLARIS-NVI experiment comprised 27 broadband seismometers arranged along two mutually perpendicular arms with an aperture of ~60 km centered near station WOS (lat. 50.16, lon. -126.57). It recorded two ETS events, in June 2006 and May 2007, each with duration less than a week. For these two episodes, we constructed 68 independent, high signal-to-noise ratio LFE templates representing spatially distinct asperities on the plate boundary in NVI, along with a catalogue of more than 30 thousand detections. A second data set is being prepared for the complementary 2014 Sea-JADE data set. The precisely located LFE templates represent simple direct P-waves and S-waves at many stations, thereby enabling magnitude estimation of individual detections. After correcting for radiation pattern, 1-D geometrical spreading, attenuation and free-surface magnification, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single LFE template. LFE magnitudes range up to 2.54 and, as in southern VI, are characterized by high b-values (b ~ 8). In addition, we will quantify LFE moment-duration scaling and compare with southern Vancouver Island, where LFE moments appear to be controlled by slip, largely independent of fault area.
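    The amplitude inversion described, solving jointly for path corrections and LFE magnitudes, can be sketched as a least-squares problem with one unknown per event and one per station. The sketch below is a toy dense version with invented sizes and noise; the real system is sparse and far larger:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy version of the amplitude inversion: each corrected log-amplitude
# observed at station s for detection e is modelled as
#     log A(e, s) = m_e + c_s,
# with m_e the event magnitude term and c_s a station/path correction.
# Stacking all observations for one template gives a large linear
# system (sparse at real scale; dense here for clarity).
n_events, n_stations = 40, 8
m_true = rng.uniform(0.5, 2.5, n_events)
c_true = rng.normal(0, 0.3, n_stations)
c_true -= c_true.mean()          # corrections defined to sum to zero

G = np.zeros((n_events * n_stations, n_events + n_stations))
d = np.zeros(n_events * n_stations)
for e in range(n_events):
    for s in range(n_stations):
        row = e * n_stations + s
        G[row, e] = 1.0                    # event magnitude column
        G[row, n_events + s] = 1.0         # station correction column
        d[row] = m_true[e] + c_true[s] + rng.normal(0, 0.05)

sol, *_ = np.linalg.lstsq(G, d, rcond=None)
# The system has a constant trade-off (add x to every m_e, subtract x
# from every c_s); resolve it by forcing corrections to zero mean.
shift = sol[n_events:].mean()
m_est = sol[:n_events] + shift
c_est = sol[n_events:] - shift
```

At the scale of tens of thousands of detections, the same structure would be assembled with `scipy.sparse` and solved with an iterative solver such as LSQR.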

  19. Stigma in science: the case of earthquake prediction.

    PubMed

    Joffe, Helene; Rossetto, Tiziana; Bradley, Caroline; O'Connor, Cliodhna

    2017-05-17

    This paper explores how earthquake scientists conceptualise earthquake prediction, particularly given the conviction of six earthquake scientists for manslaughter (subsequently overturned) on 22 October 2012 for having given inappropriate advice to the public prior to the L'Aquila earthquake of 6 April 2009. In the first study of its kind, semi-structured interviews were conducted with 17 earthquake scientists and the transcribed interviews were analysed thematically. The scientists primarily denigrated earthquake prediction, showing strong emotive responses and distancing themselves from earthquake 'prediction' in favour of 'forecasting'. Earthquake prediction was regarded as impossible and harmful. The stigmatisation of the subject is discussed in the light of research on boundary work and stigma in science. The evaluation reveals how mitigation becomes the more favoured endeavour, creating a normative environment that disadvantages those who continue to pursue earthquake prediction research. Recommendations are made for communication with the public on earthquake risk, with a focus on how scientists portray uncertainty. 2017 The Author(s). Disasters © Overseas Development Institute, 2017.

  20. Seismic response of the katmai volcanoes to the 6 December 1999 magnitude 7.0 Karluk Lake earthquake, Alaska

    USGS Publications Warehouse

    Power, J.A.; Moran, S.C.; McNutt, S.R.; Stihler, S.D.; Sanchez, J.J.

    2001-01-01

    A sudden increase in earthquake activity was observed beneath volcanoes in the Katmai area on the Alaska Peninsula immediately following the 6 December 1999 magnitude (Mw) 7.0 Karluk Lake earthquake beneath southern Kodiak Island, Alaska. The observed increase in earthquake activity consisted of small (ML < 1.3), shallow (Z < 5.0 km) events. These earthquakes were located beneath Mount Martin, Mount Mageik, Trident Volcano, and the Katmai caldera and began within the coda of the Karluk Lake mainshock. All of these earthquakes occurred in areas and magnitude ranges that are typical for the background seismicity observed in the Katmai area. Seismicity rates returned to background levels 8 to 13 hours after the Karluk Lake mainshock. The close temporal relationship with the Karluk Lake mainshock, the onset of activity within the mainshock coda, and the simultaneous increase beneath four separate volcanic centers all suggest these earthquakes were remotely triggered. Modeling of the Coulomb stress changes from the mainshock for optimally oriented faults suggests negligible change in static stress beneath the Katmai volcanoes. This result favors models that involve dynamic stresses as the mechanism for triggered seismicity at Katmai.

  1. Research on earthquake prediction from infrared cloud images

    NASA Astrophysics Data System (ADS)

    Fan, Jing; Chen, Zhong; Yan, Liang; Gong, Jing; Wang, Dong

    2015-12-01

    In recent years, large earthquakes have occurred frequently all over the world. In the face of these inevitable natural disasters, earthquake prediction is particularly important for avoiding further loss of life and property. Many achievements in the field of predicting earthquakes from remote sensing images have been obtained in the last few decades, but the traditional prediction methods cannot forecast the epicenter location accurately and automatically. In order to solve this problem, a new earthquake prediction method based on extracting the texture and emergence frequency of earthquake clouds is proposed in this paper. First, the infrared cloud images are enhanced. Second, a texture feature vector is extracted for each pixel. Those pixels are then classified and merged into several small suspected areas. Finally, the suspected areas are tracked to estimate the possible epicenter location. A retrospective experiment on the Ludian earthquake shows that this approach can forecast the epicenter feasibly and accurately.

  2. Current affairs in earthquake prediction in Japan

    NASA Astrophysics Data System (ADS)

    Uyeda, Seiya

    2015-12-01

    As of mid-2014, the main organizations of the earthquake (EQ hereafter) prediction program, including the Seismological Society of Japan (SSJ) and the MEXT Headquarters for EQ Research Promotion, hold the official position that they neither can nor want to make any short-term prediction. It is an extraordinary stance for responsible authorities when the nation, after the devastating 2011 M9 Tohoku EQ, most urgently needs whatever information may exist on forthcoming EQs. Japan's national project for EQ prediction started in 1965, but it has had no success. The main reason for this lack of success is the failure to capture precursors. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this stance has been further fortified by the 2011 M9 Tohoku mega-quake. This paper tries to explain how this situation came about and suggests that it may in fact be a legitimate one which should have come a long time ago. Actually, substantial positive changes are taking place now. Some promising signs are arising even from cooperation of researchers with private sectors, and there is a move to establish an "EQ Prediction Society of Japan". From now on, maintaining high scientific standards in EQ prediction will be of crucial importance.

  3. The dependence of peak horizontal acceleration on magnitude, distance, and site effects for small-magnitude earthquakes in California and eastern North America

    USGS Publications Warehouse

    Campbell, K.W.

    1989-01-01

    One hundred and ninety free-field accelerograms recorded on deep soil (>10 m deep) were used to study the near-source scaling characteristics of peak horizontal acceleration for 91 earthquakes (2.5 ≤ ML ≤ 5.0) located primarily in California. An analysis of residuals based on an additional 171 near-source accelerograms from 75 earthquakes indicated that accelerograms recorded in building basements sited on deep soil have 30 per cent lower accelerations, and that free-field accelerograms recorded on shallow soil (≤10 m deep) have 82 per cent higher accelerations than free-field accelerograms recorded on deep soil. An analysis of residuals based on 27 selected strong-motion recordings from 19 earthquakes in Eastern North America indicated that near-source accelerations associated with frequencies less than about 25 Hz are consistent with predictions based on attenuation relationships derived from California.

  4. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    SciTech Connect

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-06-20

    A new approach to determining magnitude from the displacement amplitude (A), epicentral distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale conventionally uses teleseismic surface waves with periods greater than 200 seconds, or teleseismic P waves with periods in the range of 10-60 seconds. In this research, a new approach has been developed to determine the displacement amplitude and duration of high-frequency radiation from near-source records. The duration of the high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process at near-source distances: the P wave mixes with other waves (the S wave) before the rupture duration runs out, so it is difficult to isolate the end of the P wave. Applying the method to 68 earthquakes recorded at station CISI, Garut, West Java, the following relationship is obtained: Mw = 0.78 log (A) + 0.83 log (Δ) + 0.69 log (t) + 6.46, with A (m), Δ (km) and t (seconds). The moment magnitudes from this new approach are quite reliable, and the faster processing makes them useful for early warning.
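
    Given the published coefficients, the regression is straightforward to apply. A minimal sketch (the function name and example reading are ours; units follow the abstract: A in metres, Δ in kilometres, t in seconds):

```python
import math

# Mw from the abstract's regression (coefficients as published there):
#   Mw = 0.78*log10(A) + 0.83*log10(D) + 0.69*log10(t) + 6.46
# A: maximum displacement amplitude (m), D: epicentral distance (km),
# t: duration of high-frequency radiation (s).
def moment_magnitude(amplitude_m, distance_km, duration_s):
    return (0.78 * math.log10(amplitude_m)
            + 0.83 * math.log10(distance_km)
            + 0.69 * math.log10(duration_s)
            + 6.46)

# Hypothetical reading: 0.1 mm displacement at 100 km with a 10 s duration
print(round(moment_magnitude(1e-4, 100.0, 10.0), 2))  # -> 5.69
```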

  5. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren Eda; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, from which earthquake parameters such as the peak ground acceleration expected in the focused area can be determined. In an earthquake-prone area, the identification of seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss accompanying an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), an area chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and is situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes with magnitudes (M) of at least 4.0 that occurred from January 1900 to December 2012 within a radius of 100 km centered on Bilecik are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
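
    The modeling idea can be sketched with a minimal two-state Poisson HMM. The forward algorithm below computes the log-likelihood of a sequence of annual earthquake counts; the states, rates and transition probabilities are illustrative made-up numbers, not values fitted in the paper:

```python
import math

# Minimal two-state Poisson hidden Markov model (illustrative sketch).
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def log_likelihood(counts, rates, trans, init):
    """Scaled forward algorithm: log P(annual counts | Poisson-HMM)."""
    n = len(rates)
    alpha = [init[s] * poisson_pmf(counts[0], rates[s]) for s in range(n)]
    loglik = 0.0
    for k in counts[1:]:
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [poisson_pmf(k, rates[s]) *
                 sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
    return loglik + math.log(sum(alpha))

rates = [2.0, 8.0]                  # "quiet" vs "active" mean annual counts
trans = [[0.9, 0.1], [0.2, 0.8]]    # state persistence probabilities
init = [0.5, 0.5]
counts = [1, 3, 2, 9, 7, 2, 1]      # toy annual counts of M >= 4 events
print(log_likelihood(counts, rates, trans, init))
```

    In a full application the rates and transition matrix would be estimated (e.g. by EM), and forecasts of next-year counts follow from the filtered state probabilities.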

  7. The magnitude of events following a strong earthquake: a pattern recognition approach applied to Italian seismicity

    NASA Astrophysics Data System (ADS)

    Gentili, Stefania; Di Giovambattista, Rita

    2016-04-01

    In this study, we propose an analysis of the earthquake clusters that occurred in Italy from 1980 to 2015. In particular, given a strong earthquake, we are interested in identifying statistical clues to forecast whether a subsequent strong earthquake will follow. We apply a pattern recognition approach to verify the possible precursors of a following strong earthquake. Part of the analysis is based on observation of the cluster during the first hours/days after the first large event. The features adopted include, among others, the number of earthquakes, the radiated energy and the equivalent source area. The other part of the analysis is based on the characteristics of the first strong earthquake, such as its magnitude, depth, focal mechanism and the tectonic position of the source zone. The location of the cluster within Italian territory is of particular interest. In order to characterize the precursors depending on the cluster type, we used decision trees as classifiers on each single precursor separately. The performance of the classification is tested by the leave-one-out method. The analysis is done using different time spans after the first strong earthquake, in order to simulate the increase of information available as time passes during the seismic clusters. Performance is assessed in terms of precision, recall and goodness of the single classifiers, and the ROC graph is shown.
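
    A decision tree on a single precursor, evaluated by leave-one-out, reduces in its simplest form to a one-feature decision stump. The sketch below is our own construction on toy data, not the study's features or catalog:

```python
# One-feature decision stump with leave-one-out cross-validation.
def best_stump(xs, ys):
    """Threshold on a single precursor that best separates the two classes."""
    best_thr, best_acc = None, -1.0
    for thr in sorted(set(xs)):
        acc = sum((x > thr) == y for x, y in zip(xs, ys)) / len(xs)
        acc = max(acc, 1.0 - acc)        # allow either polarity
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

def leave_one_out_accuracy(xs, ys):
    hits = 0
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        thr = best_stump(tx, ty)
        # polarity chosen on the training fold only
        pol = sum((x > thr) == y for x, y in zip(tx, ty)) >= len(tx) / 2
        hits += ((xs[i] > thr) == pol) == ys[i]
    return hits / len(xs)

# Hypothetical precursor: events in the first 24 h; True = strong quake followed
n_first_day = [5, 8, 30, 42, 7, 55, 3, 38]
followed = [False, False, True, True, False, True, False, True]
print(leave_one_out_accuracy(n_first_day, followed))  # -> 0.875
```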

  8. Multidimensional earthquake frequency distributions consistent with self-organization of complex systems: The interdependence of magnitude, interevent time and interevent distance

    NASA Astrophysics Data System (ADS)

    Tzanis, A.; Vallianatos, F.

    2012-04-01

    the G-R law predicts, but also to the interevent time and distance by means of well defined power-laws. We also demonstrate that interevent time and distance are not independent of each other, but are interrelated by means of well defined power-laws. We argue that these relationships are universal and valid for both local and regional tectonic grains and seismicity patterns. We further argue that the four-dimensional hypercube formed by the joint distribution of earthquake frequency, magnitude, interevent time and interevent distance comprises a generalized distribution of the G-R type which epitomizes the temporal and spatial interdependence of earthquake activity, consistent with expectation for a stationary or evolutionary critical system. Finally, we attempt to discuss the emerging generalized frequency distribution in terms of non-extensive statistical physics. Acknowledgments: This work was partly supported by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project "Integrated understanding of Seismicity, using innovative methodologies of Fracture Mechanics along with Earthquake and Non-Extensive Statistical Physics - Application to the geodynamic system of the Hellenic Arc - SEISMO FEAR HELLARC".

  9. Upper-plate controls on co-seismic slip in the 2011 magnitude 9.0 Tohoku-oki earthquake.

    PubMed

    Bassett, Dan; Sandwell, David T; Fialko, Yuri; Watts, Anthony B

    2016-03-03

    The March 2011 Tohoku-oki earthquake was only the second giant (moment magnitude Mw ≥ 9.0) earthquake to occur in the last 50 years and is the most recent to be recorded using modern geophysical techniques. Available data place high-resolution constraints on the kinematics of earthquake rupture, which have challenged prior knowledge about how much a fault can slip in a single earthquake and the seismic potential of a partially coupled megathrust interface. But it is not clear what physical or structural characteristics controlled either the rupture extent or the amplitude of slip in this earthquake. Here we use residual topography and gravity anomalies to constrain the geological structure of the overthrusting (upper) plate offshore northeast Japan. These data reveal an abrupt southwest-northeast-striking boundary in upper-plate structure, across which gravity modelling indicates a south-to-north increase in the density of rocks overlying the megathrust of 150-200 kilograms per cubic metre. We suggest that this boundary represents the offshore continuation of the Median Tectonic Line, which onshore juxtaposes geological terranes composed of granite batholiths (in the north) and accretionary complexes (in the south). The megathrust north of the Median Tectonic Line is interseismically locked, has a history of large earthquakes (18 with Mw > 7 since 1896) and produced peak slip exceeding 40 metres in the Tohoku-oki earthquake. In contrast, the megathrust south of this boundary has higher rates of interseismic creep, has not generated an earthquake with MJ > 7 (local magnitude estimated by the Japan Meteorological Agency) since 1923, and experienced relatively minor (if any) co-seismic slip in 2011. We propose that the structure and frictional properties of the overthrusting plate control megathrust coupling and seismogenic behaviour in northeast Japan.

  10. Relationship between isoseismal area and magnitude of historical earthquakes in Greece by a hybrid fuzzy neural network method

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-A.; Sokos, E.

    2012-01-01

    In this paper we suggest the use of diffusion neural networks (neural networks with intrinsic fuzzy logic abilities) to assess the relationship between isoseismal area and earthquake magnitude for the region of Greece. It is of particular importance to study historical earthquakes, for which we often have macroseismic information in the form of isoseisms, but the data are statistically too incomplete to assess magnitudes from an isoseismal area or to train conventional artificial neural networks for magnitude estimation. Fuzzy relationships are developed and used to train a feed-forward neural network with a back-propagation algorithm to obtain the final relationships. Seismic intensity data from 24 earthquakes in Greece have been used. Special attention is paid to the incompleteness and contradictory patterns in scanty historical earthquake records. The results show that the proposed processing model is very effective, better than classical artificial neural networks, since the magnitude-macroseismic intensity target function has a strong nonlinearity and in most cases the macroseismic datasets are very small.

  11. A test to evaluate the earthquake prediction algorithm, M8

    USGS Publications Warehouse

    Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.

    1992-01-01

    A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: 1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm.

  12. On the modified Mercalli intensities and magnitudes of the 1811-1812 New Madrid earthquakes

    USGS Publications Warehouse

    Hough, S.E.; Armbruster, J.G.; Seeber, L.; Hough, J.F.

    2000-01-01

    We reexamine original felt reports from the 1811-1812 New Madrid earthquakes and determine revised isoseismal maps for the three principal mainshocks. In many cases we interpret lower values than those assigned by earlier studies. In some cases the revisions result from an interpretation of original felt reports with an appreciation for site response issues. Additionally, earlier studies had assigned modified Mercalli intensity (MMI) values of V-VII to a substantial number of reports that we conclude do not describe damage commensurate with intensities this high. We investigate several approaches to contouring the MMI values using both analytical and subjective methods. For the first mainshock, at 02:15 LT on December 16, 1811, our preferred contouring yields M ≈ 7.2-7.3 using the area-moment regressions of Johnston [1996]. For the mainshocks at 08:00 LT on January 23, 1812, and 03:45 LT on February 7, 1812, we obtain M ≈ 7.0 and M ≈ 7.4-7.5, respectively. Our magnitude for the February mainshock is consistent with the established geometry of the Reelfoot fault, which all evidence suggests to have been the causative structure for this event. We note that the inference of lower magnitudes for the New Madrid events implies that site response plays a significant role in controlling seismic hazard at alluvial sites in the central and eastern United States. We also note that our results suggest that thrusting may have been the dominant mechanism of faulting associated with the 1811-1812 sequence.

  13. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain sets of 48 and 20 modified Mercalli intensity values for the two aftershocks, respectively. For the dawn aftershock, we infer an Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer an Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.

  14. An updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized Mw magnitudes

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Chen, Kuei-Pao; Tsai, Yi-Ben

    2016-03-01

    The main goal of this study was to develop an updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized Mw magnitudes that are compatible with the Harvard Mw. We hope that such a catalog of earthquakes will provide a fundamental database for definitive studies of the distribution of earthquakes in Taiwan as a function of space, time, and magnitude, as well as for realistic assessments of seismic hazards in Taiwan. In this study, for completeness and consistency, we start with a previously published catalog of earthquakes from 1900 to 2006 with homogenized Mw magnitudes. We update the earthquake data through 2014 and supplement the database with 188 additional events for the time period of 1900-1935 that were found in the literature. The additional data lowered the magnitude threshold of the catalog from Mw 5.5 to 5.0. The broadband-based Harvard Mw, United States Geological Survey (USGS) M, and Broadband Array in Taiwan for Seismology (BATS) Mw are preferred in this study. Accordingly, we use empirical relationships with the Harvard Mw to transform our old converted Mw values to new converted Mw values and to transform the original BATS Mw values to converted BATS Mw values. For individual events, the adopted Mw is chosen in the following order: Harvard Mw > USGS M > converted BATS Mw > new converted Mw. Finally, we discover that use of the adopted Mw removes a data gap at magnitudes greater than or equal to 5.0 in the original catalog during 1985-1991. The new catalog is now complete for Mw ≥ 5.0 and significantly improves the quality of data for definitive study of seismicity patterns, as well as for realistic assessment of seismic hazards in Taiwan.
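
    The adoption order can be expressed as a simple fall-through rule. A sketch (argument names are our assumptions, not the catalog's schema):

```python
# Fall-through magnitude adoption, per the order stated in the abstract:
#   Harvard Mw > USGS M > converted BATS Mw > new converted Mw.
def adopted_mw(harvard_mw=None, usgs_m=None, bats_mw=None, converted_mw=None):
    for m in (harvard_mw, usgs_m, bats_mw, converted_mw):
        if m is not None:
            return m
    raise ValueError("no magnitude estimate available for this event")

print(adopted_mw(usgs_m=6.2, converted_mw=6.0))  # -> 6.2 (Harvard Mw missing)
```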

  15. Time-predictable recurrence model for large earthquakes

    SciTech Connect

    Shimazaki, K.; Nakata, T.

    1980-04-01

    We present historical and geomorphological evidence of a regularity in earthquake recurrence at three different sites of plate convergence around the Japan arcs. The regularity shows that the larger an earthquake is, the longer is the following quiet period. In other words, the time interval between two successive large earthquakes is approximately proportional to the amount of coseismic displacement of the preceding earthquake and not of the following earthquake. The regularity enables us, in principle, to predict the approximate occurrence time of earthquakes. The data set includes 1) a historical document describing repeated measurements of water depth at Murotsu near the focal region of Nankaido earthquakes, 2) precise levelling and ¹⁴C dating of Holocene uplifted terraces in the southern Boso Peninsula facing the Sagami trough, and 3) similar geomorphological data on exposed Holocene coral reefs on Kikai Island along the Ryukyu arc.
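
    Under the time-predictable model, the quiet interval following an event is proportional to that event's coseismic displacement, with the proportionality set by the long-term loading rate. A minimal sketch (the slip amount, year and loading rate below are hypothetical numbers, not data from the paper):

```python
# Time-predictable recurrence: interval T after an event satisfies T = D / v,
# where D is that event's coseismic slip and v the long-term loading rate.
def next_event_year(event_year, coseismic_slip_m, slip_rate_m_per_yr):
    """Predicted occurrence year of the following earthquake."""
    return event_year + coseismic_slip_m / slip_rate_m_per_yr

# A 1.2 m slip event in 1854 with 1 cm/yr of loading
print(next_event_year(1854, 1.2, 0.01))  # around 1974
```

    Note that the model constrains only the time of the next event, not its size: a larger-slip event simply resets a proportionally longer quiet period.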

  16. User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes

    USGS Publications Warehouse

    Klein, Fred W.

    2002-01-01

    Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version and supersedes the earlier documents. It serves as a detailed user's guide to the current version running on Unix and VAX-Alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of

  17. Low frequency (<1Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

    We present the analysis of simulations at low frequency (<1 Hz) of historical and hypothetical earthquakes in Central Mexico, using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding of the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters that Mexico has faced in the last decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates of the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g. Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, and noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006), and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a

  18. Classification of magnitude 7 earthquakes which occurred after 1885 in Tokyo Metropolitan area

    NASA Astrophysics Data System (ADS)

    Ishibe, T.; Satake, K.; Shimazaki, K.; Nishiyama, A.

    2010-12-01

    The Tokyo Metropolitan area is situated in a tectonically complex region; both the Pacific (PAC) and Philippine Sea (PHS) plates are subducting beneath the Kanto region, from the east and south, respectively. As a result, various types of earthquakes occur in this region: shallow crustal earthquakes, intraplate (slab) earthquakes within the PHS, within the PAC, and interplate earthquakes between the continental plate and the PHS, and between the PHS and the PAC. Among these, the largest earthquakes are Kanto earthquakes (M~8) occurring between the continental plate and the PHS. The average recurrence interval is estimated to be 200-400 years (Earthq. Res. Comm., 2004), and hence, urgency of the next Kanto earthquake is thought to be low considering the lapse time (~87 yrs.) from the most recent Kanto earthquake in 1923. However, urgency of the other types of earthquakes with M~7 is high; Earthq. Res. Comm. (2004) calculated the probability of occurrence during the next 30 years as 70%, based on the fact that five M~7 earthquakes (i.e., the 1894 Meiji-Tokyo, 1895 and 1921 Ibaraki-Ken-Nanbu, 1922 Uraga channel and 1987 Chiba-Ken Toho-Oki earthquakes) occurred since 1885. However, the types of these earthquakes are not well known, especially for the 1894 Meiji-Tokyo and 1895 Ibaraki-Ken-Nanbu earthquakes, due to the low quality of data. Thus, it is important to classify these earthquakes into the above-described intraplate or interplate earthquakes and to estimate their occurrence frequency. Ishibe et al. (2009a, 2009b) compiled previous studies and data for these five earthquakes. In this study, we report the preliminary result of focal depth and mechanism for the 1895 and 1921 Ibaraki-Ken-Nanbu earthquakes. The epicenter of the 1895 Ibaraki-Ken-Nanbu earthquake (M 7.2; Utsu, 1979) is discussed by various studies (e.g., Usami, 1973; Ishibashi, 1975; Katsumata, 1975; Utsu, 1979). However, few studies have discussed the hypocentral depth. The hypocentral depth is estimated to be 75 ~ 85 km using S-P time at Tokyo

  19. Earthquake prediction: Criterion for a tilt anomaly

    SciTech Connect

    Buckley, C.P.; Kohlenberger, C.W.

    1980-07-10

    A current approach to the problem of defining and detecting anomalous tilt behavior is presented. To establish what is considered to be normal tilt behavior, we isolate systematic signals such as hydrologic, thermal, tidal, cultural, and equipment-related effects from the tilt data. The kinds of tilt signals which remain after rejection of the systematic signals we designate as residual tilt. Residual tilt consists of asystematic random noise and anomalous tilts. To affirm or deny the contention that an anomalous tilt is present in the data requires the formulation of a statistically valid judgment criterion. Our approach adopts the hypothesis that the random walk model is not significantly different from the residual tilt, which allows the application of standard statistical tests to the problem of detecting anomalous variations in random noise. In our study of the data analyzed so far, we find that the boundary for detectability is inversely frequency-dependent, and this limits the way in which anomalies can be treated. The fact that the magnitude of the detectable anomaly decreases as the tilt data span increases suggests that further criterion development is necessary and implies that longer anomalies will not be detected unless they have a correspondingly larger amplitude. From our studies of three earthquake-associated anomalies, this does not appear to be the case.
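    A minimal sketch of the detection idea described above: if residual tilt behaves as a random walk with per-sample increment standard deviation σ, an excursion over a window of n samples is flagged as anomalous only if it exceeds roughly z·σ·√n, so longer anomalies need correspondingly larger amplitudes, as the abstract notes. The threshold form and the z value are assumptions for illustration, not the authors' exact criterion:

```python
import math

def anomaly_flags(tilt, window, z=3.0):
    """Flag windows whose net tilt change exceeds what a random walk
    predicts: |x[i+n] - x[i]| > z * sigma * sqrt(n), where sigma is the
    sample std of one-step increments. Returns flagged start indices."""
    diffs = [b - a for a, b in zip(tilt, tilt[1:])]
    mean = sum(diffs) / len(diffs)
    sigma = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    bound = z * sigma * math.sqrt(window)
    return [i for i in range(len(tilt) - window)
            if abs(tilt[i + window] - tilt[i]) > bound]
```

    A steady ramp embedded in oscillatory noise is flagged because its net drift over a window outruns the √n random-walk envelope, while the noise itself stays inside it.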

  20. Gambling score in earthquake prediction analysis

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the parametrization of the GS and use the M8 prediction algorithm to illustrate the difficulties of the new approach in the analysis of prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts, and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.
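    One simple parametrization of a gambling score, in the spirit described above: each alarm is a bet against a reference model that assigns probability p0 to the target event occurring inside the alarm volume; a success earns (1 − p0)/p0 points and a failure loses 1, so hard-to-guess targets are weighted more heavily. This is an illustrative sketch, not Molchan and Romashkova's exact weighting scheme:

```python
def gambling_score(alarms):
    """alarms: iterable of (p0, hit) pairs, where p0 is the reference-model
    probability of a target event inside the alarm volume and hit says
    whether the alarm caught an event. Hits earn (1 - p0)/p0, misses lose 1."""
    return sum((1.0 - p0) / p0 if hit else -1.0 for p0, hit in alarms)

# A hit on a hard-to-guess target (p0 = 0.1) outweighs one false alarm.
score = gambling_score([(0.1, True), (0.5, False)])
```

    The abstract's point follows directly: the final score, and hence the apparent significance, hinges on how the p0 weights and the alarm partition are chosen.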

  1. Four Examples of Short-Term and Imminent Prediction of Earthquakes

    NASA Astrophysics Data System (ADS)

    Zeng, Zuoxun; Liu, Genshen; Wu, Dabin; Sibgatulin, Victor

    2014-05-01

    We present here four examples of short-term and imminent prediction of earthquakes in China last year: the Nima earthquake (Ms5.2), the Minxian earthquake (Ms6.6), the Nantou earthquake (Ms6.7), and the Dujiangyan earthquake (Ms4.1). Imminent prediction of the Nima earthquake (Ms5.2): Based on a comprehensive analysis of the prediction of Victor Sibgatulin using natural electromagnetic pulse anomalies, the prediction of Song Song and Song Kefu using observation of a precursory halo, and an observation of the locations of a degasification of the earth in Naqu, Tibet by Zeng Zuoxun himself, the first author made a prediction of an earthquake of around Ms 6 within 10 days in the area of the degasification point (31.5N, 89.0E) at 0:54 on May 8th, 2013. He supplied another degasification point (31N, 86E) for the epicenter prediction at 8:34 of the same day. At 18:54:30 on May 15th, 2013, an earthquake of Ms5.2 occurred in Nima County, Naqu, China. Imminent prediction of the Minxian earthquake (Ms6.6): At 7:45 on July 22nd, 2013, an earthquake of magnitude Ms6.6 occurred at the border between Minxian and Zhangxian of Dingxi City (34.5N, 104.2E), Gansu Province. We review the imminent prediction process and its basis for this earthquake using the fingerprint method. Anomalous component-time curves for 9 or 15 channels can be output from the SW monitor for earthquake precursors; these components include geomagnetism, geoelectricity, crust stresses, resonance, and crust inclination. When we compress the time axis, the output curves become different geometric images. The precursor images differ for earthquakes in different regions, and alike or similar images correspond to earthquakes in a certain region. According to seven years of observation of the precursor images and their corresponding earthquakes, we usually obtain the fingerprint 6 days before the corresponding earthquakes. The magnitude prediction requires a comparison between the amplitudes of the fingerprints from the same

  2. Spectral P-wave magnitudes, magnitude spectra and other source parameters for the 1990 southern Sudan and the 2005 Lake Tanganyika earthquakes

    NASA Astrophysics Data System (ADS)

    Moussa, Hesham Hussein Mohamed

    2008-10-01

    Teleseismic broadband P-wave seismograms from the May 1990 southern Sudan and the December 2005 Lake Tanganyika earthquakes, in the western branch of the East African Rift System, recorded at different azimuths, have been investigated on the basis of magnitude spectra. The two earthquakes are the largest shocks in the East African Rift System and its extension in southern Sudan. Focal mechanism solutions along with geological evidence suggest that the first event represents a complex style of deformation at the intersection of the northern branch of the western branch of the East African Rift and the Aswa Shear Zone, while the second represents the current tensional stress on the East African Rift. The maximum average spectral magnitude for the first event is determined to be 6.79 at 4 s period, compared to 6.33 at 4 s period for the second event. The other source parameters for the two earthquakes were also estimated. The first event had a seismic moment over four times that of the second. The two events radiated from patches of faults having radii of 13.05 and 7.85 km, respectively. The average displacement and stress drop are estimated to be 0.56 m and 1.65 MPa for the first event and 0.43 m and 2.20 MPa for the second. The source parameters that describe the inhomogeneity of the fault are also determined from the magnitude spectra; these additional parameters are complexity, asperity radius, displacement across the asperity, and ambient stress drop. Both events produced moderate rupture complexity. Compared to the second event, the first is characterized by relatively higher complexity, a low average stress drop, and a high ambient stress. A reasonable explanation for the variations in these parameters may be variation in the strength of the seismogenic fault, which provides the relations between the different source parameters. The values of the stress drops and ambient stresses estimated for both events indicate that these earthquakes are of interplate
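    The quoted stress drops and source radii can be related to seismic moment through the standard circular-crack (Eshelby) relation Δσ = (7/16)·M0/r³. The sketch below inverts this relation for the two events' quoted values; whether the author used exactly this relation is an assumption:

```python
def moment_from_stress_drop(dsigma_pa, radius_m):
    """Invert the Eshelby circular-crack relation
        delta_sigma = (7/16) * M0 / r**3
    for the seismic moment M0 (N*m)."""
    return 16.0 / 7.0 * dsigma_pa * radius_m ** 3

# Values quoted in the abstract: 1.65 MPa / 13.05 km and 2.20 MPa / 7.85 km.
m0_first = moment_from_stress_drop(1.65e6, 13050.0)
m0_second = moment_from_stress_drop(2.20e6, 7850.0)
ratio = m0_first / m0_second
```

    For comparison, the spectral-magnitude difference 6.79 − 6.33 implies a moment ratio of about 10^(1.5×0.46) ≈ 4.9, roughly consistent with the factor of ~3.4 implied by the quoted stress drops and radii.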

  3. Giant seismites and megablock uplift in the East African Rift: evidence for Late Pleistocene large magnitude earthquakes.

    PubMed

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M

    2015-01-01

    In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic 'megablock complex' that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m-wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions.

  4. Giant Seismites and Megablock Uplift in the East African Rift: Evidence for Late Pleistocene Large Magnitude Earthquakes

    PubMed Central

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M.

    2015-01-01

    In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic ‘megablock complex’ that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m-wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions. PMID:26042601

  5. Numerical Shake Prediction for Earthquake Early Warning: More Precise and Rapid Prediction even for Deviated Distribution of Ground Shaking of M6-class Earthquakes

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2015-12-01

    In many present EEW systems, the hypocenter and magnitude are determined quickly, and the strengths of ground motions are then predicted from hypocentral distance and magnitude using a ground motion prediction equation (GMPE), which usually leads to a prediction of concentric distribution. However, actual ground shaking is not always concentric, even when site amplification is corrected. At a common site, the strength of shaking may differ considerably among earthquakes even when their hypocentral distances and magnitudes are almost the same; in some cases PGA differs by more than a factor of 10, which leads to imprecise prediction in EEW. Recently, the Numerical Shake Prediction method was proposed (Hoshiba and Aoki, 2015), in which the present ongoing wavefield of ground shaking is estimated using a data assimilation technique, and the future wavefield is then predicted based on the physics of wave propagation. Information on hypocentral location and magnitude is not required in this method. Because the future is predicted from the present condition, it is possible to address the issue of non-concentric distribution: once a deviated distribution is actually observed in the ongoing wavefield, the future distribution is predicted accordingly to be non-concentric. We will present examples of M6-class earthquakes that occurred in central Japan, in which the strengths of shaking were observed to be distributed non-concentrically, and show their predictions using the Numerical Shake Prediction method. The deviated distribution may be explained by an inhomogeneous distribution of attenuation. Even without an attenuation structure, non-concentric distribution can be addressed to some extent once the deviation is observed in the ongoing wavefield; if an attenuation structure is introduced, we can predict it before actual observation. The information on attenuation structure leads to more precise and rapid prediction in the Numerical Shake Prediction method for EEW.
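    The core idea, predicting the future wavefield from the assimilated present wavefield rather than from hypocentral parameters, can be caricatured in one dimension: advect an outgoing amplitude field one grid cell per time step and apply attenuation. This toy sketch (all numbers hypothetical) is far simpler than the 2-D treatment of Hoshiba and Aoki (2015):

```python
def propagate(field, steps, decay=0.95):
    """Advect an outgoing 1-D amplitude field one grid cell per step
    toward larger distance, applying a per-step attenuation factor."""
    for _ in range(steps):
        field = [0.0] + [a * decay for a in field[:-1]]
    return field

now = [0.0, 4.0, 1.0, 0.0, 0.0]   # "assimilated" present amplitudes
future = propagate(now, 2)         # predicted field two steps ahead
```

    Note that no source location or magnitude enters the prediction: whatever asymmetry (e.g., from attenuation) is present in the assimilated field is carried forward automatically.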

  6. Preliminary Results on Earthquake Recurrence Intervals, Rupture Segmentation, and Potential Earthquake Moment Magnitudes along the Tahoe-Sierra Frontal Fault Zone, Lake Tahoe, California

    NASA Astrophysics Data System (ADS)

    Howle, J.; Bawden, G. W.; Schweickert, R. A.; Hunter, L. E.; Rose, R.

    2012-12-01

    Utilizing high-resolution bare-earth LiDAR topography, field observations, and the earlier results of Howle et al. (2012), we estimate latest Pleistocene/Holocene earthquake-recurrence intervals, propose scenarios for earthquake-rupture segmentation, and estimate potential earthquake moment magnitudes for the Tahoe-Sierra frontal fault zone (TSFFZ), west of Lake Tahoe, California. We have developed a new technique to estimate the vertical separation for the most recent and the previous ground-rupturing earthquakes at five sites along the Echo Peak and Mt. Tallac segments of the TSFFZ. These sites exhibit fault scarps with two bevels separated by an inflection point (compound fault scarps), indicating that the cumulative vertical separation (VS) across the scarp resulted from two events. This technique, modified from the modeling methods of Howle et al. (2012), uses the far-field plunge of the best-fit footwall vector and the fault-scarp morphology from high-resolution LiDAR profiles to estimate the per-event VS. From these data, we conclude that the adjacent and overlapping Echo Peak and Mt. Tallac segments have ruptured coseismically twice during the Holocene. The right-stepping, en echelon range-front segments of the TSFFZ show progressively greater VS rates and shorter earthquake-recurrence intervals from southeast to northwest. Our preliminary estimates suggest latest Pleistocene/Holocene earthquake-recurrence intervals of 4.8±0.9×10³ years for a coseismic rupture of the Echo Peak and Mt. Tallac segments, located at the southeastern end of the TSFFZ. For the Rubicon Peak segment, northwest of the Echo Peak and Mt. Tallac segments, our preliminary estimate of the maximum earthquake-recurrence interval is 2.8±1.0×10³ years, based on data from two sites. The correspondence between high VS rates and short recurrence intervals suggests that earthquake sequences along the TSFFZ may initiate in the northwest part of the zone and then occur to the southeast with a lower

  7. Advance Prediction of the March 11, 2011 Great East Japan Earthquake: A Missed Opportunity for Disaster Preparedness

    NASA Astrophysics Data System (ADS)

    Davis, C. A.; Keilis-Borok, V. I.; Kossobokov, V. G.; Soloviev, A.

    2012-12-01

    There was a missed opportunity to implement important disaster preparedness measures following an earthquake prediction that was announced as an alarm in mid-2001. This intermediate-term middle-range prediction initiated a chain of alarms that successfully detected the time, region, and magnitude range of the magnitude 9.0 March 11, 2011 Great East Japan Earthquake. The prediction chain was made using an algorithm called M8 and is the latest of many predictions tested worldwide for more than 25 years, the results of which show at least a 70% success rate. The earthquake detection could have been utilized to implement measures and improve earthquake preparedness in advance; unfortunately this was not done, in part due to the prediction's limited distribution and the failure to apply existing methods for using intermediate-term predictions to make decisions about taking action. The resulting earthquake and induced tsunami caused tremendous devastation in northeast Japan. Methods that were known in advance of the prediction, and further advanced during the prediction timeframe, are presented in a scenario describing how the 2001 prediction might have been utilized to reduce significant damage, including damage to the Fukushima nuclear power plant, and to show that prudent, cost-effective actions can be taken when the prediction certainty is known, even if it is not high. The purpose of this presentation is to show how prediction information can be strategically used to enhance disaster preparedness and reduce future impacts from the world's largest earthquakes.

  8. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck the Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save life in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  9. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 1. Repeating earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate H.; Uchida, Naoki

    2012-02-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
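    The competing models are easy to state concretely. Under a fixed-recurrence model, the next event time is the last event time plus the mean inter-event interval; under the time-predictable model, the interval after an event is the time needed for tectonic loading to recover that event's slip. A minimal sketch, with hypothetical event times, slips, and loading rate:

```python
def next_time_fixed_recurrence(times):
    """Fixed-recurrence model: next event = last event + mean interval."""
    intervals = [b - a for a, b in zip(times, times[1:])]
    return times[-1] + sum(intervals) / len(intervals)

def next_time_time_predictable(times, slips, slip_rate):
    """Time-predictable model: the interval after an event is the time
    needed for loading at slip_rate to recover that event's slip."""
    return times[-1] + slips[-1] / slip_rate

# Hypothetical sequence: events at years 0, 10, 20 with slips of
# 1, 1, and 2 m, and a loading rate of 0.1 m/yr.
t_fixed = next_time_fixed_recurrence([0.0, 10.0, 20.0])
t_tp = next_time_time_predictable([0.0, 10.0, 20.0], [1.0, 1.0, 2.0], 0.1)
```

    For highly regular repeating earthquakes the two predictions nearly coincide, which is why the abstract's event-by-event comparison is needed to distinguish them.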

  10. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  11. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  12. Dynamic triggering of low magnitude earthquakes in the Middle American Subduction Zone

    NASA Astrophysics Data System (ADS)

    Escudero, C. R.; Velasco, A. A.

    2010-12-01

    We analyze global and Middle American Subduction Zone (MASZ) seismicity from 1998 to 2008 to quantify the effects of transient stresses at teleseismic distances. We use the Bulletin of the International Seismological Centre Catalog (ISCCD) published by the Incorporated Research Institutions for Seismology (IRIS). To identify MASZ seismicity changes due to distant, large (Mw > 7) earthquakes, we first identify local earthquakes that occurred before and after the mainshocks. We then group the local earthquakes within a cluster radius of between 75 and 200 km. We obtain statistics based on characteristics of both the mainshocks and the local earthquake clusters, such as local cluster-mainshock azimuth, mainshock focal mechanism, and the location of local earthquake clusters within the MASZ. Due to lateral variations of dip along the subducted oceanic plate, we divide the Mexican subduction zone into four segments. We then apply the Paired Samples Statistical Test (PSST) to the sorted data to identify an increment, a decrement, or neither in the local seismicity associated with distant large earthquakes. We identify dynamic triggering in all MASZ segments produced by large earthquakes arriving from specific azimuths, as well as a decrease in some cases. We find no dependence of seismicity changes on the mainshock focal mechanism.
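    The PSST step amounts to a paired comparison of per-cluster event counts before and after each mainshock. A stdlib-only sketch of the paired-samples t statistic (an assumption about the specific paired test; the function name and data are illustrative):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic on per-cluster event counts before and
    after each mainshock: t = mean(d) / (sd(d) / sqrt(n)), d = after - before."""
    d = [a - b for b, a in zip(before, after)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical counts in four clusters before/after a distant mainshock.
t_stat = paired_t([1, 2, 3, 4], [3, 3, 5, 5])
```

    A large positive t suggests an increment in local seismicity, a large negative t a decrement, and a small |t| neither, matching the three outcomes described in the abstract.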

  13. Analysis of Italian Earthquake catalogs in the context of intermediate-term prediction problem

    NASA Astrophysics Data System (ADS)

    Romashkova, Leontina; Peresan, Antonella

    2013-06-01

    We perform a comparative analysis of the regional and global earthquake catalogs currently available for the territory of Italy. We consider: (a) the instrumental seismic catalogs provided by the Istituto Nazionale di Geofisica e Vulcanologia, Roma (INGV) for the earthquake forecasting experiment in Italy within the Collaboratory for the Study of Earthquake Predictability (CSEP); (b) the Global Hypocenters' Data provided by the USGS/NEIC, currently used in the real-time earthquake prediction experiment by the CN and M8S algorithms in Italy; and (c) the seismological Bulletin provided by the International Seismological Centre (ISC). We discuss the advantages and shortcomings of these catalogs in the context of the intermediate-term middle-range earthquake prediction problem in Italy, including the possibility of their combined or integrated use. Magnitude errors in a catalog can distort the statistics of success-to-failure scoring and eventually falsify testing results; therefore, the analysis of systematic and random errors in magnitude presented in the Appendices can be of significance in its own right.

  14. Magnitude-based discrimination of man-made seismic events from naturally occurring earthquakes in Utah, USA

    NASA Astrophysics Data System (ADS)

    Koper, Keith D.; Pechmann, James C.; Burlacu, Relu; Pankow, Kristine L.; Stein, Jared; Hale, J. Mark; Roberson, Paul; McCarter, Michael K.

    2016-10-01

    We investigate using the difference between local (ML) and coda/duration (MC) magnitude to discriminate man-made seismic events from naturally occurring tectonic earthquakes in and around Utah. For 6846 well-located earthquakes in the Utah region, we find that ML-MC is on average 0.44 magnitude units smaller for mining-induced seismicity (MIS) than for tectonic seismicity (TS). Our interpretation of this observation is that MIS occurs within near-surface low-velocity layers that act as a waveguide and preferentially increase coda duration relative to peak amplitude, while the vast majority of TS occurs beneath the near-surface waveguide. A second data set of 3723 confirmed or probable explosions in the Utah region also has significantly lower ML-MC values than TS, likely for the same reason as the MIS. These observations suggest that ML-MC is useful as a depth indicator and could discriminate small explosions and mining-induced earthquakes from deeper, naturally occurring earthquakes at local-to-regional distances.
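    The proposed discriminant reduces to thresholding ML − MC: shallow man-made events have systematically lower values. A toy sketch; the threshold here is a hypothetical midpoint chosen for illustration, not a calibrated value from the study:

```python
def classify(ml, mc, threshold=-0.22):
    """Toy discriminant: events with ML - MC below the threshold are
    labeled shallow/man-made, others tectonic. The threshold is a
    hypothetical midpoint (the paper reports MIS averaging 0.44 units
    lower than tectonic seismicity, not this specific cutoff)."""
    return "man-made" if ml - mc < threshold else "tectonic"
```

    In practice the two populations overlap, so such a single-number cutoff would be a screening aid rather than a definitive classifier.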

  15. Estimation of the Demand for Hospital Care After a Possible High-Magnitude Earthquake in the City of Lima, Peru.

    PubMed

    Bambarén, Celso; Uyen, Angela; Rodriguez, Miguel

    2017-02-01

    A model prepared by the National Civil Defense (INDECI; Lima, Peru) estimated that an earthquake with a magnitude of 8.0 Mw off the central coast of Peru would result in 51,019 deaths and 686,105 injured in districts of Metropolitan Lima and Callao. Using this information as a base, a study was designed to determine the characteristics of the demand for treatment in public hospitals and to estimate gaps in care in the hours immediately after such an event. A probabilistic model was designed that included the following variables: demand for hospital care; time of arrival at the hospitals; type of medical treatment; reason for hospital admission; and the need for specialized care such as hemodialysis, blood transfusions, and surgical procedures. The values for these variables were obtained through a literature search of the MEDLINE database, the Cochrane and SciELO libraries, and Google Scholar for information on earthquakes of over magnitude 6.0 on the moment magnitude scale during the last 30 years. If a high-magnitude earthquake were to occur in Lima, it was estimated that between 23,328 and 178,387 injured would go to hospitals, of which between 4,666 and 121,303 would require inpatient care, while between 18,662 and 57,084 could be treated as outpatients. It was estimated that there would be an average of 8,768 cases of crush syndrome and 54,217 cases of other health problems, and that enough blood would be required for 8,761 wounded in the first 24 hours. Furthermore, a deficit of hospital beds and operating theaters was expected due to the high demand. Sudden and violent disasters such as earthquakes represent significant challenges for health systems and services. This study shows the deficit of preparation and capacity to respond to a possible high-magnitude earthquake, and that there are not enough resources to face mega-disasters, especially in large cities.
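    A probabilistic demand model of this kind can be sketched as a Monte Carlo draw over the quoted ranges. The uniform distributions and the inpatient share below are illustrative assumptions derived only from the ranges in the abstract, not the study's actual model:

```python
import random

def simulate_demand(n_runs=10000, seed=1):
    """Monte Carlo sketch: draw total injured uniformly within the range
    quoted in the abstract (23,328-178,387) and an inpatient share within
    roughly the implied 0.2-0.68 range. Both distributions are
    illustrative assumptions, not the study's."""
    random.seed(seed)
    results = []
    for _ in range(n_runs):
        injured = random.uniform(23328, 178387)
        inpatient = injured * random.uniform(0.2, 0.68)
        results.append((injured, inpatient))
    return results
```

    Summarizing many such runs (means, percentiles) yields capacity gaps of the kind the study reports, e.g. bed and operating-theater deficits.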

  16. Geotechnical effects of the 2015 magnitude 7.8 Gorkha, Nepal, earthquake and aftershocks

    USGS Publications Warehouse

    Moss, Robb E. S.; Thompson, Eric; Kieffer, D Scott; Tiwari, Binod; Hashash, Youssef M A; Acharya, Indra; Adhikari, Basanta; Asimaki, Domniki; Clahan, Kevin B.; Collins, Brian D.; Dahal, Sachindra; Jibson, Randall W.; Khadka, Diwakar; Macdonald, Amy; Madugo, Chris L M; Mason, H Benjamin; Pehlivan, Menzer; Rayamajhi, Deepak; Uprety, Sital

    2015-01-01

    This article summarizes the geotechnical effects of the 25 April 2015 M 7.8 Gorkha, Nepal, earthquake and aftershocks, as documented by a reconnaissance team that undertook a broad engineering and scientific assessment of the damage and collected perishable data for future analysis. Brief descriptions are provided of ground shaking, surface fault rupture, landsliding, soil failure, and infrastructure performance. The goal of this reconnaissance effort, led by Geotechnical Extreme Events Reconnaissance, is to learn from earthquakes and mitigate hazards in future earthquakes.

  17. Predictability of Great Earthquakes: The 25 April 2015 M7.9 Gorkha (Nepal)

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2015-12-01

    Understanding of the seismic process in terms of non-linear dynamics of a hierarchical system of blocks-and-faults and deterministic chaos has already led to reproducible intermediate-term middle-range prediction of great and significant earthquakes. The technique, based on monitoring characteristics of seismic activity in an area proportional to the source size of the incipient earthquake, is confirmed at a confidence level above 99% by the statistics of Global Testing in forward application from 1992 to the present. The semi-annual predictions determined for the next half-year by the M8 algorithm, aimed (i) at magnitude 8+ earthquakes in 262 circles of investigation (CIs), each of 667-km radius, and (ii) at magnitude 7.5+ earthquakes in 180 CIs, each of 427-km radius, are communicated each January and July to the Global Test Observers (about 150 today). The pre-fixed locations of the CIs cover all seismic regions where the M8 algorithm could run in its original version, which requires an annual activity rate of 16 or more main shocks. According to the predictions released in January 2015 for the first half of 2015, the 25 April 2015 Nepal MwGCMT = 7.9 earthquake falls outside the Test area for M7.5+, while its epicenter is within the accuracy limits of the alarm area for M8.0+ that spread along 1300 km of the Himalayas. We note that (i) the earthquake confirms the identification of areas prone to strong earthquakes in the Himalayas by pattern recognition (Bhatia et al. 1992) and (ii) it would have been predicted by the modified version of the M8 algorithm aimed at M7.5+. The modified version is adjusted to a low level of earthquake detection, about 10 main shocks per year, and was tested successfully by Mojarab et al. (2015) in application to the recent earthquakes in Eastern Anatolia (the 23 October 2011 M7.3 Van earthquake) and the Iranian Plateau (the 16 April 2013 M7.7 Saravan and the 24 September 2013 M7.7 Awaran earthquakes).
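Whether an epicenter falls inside a pre-fixed circle of investigation (CI) of 667-km or 427-km radius reduces to a great-circle distance test. A minimal sketch using the haversine formula; the function name and mean Earth radius are illustrative details, not part of the M8 algorithm itself:

```python
import math

def in_circle_of_investigation(epi, center, radius_km, R=6371.0):
    """Check whether an epicenter (lat, lon in degrees) lies inside a
    circle of investigation of the given radius, using the haversine
    great-circle distance. R is the mean Earth radius in km."""
    lat1, lon1 = map(math.radians, epi)
    lat2, lon2 = map(math.radians, center)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(h))  # great-circle distance in km
    return dist <= radius_km
```

A CI centered on the epicenter itself trivially contains it, while a point a quarter of the globe away falls outside any 667-km CI.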

  18. Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.

    2013-01-01

    Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible mechanism for generating rare moderate-magnitude VTs at Kīlauea: reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.

  19. Earthquake prediction: the interaction of public policy and science.

    PubMed

    Jones, L M

    1996-04-30

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.

  20. Earthquake prediction: the interaction of public policy and science.

    PubMed Central

    Jones, L M

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656

  1. Earthquake prediction: The interaction of public policy and science

    USGS Publications Warehouse

    Jones, L.M.

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.

  2. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Rizza, M.; Ritz, J.-F.; Braucher, R.; Vassallo, R.; Prentice, C.; Mahan, S.; McGill, S.; Chauvet, A.; Marco, S.; Todbileg, M.; Demberel, S.; Bourles, D.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to East, these segments are, respectively, the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans (particularly well preserved in the arid environment of the Gobi region) allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr-1 along the WIB and EIB segments and ~0.5 mm yr-1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with a characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78-7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ~2500-5200 yr for past

  3. Earthquake prediction on boundaries of the Arabian Plate: premonitory chains of small earthquakes

    NASA Astrophysics Data System (ADS)

    Yaniv, M.; Agnon, A.; Shebalin, P.

    2009-12-01

    The RTP method is a probabilistic prediction method for strong earthquakes (Keilis-Borok et al., 2004). Based on simple pattern recognition algorithms and tuned on historical seismic catalogs, RTP has been running as a prediction-in-advance experiment since 1997. We present a similar system aimed at improving the algorithm and tuning it to regional catalogs, focusing on the Arabian Plate. RTP is based on recognition of "earthquake chains", microseismic patterns that capture a rise in activity and in correlation range. A chain is defined as a closed set of "neighbor events" with epicenters and times of occurrence separated by less than a spatial parameter R0 and a temporal parameter τ, respectively. The seismic catalog can be viewed as a non-directional graph, with earthquakes as vertices, neighbor pairs as edges, and chains as connected components of the graph. Various algorithms were tried, based on different concepts: some use graph-theory concepts, others focus on the data structure in the catalog. All algorithms aim at recognizing neighboring pairs of events and combining the pairs into chains. They rely on a number of parameters: the minimum length for a valid chain, L0; weights for the spatial and temporal thresholds; the target magnitude (the minimum magnitude we aim to predict); and the cutoff value (the minimum magnitude to be taken into account). The output of an algorithm is a set of chains, filtered for chains longer than L0. The 2D parameter space was mapped: for every pair of R0 and τ, three characteristics were calculated, namely the number of chains found, the mean number of events in a chain, and the mean size (maximum distance between events) of the chains. Each of these is plotted as a surface, showing the dependence on the parameters R0 and τ. The most recent version of the algorithm was run on the NEIC catalog. It recognizes three chains longer than 15 events, with target events, shown in the figure. In the GII catalog only two chains are found. Both start with a
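The chain construction described above (neighbor pairs within R0 and τ, merged into connected components and filtered by L0) can be sketched with a union-find structure. This is a simplified illustration: planar Euclidean distance and the function name are assumptions, since a real catalog would use epicentral great-circle distances:

```python
from itertools import combinations

def find_chains(events, R0, tau, L0):
    """Group events into 'earthquake chains': connected components of the
    graph whose edges link events separated by less than R0 in space
    (simple 2-D Euclidean distance here) and tau in time.
    events: list of (x, y, t) tuples. Returns chains with >= L0 events."""
    n = len(events)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Link every "neighbor pair" of events.
    for i, j in combinations(range(n), 2):
        (xi, yi, ti), (xj, yj, tj) = events[i], events[j]
        if abs(ti - tj) < tau and ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 < R0:
            parent[find(i)] = find(j)

    # Collect components and keep only chains of length >= L0.
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= L0]
```

Note that two events can belong to one chain without being neighbors themselves, as long as a path of neighbor pairs connects them; that is exactly the connected-component definition in the abstract.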

  4. Multiple asperity model for earthquake prediction

    USGS Publications Warehouse

    Wyss, M.; Johnston, A.C.; Klein, F.W.

    1981-01-01

    Large earthquakes often occur as multiple ruptures reflecting strong variations of stress level along faults. The dense instrument networks with which the volcano Kilauea is monitored provided detailed data on changes of seismic velocity, strain accumulation and earthquake occurrence rate before the 1975 magnitude 7.2 Hawaii earthquake. During the ~4 yr of preparation time, the mainshock source volume had separated into crustal volumes of high stress levels embedded in a larger low-stress volume, showing respectively high- and low-stress precursory anomalies. © 1981 Nature Publishing Group.

  5. Incorporating Love- and Rayleigh-wave magnitudes, unequal earthquake and explosion variance assumptions and interstation complexity for improved event screening

    SciTech Connect

    Anderson, Dale N; Bonner, Jessie L; Stroujkova, Anastasia; Shumway, Robert

    2009-01-01

    Our objective is to improve seismic event screening using the properties of surface waves. We are accomplishing this through (1) the development of a Love-wave magnitude formula that is complementary to the Russell (2006) formula for Rayleigh waves and (2) quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. We have applied the M{sub s} (VMAX) analysis (Bonner et al., 2006) using both Love and Rayleigh waves to events in the Middle East and the Korean Peninsula. For the Middle East dataset, consisting of approximately 100 events, the Love M{sub s} (VMAX) is greater than the Rayleigh M{sub s} (VMAX) estimated for individual stations for the majority of the events and azimuths, with the exception of the measurements for the smaller events from European stations to the northeast. It is unclear whether these smaller events suffer from magnitude bias for the Love waves or whether the paths, which include the Caspian and Mediterranean, have variable attenuation for Love and Rayleigh waves. For the Korean Peninsula, we have estimated Rayleigh- and Love-wave magnitudes for 31 earthquakes and two nuclear explosions, including the 25 May 2009 event. For 25 of the earthquakes, the network-averaged Love-wave magnitude is larger than the Rayleigh-wave estimate. For the 2009 nuclear explosion, the Love-wave M{sub s} (VMAX) was 3.1 while the Rayleigh-wave magnitude was 3.6. We are also exploiting the observation that M{sub s} variances differ significantly between earthquake and explosion populations. We have considered two possible methods for incorporating unequal variances into the discrimination problem and compared the performance of various approaches on a population of 73 western United States earthquakes and 131 Nevada Test Site explosions. The approach proposes replacing the M{sub s} component by M{sub s} + a*{sigma}, where {sigma} denotes the interstation standard deviation obtained from the
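The modified screening statistic sketched at the end of the abstract, Ms replaced by Ms + a·σ, can be illustrated as follows. The function name and the default weight a are hypothetical; the abstract does not give a calibrated value:

```python
import math

def ms_screening_statistic(station_ms, a=1.0):
    """Event-screening statistic of the form Ms + a*sigma, where Ms is
    the network-averaged surface-wave magnitude and sigma is the
    interstation (sample) standard deviation of single-station Ms
    estimates. The weight 'a' is a tunable parameter."""
    n = len(station_ms)
    mean = sum(station_ms) / n
    sigma = math.sqrt(sum((m - mean) ** 2 for m in station_ms) / (n - 1))
    return mean + a * sigma
```

The intuition is that explosion populations show a different interstation scatter than earthquakes, so folding σ into the magnitude term shifts the two populations apart in the discriminant space.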

  6. A Magnitude 7.1 Earthquake in the Tacoma Fault Zone-A Plausible Scenario for the Southern Puget Sound Region, Washington

    USGS Publications Warehouse

    Gomberg, Joan; Sherrod, Brian; Weaver, Craig; Frankel, Art

    2010-01-01

    The U.S. Geological Survey and cooperating scientists have recently assessed the effects of a magnitude 7.1 earthquake on the Tacoma Fault Zone in Pierce County, Washington. A quake of comparable magnitude struck the southern Puget Sound region about 1,100 years ago, and similar earthquakes are almost certain to occur in the future. The region is now home to hundreds of thousands of people, who would be at risk from the shaking, liquefaction, landsliding, and tsunamis caused by such an earthquake. The modeled effects of this scenario earthquake will help emergency planners and residents of the region prepare for future quakes.

  7. Multiscale approach to the predictability of earthquakes and of synthetic SOC sequences

    NASA Astrophysics Data System (ADS)

    Peresan, A.; Panza, G. F.

    2003-04-01

    The power-law scaling expressed by the Gutenberg-Richter (GR) law is the main argument in favour of the Self-Organised Criticality (SOC) of seismic phenomena. Nevertheless the limits of validity of the GR law and the phenomenology reproduced by the SOC models, as well as their consequences for earthquake predictability, still remain quite undefined. According to the Multiscale Seismicity (MS) model, the GR law describes adequately only the ensemble of earthquakes that are geometrically small with respect to the dimensions of the analysed region. The MS model and its implications for intermediate-term medium-range earthquake predictions are thus examined, considering both the seismicity observed in the Italian territory and the synthetic sequences of events generated by a SOC model. The predictability of the large events is evaluated by means of the algorithms CN and M8, based on a quantitative analysis of the seismic flow within a delimited region, which allow for the prediction of the earthquakes with magnitude greater than a fixed threshold Mo. Considering the application of CN and M8 to the Italian territory, we show that, in agreement with the MS model, these algorithms make use of the information carried by small and moderate earthquakes, following the GR law, to predict the strong earthquakes, which are infrequent and often arbitrarily considered characteristic events inside the regions delimited for prediction purposes. Similarly, the application of the algorithm CN for the prediction of the largest events in the synthetic SOC sequences, indicates that a certain predictability can be attained, when the MS model is taken into account. These results suggest that the similarity between the seismic flow and the SOC sequences goes beyond the average features of scale-invariance. 
In fact, while the GR law describes an average feature of seismicity, the CN algorithm checks for deviations from that trend, which may characterise the sequence of events before the
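The Gutenberg-Richter law discussed above, log10 N(>=M) = a - b*M, is commonly fitted with Aki's (1965) maximum-likelihood estimator of the b-value. A minimal sketch for a catalog complete above m_min (this illustrates the GR law itself, not the CN or M8 algorithms):

```python
import math

def gr_b_value(mags, m_min):
    """Maximum-likelihood b-value of the Gutenberg-Richter law
    log10 N(>=M) = a - b*M (Aki, 1965), for a catalog assumed complete
    above m_min. Assumes continuous (unbinned) magnitudes:
    b = log10(e) / (mean(M) - m_min)."""
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_min)
```

For tectonic seismicity the b-value is typically close to 1, i.e. a tenfold drop in event count per unit of magnitude.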

  8. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance, and started the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. The experiments have completed 92 rounds for the 1-day class, 6 rounds for the 3-month class, and 3 rounds for the 1-year class. For the 1-day testing class, all models passed all of CSEP's evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models: all models showed good performance for magnitude forecasting, but the observed spatial distribution is poorly matched by most models when many earthquakes occur in a small area. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is improving the evaluation system for the 1-day class experiment so that forecasting and testing results are finished within one day. The special issue of the 1st part, titled Earthquake Forecast

  9. Earthquake ground-motion prediction equations for eastern North America

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    2006-01-01

    New earthquake ground-motion relations for hard-rock and soil sites in eastern North America (ENA), including estimates of their aleatory uncertainty (variability), have been developed based on a stochastic finite-fault model. The model incorporates new information obtained from ENA seismographic data gathered over the past 10 years, including three-component broadband data that provide new information on ENA source and path effects. Our new prediction equations are similar to the previous ground-motion prediction equations of Atkinson and Boore (1995), which were based on a stochastic point-source model. The main difference is that high-frequency amplitudes (f ≥ 5 Hz) are less than previously predicted (by about a factor of 1.6 within 100 km), because of a slightly lower average stress parameter (140 bars versus 180 bars) and a steeper near-source attenuation. At frequencies less than 5 Hz, the predicted ground motions from the new equations are generally within 25% of those predicted by Atkinson and Boore (1995). The prediction equations agree well with available ENA ground-motion data, as evidenced by near-zero average residuals (within a factor of 1.2) for all frequencies and the lack of any significant residual trends with distance. However, there is a tendency toward positive residuals for moderate events at high frequencies in the distance range from 30 to 100 km (by as much as a factor of 2). This indicates epistemic uncertainty in the prediction model. The positive residuals for moderate events at <100 km could be eliminated by an increased stress parameter, at the cost of producing negative residuals in other magnitude-distance ranges; adjustment factors to the equations are provided that may be used to model this effect.

  10. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 CAL yr. BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely pre-historic, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near-field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic hazard estimates of the maximum-magnitude background earthquake (earthquake not associated with a known fault) for normal faults in the Intermountain West.

  11. The marine-geological fingerprint of the 2011 Magnitude 9 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Strasser, M.; Ikehara, K.; Usami, K.; Kanamatsu, T.; McHugh, C. M.

    2015-12-01

    The 2011 Tohoku-oki earthquake was the first great subduction zone earthquake for which the entire activity was recorded by offshore geophysical, seismological and geodetic instruments, and for which sediment resuspension and redeposition were directly documented across the entire margin. Furthermore, the resulting tsunami and the subsequent tragic incident at the Fukushima nuclear power station released short-lived radionuclides that can be used for tracer experiments in natural offshore sedimentary systems. Here we present a summary of present knowledge on the 2011 event beds in the offshore environment and integrate data from offshore instruments with sedimentological, geochemical and physical-property data on core samples, reporting various types of event deposits resulting from earthquake-triggered submarine landslides, downslope sediment transport by turbidity currents, surficial sediment remobilization from the agitation and resuspension of unconsolidated surface sediments by the earthquake ground motion, and tsunami-induced sediment transport from shallow waters to the deep sea. The rapidly growing data set from offshore Tohoku further allows for discussion about (i) what we can learn from this well-documented event for general submarine paleoseismology and (ii) the potential of using the geological record of the Japan Trench to reconstruct a long-term history of great subduction zone earthquakes.

  12. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse

    ,

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflicts in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  13. Strong ground motion prediction for southwestern China from small earthquake records

    NASA Astrophysics Data System (ADS)

    Tao, Z. R.; Tao, X. X.; Cui, A. P.

    2015-09-01

    For regions lacking strong ground-motion records, a method is developed to predict strong ground motion from small-earthquake records from local broadband digital earthquake networks. The Sichuan and Yunnan regions, located in southwestern China, are selected as the targets. Five regional source and crustal-medium parameters are inverted by a micro-Genetic Algorithm. These parameters are adopted to predict strong ground motion for moment magnitudes (Mw) 5.0, 6.0 and 7.0. Strong ground-motion data are compared with the results; most of the predictions pass well through the cluster of data points, except the case of Mw 7.0 in the Sichuan region, which shows an obviously slow attenuation. For further application, this result is adopted in probabilistic seismic hazard assessment (PSHA) and near-field strong ground-motion synthesis of the Wenchuan Earthquake.

  14. New models for frequency content prediction of earthquake records based on Iranian ground-motion data

    NASA Astrophysics Data System (ADS)

    Yaghmaei-Sabegh, Saman

    2015-10-01

    This paper presents the development of new and simple empirical models for predicting the frequency content of ground-motion records, to resolve the limitations on the usable magnitude range assumed in previous studies. Three period values are used in the analysis to describe the frequency content of earthquake ground motions: the average spectral period (T avg), the mean period (T m), and the smoothed spectral predominant period (T 0). The proposed models predict these scalar indicators as functions of magnitude, closest site-to-source distance and local site condition. Three site classes (rock, stiff soil, and soft soil) have been considered in the analysis. The results of the proposed relationships have been compared with those of other published models. It has been found that the resulting regression equations can be used to predict scalar frequency-content estimators over a wide range of magnitudes, including magnitudes below 5.5.
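Of the three indicators, the mean period T_m has a standard definition (Rathje et al., 1998): T_m = Σ(C_i²/f_i) / Σ(C_i²), where C_i are Fourier amplitudes over frequencies between roughly 0.25 and 20 Hz. A minimal sketch using a pure-Python DFT for self-containment (the function name is illustrative; a practical implementation would use an FFT):

```python
import math

def mean_period(acc, dt, f_lo=0.25, f_hi=20.0):
    """Mean period T_m of an accelerogram (Rathje et al., 1998):
    T_m = sum(C_i^2 / f_i) / sum(C_i^2), where C_i are Fourier
    amplitudes at frequencies f_i in [f_lo, f_hi].
    acc: acceleration samples; dt: sampling interval in seconds."""
    n = len(acc)
    num = den = 0.0
    for k in range(1, n // 2):
        f = k / (n * dt)  # frequency of DFT bin k
        if not (f_lo <= f <= f_hi):
            continue
        re = sum(a * math.cos(2 * math.pi * k * i / n) for i, a in enumerate(acc))
        im = sum(a * math.sin(2 * math.pi * k * i / n) for i, a in enumerate(acc))
        c2 = re * re + im * im  # squared Fourier amplitude
        num += c2 / f
        den += c2
    return num / den
```

For a pure sinusoid at frequency f, all spectral energy sits in one bin, so T_m reduces to 1/f.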

  15. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Prentice, Carol S.; Rizza, M.; Ritz, J.F.; Baucher, R.; Vassallo, R.; Mahan, S.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to East, these segments are, respectively, the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans (particularly well preserved in the arid environment of the Gobi region) allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr–1 along the WIB and EIB segments and ~0.5 mm yr–1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with a characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78–7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ~2500

  16. Scientific goals of the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Thatcher, W.

    1988-01-01

    Several unique circumstances of the Parkfield experiment provide unprecedented opportunities for significant advances in understanding the mechanics of earthquakes. To our knowledge, there is no other seismic zone where the time, place, and magnitude of an impending earthquake are specified as precisely. Moreover, the epicentral region is located on continental crust, is readily accessible, and can support a range of dense monitoring networks sited either on or very close to the expected rupture surface. As a result, the networks located at Parkfield are several orders of magnitude more sensitive than any previously deployed for monitoring earthquake precursors (pre-earthquake changes in strain, seismicity, and other geophysical parameters). In this respect the design of the Parkfield experiment resembles the rationale for constructing a new, more powerful nuclear particle accelerator: in both cases increased capabilities will test existing theories, reveal new phenomena, and suggest new research directions.

  17. Estimating the magnitude of prediction uncertainties for the APLE model

    USDA-ARS?s Scientific Manuscript database

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  18. A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.

    2015-12-01

    Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated to observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
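    A trilinear geometrical-spreading function of the kind described above can be sketched as follows; the hinge distances and decay rates used here are illustrative placeholders, not the study's calibrated values:

```python
import math

def geometric_spreading(r_km, r1=70.0, r2=140.0, b1=-1.3, b2=0.2, b3=-0.5):
    """Trilinear geometrical spreading in log10 amplitude units: steep decay
    out to the first hinge r1, a flat-to-increasing segment (Moho bounce)
    between r1 and r2, then renewed decay beyond r2.  All hinge distances
    and rates are illustrative, not the study's calibrated values."""
    if r_km <= r1:
        return b1 * math.log10(r_km)
    if r_km <= r2:
        return b1 * math.log10(r1) + b2 * math.log10(r_km / r1)
    return (b1 * math.log10(r1) + b2 * math.log10(r2 / r1)
            + b3 * math.log10(r_km / r2))
```

    The piecewise form is continuous at both hinges, which is what allows the segment rates to be fit independently from blast and earthquake data over different distance ranges.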

  19. Earthquake prediction rumors can help in building earthquake awareness: the case of May the 11th 2011 in Rome (Italy)

    NASA Astrophysics Data System (ADS)

    Amato, A.; Arcoraci, L.; Casarotti, E.; Cultrera, G.; Di Stefano, R.; Margheriti, L.; Nostro, C.; Selvaggi, G.; May-11 Team

    2012-04-01

    Banner headlines in an Italian newspaper read on May 11, 2011: "Absence boom in offices: the urban legend in Rome becomes psychosis". This was the effect of a large-magnitude earthquake prediction in Rome for May 11, 2011. This prediction was never officially released, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this lent the earthquake prediction added credibility. Given the echo of this earthquake prediction, INGV decided to organize on May 11 (the same day the earthquake was predicted to happen) an Open Day at its headquarters in Rome to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded by a press conference two days earlier, attended by about 40 journalists from newspapers, local and national TV stations, press agencies and web news magazines. Hundreds of articles appeared in the following two days, advertising the May 11 Open Day. On May 11 the INGV headquarters was peacefully invaded by over 3,000 visitors from 9 am to 9 pm: families, students, civil protection groups and many journalists. The program included conferences on a wide variety of subjects (from the social impact of rumors to seismic risk reduction) and distribution of books and brochures, in addition to several activities: meetings with INGV researchers to discuss scientific issues, visits to the seismic monitoring room (open 24h/7 all year), and guided tours through interactive exhibitions on earthquakes and the Earth's deep structure. During the same day, thirteen new videos were also posted on our youtube/INGVterremoti channel to explain the earthquake process and hazard, and to provide periodic real-time updates on seismicity in Italy. On May 11 no large earthquake happened in Italy. The initiative, built up in a few weeks, had a very large feedback

  20. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed previously for the original sandpile. For this critical range of probability values, the model statistics compare remarkably well with long-period empirical data from earthquakes from different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while adding minimal complexity to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
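    A minimal sketch of such a targeted-triggering sandpile, assuming a standard BTW-style toppling rule (threshold of 4 grains, open boundaries); the lattice size, threshold, and parameter values here are illustrative, not those used in the paper:

```python
import random

def sandpile_avalanche_sizes(L=30, p_target=0.005, n_drops=20000, zc=4, seed=1):
    """Toy sandpile with targeted triggering: with probability p_target the
    next grain is dropped on the most loaded (most susceptible) site,
    otherwise on a uniformly random site.  Returns the avalanche size
    (number of topplings) for each drop."""
    random.seed(seed)
    grid = [[0] * L for _ in range(L)]
    sizes = []
    for _ in range(n_drops):
        if random.random() < p_target:
            # target the most susceptible (most loaded) site
            i, j = max(((a, b) for a in range(L) for b in range(L)),
                       key=lambda ab: grid[ab[0]][ab[1]])
        else:
            i, j = random.randrange(L), random.randrange(L)
        grid[i][j] += 1
        size = 0
        unstable = [(i, j)]
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < zc:
                continue
            grid[x][y] -= zc  # topple: shed zc grains to the neighbours
            size += 1
            if grid[x][y] >= zc:
                unstable.append((x, y))
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= zc:
                        unstable.append((nx, ny))
                # grains pushed past the boundary are lost (dissipation)
        sizes.append(size)
    return sizes
```

    Histogramming the returned avalanche sizes would give the power-law energy statistics; p_target = 0 recovers the original random-drive sandpile.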

  1. Foreshock Sequences and Short-Term Earthquake Predictability on East Pacific Rise Transform Faults

    NASA Astrophysics Data System (ADS)

    McGuire, J. J.; Boettcher, M. S.; Jordan, T. H.

    2004-12-01

    A predominant view of continental seismicity postulates that all earthquakes initiate in a similar manner regardless of their eventual size and that earthquake triggering can be described by an Epidemic Type Aftershock Sequence (ETAS) model [e.g. Ogata, 1988; Helmstetter and Sornette, 2002]. These null hypotheses cannot be rejected as an explanation for the relative abundances of foreshocks and aftershocks to large earthquakes in California [Helmstetter et al., 2003]. An alternative location for testing this hypothesis is mid-ocean ridge transform faults (RTFs), which have many properties that are distinct from continental transform faults: most plate motion is accommodated aseismically, many large earthquakes are slow events enriched in low-frequency radiation, and the seismicity shows depleted aftershock sequences and high foreshock activity. Here we use the 1996-2001 NOAA-PMEL hydroacoustic seismicity catalog for equatorial East Pacific Rise transform faults to show that the foreshock/aftershock ratio is two orders of magnitude greater than the ETAS prediction based on global RTF aftershock abundances. We can thus reject the null hypothesis that there is no fundamental distinction between foreshocks, mainshocks, and aftershocks on RTFs. We further demonstrate (retrospectively) that foreshock sequences on East Pacific Rise transform faults can be used to achieve statistically significant short-term prediction of large earthquakes (magnitude ≥ 5.4) with good spatial (15-km) and temporal (1-hr) resolution using the NOAA-PMEL catalogs. Our very simplistic approach produces a large number of false alarms, but it successfully predicts the majority (70%) of M ≥ 5.4 earthquakes while covering only a tiny fraction (0.15%) of the total potential space-time volume with alarms. Therefore, it achieves a large probability gain (about a factor of 500) over random guessing, despite not using any near-field data. The predictability of large EPR transform earthquakes suggests
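    The probability gain quoted above follows directly from the hit rate and the alarm coverage; a minimal sketch of the arithmetic, using the figures stated in the abstract:

```python
def probability_gain(hit_rate, space_time_fraction):
    """Probability gain of an alarm-based prediction scheme over random
    guessing: the fraction of target events falling inside alarms divided
    by the fraction of the space-time volume covered by alarms."""
    return hit_rate / space_time_fraction

# Figures quoted in the abstract: 70% of M >= 5.4 events predicted while
# alarms cover only 0.15% of the total space-time volume.
gain = probability_gain(0.70, 0.0015)  # roughly 467, i.e. "about a factor of 500"
```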

  2. Source Parameters of Large Magnitude Subduction Zone Earthquakes Along Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Fannon, M. L.; Bilek, S. L.

    2014-12-01

    Subduction zones are host to temporally and spatially varying seismogenic activity including megathrust earthquakes, slow slip events (SSE), nonvolcanic tremor (NVT), and ultra-slow velocity layers (USL). We explore these variations by determining source parameters for large earthquakes (M > 5.5) along the Oaxaca segment of the Mexico subduction zone, an area that encompasses the wide range of activity noted above. We use waveform data for 36 earthquakes that occurred between January 1, 1990 and June 1, 2014, obtained from the IRIS DMC, generate synthetic Green's functions for the available stations, and deconvolve these from the observed records to determine a source time function for each event. From these source time functions, we measured rupture durations and scaled these by the cube root to calculate the normalized duration for each event. Within our dataset, four events located updip from the SSE, USL, and NVT areas have longer rupture durations than the other events in this analysis. Two of these four events, along with one other event, are located within the SSE and NVT areas. The results of this study show that large earthquakes just updip from SSE and NVT have slower rupture characteristics than other events along the subduction zone not adjacent to SSE, USL, and NVT zones. Based on our results, we suggest a transitional zone for the seismic behavior rather than a distinct change at a particular depth. This study will help aid in understanding seismogenic behavior that occurs along subduction zones and the rupture characteristics of earthquakes near areas of slow slip processes.
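    Rupture durations are commonly normalized by the cube root of seismic moment so that events of different sizes can be compared on a common footing; a sketch under that assumption (the reference moment is an illustrative choice, not a value stated in the abstract):

```python
def normalized_duration(duration_s, moment_nm, ref_moment_nm=1e19):
    """Scale a rupture duration by the cube root of seismic moment, so that
    the result is what the duration would be for an event of the reference
    moment.  The reference moment is illustrative, not from the study."""
    return duration_s * (ref_moment_nm / moment_nm) ** (1.0 / 3.0)
```

    An event with eight times the reference moment, for example, has its measured duration halved by this scaling, since moment grows roughly as the cube of the linear rupture dimension.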

  3. Discrimination of DPRK M5.1 February 12th, 2013 Earthquake as Nuclear Test Using Analysis of Magnitude, Rupture Duration and Ratio of Seismic Energy and Moment

    NASA Astrophysics Data System (ADS)

    Salomo Sianipar, Dimas; Subakti, Hendri; Pribadi, Sugeng

    2015-04-01

    On the morning of February 12th, 2013, at 02:57 UTC, an earthquake occurred with its epicenter in the region of North Korea, precisely around the Sungjibaegam Mountains. Monitoring stations of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) and some other seismic networks detected this shallow seismic event. Analyzing seismograms recorded after this event can discriminate between a natural earthquake and an explosion. Zhao et al. (2014) successfully discriminated this seismic event of the 2013 North Korea nuclear test from ordinary earthquakes based on network P/S spectral ratios using broadband regional seismic data recorded in China, South Korea and Japan. P/S-type spectral ratios are powerful discriminants for separating explosions from earthquakes (Zhao et al., 2014). Pribadi et al. (2014) characterized 27 earthquake-generated tsunamis (tsunamigenic earthquakes or tsunami earthquakes) from 1991 to 2012 in Indonesia using W-phase inversion analysis, the ratio between the seismic energy (E) and the seismic moment (Mo), the moment magnitude (Mw), the rupture duration (To) and the distance of the hypocenter to the trench. We used some of these methods to characterize the nuclear test event. We discriminate the DPRK M5.1 February 12th, 2013 earthquake from a natural earthquake using analysis of the magnitudes mb, Ms and Mw, the ratio of seismic energy to seismic moment, and the rupture duration. We used waveform data for seismicity within a radius of 5 degrees of the DPRK M5.1 February 12th, 2013 epicenter at 41.29, 129.07 (Zhang and Wen, 2013) from 2006 to 2014 with magnitude M ≥ 4.0. We conclude that this earthquake was a shallow seismic event with explosion characteristics and can be discriminated from a natural or tectonic earthquake. Keywords: North Korean nuclear test, magnitudes mb, Ms, Mw, ratio between seismic energy and moment, rupture duration

  4. Prediction of long-period ground motions from huge subduction earthquakes in Osaka, Japan

    NASA Astrophysics Data System (ADS)

    Kawabe, H.; Kamae, K.

    2008-04-01

    There is a high possibility of recurrence of the Tonankai and Nankai earthquakes along the Nankai Trough in Japan. It is very important to predict the long-period ground motions from the next Tonankai and Nankai earthquakes, with moment magnitudes of 8.1 and 8.4, respectively, to mitigate their disastrous effects. In this study, long-period (>2.5 s) ground motions were predicted using an earthquake scenario proposed by the Headquarters for Earthquake Research Promotion in Japan. The calculations were performed using a fourth-order finite difference method with a variable-spacing staggered grid in the frequency range 0.05-0.4 Hz. The attenuation characteristics (Q) in the finite difference simulations were assumed to be proportional to frequency (f) and S-wave velocity (Vs), represented by Q = f · Vs / 2. Such an optimum attenuation characteristic for the sedimentary layers in the Osaka basin was obtained empirically by comparing the observed motions during an actual M5.5 event with the modeling results. We used a velocity structure model of the Osaka basin consisting of three sedimentary layers on bedrock. The characteristics of the predicted long-period ground motions from the next Tonankai and Nankai earthquakes depend significantly on the complex thickness distribution of the sediments inside the basin. The duration of the predicted long-period ground motions in the city of Osaka is more than 4 min, and the largest peak ground velocities (PGVs) exceed 80 cm/s. The predominant period is 5 to 6 s. These preliminary results indicate the possibility of earthquake damage from future subduction earthquakes in large-scale constructions such as tall buildings, long-span bridges, and oil storage tanks in the Osaka area.
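    The assumed attenuation model is simple enough to state in one line; a sketch, assuming Vs is given in m/s (the abstract does not state the units, so treat this as an assumption):

```python
def attenuation_q(f_hz, vs_m_s):
    """Attenuation proportional to frequency and S-wave velocity,
    Q = f * Vs / 2.  Treating Vs in m/s is an assumption here; the
    abstract does not state the units."""
    return f_hz * vs_m_s / 2.0
```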

  5. Imaging of the Rupture Zone of the Magnitude 6.2 Karonga Earthquake of 2009 using Electrical Resistivity Surveys

    NASA Astrophysics Data System (ADS)

    Clappe, B.; Hull, C. D.; Dawson, S.; Johnson, T.; Laó-Dávila, D. A.; Abdelsalam, M. G.; Chindandali, P. R. N.; Nyalugwe, V.; Atekwana, E. A.; Salima, J.

    2015-12-01

    The 2009 Karonga earthquakes occurred in an area where active faults had not previously been known to exist. Over 5000 buildings were destroyed in the area and at least 4 people lost their lives as a direct result of the 19th of December magnitude 6.2 earthquake. The earthquake swarms occurred in the hanging wall of the main Livingstone border fault along segmented, west-dipping faults that are synthetic to the Livingstone fault. The faults have a general trend of 290-350 degrees. Electrical resistivity surveys were conducted to investigate the nature of the known rupture and seismogenic zones that resulted from the 2009 earthquakes in the Karonga, Malawi area. The goal of this study was to produce high-resolution images below the epicenter and nearby areas of liquefaction to determine changes in conductivity/resistivity signatures in the subsurface. An Iris Syscal Pro was utilized to conduct dipole-dipole resistivity measurements at six farmland locations. Each transect was 710 meters long and had an electrode spacing of 10 meters. RES2DINV software was used to create 2-D inversion images of the rupture and seismogenic zones. We were able to observe three distinct geoelectrical layers to the north of the rupture zone and two south of the rupture zone, with the discontinuity between the two marked by the location of the surface rupture. The rupture zone is characterized by an ~80-meter-wide area of enhanced conductivity, 5 m thick, underlain by a more resistive layer dipping west. We interpret this to be the result of fine-grained sands and silts brought up from depth to near the surface by shearing along the fault rupture or by liquefaction. Electrical resistivity surveys are valuable, yet under-utilized, tools for imaging near-surface effects of earthquakes.

  6. Coseismic and postseismic velocity changes detected by Passive Image Interferometry: Comparison of five strong earthquakes (magnitudes 6.6 - 6.9) and one great earthquake (magnitude 9.0) in Japan

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Wegler, Ulrich; Shiomi, Katsuhiko; Nakahara, Hisashi

    2015-04-01

    We analyzed ambient seismic noise near five strong onshore crustal earthquakes in Japan as well as for the great Tohoku offshore earthquake. Green's functions were computed for station pairs (cross-correlations) as well as for different components of a single station (single-station cross-correlations) using a filter bank of five different bandpass filters between 0.125 Hz and 4 Hz. Noise correlations for different time periods were treated as repeated measurements, and coda wave interferometry was applied to estimate coseismic as well as postseismic velocity changes. We used all possible component combinations and analyzed periods from a minimum of 3.5 years (Iwate region) up to 8.25 years (Niigata region). Generally, the single-station cross-correlations and station-pair cross-correlations show similar results, but the single-station method is more reliable at higher frequencies (f > 0.5 Hz), whereas the station-pair method is more reliable at lower frequencies (f < 0.5 Hz). For all six earthquakes we found a similar behavior of the velocity change curve as a function of time. We observe coseismic velocity drops at the times of the respective earthquakes, followed by postseismic recovery, for all earthquakes. Additionally, most stations show a seasonal velocity variation. This seasonal variation was removed by curve fitting, and only velocity changes of tectonic origin were analyzed in our study. The postseismic velocity changes can be described by an exponential recovery model, where for all areas about half of the coseismic velocity drop recovers on a time scale of the order of half a year. The other half of the coseismic velocity drop remains as a permanent change. The coseismic velocity drops are stronger at higher frequencies for all earthquakes. We assume that these changes are concentrated in the superficial layers but for some stations can also reach a few kilometers in depth. The coseismic velocity drops for the strong earthquakes (magnitudes 6.6 - 6
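    The recovery behavior described above, an instantaneous coseismic drop of which about half recovers exponentially on a roughly half-year time scale while the rest remains permanent, can be sketched as follows; the parameter names and default values are illustrative, not fitted values from the study:

```python
import math

def velocity_change(t_years, coseismic_drop, recoverable_fraction=0.5,
                    tau_years=0.5):
    """Relative seismic velocity change after an earthquake at t = 0: an
    instantaneous drop followed by partial exponential recovery.  Roughly
    half the drop recovers on a ~half-year time scale and the remainder is
    permanent, per the abstract; exact values here are illustrative."""
    if t_years < 0:
        return 0.0  # before the earthquake
    recovered = recoverable_fraction * (1.0 - math.exp(-t_years / tau_years))
    return -coseismic_drop * (1.0 - recovered)
```

    At t = 0 the function returns the full drop; at large t it asymptotes to half the drop, the permanent change.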

  7. The 7.2 magnitude earthquake, November 1975, Island of Hawaii

    USGS Publications Warehouse

    1976-01-01

    It was centered about 5 km beneath the Kalapana area on the southeastern coast of Hawaii, the largest island of the Hawaiian chain (Fig. 1), and was preceded by numerous foreshocks. The event was accompanied, or followed shortly, by a tsunami, large-scale ground movements, hundreds of aftershocks, and an eruption in the summit caldera of Kilauea Volcano. The earthquake and the tsunami it generated produced about 4.1 million dollars in property damage, and the tsunami caused two deaths. Although we have some preliminary findings about the cause and effects of the earthquake, detailed scientific investigations will take many more months to complete. This article is condensed from a recent preliminary report (Tilling and others, 1976).

  8. Non-extensive statistical physics analysis of earthquake magnitude sequences in North Aegean Trough, Greece

    NASA Astrophysics Data System (ADS)

    Papadakis, Giorgos; Vallianatos, Filippos

    2017-06-01

    In a recent study, Papadakis et al. (Physica A 456: 135-144, 2016) investigated seismicity in Greece using the non-extensive statistical physics formalism. Moreover, these authors examined the spatial distribution of the non-extensive parameter qM and showed that for shallow seismicity, an increase of qM coincides with strong events. However, their study also reveals low qM values along the North Aegean Trough, despite the presence of strong events during 1976-2009. Consequently, the present study further examines the temporal behaviour of the parameters qM and A, to reveal their relation to the evolution of the earthquake sequence. Through temporal examination of these parameters, we aim to show that the seismogenic system of the North Aegean Trough presents a high degree of interaction after strong earthquakes during the studied period. Our findings indicate that an increase of qM signifies the existence of long-range correlations. If its value does not significantly decrease after a strong earthquake (i.e. M ≥ 5), then the studied area has not reached a state of equilibrium.

  9. Predicted magnitudes and colors from cool-star model atmospheres

    NASA Technical Reports Server (NTRS)

    Johnson, H. R.; Steiman-Cameron, T. Y.

    1981-01-01

    An intercomparison of model stellar atmospheres and observations of real stars can lead to a better understanding of the relationship between the physical properties of stars and their observed radiative flux. In this spirit we have determined wide-band and narrow-band magnitudes and colors for a subset of models of K and M giant and supergiant stars selected from the grid of 40 models by Johnson, Bernat and Krupp (1980) (hereafter referred to as JBK). The 24 models selected have effective temperatures of 4000, 3800, 3600, 3400, 3200, 3000, 2750 and 2500 K and log g = 0, 1 or 2. Emergent energy fluxes (erg/sq cm/s/A) were calculated at 9140 wavelengths for each model. These computed flux curves were folded through the transmission functions of Wing's 8-color system (Wing, 1971; White and Wing, 1978) and through Johnson's (1965) wide-band (BVRIJKLM) system. The calibration of the resultant magnitudes was made by using the absolute calibration of the flux curve of Vega by Schild et al. (1971).
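    Folding a model flux curve through a filter transmission and calibrating the result against a reference flux (Vega in the paper) can be sketched as follows; the trapezoidal integration and all inputs are illustrative, not the paper's actual model grids or calibration:

```python
import math

def synthetic_magnitude(wavelengths, fluxes, transmission, zero_point_flux):
    """Fold a model flux curve through a filter transmission (trapezoidal
    integration over wavelength) and convert the band-averaged flux to a
    magnitude relative to a zero-point flux set by a calibrator star.
    All numerical inputs are illustrative."""
    def band_average(f):
        total, norm = 0.0, 0.0
        for k in range(len(wavelengths) - 1):
            dw = wavelengths[k + 1] - wavelengths[k]
            total += 0.5 * (f[k] * transmission[k]
                            + f[k + 1] * transmission[k + 1]) * dw
            norm += 0.5 * (transmission[k] + transmission[k + 1]) * dw
        return total / norm
    return -2.5 * math.log10(band_average(fluxes) / zero_point_flux)
```

    A source whose band-averaged flux equals the zero-point flux comes out at magnitude 0, which is the sense in which Vega anchors the system.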

  10. Risk and return: evaluating Reverse Tracing of Precursors earthquake predictions

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2010-09-01

    In 2003, the Reverse Tracing of Precursors (RTP) algorithm attracted the attention of seismologists and international news agencies when researchers claimed two successful predictions of large earthquakes. These researchers had begun applying RTP to seismicity in Japan, California, the eastern Mediterranean and Italy; they have since applied it to seismicity in the northern Pacific, Oregon and Nevada. RTP is a pattern recognition algorithm that uses earthquake catalogue data to declare alarms, and these alarms indicate that RTP expects a moderate to large earthquake in the following months. The spatial extent of alarms is highly variable and each alarm typically lasts 9 months, although the algorithm may extend alarms in time and space. We examined the record of alarms and outcomes since the prospective application of RTP began, and in this paper we report on the performance of RTP to date. To analyse these predictions, we used a recently developed approach based on a gambling score, and we used a simple reference model to estimate the prior probability of target earthquakes for each alarm. Formally, we believe that RTP investigators did not rigorously specify the first two `successful' predictions in advance of the relevant earthquakes; because this issue is contentious, we consider analyses with and without these alarms. When we included contentious alarms, RTP predictions demonstrate statistically significant skill. Under a stricter interpretation, the predictions are marginally unsuccessful.

  11. Probability of inducing given-magnitude earthquakes by perturbing finite volumes of rocks

    NASA Astrophysics Data System (ADS)

    Shapiro, Serge A.; Krüger, Oliver S.; Dinske, Carsten

    2013-07-01

    Fluid-induced seismicity results from an activation of finite rock volumes. The finiteness of perturbed volumes influences frequency-magnitude statistics. Previously we observed that induced large-magnitude events at geothermal and hydrocarbon reservoirs are frequently underrepresented in comparison with the Gutenberg-Richter law. This is an indication that the events are more probable on rupture surfaces contained within the stimulated volume. Here we theoretically and numerically analyze this effect. We consider different possible scenarios of event triggering: rupture surfaces located completely within or intersecting only the stimulated volume. We approximate the stimulated volume by an ellipsoid or cuboid and derive the statistics of induced events from the statistics of random thin flat discs modeling rupture surfaces. We derive lower and upper bounds of the probability to induce a given-magnitude event. The bounds depend strongly on the minimum principal axis of the stimulated volume. We compare the bounds with data on seismicity induced by fluid injections in boreholes. Fitting the bounds to the frequency-magnitude distribution provides estimates of a largest expected induced magnitude and a characteristic stress drop, in addition to improved estimates of the Gutenberg-Richter a and b parameters. The observed frequency-magnitude curves seem to follow mainly the lower bound. However, in some case studies there are individual large-magnitude events clearly deviating from this statistic. We propose that such events can be interpreted as triggered ones, in contrast to the absolute majority of the induced events following the lower bound.
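    The Gutenberg-Richter statistics referenced above reduce to one line; a minimal helper for the unbounded law against which the induced-event counts are compared:

```python
def gr_expected_count(a, b, magnitude):
    """Gutenberg-Richter law, log10 N = a - b*M: expected number of events
    with magnitude >= M for given a and b parameters."""
    return 10.0 ** (a - b * magnitude)
```

    The underrepresentation of large induced events described in the abstract corresponds to observed counts falling below this curve at high magnitudes.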

  12. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed

    Aki, K

    1996-04-30

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity at the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity.

  13. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed Central

    Aki, K

    1996-01-01

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity at the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  14. Predicting earthquakes by analyzing accelerating precursory seismic activity

    USGS Publications Warehouse

    Varnes, D.J.

    1989-01-01

    During 11 sequences of earthquakes that in retrospect can be classed as foreshocks, the accelerating rate at which seismic moment is released follows, at least in part, a simple equation. This equation (1) is dS/dt = C/(tf - t)^n, where S is the cumulative sum until time, t, of the square roots of seismic moments of individual foreshocks computed from reported magnitudes; C and n are constants; and tf is a limiting time at which the rate of seismic moment accumulation becomes infinite. The possible time of a major foreshock or main shock, tf, is found by the best fit of equation (1), or its integral, to step-like plots of S versus time using successive estimates of tf in linearized regressions until the maximum coefficient of determination, r2, is obtained. Analyzed examples include sequences preceding earthquakes at Cremasta, Greece, 2/5/66; Haicheng, China, 2/4/75; Oaxaca, Mexico, 11/29/78; Petatlan, Mexico, 3/14/79; and Central Chile, 3/3/85. In 29 estimates of main-shock time, made as the sequences developed, the errors in 20 were less than one-half, and in 9 less than one-tenth, the time remaining between the time of the last data used and the main shock. Some precursory sequences, or parts of them, yield no solution. Two sequences appear to include in their first parts the aftershocks of a previous event; plots using the integral of equation (1) show that the sequences are easily separable into aftershock and foreshock segments. Synthetic seismic sequences of shocks at equal time intervals were constructed to follow equation (1), using four values of n. In each series the resulting distributions of magnitudes closely follow the linear Gutenberg-Richter relation log N = a - bM, and the product n times b for each series is the same constant. In various forms and for decades, equation (1) has been used successfully to predict failure times of stressed metals and ceramics, landslides in soil and rock slopes, and volcanic
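    The fitting procedure described above, successive estimates of tf in linearized regressions until r2 is maximized, can be sketched as a grid search over candidate failure times; the function and variable names here are illustrative, assuming the rate form dS/dt = C/(tf - t)^n:

```python
import math

def fit_failure_time(times, rates, tf_candidates):
    """Grid-search the failure time tf that maximizes r^2 of the linearized
    regression log(rate) = log(C) - n*log(tf - t), following the
    accelerating moment-release equation dS/dt = C/(tf - t)^n.
    `rates` are observed rates of cumulative square-root moment release.
    Returns (best_tf, best_r2)."""
    best_tf, best_r2 = None, -1.0
    for tf in tf_candidates:
        if tf <= max(times):
            continue  # tf must lie beyond the observed data
        x = [math.log(tf - t) for t in times]
        y = [math.log(r) for r in rates]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in y)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        r2 = sxy * sxy / (sxx * syy) if sxx > 0 and syy > 0 else 0.0
        if r2 > best_r2:
            best_tf, best_r2 = tf, r2
    return best_tf, best_r2
```

    On synthetic data generated exactly from the rate equation, the search recovers the true tf, since the regression becomes perfectly linear there.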

  15. Quantifying Near-Field Deformation of Large Magnitude Strike-Slip Earthquakes using Optical Image Correlation: Implications for Empirical Earthquake Scaling Laws and Safeguarding the Built Environment

    NASA Astrophysics Data System (ADS)

    Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.

    2016-12-01

    Measurements of co-seismic deformation from surface-rupturing events are an important source of information for faulting mechanics and seismic hazard analysis. However, direct measurements of the near-field surface deformation pattern have proven difficult. Traditional field surveys typically cannot observe the diffuse and subtle inelastic strain accommodated over wide fault zones, while InSAR data typically decorrelate close to the surface rupture due to high phase gradients, leaving 1-2 km wide gaps in the data. Using sub-pixel optical image correlation of pre- and post-event air photos, we quantify the near-field surface deformation pattern of the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes. This technique allows spatially complete measurement of the surface co-seismic slip along the entire surface rupture, as well as the magnitude and width of distributed deformation. For both events we find our displacement measurements are systematically larger than those from field surveys, indicating the presence of significant distributed, `off-fault' deformation. Here we show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of displacement away from the main primary rupture as off-fault deformation, over mean shear widths of 154 m and 121 m, respectively, with significant spatial variability. We also find positive, yet weak, correlations of the magnitude of distributed deformation with the type of near-surface lithology and the degree of macroscopic fault zone complexity. We envision that additional measurements of future ruptures will better constrain which physical properties of the surface rupture are important controls on the distribution of strain, which is necessary in order to reliably estimate the amount of expected distributed shear along a given fault segment.
Our results have basic implications for the accuracy of empirical scaling relations of earthquake surface ruptures derived from field measurements, understanding apparent discrepancies
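The off-fault deformation figures quoted above reduce to a simple ratio between the correlation-derived total displacement and the field-measured on-fault slip. A minimal sketch in Python, with hypothetical profile values chosen for illustration (the abstract reports only the aggregate percentages):

```python
def off_fault_fraction(total_disp_m, on_fault_disp_m):
    """Fraction of coseismic displacement accommodated away from the primary rupture."""
    return (total_disp_m - on_fault_disp_m) / total_disp_m

# Hypothetical profile: 5.0 m total displacement from image correlation,
# 2.7 m of discrete slip measured on the fault trace in the field.
print(round(off_fault_fraction(5.0, 2.7), 2))  # 0.46, i.e. 46% off-fault
```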

  16. Application of decision trees to the analysis of soil radon data for earthquake prediction.

    PubMed

    Zmazek, B; Todorovski, L; Dzeroski, S; Vaupotic, J; Kobal, I

    2003-06-01

    Different regression methods have been used to predict radon concentration in soil gas on the basis of environmental data, i.e. barometric pressure, soil temperature, air temperature and rainfall. Analyses of the radon data from three stations in the Krsko basin, Slovenia, have shown that model trees outperform other regression methods. A model has been built which predicts radon concentration with a correlation of 0.8, provided it is influenced only by the environmental parameters. In periods with seismic activity this correlation is much lower. This decrease in predictive accuracy appears 1-7 days before earthquakes with local magnitude 0.8-3.3.
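The precursor signal described above is a drop in predictive accuracy, not in radon itself. The sketch below illustrates the idea on synthetic data, using ordinary least squares as a stand-in for the authors' model trees; all variables and magnitudes are invented for illustration:

```python
import math
import random

def fit_linear(xs, ys):
    """Ordinary least squares y = a + b*x (a stand-in for the model trees)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

random.seed(1)

# Quiet period: soil radon tracks soil temperature plus small noise.
temp = [10 + 0.1 * d for d in range(100)]
radon = [3.0 * t + random.gauss(0, 2) for t in temp]
a, b = fit_linear(temp, radon)
r_quiet = correlation([a + b * t for t in temp], radon)

# Seismically active period: an extra radon source decouples radon from
# temperature, so the environmental model loses predictive power.
radon_seis = [r + random.gauss(20, 15) for r in radon]
r_seis = correlation([a + b * t for t in temp], radon_seis)

print(round(r_quiet, 2), round(r_seis, 2))  # the correlation drops in the seismic window
```

The drop in correlation, rather than any absolute radon threshold, is what plays the role of the precursor in the study above.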

  17. Kinematic earthquake source inversion and tsunami runup prediction with regional geophysical data

    NASA Astrophysics Data System (ADS)

    Melgar, D.; Bock, Y.

    2015-05-01

    Rapid near-source earthquake source modeling relying only on strong motion data is limited by instrumental offsets and magnitude saturation, adversely affecting subsequent tsunami prediction. Seismogeodetic displacement and velocity waveforms estimated from an optimal combination of high-rate GPS and strong motion data overcome these limitations. Supplementing land-based data with offshore wave measurements by seafloor pressure sensors and GPS-equipped buoys can further improve the image of the earthquake source and prediction of tsunami extent, inundation, and runup. We present a kinematic source model obtained from a retrospective real-time analysis of a heterogeneous data set for the 2011 Mw 9.0 Tohoku-Oki, Japan, earthquake. Our model is consistent with conceptual models of subduction zones, exhibiting depth-dependent behavior that is quantified through frequency domain analysis of slip rate functions. The stress drop distribution is found to be significantly more correlated with aftershock locations and mechanism types when offshore data are included. The kinematic model parameters are then used as initial conditions in a fully nonlinear tsunami propagation analysis. Notably, we include the horizontal advection of steeply sloping bathymetric features. Comparison with post-event on-land survey measurements demonstrates that the tsunami's inundation and runup are predicted with considerable accuracy, limited in scale only by the resolution of available topography and bathymetry. We conclude that it is possible to produce credible, rapid kinematic source models and tsunami predictions within minutes of earthquake onset time for near-source coastal regions most susceptible to loss of life and damage to critical infrastructure, regardless of earthquake magnitude.

  18. Did the November 17, 2009 Queen Charlotte Island (QCI) earthquake fill a predicted seismic gap?

    NASA Astrophysics Data System (ADS)

    Vasudevan, K.; Eaton, D. W.; Iverson, A.

    2010-12-01

    Seismicity in the Queen Charlotte Fault (QCF) zone occurs along the transform boundary between the Pacific and North American lithospheric plates, the region where the largest recorded earthquake in Canada (Ms = 8.1) occurred on August 22, 1949. Right-lateral relative motion across the QCF, in conjunction with minor convergence, has been suggested to play a role in the source characteristics of earthquakes in this region. A segment of the QCF between the inferred rupture zone of the 1949 earthquake and that of a magnitude 7.4 earthquake in 1970 has been identified as a seismic gap that, if fully ruptured, is capable of producing an M ~ 7 earthquake. On November 17, 2009, an Mw 6.6 earthquake occurred within this seismic gap and was well recorded by regional seismograph stations in Canada and the U.S., including three recently installed temporary broadband seismograph stations in northern Alberta. The distribution of aftershocks from the 2009 earthquake, as well as maps of Coulomb stresses calculated from the previous events, are compatible with the seismic gap hypothesis. In addition, we have computed a seismic moment tensor for this event by least-squares waveform fitting, primarily of surface waves, which shows a predominantly strike-slip focal mechanism. Our integrated results on source parameters and Coulomb failure stress changes provide the first direct confirmation that the 2009 event occurred within the predicted seismic gap between the 1949 and 1970 earthquakes. This evidence is important for hazard assessment in this region, where offshore oil and gas drilling has been proposed.
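The Coulomb stress calculations mentioned above follow the standard Coulomb failure criterion, ΔCFS = Δτ + μ′Δσn. A minimal sketch; the effective friction μ′ = 0.4 and the stress values are assumed for illustration, not taken from the study:

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n (unclamping counted positive).
    Positive dCFS brings the fault closer to failure."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# Illustrative stress changes (MPa) transferred onto a gap segment by prior events.
print(round(coulomb_stress_change(0.1, 0.05), 2))  # positive: loading toward failure
```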

  19. Slip Rates, Recurrence Intervals and Earthquake Event Magnitudes for the southern Black Mountains Fault Zone, southern Death Valley, California

    NASA Astrophysics Data System (ADS)

    Fronterhouse Sohn, M.; Knott, J. R.; Bowman, D. D.

    2005-12-01

    The normal-oblique Black Mountains Fault zone (BMFZ) is part of the Death Valley fault system. Strong ground motion generated by earthquakes on the BMFZ poses a serious threat to the Las Vegas, NV area (pop. ~1,428,690), Death Valley National Park (max. pop. ~20,000) and Pahrump, NV (pop. ~30,000). Fault scarps offset Holocene alluvial-fan deposits along most of the 80-km length of the BMFZ. However, slip rates, recurrence intervals, and event magnitudes for the BMFZ are poorly constrained due to a lack of age control. Moreover, Holocene scarp heights along the BMFZ range from <1 m to >6 m, suggesting that the geomorphic sections have different earthquake histories. Along the southernmost section, the BMFZ steps basinward, preserving three post-late Pleistocene fault scarps. Surveys completed with a total station theodolite show scarp heights of 5.5, 5.0 and 2 m offsetting late Pleistocene, early to middle Holocene, and middle to late Holocene surfaces, respectively. Regression plots of vertical offset versus maximum scarp angle suggest event ages of <10 - 2 ka, with a post-late Pleistocene slip rate of 0.1 mm/yr to 0.3 mm/yr and a recurrence of <3,300 years/event. Regression equations for the estimated geomorphically constrained rupture length of the southernmost section, together with the surveyed event displacements, yield estimated moment magnitudes (Mw) between 6.6 and 7.3 for the BMFZ.
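The slip-rate and magnitude estimates above combine simple offset/age arithmetic with empirical rupture-length regressions. A sketch using the Wells and Coppersmith (1994) all-slip-type relation, which the abstract does not name; the rupture length and surface age below are assumed purely for illustration:

```python
import math

def mw_from_rupture_length(srl_km, a=5.08, b=1.16):
    """Wells & Coppersmith (1994) all-slip-type regression:
    Mw = a + b * log10(surface rupture length in km)."""
    return a + b * math.log10(srl_km)

def slip_rate_mm_per_yr(offset_m, age_yr):
    """Average vertical slip rate from scarp offset and surface age."""
    return offset_m * 1000.0 / age_yr

# A ~30 km rupture of the southernmost BMFZ section (assumed length).
print(round(mw_from_rupture_length(30), 1))  # ~6.8, inside the reported Mw 6.6-7.3 range

# 5.5 m of offset on a surface assumed ~25 ka old (hypothetical age).
print(round(slip_rate_mm_per_yr(5.5, 25000), 2))  # ~0.22 mm/yr, within 0.1-0.3 mm/yr
```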

  20. Paleomagnetic Definition of Crustal Segmentation, Quaternary Block Rotations and Limits on Earthquake Magnitudes in Northwestern Metropolitan Los Angeles

    NASA Astrophysics Data System (ADS)

    Levi, S.; Yeats, R. S.; Nabelek, J.

    2004-12-01

    Paleomagnetic studies of the Pliocene-Quaternary Saugus Formation, in the San Fernando Valley and east Ventura Basin, show that the crust is segmented into small domains, 10-20 km in linear dimension, identified by the rotation of reverse-fault blocks. Two domains, southwest of and adjacent to the San Gabriel fault, are rotated clockwise: 1) the Magic Mountain domain, 30 +/- 5 degrees, and 2) the Merrick syncline domain, 34 +/- 6 degrees. The Magic Mountain domain has rotated since 1 Ma. Both rotated sections occur in the hanging walls of active reverse faults: the Santa Susana and San Fernando faults, respectively. Two additional domains are unrotated: 1) the Van Norman Lake domain, directly south of the Santa Susana fault, and 2) the Soledad Canyon domain in the San Gabriel block immediately across the San Gabriel fault from Magic Mountain, suggesting that the San Gabriel fault might be a domain boundary. Plio-Pleistocene fragmentation and clockwise rotation continue at present, based on geodetic data, and represent crustal response to diffuse, oblique dextral shearing within the San Andreas fault system. The horizontal dimensions of the blocks are similar to the thickness of the seismogenic layer. The maximum magnitude of an earthquake based on blocks of this size is Mw = 6.7, comparable to the 1971 San Fernando and 1994 Northridge earthquakes and consistent with paleoseismic trenching and surface ruptures of the 1971 earthquake. The paleomagnetic results suggest that the blocks have retained their configuration for the past ~0.8 million years. It is unlikely that multiple blocks in the study area combined to produce much larger shocks during this period, in contrast to adjacent regions where events with magnitudes greater than 7 have been postulated based on paleoseismic excavations.
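An Mw cap of this kind can be checked against the seismic moment of a block-sized rupture via the Hanks and Kanamori (1979) relation; the fault dimensions, slip, and rigidity below are illustrative, not values from the study:

```python
import math

def moment_magnitude(length_km, width_km, slip_m, mu_pa=3.0e10):
    """Mw from seismic moment M0 = mu * A * D (Hanks & Kanamori, 1979):
    Mw = (2/3) * (log10(M0 [N m]) - 9.1)."""
    m0 = mu_pa * (length_km * 1e3) * (width_km * 1e3) * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A 15 km x 15 km block-bounding rupture with 1 m of slip (illustrative numbers).
print(round(moment_magnitude(15, 15, 1.0), 1))  # ~6.5, near the stated Mw 6.7 cap
```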

  1. Does Severity of Depression Predict Magnitude of Productivity Loss?

    PubMed Central

    Beck, Arne; Crain, A. Lauren; Solberg, Leif I.; Unützer, Jürgen; Glasgow, Russell E.; Maciosek, Michael V.; Whitebird, Robin

    2014-01-01

    PURPOSE Depression is associated with lowered work functioning, including absence, productivity impairment, and decreased job retention. However, few studies have examined depression symptoms across a continuum of severity in relationship to the magnitude of work impairment in a large and heterogeneous patient population. This study assessed the relationship between depression symptom severity and productivity loss among patients initiated on antidepressants. METHODS Data were obtained from patients participating in the DIAMOND Initiative (Depression Improvement Across Minnesota: Offering a New Direction), a statewide quality improvement collaborative to provide enhanced depression care. Patients newly started on antidepressants were surveyed with the Patient Health Questionnaire (PHQ-9), a measure of depression symptom severity; the Work Productivity and Activity Impairment questionnaire (WPAI), a measure of productivity loss; and items on health status and demographics. RESULTS We analyzed data from the 771 patients who reported current employment. General linear models adjusting for demographics and health status showed a significant linear, monotonic relationship between depression symptom severity and productivity loss (p<.0001). Even minor levels of depression symptoms were associated with decrements in work function. Greater productivity loss also was associated with full-time vs. part-time employment status (p<.001), fair or poor health (p=.05), and “not coupled” marital status (p=.07). CONCLUSIONS This study illustrated the relationship between the severity of depression symptoms and work function, suggesting that even minor levels of depression are associated with productivity loss. Employers may find it beneficial to invest in effective treatments for employees across the continuum of depression severity. PMID:25295792

  2. Sun-earth environment study to understand earthquake prediction

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.

    2007-05-01

    Earthquake prediction may be possible by monitoring active sunspots before they direct energy toward Earth. Earth is a restless planet, and that restlessness occasionally turns deadly. Of all natural hazards, earthquakes are the most feared. For centuries, scientists working in seismically active regions have noted premonitory signals: changes in the thermosphere, ionosphere, atmosphere and hydrosphere are observed before changes in the geosphere. Historical records describe changes in the water level of wells, strange weather, ground-hugging fog, and unusual behaviour of animals (attributed to changes in the Earth's magnetic field) that seem to precede a major earthquake. With the advent of modern science and technology, the understanding of these pre-earthquake signals has become strong enough to support a methodology of earthquake prediction. A correlation between Earth-directed coronal mass ejections (CMEs) from active sunspots and subsequent earthquakes has been developed as a precursor. Occasional changes in the local magnetic field and planetary indices (Kp values) occur in the lower atmosphere, accompanied by the formation of haze and a reduction of moisture in the air. Large patches, often tens to hundreds of thousands of square kilometres in size, are seen in night-time infrared satellite images where the land surface temperature seems to fluctuate rapidly. Perturbations in the ionosphere at 90 - 120 km altitude have been observed before the occurrence of earthquakes; these changes affect the transmission of radio waves, and radio blackouts have been observed following CMEs. Another heliophysical parameter, the electron flux (E-flux), has also been monitored before earthquakes. More than a hundred case studies show that the atmospheric temperature increases and then drops suddenly before the occurrence of an earthquake. These changes are being monitored using the Solar and Heliospheric Observatory (SOHO).

  3. Do moderate magnitude earthquakes generate seismically induced ground effects? The case study of the Mw = 5.16, 29th December 2013 Matese earthquake (southern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Valente, Ettore; Ascione, A.; Ciotoli, G.; Cozzolino, M.; Porfido, S.; Sciarra, A.

    2017-06-01

    Seismically induced ground effects characterize moderate to high magnitude seismic events, whereas they are not so common during seismic sequences of low to moderate magnitude. A low to moderate magnitude seismic sequence with an Mw = 5.16 ± 0.07 main event occurred from December 2013 to February 2014 in the Matese ridge area of the southern Apennines mountain chain. In the epicentral area of the Mw = 5.16 main event, which occurred on December 29th 2013 in the southeastern part of the Matese ridge, field surveys combined with information from local people and reports allowed the recognition of several earthquake-induced ground effects. These include landslides, hydrological variations in local springs, gas flux, and a flame observed around the main shock epicentre. A coseismic rupture was identified in the SW fault scarp of a small intermontane basin (Mt. Airola basin). To determine the nature of the coseismic rupture, detailed geological and geomorphological investigations, combined with geoelectrical and soil gas prospections, were carried out. This multidisciplinary study, besides allowing reconstruction of the surface and subsurface architecture of the Mt. Airola basin and suggesting the occurrence of an active fault at its SW boundary, points to the gravitational nature of the coseismic ground rupture. Based on the typology and spatial distribution of the ground effects, an intensity I = VII-VIII on the ESI-07 scale is estimated for the Mw = 5.16 earthquake, which affected an area of at least 90 km².

  4. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes

    NASA Astrophysics Data System (ADS)

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

    We propose a version of the pure temporal epidemic-type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. The magnitude of the triggered shocks, instead, is assumed to be probabilistically dependent on that of the respective mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis of some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016), 10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for the real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani and there used for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events must again yield the Gutenberg-Richter law. This ensures the validity of the law at any generation of events when past seismicity is ignored. The ETAS model with correlated magnitudes is then analyzed theoretically. In particular, we use probability generating functions and Palm theory to derive an approximation of the probability of zero events in a small time interval, and to interpret the results in terms of the interevent time between consecutive shocks, a very useful random variable in the assessment of seismic hazard.
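The Gutenberg-Richter law assumed for the background events has the exponential density p(m) = β exp(−β(m − m0)) with β = b ln 10, which can be sampled by inverse transform. A short sketch with an illustrative b = 1, using Aki's maximum-likelihood estimator as a consistency check (this is the standard background law, not the authors' correlated triggered-magnitude density):

```python
import math
import random

def sample_gr_magnitude(b=1.0, m0=2.0):
    """Inverse-transform sample from the Gutenberg-Richter density
    p(m) = beta * exp(-beta * (m - m0)) for m >= m0, with beta = b * ln(10)."""
    beta = b * math.log(10)
    return m0 - math.log(1.0 - random.random()) / beta

random.seed(0)
mags = [sample_gr_magnitude() for _ in range(50000)]

# Aki's maximum-likelihood estimator recovers b from the simulated magnitudes.
mean_m = sum(mags) / len(mags)
b_hat = math.log10(math.e) / (mean_m - 2.0)
print(round(b_hat, 2))  # close to the true b = 1
```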

  5. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes.

    PubMed

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

    We propose a version of the pure temporal epidemic-type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. The magnitude of the triggered shocks, instead, is assumed to be probabilistically dependent on that of the respective mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis of some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016); 10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for the real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani and there used for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events must again yield the Gutenberg-Richter law. This ensures the validity of the law at any generation of events when past seismicity is ignored. The ETAS model with correlated magnitudes is then analyzed theoretically. In particular, we use probability generating functions and Palm theory to derive an approximation of the probability of zero events in a small time interval, and to interpret the results in terms of the interevent time between consecutive shocks, a very useful random variable in the assessment of seismic hazard.

  6. [Comment on “A misuse of public funds: U.N. support for geomagnetic forecasting of earthquakes and meteorological disasters”] Comment: Earthquake prediction is worthy of study

    NASA Astrophysics Data System (ADS)

    Freund, Friedmann

    Imagine a densely populated region in the contiguous United States haunted over the past 25 years by nine big earthquakes of magnitudes 5.5 to 7.8, killing hundreds of thousands of people. Imagine further that in a singularly glorious instance a daring prediction effort, based on some scientifically poorly understood natural phenomena, led to the evacuation of a major city just 13 hours before an M = 7.8 earthquake hit. None of the inhabitants of the evacuated city died, while in the surrounding, nonevacuated communities 240,000 were killed and about 600,000 seriously injured. Imagine at last that, tragically, the prediction of the next earthquake of a similar magnitude failed, as did the following one, at great loss of life. If this were an American scenario, the scientific community and the public at large would buzz with the glory of that one successful, life-saving earthquake prediction effort and with praise for American ingenuity. The fact that the next predictions failed would likely have energized the public, the political bodies, the scientists, and the funding agencies alike to go after a recalcitrant Earth, to poke into her deep secrets with all means at the scientists' disposal, and to retrieve even the faintest signals that our restless planet may send out prior to unleashing her deadly punches.

  7. Earthquake prediction in Japan and natural time analysis of seismicity

    NASA Astrophysics Data System (ADS)

    Uyeda, S.; Varotsos, P.

    2011-12-01

    The M9 super-giant earthquake, with its huge tsunami, devastated East Japan on 11 March 2011, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was predicted neither short-term nor long-term; seismologists were shocked because such an event was not even considered possible at the East Japan subduction zone. It was not, however, the only unpredicted earthquake. In fact, throughout several decades of the National Earthquake Prediction Project, not a single earthquake was predicted. In reality, practically no effective research has been conducted on the most important problem, short-term prediction. This happened because the Japanese national project was devoted to the construction of elaborate seismic networks, which is not the best route to short-term prediction. After the Kobe disaster, to parry the mounting criticism of their record of no success, they defiantly changed their policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which meant obtaining "more funding for no prediction research". The public were, and are, not informed of this change. Obviously, earthquake prediction will be possible only when reliable precursory phenomena are caught, and we have insisted that this would most likely be achieved through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake has an adverse effect on this view, although its epicenter was far offshore, out of range of the operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, in ordinary seismological catalogs. In the framework of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced: the variance of natural time χ weighted by the normalised energy release at χ. In the case that Seismic Electric Signals
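The order parameter κ1 mentioned above has a compact definition in natural time: with χk = k/N and pk the normalised energy of the k-th event, κ1 = Σ pk χk² − (Σ pk χk)². A minimal sketch; the uniform-energy catalog is an illustrative limiting case, for which κ1 approaches 1/12:

```python
def kappa1(energies):
    """Order parameter of seismicity in natural time:
    chi_k = k/N, p_k = E_k / sum(E), kappa1 = <chi^2> - <chi>^2."""
    n = len(energies)
    total = sum(energies)
    p = [e / total for e in energies]
    chi = [(k + 1) / n for k in range(n)]
    m1 = sum(pk * ck for pk, ck in zip(p, chi))
    m2 = sum(pk * ck ** 2 for pk, ck in zip(p, chi))
    return m2 - m1 ** 2

# Uniform energy release: kappa1 of N equal events approaches 1/12 for large N.
print(round(kappa1([1.0] * 1000), 3))
```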

  8. Current progress in using multiple electromagnetic indicators to determine location, time, and magnitude of earthquakes in California and Peru (Invited)

    NASA Astrophysics Data System (ADS)

    Bleier, T. E.; Dunson, C.; Roth, S.; Heraud, J.; Freund, F. T.; Dahlgren, R.; Bryant, N.; Bambery, R.; Lira, A.

    2010-12-01

    showed similar increases in 30-minute averaged energy excursions, but the 30-minute averages had the disadvantage of reducing the signal-to-noise ratio relative to the individual pulse counting method. Among other electromagnetic monitoring methods, air conductivity instrumentation showed major changes in positive air-borne ions near the Alum Rock and Tacna sites, peaking during the 24 hours prior to the earthquake. GOES (geosynchronous) satellite infrared (IR) data showed that an unusual apparent “night-time heating” occurred in an extended area within 40+ km of the Alum Rock site, and this IR signature peaked around the time of the magnetic pulse count peak. The combination of these three indicators (magnetic pulse counts, air conductivity, and IR night-time heating) may be the start in determining the time (within 1-2 weeks), location (within 20-40 km) and magnitude (within +/- 1 increment of Richter magnitude) of earthquakes greater than M5.4.

  9. Testing an Earthquake Prediction Algorithm: The 2016 New Zealand and Chile Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2017-05-01

    The 13 November 2016, M7.8, 54 km NNE of Amberley, New Zealand and the 25 December 2016, M7.6, 42 km SW of Puerto Quellon, Chile earthquakes happened outside the area of the ongoing real-time global testing of the intermediate-term middle-range earthquake prediction algorithm M8, accepted in 1992 for the M7.5+ range. Naturally, over the past two decades the level of registration of earthquakes worldwide has grown significantly and is by now sufficient for diagnosing times of increased probability (TIPs) with the M8 algorithm over the entire territory of New Zealand and southern Chile below 40°S. The mid-2016 update of the M8 predictions determines TIPs in the additional circles of investigation (CIs) where the two earthquakes happened. Thus, after 50 semiannual updates in real-time prediction mode, we (1) confirm the statistically established high confidence of the M8-MSc predictions and (2) conclude that the territory of the Global Test of the algorithms M8 and MSc could be expanded in an apparently necessary revision of the 1992 settings.

  10. Radon measurements for earthquake prediction in northern India

    SciTech Connect

    Singh, B.; Virk, H.S.

    1992-01-01

    Earthquake prediction is based on the observation of precursory phenomena, and radon has emerged as a useful precursor in recent years. In India, where 55% of the land area is in active seismic zones, considerable destruction was caused by the earthquakes of Kutch (1819), Shillong (1897), Kangra (1905), Bihar-Nepal (1934), Assam (1956), Koyna (1967), Bihar-Nepal (1988), and Uttarkashi (1991). Radon (²²²Rn) is produced by the decay of radium (²²⁶Ra) in the uranium decay series and is present in trace amounts almost everywhere on Earth, being distributed in soil, groundwater, and the lower atmosphere. The purpose of this study is to assess the value of radon monitoring for earthquake prediction.
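The decay chain mentioned above (²²⁶Ra → ²²²Rn) implies that a radon anomaly relaxes with the ~3.82-day half-life of ²²²Rn. A minimal sketch of the exponential decay law:

```python
import math

RN222_HALF_LIFE_D = 3.82  # half-life of 222Rn, in days

def radon_activity(a0, t_days, half_life=RN222_HALF_LIFE_D):
    """Activity remaining after t_days: A(t) = A0 * exp(-ln(2) * t / T_half)."""
    return a0 * math.exp(-math.log(2) * t_days / half_life)

# After one half-life, half of the initial activity remains.
print(round(radon_activity(100.0, 3.82), 1))  # 50.0
```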

  11. Scientific investigation of macroscopic phenomena before the 2008 Wenchuan earthquake and its implication to prediction and tectonics

    NASA Astrophysics Data System (ADS)

    Huang, F.; Yang, Y.; Pan, B.

    2013-12-01

    tectonic/faults near the epicentral area. According to the statistical relationship, VI-VII degree intensity in the meizoseismal area is equivalent to magnitude 5. This implies that, generally, macroscopic anomalies readily occur before earthquakes with magnitude greater than 5 in the near-epicentral area. Such information can serve as preliminary clues to earthquake occurrence in a tectonic area. Based on the above scientific investigation and statistical research, we reviewed other historical earthquakes that occurred from 1937 to 1996 in the Chinese mainland and obtained similar results (Compilation of Macroscopic Anomalies before Earthquakes, Seismological Press, 2009). This can serve as important basic data for earthquake prediction. This work was supported by NSFC project No. 41274061.

  12. Global Correlation between the Size of Subduction Earthquakes and the Magnitude of Crustal Normal Fault Aftershocks in the Forearc

    NASA Astrophysics Data System (ADS)

    Aron, F.; Allmendinger, R. W.; Jensen Siles, E.

    2013-12-01

    subduction event. Given the relatively large magnitude and shallow depth of these triggered earthquakes, understanding their behavior in the context of the subduction seismic cycle becomes important for seismic hazard evaluation. In general, the Mw 7.0 crustal events in both Chile and Japan struck in sparsely populated areas with relatively good building codes and basic infrastructure, though one triggered normal fault produced surface rupture just 60 km south of the Fukushima nuclear plant. However, as population increases with concomitant land use and development, large crustal aftershocks pose a significant hazard to critical infrastructure. This documented correlation between the size of the main shock and that of the intraplate aftershocks, along with field studies of these faults, suggests that forearc structures should be incorporated in any seismic hazard assessment of subduction zone regions.

  13. The optimum Bayesian probability procedure and the prediction of strong earthquakes felt in Mexico city

    NASA Astrophysics Data System (ADS)

    Ferraes, Sergio G.

    1988-12-01

    Bayes' theorem has possible application to earthquake prediction because it can be used to represent the dependence of the inter-arrival time (τ) of the next event on the magnitude (M) of the preceding earthquake (Ferraes, 1975; Bufe et al., 1977; Shimazaki and Nakata, 1980; Sykes and Quittmeyer, 1981). First, we derive the basic formulas, assuming that the earthquake process behaves as a Poisson process. Under this assumption the likelihood probabilities are determined by the Poisson distribution (Ferraes, 1985), after which we introduce the conjugate family of Gamma prior distributions. Finally, to maximize the posterior Bayesian probability P(τ|M) we use calculus and impose the analytical condition d/dτ P(τ|M) = 0. Subsequently we estimate the occurrence of the next future large earthquake to be felt in Mexico City. Given the probabilistic model, the prediction is obtained from a data set that includes all events with M ≥ 7.5 felt in Mexico City from 1900 to 1985. These earthquakes occur in the Middle America trench along Mexico but are felt in Mexico City. To see the full significance of the analysis, we give results for two models: (1) the Poisson-Gamma, and (2) the Poisson-Exponential (a special case of the Gamma). Using the Poisson-Gamma model, the next expected event will occur in a time interval τ = 2.564 years from the last event (which occurred on September 19, 1985), or equivalently approximately in April 1988. Using the Poisson-Exponential model, the next expected damaging earthquake will occur in a time interval τ' = 2.381 years from the last event, or equivalently in January 1988. Very strong agreement thus exists between the two predicted occurrence times.
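The maximization condition d/dτ P(τ|M) = 0 can be checked numerically against the analytic mode of a Gamma density, (α − 1)/β. The shape and rate below are hypothetical values chosen so the mode lands near the paper's Poisson-Gamma estimate of τ ≈ 2.56 years; they are not the paper's actual fit:

```python
import math

def gamma_pdf(tau, alpha, beta):
    """Gamma density with shape alpha and rate beta."""
    return beta**alpha * tau**(alpha - 1) * math.exp(-beta * tau) / math.gamma(alpha)

# Hypothetical posterior parameters (illustrative only).
alpha, beta = 5.0, 1.5625  # analytic mode: (alpha - 1) / beta = 2.56 years

# Grid search for the tau satisfying d/dtau P(tau|M) = 0 (the density maximum).
taus = [i / 1000 for i in range(1, 10000)]
tau_star = max(taus, key=lambda t: gamma_pdf(t, alpha, beta))
print(round(tau_star, 2), round((alpha - 1) / beta, 2))  # grid and analytic modes agree
```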

  14. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Small Business Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of small businesses in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each business establishment size category to each Instrumental Intensity level. The analysis concerns the direct effect of the earthquake on small businesses. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by business establishment size.

  15. Determination of Love- and Rayleigh-Wave Magnitudes for Earthquakes and Explosions and Other Studies

    DTIC Science & Technology

    2012-12-30

    However, tectonic release (Toksöz and Kehrer, 1972) near the explosion source often results in Love-wave ... bias in magnitude estimation. Significant heterogeneities along the plate boundaries are the most likely causes of such scattering. We have applied ... areas with strong lateral velocity variations, including active tectonic belts, continental shelves, etc. Strike-slip mechanisms are usually better

  16. Impact of channel-like erosion patterns on the frequency-magnitude distribution of earthquakes

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Aochi, H.

    2015-07-01

    Reactive flow at depth (either related to underground activities, like enhancement of hydrocarbon recovery and CO2 storage, or to natural flow like in hydrothermal zones) can alter fractures' topography, which might in turn change their seismic responses. Depending on the flow and reaction rates, instability of the dissolution front can lead to a wormhole-like pronounced erosion pattern. In a fractal structure of rupture process, we question how the perturbation related to well-spaced long channels alters rupture propagation initiated on a weak plane and eventually the statistical feature of rupture appearance in frequency-magnitude distribution (FMD). Contrary to intuition, a spatially uniform dissolution is not the most remarkable case, since it affects all the events proportionally to their sizes leading to a downward translation of FMD: the slope of FMD (b-value) remains unchanged. The parameter-space study shows that the increase of b-value (of 0.08) is statistically significant for optimum characteristics of the erosion pattern with spacing to length ratio of the order of ~1/40: large-magnitude events are more significantly affected leading to an imbalanced distribution in the magnitude bins of the FMD. The larger the spacing, the lower the channel's influence. Besides, a spatial analysis shows that the local seismicity anomaly concentrates in a limited zone around the channels: this opens perspective for detecting these eroded regions through high-resolution imaging surveys.
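
    The b-value changes discussed above are conventionally estimated from a catalog with Aki's maximum-likelihood formula. A minimal sketch (the function name, synthetic catalog, and completeness magnitude are illustrative, not the authors' setup):

```python
import math
import random

def b_value_mle(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's correction
    for magnitude binning of width dm."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    # b = log10(e) / (mean(M) - (Mc - dm/2))
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with b = 1 (inverse-transform sampling)
random.seed(0)
b_true, mc = 1.0, 2.0
mags = [mc - math.log10(random.random()) / b_true for _ in range(20000)]
print(round(b_value_mle(mags, mc, dm=0.0), 2))  # close to 1.0
```

    A shift of 0.08 in the estimate returned by such a fit is the size of change the study reports for optimally spaced channels.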

  17. 78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... proposed earthquake predictions, on the completeness and scientific validity of the available data related... Council will receive several briefings on the history and current state of scientific investigations of..., and will be asked to advise the USGS on priorities for instrumentation and scientific investigations...

  18. Long-term predictability of regions and dates of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M>5.5 can be determined several months in advance of the event; the magnitude and region of an approaching earthquake can be specified within a month of the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity on a century time frame; this analysis can be performed 15-20 years in advance and is verified against a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods having different prediction horizons. Days of potential earthquakes with M5.5+ are determined using astronomical data: earthquakes occur on days of oppositions of Solar System planets (arranged in a single line), and the strongest earthquakes occur when the vector from the Sun to the Solar System barycenter lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed by practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of minimum daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (RAMES method). The difference between predicted and actual dates is no more than one day. This indicator is registered 104 days before the earthquake, so it was called Harmonic 104 (H-104). This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give it a physical basis. Also, 104 days is a quarter of a Chandler period, which gives insight into the correlation between the anomalies of Earth orientation

  19. Earthquake Predictability: Results From Aggregating Seismicity Data And Assessment Of Theoretical Individual Cases Via Synthetic Data

    NASA Astrophysics Data System (ADS)

    Adamaki, A.; Roberts, R.

    2016-12-01

    For many years an important aim in seismological studies has been forecasting the occurrence of large earthquakes. Despite some well-established statistical behavior of earthquake sequences, expressed by e.g. the Omori law for aftershock sequences and the Gutenberg-Richter distribution of event magnitudes, purely statistical approaches to short-term earthquake prediction have in general not been successful. It seems that better understanding of the processes leading to critical stress build-up prior to larger events is necessary to identify useful precursory activity, if this exists, and statistical analyses are an important tool in this context. There has been considerable debate on the usefulness or otherwise of foreshock studies for short-term earthquake prediction. We investigate generic patterns of foreshock activity using aggregated data and by studying not only strong but also moderate-magnitude events. Aggregating empirical local seismicity time series prior to larger events observed in and around Greece reveals a statistically significant increasing rate of seismicity over 20 days prior to M>3.5 earthquakes. This increase cannot be explained by spatio-temporal clustering models such as ETAS, implying genuine changes in the mechanical situation just prior to larger events and thus the possible existence of useful precursory information. Because of spatio-temporal clustering, which links aftershocks and foreshocks, even if such generic behavior exists it does not necessarily follow that foreshocks have the potential to provide useful precursory information for individual larger events. Using synthetic catalogs produced with different clustering models and different presumed system sensitivities, we are now investigating to what extent the apparently established generic foreshock rate acceleration may or may not imply that foreshocks have potential in the context of routine forecasting of larger events. 
Preliminary results suggest that this is the case, but
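
    The aggregation of pre-mainshock seismicity described above can be sketched as a simple stacking of daily event counts (a toy illustration; `stacked_foreshock_rate` and the catalog values are hypothetical, not the authors' data or code):

```python
from collections import Counter

def stacked_foreshock_rate(catalog_times, mainshock_times, window_days=20):
    """Stack daily event counts in the window preceding each mainshock,
    aggregated over all mainshocks. Times are in days."""
    counts = Counter()
    for t0 in mainshock_times:
        for t in catalog_times:
            if t0 - window_days <= t < t0:
                counts[int(t0 - t)] += 1  # whole days before the mainshock
    return dict(counts)

# Toy catalog: two background events plus a small burst before each mainshock
mains = [100.0, 200.0]
cat = [t0 - d for t0 in mains for d in (0.5, 1.5, 1.7, 2.5)] + [50.0, 150.0]
print(stacked_foreshock_rate(cat, mains))  # {0: 2, 1: 4, 2: 2}
```

    A genuine precursory signal would appear as counts rising toward day 0 in the stack beyond what an ETAS-type clustering model reproduces.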

  20. Impact of Channel-like Erosion Patterns on the Frequency-Magnitude Distribution of Earthquakes

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Aochi, H.

    2015-12-01

    Reactive flow at depth (either related to underground activities like enhancement of hydrocarbon recovery, CO2 storage or to natural flow like in hydrothermal zones) can alter fractures' topography, which might in turn change their seismic responses. Depending on the flow and reaction rates, instability of the dissolution front can lead to a wormhole-like pronounced erosion pattern (Szymczak & Ladd, JGR, 2009). In a fractal structure of rupture process (Ide & Aochi, JGR, 2005), we question how the perturbation related to well-spaced long channels alters rupture propagation initiated on a weak plane and eventually the statistical feature of rupture appearance in Frequency-Magnitude Distribution FMD (Rohmer & Aochi, GJI, 2015). Contrary to intuition, a spatially uniform dissolution is not the most remarkable case, since it affects all the events proportionally to their sizes leading to a downwards translation of FMD: the slope of FMD (b-value) remains unchanged. An in-depth parametric study was carried out by considering different pattern characteristics: spacing S varying from 0 to 100 and length L from 50 to 800 and fixing the width w=1. The figure shows that there is a region of optimum channels' characteristics for which the b-value of the Gutenberg Richter law is significantly modified with p-value ~10% (corresponding to area with red-coloured boundaries) given spacing to length ratio of the order of ~1/40: large magnitude events are more significantly affected leading to an imbalanced distribution in the magnitude bins of the FMD. The larger the spacing, the lower the channel's influence. The decrease of the b-value between intact and altered fractures can reach values down to -0.08. Besides, a spatial analysis shows that the local seismicity anomaly concentrates in a limited zone around the channels: this opens perspective for detecting these eroded regions through high-resolution imaging surveys.

  1. Determination of fault planes and dimensions for low-magnitude earthquakes - A case study in eastern Taiwan

    NASA Astrophysics Data System (ADS)

    Mozziconacci, Laetitia; Delouis, Bertrand; Huang, Bor-Shouh

    2017-03-01

    We present a modified version of the FMNEAR method for determining the focal mechanisms and fault plane geometries of small earthquakes. Our improvements allow determination of the fault plane and dimensions using the near-field components of only a few local records. The limiting factor is the number of stations: a minimum of five to six stations is required to discriminate between the fault plane and auxiliary plane. This limitation corresponds to events with magnitudes ML > 3.5 in eastern Taiwan, but strongly depends on station coverage in the study area. Once a fault plane is identified, it is provided along with its source time function and fault slip distribution. The proposed approach is validated by synthetic tests, and applied to real cases from a seismic crisis that occurred in the Longitudinal Valley of eastern Taiwan in April 2006. The fault geometries and faulting types of test events closely match the fault system of the main shock and reveal a minor one inside the faults zone of the Longitudinal Valley. Tested on a larger scale, this approach enables the fault geometries of main and secondary fault systems to be recovered from small earthquakes, allowing subsurface faults to be mapped in detail without waiting for a large, damaging event.

  2. Tsunami forecasting and warning in the Australian region for the Magnitude 8.8 Chilean Earthquake of 27 February 2010

    NASA Astrophysics Data System (ADS)

    Allen, S. C.; Simanjuntak, A.; Greenslade, D. J.

    2010-12-01

    The Joint Australian Tsunami Warning Centre (JATWC) is responsible for issuing tsunami warnings within the Australian region. To a large extent, these are based on numerical guidance provided by the T2 tsunami scenario database, which has recently been implemented for operational use within the JATWC. During an event, the closest T2 scenario is selected and modelled tsunami amplitude values near the Australian coastline from that scenario are used as a proxy for impact in order to derive an appropriate level of warning. The Chilean earthquake of 27 February 2010 and the associated tsunami were locally devastating and resulted in the issuance of public warnings of possible tsunami impact on an ocean-wide scale, including for a large part of the Australian coastline. In this presentation we will evaluate the application of T2 and the resulting tsunami warnings in the Australian region for this event. This evaluation will include comparisons with sea-level observations and assessment of the tsunami forecast. Hindsight knowledge shows that the actual earthquake rupture of the event was quite different from the pre-computed ruptures within the T2 scenario database for events of this magnitude. Alternative tsunami model simulations are therefore computed, with ruptures more closely resembling the actual event. The resulting tsunami forecasts and warnings will be examined to assess the implications for tsunami warnings in the Australian region.

  3. By How Much Can Physics-Based Earthquake Simulations Reduce the Uncertainties in Ground Motion Predictions?

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Wang, F.

    2014-12-01

    Probabilistic seismic hazard analysis (PSHA) is the scientific basis for many engineering and social applications: performance-based design, seismic retrofitting, resilience engineering, insurance-rate setting, disaster preparation, emergency response, and public education. The uncertainties in PSHA predictions can be expressed as an aleatory variability that describes the randomness of the earthquake system, conditional on a system representation, and an epistemic uncertainty that characterizes errors in the system representation. Standard PSHA models use empirical ground motion prediction equations (GMPEs) that have a high aleatory variability, primarily because they do not account for the effects of crustal heterogeneities, which scatter seismic wavefields and cause local amplifications in strong ground motions that can exceed an order of magnitude. We show how much this variance can be lowered by simulating seismic wave propagation through 3D crustal models derived from waveform tomography. Our basic analysis tool is the new technique of averaging-based factorization (ABF), which uses a well-specified seismological hierarchy to decompose exactly and uniquely the logarithmic excitation functional into a series of uncorrelated terms that include unbiased averages of the site, path, hypocenter, and source-complexity effects (Feng & Jordan, Bull. Seismol. Soc. Am., 2014, doi:10.1785/0120130263). We apply ABF to characterize the differences in ground motion predictions between the standard GMPEs employed by the National Seismic Hazard Maps and the simulation-based CyberShake hazard model of the Southern California Earthquake Center. The ABF analysis indicates that, at low seismic frequencies (< 1 Hz), CyberShake site and path effects unexplained by the GMPEs account for 40-50% of the total residual variance. 
Therefore, accurate earthquake simulations have the potential for reducing the aleatory variance of the strong-motion predictions by about a factor of two, which would
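
    The closing claim is simple variance arithmetic: if simulations explain a fraction f of the residual variance, the remaining aleatory standard deviation scales by sqrt(1 - f). A quick check (illustrative code, not part of the ABF analysis itself):

```python
import math

def residual_sigma_ratio(explained_fraction):
    """Ratio of remaining to original aleatory standard deviation when
    a fraction of the residual variance is explained by simulation."""
    return math.sqrt(1.0 - explained_fraction)

for f in (0.40, 0.50):
    print(f, round(residual_sigma_ratio(f), 3))
# Explaining ~50% of the variance halves the variance, i.e. reduces
# sigma by a factor of about 1/sqrt(2).
```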

  4. Non-extensive statistical physics applied to heat flow and the earthquake frequency-magnitude distribution in Greece

    NASA Astrophysics Data System (ADS)

    Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter

    2016-08-01

    This study investigates seismicity in Greece and its relation to heat flow, based on the science of complex systems. Greece is characterised by a complex tectonic setting, which is represented mainly by active subduction, lithospheric extension and volcanism. The non-extensive statistical physics formalism is a generalisation of Boltzmann-Gibbs statistical physics and has been successfully used for the analysis of a variety of complex systems, where fractality and long-range interactions are important. Consequently, in this study, the frequency-magnitude distribution analysis was performed in a non-extensive statistical physics context, and the non-extensive parameter, qM, which is related to the frequency-magnitude distribution, was used as an index of the physical state of the studied area. Examination of the spatial distribution of qM revealed its relation to the spatial distribution of seismicity during the period 1976-2009. For focal depths ≤40 km, we observe that strong earthquakes coincide with high qM values. In addition, heat flow anomalies in Greece are known to be strongly related to crustal thickness; a thin crust and significant heat flow anomalies characterise the central Aegean region. Moreover, the data studied indicate that high heat flow is consistent with the absence of strong events and consequently with low qM values (high b-values) in the central Aegean region and around the volcanic arc. However, the eastern part of the volcanic arc exhibits strong earthquakes and high qM values whereas low qM values are found along the North Aegean Trough and southwest of Crete, despite the fact that strong events are present during the period 1976-2009 in both areas.
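
    In the non-extensive formalism, the exponential of Boltzmann-Gibbs statistics is replaced by the Tsallis q-exponential, whose tail fattens as q grows; this is consistent with high qM regions hosting stronger events while low qM corresponds to high b-values. A minimal sketch of the function itself (illustrative only, not the study's fitting code):

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]^(1/(1-q)) for q != 1,
    reducing to exp(x) as q -> 1."""
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# Larger q gives a more slowly decaying tail (relatively more large events)
for q in (1.1, 1.5):
    print(q, [round(q_exponential(-x, q), 4) for x in (1, 3, 5)])
```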

  5. Rapid determination of P wave-based energy magnitude: Insights on source parameter scaling of the 2016 Central Italy earthquake sequence

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Bindi, Dino; Brondi, Piero; Di Giacomo, Domenico; Parolai, Stefano; Zollo, Aldo

    2017-05-01

    We propose a P-wave-based procedure for the rapid estimation of the radiated seismic energy, and a novel relationship for obtaining an energy-based local magnitude (MLe) measure of the earthquake size. We apply the new procedure to the seismic sequence that struck Central Italy in 2016. Scaling relationships involving seismic moment and radiated energy are discussed for the Mw 6.0 Amatrice, Mw 5.9 Ussita, and Mw 6.5 Norcia earthquakes, including 35 ML > 4 aftershocks. The Mw 6.0 Amatrice earthquake shows the highest apparent stress, and the observed differences among the three main events highlight the dynamic heterogeneity with which large earthquakes can occur in Central Italy. Differences between estimates of MLe and Mw allow identification of events which are characterized by a higher proportion of energy being transferred to seismic waves, providing important real-time indications of an earthquake's shaking potential.

  6. Three Millennia of Seemingly Time-Predictable Earthquakes, Tell Ateret

    NASA Astrophysics Data System (ADS)

    Agnon, Amotz; Marco, Shmuel; Ellenblum, Ronnie

    2014-05-01

    Among various idealized recurrence models of large earthquakes, the "time-predictable" model has a straightforward mechanical interpretation, consistent with simple friction laws. On a time-predictable fault, the time interval between an earthquake and its predecessor is proportional to the slip during the predecessor. The alternative "slip-predictable" model states that the slip during earthquake rupture is proportional to the preceding time interval. Verifying these models requires extended records of high precision data for both timing and amount of slip. The precision of paleoearthquake data can rarely confirm or rule out predictability, and recent papers argue for either time- or slip-predictable behavior. The Ateret site, on the trace of the Dead Sea fault at the Jordan Gorge segment, offers unique precision for determining space-time patterns. Five consecutive slip events, each associated with deformed and offset sets of walls, are correlated with historical earthquakes. Two correlations are based on detailed archaeological, historical, and numismatic evidence. The other three are tentative. The offsets of three of the events are determined with high precision; the other two are not as certain. Accepting all five correlations, the fault exhibits a striking time-predictable behavior, with a long term slip rate of 3 mm/yr. However, the 30 October 1759 ~0.5 m rupture predicts a subsequent rupture along the Jordan Gorge toward the end of the last century. We speculate that earthquakes on secondary faults (the 25 November 1759 on the Rachaya branch and the 1 January 1837 on the Roum branch, both M≥7) have disrupted the 3 kyr time-predictable pattern.
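
    The time-predictable model described above reduces to one line of arithmetic: the expected quiet interval after an event is the slip it released divided by the long-term slip rate. Using the figures quoted in the abstract (an illustrative helper, not the authors' code):

```python
def time_predictable_interval(slip_m, slip_rate_mm_per_yr):
    """Time-predictable model: the interval following an earthquake is the
    time steady loading needs to recover the slip that event released."""
    return slip_m * 1000.0 / slip_rate_mm_per_yr  # years

# The ~0.5 m rupture of 30 October 1759 at the long-term rate of 3 mm/yr
print(round(time_predictable_interval(0.5, 3.0)))  # 167 years
```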

  7. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere

    USGS Publications Warehouse

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-01-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  8. Strong motion PGA prediction for southwestern China from small earthquake records

    NASA Astrophysics Data System (ADS)

    Tao, Zhengru; Tao, Xiaxin; Cui, Anping

    2016-05-01

    For regions without enough strong ground motion records, a seismology-based method is adopted to predict strong-motion PGA (peak ground acceleration) values on rock sites with parameters from small earthquake data, recorded by regional broadband digital monitoring networks. Sichuan and Yunnan regions in southwestern China are selected for this case study. Five regional parameters of source spectrum and attenuation are acquired from a joint inversion by the micro-genetic algorithm. PGAs are predicted for earthquakes with moment magnitude (Mw) 5.0, 6.0, and 7.0, respectively, and a series of distances. The result is compared with limited regional strong motion data in the corresponding interval Mw ± 0.5. Most of the predictions pass through the data clusters, except the case of Mw7.0 in the Sichuan region, which shows an obviously slow attenuation due to a lack of observed data from larger earthquakes (Mw ≥ 7.0). For further application, the parameters are adopted in strong motion synthesis at two near-fault stations during the great Wenchuan Earthquake M8.0 in 2008.
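
    The seismology-based prediction referred to here is broadly in the spirit of the stochastic point-source method, in which rock-site motion is built from an omega-square source spectrum, geometric spreading, Q attenuation and site kappa. A shape-only sketch with illustrative parameter values (not the inverted regional parameters of the study; amplitude constants are omitted):

```python
import math

def brune_acceleration_spectrum(f_hz, m0_dyne_cm, stress_bars, r_km,
                                beta_kms=3.5, q0=200.0, kappa=0.04):
    """Relative Fourier acceleration spectrum of a Brune omega-square
    point source with 1/R spreading, Q attenuation and site kappa."""
    # Brune corner frequency: fc = 4.9e6 * beta * (dsigma / M0)^(1/3)
    fc = 4.9e6 * beta_kms * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)
    source = (2.0 * math.pi * f_hz) ** 2 * m0_dyne_cm / (1.0 + (f_hz / fc) ** 2)
    path = math.exp(-math.pi * f_hz * r_km / (q0 * beta_kms)) / r_km
    site = math.exp(-math.pi * kappa * f_hz)
    return source * path * site

# The spectrum falls off with distance, as the attenuation terms require
m0 = 3.5e26  # roughly Mw 7.0, in dyne-cm
print(brune_acceleration_spectrum(1.0, m0, 100.0, 50.0) >
      brune_acceleration_spectrum(1.0, m0, 100.0, 100.0))  # True
```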

  9. Analysing earthquake slip models with the spatial prediction comparison test

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Mai, P. Martin; Thingbaijam, Kiran K. S.; Razafindrakoto, Hoby N. T.; Genton, Marc G.

    2015-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field ('model') and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  10. Rapid determination of P-wave-based Energy Magnitude: Insights on source parameter scaling of the 2016 Central Italy earthquake sequence

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Bindi, Dino; Brondi, Piero; Di Giacomo, Domenico; Parolai, Stefano; Zollo, Aldo

    2017-04-01

    In this study, we proposed a novel methodology for the rapid estimation of the earthquake size from the seismic radiated energy. Two relationships have been calibrated using recordings from 29 earthquakes of the 2009 L'Aquila and the 2012 Emilia seismic sequences in Italy. The first relation allows obtaining seismic radiated energy ER estimates using as proxy the time integral of squared P-wave velocities measured over vertical components, including regional attributes for describing the attenuation with distance. The second relation is a regression between the local magnitude and the radiated energy, which allows defining an energy-based local magnitude (MLe) compatible with ML for small earthquakes. We have applied the new procedure to the seismic sequence that struck central Italy in 2016. Scaling relationships involving seismic moment and radiated energy are discussed considering the Mw 6.0 Amatrice, Mw 5.9 Ussita and Mw 6.5 Norcia earthquakes and their ML >4 aftershocks, in total 38 events. The Mw 6.0 Amatrice earthquake presents the highest apparent stress, and the observed differences among the three larger shocks highlight the dynamic heterogeneity with which large earthquakes can occur in central Italy. Differences between MLe and Mw measures allow identification of events characterized by a higher proportion of energy transferred to seismic waves, providing important constraints for the real-time evaluation of an earthquake's shaking potential.
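
    For orientation, the conventional teleseismic energy magnitude scale (Choy & Boatwright, 1995) and the definition of apparent stress can be sketched as follows. The MLe of this study uses its own regional calibration, so the standard scale below is only a reference point, and the numerical values are illustrative:

```python
import math

def energy_magnitude(er_joules):
    """Conventional energy magnitude: Me = (2/3)(log10 ER - 4.4), ER in J."""
    return (2.0 / 3.0) * (math.log10(er_joules) - 4.4)

def apparent_stress(er_joules, m0_nm, mu_pa=3.0e10):
    """Apparent stress sigma_a = mu * ER / M0 (rigidity mu in Pa)."""
    return mu_pa * er_joules / m0_nm

# Illustrative event: ER = 1e14 J, Mw 6.0 (log10 M0 = 1.5 Mw + 9.1, N m)
m0 = 10.0 ** (1.5 * 6.0 + 9.1)
print(round(energy_magnitude(1e14), 2))       # 6.4
print(f"{apparent_stress(1e14, m0):.2e} Pa")  # ~2.4e+06 Pa
```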

  11. Predictability of population displacement after the 2010 Haiti earthquake

    PubMed Central

    Lu, Xin; Bengtsson, Linus; Holme, Petter

    2012-01-01

    Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 d before, to 341 d after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and size of people’s movement trajectories grew after the earthquake. These findings, in combination with the disorder that was present after the disaster, suggest that people’s movements would have become less predictable. Instead, the predictability of people’s trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake were highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city, as well as the time for their return, both followed a skewed, fat-tailed distribution. The findings suggest that population movements during disasters may be significantly more predictable than previously thought. PMID:22711804

  12. Predictability of population displacement after the 2010 Haiti earthquake.

    PubMed

    Lu, Xin; Bengtsson, Linus; Holme, Petter

    2012-07-17

    Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 d before, to 341 d after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and size of people's movement trajectories grew after the earthquake. These findings, in combination with the disorder that was present after the disaster, suggest that people's movements would have become less predictable. Instead, the predictability of people's trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake were highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city, as well as the time for their return, both followed a skewed, fat-tailed distribution. The findings suggest that population movements during disasters may be significantly more predictable than previously thought.
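
    The "skewed, fat-tailed distribution" of stay durations and return times can be characterized by its tail exponent; a standard tool for this is the Hill estimator, sketched here on synthetic Pareto data (illustrative only, not the authors' dataset or method):

```python
import math
import random

def hill_tail_exponent(samples, k):
    """Hill estimator of the tail exponent alpha, using the k largest
    observations (plus the (k+1)-th as the threshold)."""
    top = sorted(samples, reverse=True)[: k + 1]
    logs = [math.log(x) for x in top]
    return k / sum(logs[i] - logs[k] for i in range(k))

# Synthetic Pareto(alpha = 2) durations via inverse-transform sampling
random.seed(1)
durations = [random.random() ** (-1.0 / 2.0) for _ in range(50000)]
print(round(hill_tail_exponent(durations, 2000), 2))  # close to 2.0
```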

  13. Comment on Pisarenko et al., "Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory"

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2016-02-01

    In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding the extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of the extreme value theory and statistics do not work well for truncated exponential distributions; consequently, these classical methods should be applied with caution when estimating the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process assumed by Pisarenko et al. (2014). The crucial point of earthquake magnitudes is the poor convergence of their tail distribution to the GPD, not the earthquake process over time.
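
    The poor tail convergence noted here is easy to demonstrate: threshold excesses of an untruncated Gutenberg-Richter (exponential) catalog follow a GPD with shape near zero, whereas truncating the magnitudes (the TED case) drives a naive GPD fit to a markedly negative shape. A method-of-moments sketch (illustrative code, not the commented paper's procedure):

```python
import math
import random

def gpd_moment_fit(excesses):
    """Method-of-moments GPD fit to threshold excesses:
    with r = mean^2/var, shape = (1 - r)/2 and scale = mean*(1 + r)/2."""
    n = len(excesses)
    mean = sum(excesses) / n
    var = sum((x - mean) ** 2 for x in excesses) / (n - 1)
    r = mean * mean / var
    return (1.0 - r) / 2.0, mean * (1.0 + r) / 2.0  # (shape, scale)

random.seed(2)
beta = 1.0 * math.log(10.0)  # Gutenberg-Richter with b = 1
# Untruncated excesses are exponential -> fitted shape ~ 0
expo = [-math.log(random.random()) / beta for _ in range(100000)]
# Hard truncation 1.5 units above the threshold -> clearly negative shape
trunc = [x for x in expo if x < 1.5]
print([round(gpd_moment_fit(s)[0], 2) for s in (expo, trunc)])
```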

  14. Magnitudes and locations of the 1811-1812 New Madrid, Missouri, and the 1886 Charleston, South Carolina, earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Hopper, M.G.

    2004-01-01

    We estimate locations and moment magnitudes M and their uncertainties for the three largest events in the 1811-1812 sequence near New Madrid, Missouri, and for the 1 September 1886 event near Charleston, South Carolina. The intensity magnitude M1, our preferred estimate of M, is 7.6 for the 16 December 1811 event that occurred in the New Madrid seismic zone (NMSZ) on the Bootheel lineament or on the Blytheville seismic zone. M1 is 7.5 for the 23 January 1812 event for a location on the New Madrid north zone of the NMSZ and 7.8 for the 7 February 1812 event that occurred on the Reelfoot blind thrust of the NMSZ. Our preferred locations for these events are located on those NMSZ segments preferred by Johnston and Schweig (1996). Our estimates of M are 0.1-0.4 M units less than those of Johnston (1996b) and 0.3-0.5 M units greater than those of Hough et al. (2000). M1 is 6.9 for the 1 September 1886 event for a location at the Summerville-Middleton Place cluster of recent small earthquakes located about 30 km northwest of Charleston.

  15. Implications for prediction and hazard assessment from the 2004 Parkfield earthquake.

    PubMed

    Bakun, W H; Aagaard, B; Dost, B; Ellsworth, W L; Hardebeck, J L; Harris, R A; Ji, C; Johnston, M J S; Langbein, J; Lienkaemper, J J; Michael, A J; Murray, J R; Nadeau, R M; Reasenberg, P A; Reichle, M S; Roeloffs, E A; Shakal, A; Simpson, R W; Waldhauser, F

    2005-10-13

    Obtaining high-quality measurements close to a large earthquake is not easy: one has to be in the right place at the right time with the right instruments. Such a convergence happened, for the first time, when the 28 September 2004 Parkfield, California, earthquake occurred on the San Andreas fault in the middle of a dense network of instruments designed to record it. The resulting data reveal aspects of the earthquake process never before seen. Here we show what these data, when combined with data from earlier Parkfield earthquakes, tell us about earthquake physics and earthquake prediction. The 2004 Parkfield earthquake, with its lack of obvious precursors, demonstrates that reliable short-term earthquake prediction still is not achievable. To reduce the societal impact of earthquakes now, we should focus on developing the next generation of models that can provide better predictions of the strength and location of damaging ground shaking.

  16. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Labor Market Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that would cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of economic Super Sectors in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each Super Sector at each Instrumental Intensity level. The analysis concerns the direct effect of the scenario earthquake on economic sectors and provides a baseline for the indirect and interactive analysis of an input-output model of the regional economy. The analysis is inspired by a Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of an M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data include the number of business establishments, employees, and quarterly payroll, categorized by the North American Industry Classification System. According to the analysis results, nearly 225,000 business
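
    The exposure and sensitivity measures defined above can be sketched in a few lines; the employee counts per Instrumental Intensity level below are hypothetical illustrations, not figures from the report.

```python
# hypothetical employee counts of one Super Sector exposed to each
# Instrumental Intensity level (exposure = absolute counts)
exposure = {"VI": 120_000, "VII": 80_000, "VIII": 20_000, "IX": 5_000}

total = sum(exposure.values())
# sensitivity = percentage of the sector's exposure at each level
sensitivity = {level: 100.0 * count / total
               for level, count in exposure.items()}
print(sensitivity)
```

    By construction the sensitivity percentages of a sector sum to 100, which makes sectors of very different sizes directly comparable.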

  17. Evidence for the recurrence of large-magnitude earthquakes along the Makran coast of Iran and Pakistan

    USGS Publications Warehouse

    Page, W.D.; Alt, J.N.; Cluff, L.S.; Plafker, G.

    1979-01-01

    The presence of raised beaches and marine terraces along the Makran coast indicates episodic uplift of the continental margin resulting from large-magnitude earthquakes. The uplift occurs as incremental steps similar in height to the 1-3 m of measured uplift resulting from the November 28, 1945 (M 8.3) earthquake at Pasni and Ormara, Pakistan. The data support an E-W-trending, active subduction zone off the Makran coast. The raised beaches and wave-cut terraces along the Makran coast are extensive, with some terraces 1-2 km wide, 10-15 km long, and up to 500 m in elevation. The terraces are generally capped with shelly sandstones 0.5-5 m thick. Wave-cut cliffs, notches, and associated boulder breccia and swash troughs are locally preserved. Raised Holocene accretion beaches, lagoonal deposits, and tombolos are found up to 10 m in elevation. The number and elevation of raised wave-cut terraces along the Makran coast increase eastward from one at Jask, the entrance to the Persian Gulf, at a few meters elevation, to nine at Konarak, 250 km to the east. Multiple terraces are found on the prominent headlands as far east as Karachi. The wave-cut terraces are locally tilted and cut by faults with a few meters of displacement. Long-term, average rates of uplift were calculated from present elevation, estimated elevation at time of deposition, and 14C and U-Th dates obtained on shells. Uplift rates in centimeters per year at various locations from west to east are as follows: Jask, 0 (post-Sangamon); Konarak, 0.031-0.2 (Holocene), 0.01 (post-Sangamon); Ormara, 0.2 (Holocene). © 1979.

  18. 75 FR 63854 - National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... completeness and scientific validity of the available data related to earthquake predictions, and on related... focus on: (1) Methods for rapidly estimating the probability of a large earthquake following a possible...

  19. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    NASA Astrophysics Data System (ADS)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw = 7-7.5. For larger earthquakes, the analysis is more complex because of the non-validity of the point-source approximation and the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. Here we propose an automated deconvolutive approach that does not impose any simplifying assumptions about the rupture process and is thus well adapted to large earthquakes. We first determine the source duration based on the length of the high-frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point-source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real-data body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search for the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs directly gives the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but the values are consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
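
    As noted above, integrating the source time function yields the seismic moment M0, which converts directly to moment magnitude via the standard Hanks-Kanamori relation (M0 in N·m). A minimal sketch of that last step, not part of the SCARDEC code itself:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks & Kanamori / IASPEI
    convention), with M0 expressed in newton-meters."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# e.g. a great subduction earthquake with M0 ~ 4e22 N·m
print(round(moment_magnitude(4.0e22), 1))
```

    The 0.2-unit discrepancies quoted above correspond to a factor of about 2 in M0, since M0 scales as 10^(1.5 Mw).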

  20. Evolving magnitude-frequency distributions during the Guy-Greenbrier (2010-11) induced earthquake sequence: Insights into the physical mechanisms of b-value shifts and large-magnitude curvature

    NASA Astrophysics Data System (ADS)

    Dempsey, D.; Suckale, J.; Huang, Y.

    2015-12-01

    In 2010-11, a sequence of earthquakes occurred on an unmapped basement fault near Guy, Arkansas. The events are likely to have been triggered by a nine-month period of wastewater disposal during which 4.5 × 10⁵ m³ of water was injected at two nearby wells. Magnitude-frequency distributions (MFD) for the induced sequence show two interesting properties: (i) a low Gutenberg-Richter (GR) b-value of ~0.8 during injection, increasing to ~1.0 post-injection, and (ii) downward curvature of the MFD at the upper magnitude limit. We use a coupled model of injection-triggering and earthquake rupture to show how the evolving MFD can be understood in terms of an effective stress increase on the fault, which arises from overpressuring and strength reduction. Reservoir simulation is used to model injection into a horizontally extensive aquifer that overlies an impermeable basement containing a single permeable fault. Earthquake triggering occurs when the static strength, reduced by the modeled pressure increase, satisfies a Mohr-Coulomb criterion. Pressure evolution is also incorporated in a model of fault rupture, which is based on an expanding bilateral crack approximation to quasidynamic rupture propagation and static/dynamic friction evolution. An earthquake sequence is constructed as an ensemble of triggered ruptures for many realizations of a heterogeneous fractal stress distribution. During injection, there is a steady rise in fluid pressure on the fault. In addition to its role in triggering earthquakes, rising pressure affects the rupture process by reducing the dynamic strength relative to fault shear stress; this is equivalent to tectonic stress increase in natural seismicity. As mean stress increases, larger events are more frequent, and this is reflected in a lower b-value. The largest events, however, occur late in the loading cycle at very high stress; their absence in the early stages of injection manifests as downward curvature in the MFD at large magnitudes.
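
    The b-value shifts discussed above are commonly measured with Aki's maximum-likelihood estimator. A stdlib-Python sketch on a synthetic catalogue with a known b-value of 1.0; the completeness magnitude and sample size are illustrative assumptions, and binned catalogues usually subtract half the bin width from m_c:

```python
import math
import random
import statistics

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness magnitude m_c."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (statistics.fmean(above) - m_c)

# synthetic Gutenberg-Richter catalogue with a true b-value of 1.0
rng = random.Random(42)
beta = 1.0 * math.log(10.0)  # beta = b * ln(10)
catalogue = [2.0 + rng.expovariate(beta) for _ in range(20_000)]
print(round(b_value_mle(catalogue, 2.0), 2))
```

    A shift from b ~ 0.8 to b ~ 1.0, as in the Guy-Greenbrier sequence, roughly halves the expected proportion of M ≥ m_c + 3 events.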

  1. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  2. Foreshocks and earthquake prediction: recent results from Greece experience

    NASA Astrophysics Data System (ADS)

    Papadopoulos, G. A.; Daskalaki, E.; Minadakis, G.; Orfanogiannaki, K.

    2009-04-01

    Foreshock activity has been proposed since the 1960s as one of the most promising tools for the short-term prediction of a mainshock. However, the usually low earthquake detectability of seismic monitoring systems makes it difficult to identify significant foreshock seismicity patterns in near-real-time conditions. The gradual improvement of monitoring systems in recent years makes it possible to detect the precursory nature of foreshock activity more reliably. This is exactly the case in Greece, which is characterized by the highest seismicity in western Eurasia. We use data from the routine Greek seismicity catalogue for the interval 1985-2008 and identify a posteriori foreshock activity occurring before strong earthquakes of Ms ≥ 5.5. The criteria used to identify significant foreshock activity are the following: a time window of up to 1 year before the strong earthquake; a space window of no more than 50 km from the epicenter of the strong earthquake; and an increase of the seismicity rate in the particular space-time window at a significance level of at least 95% with respect to the background seismicity rate in the same area. The results indicate that about 50% of the strong earthquakes were preceded by significant foreshock activity. However, further examination of the records of particular stations of the national Greek seismograph network showed that foreshock activity is not always evident in the routine seismicity catalogue, for reasons related to the detection capabilities of the system. We propose the systematic, automatic monitoring of daily seismicity with the purpose of identifying foreshock activity in near real time. We demonstrate the algorithm FORMA, which is designed to perform such automatic detection.
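
    A rate-increase test of the kind described above (significance of at least 95% relative to the background rate) can be sketched as a one-sided Poisson test; the event counts below are hypothetical illustrations, not values from the Greek catalogue.

```python
import math

def poisson_sf(k_obs, mu):
    """P(X >= k_obs) for X ~ Poisson(mu): the probability that k_obs or
    more events occur by chance given the background expectation mu."""
    return 1.0 - sum(math.exp(-mu) * mu ** i / math.factorial(i)
                     for i in range(k_obs))

# background: 2 events expected in the space-time window; 8 observed
p = poisson_sf(8, 2.0)
print(p < 0.05)  # rate increase significant at the 95% level
```

    Low detectability biases this test toward missing foreshocks: if small events fall below the completeness threshold, the observed count shrinks while the background estimate may not, exactly the problem noted for the routine catalogue.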

  3. A theoretical study of correlation between scaled energy and earthquake magnitude based on two source displacement models

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2013-12-01

    The correlation of the scaled energy, ê = E_s/M_0, versus earthquake magnitude, M_s, is studied based on two models: (1) Model 1, based on the time function of the average displacements across a fault plane with an ω⁻² source spectrum; and (2) Model 2, based on the time function of the average displacements across a fault plane with an ω⁻³ source spectrum. For the second model there are two cases: (a) as τ ≈ T, where τ is the rise time and T the rupture time, lg(ê) ~ -M_s; and (b) as τ ≪ T, lg(ê) ~ -(1/2)M_s. The second model leads to a negative value of ê, which means that Model 2 cannot work for the present problem. The results obtained from Model 1 suggest that the source model is a factor, yet not a unique one, in controlling the correlation of ê versus M_s.

  4. Spatial variations in the frequency-magnitude distribution of earthquakes at Soufriere Hills Volcano, Montserrat, West Indies

    USGS Publications Warehouse

    Power, J.A.; Wyss, M.; Latchman, J.L.

    1998-01-01

    The frequency-magnitude distribution of earthquakes, measured by the b-value, is determined as a function of space beneath Soufriere Hills Volcano, Montserrat, from data recorded between August 1, 1995 and March 31, 1996. A volume of anomalously high b-values (b > 3.0) with a 1.5 km radius is imaged at depths of 0 and 1.5 km beneath English's Crater and Chance's Peak. This high b-value anomaly extends southwest to Gage's Soufriere. At depths greater than 2.5 km, volumes of comparatively low b-values (b ≈ 1) are found beneath St. George's Hill, Windy Hill, and, below 2.5 km depth, to the south of English's Crater. We speculate that the depth of high b-value anomalies under volcanoes may be a function of silica content, modified by additional factors, with the most siliceous volcanoes having these highly fractured or high-pore-pressure volumes at the shallowest depths. Copyright 1998 by the American Geophysical Union.

  5. Simulation of Parallel Interacting Faults and Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Mora, P.; Weatherley, D.; Klein, B.

    2003-04-01

    Numerical shear experiments of a granular region using the lattice solid model often exhibit accelerating energy release in the lead-up to large events (Mora et al., 2000) and a growth in correlation lengths in the stress field (Mora and Place, 2002). While these results provide evidence for a Critical Point-like mechanism in elasto-dynamic systems and for the possibility of earthquake forecasting, they do not prove that such a mechanism occurs in the crust. Cellular automaton (CA) simulations exhibit either accelerating energy release prior to large events or unpredictable behaviour in which large events may occur at any time, depending on tuning parameters such as the dissipation ratio and the stress transfer ratio (Weatherley and Mora, 2003). The mean stress plots from the particle simulations are most similar to the CA mean stress plots near the boundary of the predictable and unpredictable regimes, suggesting that elasto-dynamic systems may be close to the borderline between predictable and unpredictable. To make progress in resolving whether more realistic fault system models exhibit predictable behaviour, and whether they also have predictable and unpredictable regimes depending on tuning parameters as seen in CA simulations, we developed a 2D elasto-dynamic model of parallel interacting faults. The friction is slip-weakening until a critical slip distance; thereafter, the friction remains at the dynamic value until the slip rate drops below the value it attained when the critical slip distance was exceeded. As the slip rate continues to drop, the friction increases back to the static value as a function of slip rate. Numerical shear experiments are conducted in a model with 41 parallel interacting faults. Calculations of the inverse metric defined in Klein et al. (2000) indicate that the system is non-ergodic. Furthermore, by calculating the correlation between the stress fields at different times, we determine that the system exhibits so called ``glassy

  6. Effects of magnitude and magnitude predictability of postural perturbations on preparatory cortical activity in older adults with and without Parkinson's disease.

    PubMed

    Smith, Beth A; Jacobs, Jesse V; Horak, Fay B

    2012-10-01

    The goal of this study was to identify whether impaired cortical preparation may relate to impaired scaling of postural responses of people with Parkinson's disease (PD). We hypothesized that impaired scaling of postural responses in participants with PD would be associated with impaired set-dependent cortical activity in preparation for perturbations of predictable magnitudes. Participants performed postural responses to backward surface translations. We examined the effects of perturbation magnitude (predictable small vs. predictable large) and predictability of magnitude (predictable vs. unpredictable-in-magnitude) on postural responses (center-of-pressure (CoP) displacements) and on preparatory electroencephalographic (EEG) measures of contingent negative variation (CNV) and alpha and beta event-related desynchronization (ERD). Our results showed that unpredictability of perturbation magnitude, but not the magnitude of the perturbation itself, was associated with increased CNV amplitude at the CZ electrode in both groups. While control participants scaled their postural responses to the predicted magnitude of the perturbation, their condition-related changes in CoP displacements were not correlated with condition-related changes in EEG preparatory activity (CNV or ERD). In contrast, participants with PD did not scale their postural responses to the predicted magnitude of the perturbation, but they did demonstrate greater beta ERD in the condition of predictably small-magnitude perturbations and greater beta ERD than the control participants at the CZ electrode. In addition, increased beta ERD in PD was associated with decreased adaptability of postural responses, suggesting that preparatory cortical activity may have a more direct influence on postural response scaling for people with PD than for control participants.

  7. Recent development of the earthquake strong motion-intensity catalog and intensity prediction equations for Iran

    NASA Astrophysics Data System (ADS)

    Zare, Mehdi

    2017-07-01

    This study aims to develop a new earthquake strong motion-intensity catalog as well as intensity prediction equations for Iran based on the available data. For this purpose, all the sites that had both recorded strong motion and intensity values throughout the region were first identified. Then, the data belonging to the 306 identified sites were processed, and the results were compiled into a new strong motion-intensity catalog. Based on this new catalog, two empirical equations between the values of intensity and the ground motion parameters (GMPs) for Iranian earthquakes were calculated. In the first step, earthquake "intensity" was considered as a function of five independent GMPs, namely "Log (PHA)," "moment magnitude (MW)," "distance to epicenter," "site type," and "duration," and a multiple stepwise regression was calculated. Considering the correlations between the parameters and the coefficients of the predictors, Log (PHA) was recognized as the most effective parameter for earthquake "intensity," while the parameter "site type" was removed from the equations since it was determined to be the least significant variable. In the second step, a simple ordinary least squares (OLS) regression was fitted only between intensity and Log (PHA), which resulted in more over- or underestimated intensity values compared with the results of the multiple intensity-GMPs regression. However, for rapid-response purposes, the simple OLS regression may be more useful than the multiple regression because of its data availability and simplicity. In addition, based on 50 selected earthquakes, an empirical relation between the macroseismic intensity (I0) and MW was developed.
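
    The second-step OLS regression between intensity and Log (PHA) can be sketched in closed form; the (log PHA, intensity) pairs below are hypothetical illustrations, not entries from the Iranian catalog.

```python
import statistics

def ols_fit(x, y):
    """Closed-form ordinary least squares for y = a + b*x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b  # intercept a, slope b

# hypothetical (log10 PHA, macroseismic intensity) pairs
log_pha = [1.2, 1.6, 2.0, 2.4, 2.8]
intensity = [4.1, 5.0, 6.2, 6.9, 8.0]
a, b = ols_fit(log_pha, intensity)
print(a, b)
```

    A single-predictor fit like this is easy to apply in near real time, which is the rapid-response advantage noted above, at the cost of the larger scatter produced by ignoring magnitude, distance and duration.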

  9. Earthquake forecasting and warning

    SciTech Connect

    Rikitake, T.

    1983-01-01

    This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings on precursory phenomena are included. A summary of the circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all magnitudes ≥ 7.0) in China, and the 1978 Izu-Oshima earthquake in Japan, is presented. The book, however, fails to comprehensively summarize recent advances in earthquake prediction research.

  10. Relationship between spatial variations in active creep and large magnitude hanging wall earthquakes associated with the Alto Tiberina low angle normal fault, central Italy.

    NASA Astrophysics Data System (ADS)

    Mencin, David; Bennett, Rick; Jackson, Lily J.; Casale, Gabriele

    2014-05-01

    The Alto Tiberina fault (ATF) in central Italy is a rare instance of a low angle normal fault that appears to be actively creeping at shallow to mid-crustal depths. While conventional Andersonian earthquake mechanics dictates that such faults should lock up under extension, recent studies using GPS velocity data and simple fault models suggest that the ATF accommodates slip by aseismic creep below ~4 km depth in the latitude range 43.2°N to 43.5°N. This creeping section of the ATF is well imaged, and there are no instrumentally recorded large magnitude earthquakes in its hanging wall. There is no evidence for active fault creep north and south of the creeping section, where large hanging wall earthquakes have occurred. We use geodetically determined images of fault creep and earthquake focal mechanism data to explore the stress transfer relationships between the creeping section of the ATF and adjacent portions of the fault zone, which appear to be locked. In one interpretation, hanging wall earthquakes occur as a result of strain accumulation caused by variations in creep on the low angle normal fault. An alternative explanation is that creep on the low angle fault has been inhibited in the vicinity of the large magnitude hanging wall earthquakes. These spatial relationships notwithstanding, the resolution of the imaged pattern of creep is relatively low. A borehole strainmeter network would provide unprecedented temporal resolution of aseismic creep transients and would help evaluate the possible relationships between hanging wall stress accumulation and stress-sensitive creep. The combination of modeling and seismic/geodetic monitoring with a new borehole strainmeter array would also help decipher the fault structure, earthquake mechanisms, and seismic risk in a populated area.

  11. Possibility of Earthquake-prediction by analyzing VLF signals

    NASA Astrophysics Data System (ADS)

    Ray, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta

    2016-07-01

    Prediction of seismic events is one of the most challenging tasks for the scientific community. The conventional approach is to monitor crustal movements, though this method has not yet yielded satisfactory results and fails to give any short-term prediction. Recently, it has been noticed that prior to a seismic event a huge amount of energy is released, which may create disturbances in the lower part of the D-/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the waveguide formed by the lower ionosphere and the Earth's surface, these signals may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find correlations, if any, between VLF signal anomalies and seismic activity, through both case-by-case studies and a statistical analysis of a full year of data. In both approaches we found that the nighttime amplitude of VLF signals fluctuated anomalously three days before seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards nighttime a few days before major seismic events. We calculate the D-layer preparation time and the D-layer disappearance time from the VLF signals and observe that both become anomalously high 1-2 days before seismic events. Finally, we found strong evidence indicating that it may be possible to predict the locations of earthquake epicenters by analyzing VLF signals along multiple propagation paths.
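
    A simple way to flag the kind of nighttime-amplitude anomaly described above is a z-score test against the series mean; the 2-sigma threshold and the synthetic amplitude series below are assumptions for illustration, not the authors' detection criterion.

```python
import statistics

def flag_anomalies(night_amp, n_sigma=2.0):
    """Return indices of days whose nighttime VLF amplitude deviates
    from the series mean by more than n_sigma standard deviations."""
    mu = statistics.fmean(night_amp)
    sd = statistics.stdev(night_amp)
    return [i for i, a in enumerate(night_amp)
            if abs(a - mu) > n_sigma * sd]

# 30 quiet days of ~50 dB amplitude plus one anomalous dip on day 27
series = [50.0, 50.3, 49.8, 50.1, 49.9, 50.2] * 5
series[27] = 45.0
print(flag_anomalies(series))  # flags only the anomalous day
```

    In practice the baseline would be seasonally detrended before testing, since diurnal and seasonal variations would otherwise inflate the standard deviation.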

  12. Ground motion prediction and earthquake scenarios in the volcanic region of Mt. Etna (Southern Italy

    NASA Astrophysics Data System (ADS)

    Langer, Horst; Tusa, Giuseppina; Luciano, Scarfi; Azzaro, Raffaela

    2013-04-01

    One of the principal issues in the assessment of seismic hazard is the prediction of relevant ground motion parameters, e.g., peak ground acceleration, radiated seismic energy, and response spectra, at some distance from the source. Here we first present ground motion prediction equations (GMPEs) for horizontal components for the area of Mt. Etna and adjacent zones. Our analysis is based on 4878 three-component seismograms related to 129 seismic events with local magnitudes ranging from 3.0 to 4.8, hypocentral distances up to 200 km, and focal depths shallower than 30 km. Accounting for the specific seismotectonic and geological conditions of the area, we divided our data set into three sub-groups: (i) Shallow Mt. Etna Events (SEE), i.e., typically volcano-tectonic events in the area of Mt. Etna with a focal depth of less than 5 km; (ii) Deep Mt. Etna Events (DEE), i.e., events in the volcanic region with a depth greater than 5 km; and (iii) Extra Mt. Etna Events (EEE), i.e., purely tectonic events falling outside the area of Mt. Etna. The predicted PGAs for the SEE are lower than those predicted for the DEE and the EEE, reflecting their lower high-frequency energy content, which we attribute to lower stress drops. The attenuation relationships are compared to those most commonly used, such as Sabetta and Pugliese (1987) for Italy or Ambraseys et al. (1996) for Europe. Whereas our GMPEs are based on small earthquakes, the magnitudes covered by the two above-mentioned attenuation relationships are moderate to large (up to 6.8 and 7.9, respectively). We show that extrapolation of our GMPEs to magnitudes beyond the range covered by the data is misleading; at the same time, the aforementioned relationships fail to predict ground motion parameters for our data set. Despite these discrepancies, we can exploit our data for setting up scenarios for strong earthquakes for which no instrumental recordings are
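
    A GMPE of the general kind discussed above can be evaluated as follows; the functional form is a common textbook choice, and the coefficients are illustrative placeholders, NOT the fitted Mt. Etna values. The extrapolation caveat noted above applies to any such fit.

```python
import math

def log10_pga(mag, r_epi_km, a=-1.5, b=0.5, c=1.0, h=5.0):
    """Generic GMPE functional form log10(PGA) = a + b*M - c*log10(R_h),
    where R_h = sqrt(R^2 + h^2) uses a pseudo-depth h to avoid the
    singularity at zero epicentral distance. Coefficients are
    hypothetical placeholders for illustration only."""
    r_h = math.sqrt(r_epi_km ** 2 + h ** 2)
    return a + b * mag - c * math.log10(r_h)

# predicted log10(PGA) for an M 4.0 event at 10 km and 100 km
print(log10_pga(4.0, 10.0), log10_pga(4.0, 100.0))
```

    Because the coefficients are calibrated on M 3.0-4.8 data, evaluating such a function at M 6-7 amounts to the misleading extrapolation the authors warn against.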

  13. The 26 May 2006 magnitude 6.4 Yogyakarta earthquake south of Mt. Merapi volcano: Did lahar deposits amplify ground shaking and thus lead to the disaster?

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Wang, R.; Luehr, B.-G.; Wassermann, J.; Behr, Y.; Parolai, S.; Anggraini, A.; Günther, E.; Sobiesiak, M.; Grosser, H.; Wetzel, H.-U.; Milkereit, C.; Sri Brotopuspito, P. J. K.; Harjadi, P.; Zschau, J.

    2008-05-01

    Indonesia is repeatedly unsettled by severe volcano- and earthquake-related disasters, which are geologically coupled to the 5-7 cm/a tectonic convergence of the Australian plate beneath the Sunda Plate. On Saturday, 26 May 2006, the southern coast of central Java was struck by an earthquake at 2254 UTC in the Sultanate of Yogyakarta. Although the magnitude reached only Mw = 6.4, the earthquake left more than 6,000 fatalities and up to 1,000,000 homeless. The main disaster area was south of Mt. Merapi Volcano, within a narrow topographic and structural depression along the Opak River. The earthquake disaster area within the depression is underlain by thick volcaniclastic deposits, commonly derived in the form of lahars from Mt. Merapi Volcano, which had a major influence on the disaster. In order to understand this earthquake and its consequences more precisely, a 3-month aftershock measurement campaign was performed from May to August 2006. Here we present the first location results, which suggest that the Yogyakarta earthquake occurred 10-20 km east of the disaster area, outside the topographic depression. Using simple model calculations that take material heterogeneity into account, we illustrate how soft volcaniclastic deposits may locally amplify ground shaking at distance. As the high degree of observed damage may have been augmented by the seismic response of the volcaniclastic Mt. Merapi deposits, this work implies that the volcano had an indirect effect on the level of earthquake destruction.

  14. Large magnitude (M > 7.5) offshore earthquakes in 2012: few examples of absent or little tsunamigenesis, with implications for tsunami early warning

    NASA Astrophysics Data System (ADS)

    Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano

    2013-04-01

    We consider some examples of offshore earthquakes that occurred worldwide in 2012 and were characterised by a "large" magnitude (Mw equal to or larger than 7.5) but produced no or little tsunami effects. Here, "little" means "lower than expected on the basis of the parent earthquake magnitude". The examples we analyse include three earthquakes along the Pacific coasts of Central America (20 March, Mw=7.8, Mexico; 5 September, Mw=7.6, Costa Rica; 7 November, Mw=7.5, Mexico), the Mw=7.6 and Mw=7.7 earthquakes that occurred on 31 August and 28 October offshore the Philippines and offshore Alaska, respectively, and the two Indian Ocean earthquakes registered on a single day (11 April) and characterised by Mw=8.6 and Mw=8.2. For each event, we approach the problem of its tsunamigenic potential from two different perspectives. The first is purely scientific and coincides with the question: why was the ensuing tsunami so weak? The answer can be related partly to the particular tectonic setting in the source area, partly to the position of the source with respect to the coastline, and finally to the focal mechanism of the earthquake and the slip distribution on the ruptured fault. The first two pieces of information are available soon after the earthquake occurrence, while the third requires time periods on the order of tens of minutes. The second perspective is more "operational" and coincides with the tsunami early warning perspective, for which the question is: will the earthquake generate a significant tsunami and, if so, where will it strike? The Indian Ocean events of 11 April 2012 are perfect examples of the fact that information on the earthquake magnitude and position alone may not be sufficient to produce reliable tsunami warnings. We emphasise that it is of utmost importance that the focal mechanism determination is obtained in the future much more quickly than at present and that this

  15. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  16. Calibration of the landsliding numerical model SLIPOS and prediction of the seismically induced erosion for several large earthquakes scenarios

    NASA Astrophysics Data System (ADS)

    Jeandet, Louise; Lague, Dimitri; Steer, Philippe; Davy, Philippe; Quigley, Mark

    2016-04-01

    Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. But while the scaling between earthquake magnitude and the volume of eroded sediment is well known, geomorphic consequences such as divide migration or valley infilling are still poorly understood, and predicting the location of landslide sources and deposits remains a challenging issue. Progress in this topic requires algorithms that correctly resolve the interaction between landsliding and ground shaking. Peak Ground Acceleration (PGA) has been shown to control landslide density at first order, but it can trigger landslides through two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. The relative importance of both effects on slope stability is not well understood. We use SLIPOS, an algorithm of bedrock landsliding based on a simple stability analysis applied at local scale. The model is capable of reproducing the area/volume scaling and area distribution of natural landslides. We aim to include the effects of earthquakes in SLIPOS by simulating the PGA effect via a spatially variable cohesion decrease. We run the model (i) on the Mw 7.6 Chi-Chi earthquake (1999) to quantitatively test the accuracy of the predictions and (ii) on earthquake scenarios (Mw 6.5 to 8) on the New Zealand Alpine Fault to infer the volume of landslides associated with large events. For the Chi-Chi earthquake, we predict the observed total landslide area within a factor of 2. Moreover, we show with the New Zealand fault case that simulating ground acceleration by cohesion decrease leads to a realistic scaling between sediment volume and earthquake magnitude.
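
    The two triggering mechanisms discussed above can be illustrated with a standard pseudo-static infinite-slope stability check, where the seismic coefficient adds a driving force and a cohesion drop stands in for transient strength loss. A sketch with invented parameter values (this is the textbook simplification, not SLIPOS's actual formulation):

```python
import math

def pseudo_static_fs(c, phi_deg, gamma, h, beta_deg, k_h):
    """Pseudo-static factor of safety for an infinite slope.
    c: cohesion (kPa), phi_deg: friction angle (deg), gamma: unit weight
    (kN/m^3), h: slab thickness (m), beta_deg: slope angle (deg),
    k_h: horizontal seismic coefficient (often scaled to PGA/g)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    w = gamma * h  # weight per unit slope area
    normal = w * math.cos(beta) ** 2 - k_h * w * math.sin(beta) * math.cos(beta)
    driving = w * math.sin(beta) * math.cos(beta) + k_h * w * math.cos(beta) ** 2
    return (c + normal * math.tan(phi)) / driving

# Static slope vs. the same slope with seismic load plus a cohesion drop:
fs_static = pseudo_static_fs(c=20.0, phi_deg=30.0, gamma=25.0, h=5.0,
                             beta_deg=35.0, k_h=0.0)
fs_shaken = pseudo_static_fs(c=5.0, phi_deg=30.0, gamma=25.0, h=5.0,
                             beta_deg=35.0, k_h=0.3)
print(fs_static > 1.0, fs_shaken < 1.0)  # True True: shaking drives FS below 1
```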

  17. Large-magnitude, late Holocene earthquakes on the Genoa fault, West-Central Nevada and Eastern California

    USGS Publications Warehouse

    Ramelli, A.R.; Bell, J.W.; DePolo, C.M.; Yount, J.C.

    1999-01-01

    The Genoa fault, a principal normal fault of the transition zone between the Basin and Range Province and the northern Sierra Nevada, displays a large and conspicuous prehistoric scarp. Three trenches excavated across this scarp exposed two large-displacement, late Holocene events. Two of the trenches contained multiple layers of stratified charcoal, yielding radiocarbon ages suggesting the most recent and penultimate events on the main part of the fault occurred 500-600 cal B.P., and 2000-2200 cal B.P., respectively. Normal-slip offsets of 3-5.5 m per event along much of the rupture length are comparable to the largest historical Basin and Range Province earthquakes, suggesting these paleoearthquakes were on the order of magnitude 7.2-7.5. The apparent late Holocene slip rate (2-3 mm/yr) is one of the highest in the Basin and Range Province. Based on structural and behavioral differences, the Genoa fault is here divided into four principal sections (the Sierra, Diamond Valley, Carson Valley, and Jacks Valley sections) and is distinguished from three northeast-striking faults in the Carson City area (the Kings Canyon, Carson City, and Indian Hill faults). The conspicuous scarp extends for nearly 25 km, the combined length of the Carson Valley and Jacks Valley sections. The Diamond Valley section lacks the conspicuous scarp, and older alluvial fans and bedrock outcrops on the downthrown side of the fault indicate a lower activity rate. Activity further decreases to the south along the Sierra section, which consists of numerous distributed faults. All three northeast-striking faults in the Carson City area ruptured within the past few thousand years, and one or more may have ruptured during recent events on the Genoa fault.

  18. Raising the science awareness of first year undergraduate students via an earthquake prediction seminar

    NASA Astrophysics Data System (ADS)

    Gilstrap, T. D.

    2011-12-01

    The public is fascinated with and fearful of natural hazards such as earthquakes. After every major earthquake there is a surge of interest in earthquake science and earthquake prediction. Yet many people do not understand the challenges of earthquake prediction and the need to fund earthquake research. An earthquake prediction seminar is offered to first-year undergraduate students to improve their understanding of why earthquakes happen, how earthquake research is done, and, more specifically, why it is so challenging to issue short-term earthquake predictions. Some of these students may become scientists, but most will not; for the majority this is an opportunity to learn how science research works and how it relates to policy and society. The seminar is seven weeks long, two hours per week, and has been taught every year for the past four years. The material is presented conceptually; there is very little quantitative work involved. The class starts with a field trip to the Randolph College Seismic Station, where students learn about seismographs and the different types of seismic waves. Students are then provided with basic background on earthquakes. They learn how to pick arrival times using real seismograms, how to use earthquake catalogues, and how to predict the arrival of an earthquake wave at any location on Earth. Next they learn about long-term, intermediate-term, short-term, and real-time earthquake prediction. Discussions are an essential part of the seminar: students are challenged to draw their own conclusions on the pros and cons of earthquake prediction, and time is designated to discuss its political and economic impact. At the end of the seven weeks, students are required to write a paper discussing the need for earthquake prediction. The class is focused not on the science alone but on the links between the science issues and their economic and political impact. Weekly homework assignments are used to aid and assess students' learning. Pre and
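
    One of the classroom exercises mentioned, locating an event from a single seismogram, can be sketched with the standard S-minus-P method; the velocities below are typical crustal averages (an assumption, not station-specific values):

```python
# Epicentral distance from the S-minus-P arrival-time difference.
# Assumed average crustal velocities (km/s):
VP, VS = 6.0, 3.5

def distance_from_sp(delta_t_sp):
    """Distance (km) from the S-P time difference (s):
    each km of travel adds (1/VS - 1/VP) seconds of S-P delay."""
    return delta_t_sp / (1.0 / VS - 1.0 / VP)

print(round(distance_from_sp(10.0), 1))  # 84.0 km for a 10 s S-P interval
```

    With three such distances from three stations, students can triangulate the epicentre on a map.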

  19. Micro-seismicity in the Gulf of Cadiz: Is there a link between micro-seismicity, high magnitude earthquakes and active faults?

    NASA Astrophysics Data System (ADS)

    Silva, Sónia; Terrinha, Pedro; Matias, Luis; Duarte, João C.; Roque, Cristina; Ranero, César R.; Geissler, Wolfram H.; Zitellini, Nevio

    2017-10-01

    The Gulf of Cadiz seismicity is characterized by persistent low- to intermediate-magnitude earthquakes, occasionally punctuated by high-magnitude events such as the M 8.7 1755 Great Lisbon earthquake and the M 7.9 event of 28 February 1969. Micro-seismicity was recorded during 11 months by a temporary network of 25 ocean bottom seismometers (OBSs) in an area of high seismic activity, encompassing the potential source areas of the mentioned large-magnitude earthquakes. We combined micro-seismicity analysis with processing and interpretation of deep crustal seismic reflection profiles and available refraction data to investigate the possible tectonic control of the seismicity in the Gulf of Cadiz area. Three controlling mechanisms are explored: i) active tectonic structures, ii) transitions between different lithospheric domains and inherited Mesozoic structures, and iii) fault weakening mechanisms. Our results show that micro-seismicity is mostly located in the upper mantle and is associated with tectonic inversion of extensional rift structures and with the transition between different lithospheric/rheological domains. Even though the crustal structure is well imaged in the seismic profiles and in the bathymetry, crustal faults show low to negligible seismic activity. A possible explanation is that the crustal thrusts are thin-skinned structures rooting in relatively shallow sub-horizontal décollements associated with (aseismic) serpentinization levels at the top of the lithospheric mantle. Therefore, co-seismic slip along crustal thrusts may only occur during large-magnitude events, while for most of the inter-seismic cycle these thrusts remain locked or slip aseismically. We further speculate that high-magnitude earthquake ruptures may nucleate only in the lithospheric mantle and then propagate into the crust across the serpentinized layers.

  20. Study of Earthquake Disaster Prediction System of Langfang city Based on GIS

    NASA Astrophysics Data System (ADS)

    Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei

    2017-07-01

    To address China's need to improve its earthquake disaster prevention capability, this paper presents an implementation plan for a GIS-based earthquake disaster prediction system for Langfang city. Built on a GIS spatial database and using coordinate transformation, GIS spatial analysis, and PHP development technologies, the system applies a seismic damage factor algorithm to predict damage to the city under earthquake disasters of different intensities. The system follows a browser/server (B/S) architecture and provides two-dimensional visualization of damage degree and spatial distribution, comprehensive query and analysis, and efficient auxiliary decision-making functions to identify seismically weak areas of the city and support rapid warning. The system has transformed the city's earthquake disaster reduction work from static planning to dynamic management and improved the city's earthquake prevention and disaster mitigation capability.
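
    The damage-prediction step in such a system typically amounts to looking up mean damage ratios by macroseismic intensity and building class. A toy sketch (the matrix values, class names, and function are hypothetical, not Langfang data or the paper's algorithm):

```python
# Hypothetical mean damage ratios (0..1) per building class and intensity.
# A real system would calibrate these from post-earthquake survey data.
DAMAGE_MATRIX = {
    "masonry":  {6: 0.05, 7: 0.15, 8: 0.35, 9: 0.60},
    "rc_frame": {6: 0.02, 7: 0.08, 8: 0.20, 9: 0.40},
}

def predicted_loss(building_class, intensity, replacement_value):
    """Expected monetary loss = mean damage ratio * replacement value."""
    ratio = DAMAGE_MATRIX[building_class][intensity]
    return ratio * replacement_value

print(predicted_loss("masonry", 8, 1_000_000))  # 350000.0
```

    Mapping these per-building losses over a GIS layer yields the spatial damage distribution the system visualizes.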

  1. Prediction model of earthquake with the identification of earthquake source polarity mechanism through the focal classification using ANFIS and PCA technique

    NASA Astrophysics Data System (ADS)

    Setyonegoro, W.

    2016-05-01

    Earthquake disasters have caused considerable casualties and material losses. This research aims to predict the return period of earthquakes, together with identification of the earthquake source mechanism, for a case study area in Sumatra. Earthquakes are predicted by training on historical earthquake data using the ANFIS technique. In this technique, the historical data set is compiled into intervals of daily average earthquake occurrence per year. The output is a model of the return period of earthquake events as a daily average per year. Once the return period model has been learned by ANFIS, polarity recognition is performed through image recognition on the focal sphere using the principal component analysis (PCA) method. As a result, the model's prediction of the return period of earthquake events, for the average monthly return period, showed a correlation coefficient of 0.014562.
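
    The PCA step used for focal-sphere image recognition can be sketched with a standard SVD-based decomposition. A minimal sketch on synthetic stand-in data (not focal-sphere images):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for image feature vectors: 200 samples, 10 features, with most
# variance concentrated along two latent directions plus small noise.
latent = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
X = latent + 0.05 * rng.normal(size=(200, 10))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)      # variance fraction per component
scores = Xc @ Vt[:2].T               # projection onto the first two PCs

print(scores.shape, round(float(explained[:2].sum()), 3))
```

    The low-dimensional scores, rather than raw pixels, are what a downstream classifier (here, the polarity recognizer) would consume.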

  2. Diking-induced moderate-magnitude earthquakes on a youthful rift border fault: The 2002 Nyiragongo-Kalehe sequence, D.R. Congo

    NASA Astrophysics Data System (ADS)

    Wauthier, C.; Smets, B.; Keir, D.

    2015-12-01

    On 24 October 2002, a Mw 6.2 earthquake occurred in the central part of the Lake Kivu basin, in the Western Branch of the East African Rift. This is the largest event recorded in the Lake Kivu area since 1900. An integrated analysis of radar interferometry (InSAR), seismic, and geological data demonstrates that the earthquake occurred due to normal-slip motion on a major preexisting east-dipping rift border fault. A Coulomb stress analysis suggests that diking events, such as the January 2002 dike intrusion, could promote faulting on the western border faults of the rift in the central part of Lake Kivu. We thus interpret that dike-induced stress changes can cause moderate- to large-magnitude earthquakes on major border faults during continental rifting. Continental extension processes appear complex in the Lake Kivu basin, requiring a hybrid model of strain accommodation and partitioning in the East African Rift.
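
    At its core, a Coulomb stress analysis reduces to a one-line failure criterion evaluated on receiver faults. The sketch below uses illustrative numbers, not values from this study:

```python
# Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n
# Positive dCFS brings the fault closer to failure.
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """d_shear: shear stress change in the slip direction (MPa);
    d_normal: normal stress change, unclamping positive (MPa);
    mu_eff: effective friction coefficient (assumed)."""
    return d_shear + mu_eff * d_normal

# Hypothetical dike-induced stress changes resolved on a border fault:
dcfs = coulomb_stress_change(d_shear=0.05, d_normal=0.1)
print(dcfs > 0)  # True: this stress change would promote slip
```

    In a full analysis, d_shear and d_normal come from an elastic dislocation model of the dike, resolved onto the border fault's geometry and slip direction.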

  3. Network of seismo-geochemical monitoring observatories for earthquake prediction research in India

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Hirok; Barman, Chiranjib; Iyengar, A.; Ghose, Debasis; Sen, Prasanta; Sinha, Bikash

    2013-08-01

    This paper briefly reviews the research carried out to develop multi-parametric gas-geochemical monitoring facilities dedicated to earthquake prediction research in India, through a network of seismo-geochemical monitoring observatories installed in different regions of the country. In an attempt to detect earthquake precursors, the concentrations of helium, argon, nitrogen, methane, radon-222 (222Rn), polonium-218 (218Po), and polonium-214 (214Po) emanating from hydrothermal systems are monitored continuously and round the clock at these observatories. Here we present a cross-correlation study of a number of geochemical anomalies recorded at these observatories. With the data received from each observatory, we attempt a time series analysis relating magnitude and epicentral distance through statistical methods and empirical formulations that relate the area of influence to earthquake scale. Application of linear and nonlinear statistical techniques to the recorded geochemical data sets reveals a clear signature of long-range correlation in the data.
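
    The cross-correlation analysis of anomalies between observatories can be sketched with NumPy. A hedged example on synthetic series (the 5-sample offset and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic geochemical series: b repeats a's fluctuations 5 samples
# earlier (e.g. one observatory responds before the other). The lag of the
# cross-correlation peak recovers that 5-sample offset.
n, shift = 300, 5
signal = rng.normal(size=n + shift)
a = signal[:n]                              # e.g. radon at observatory A
b = signal[shift:] + 0.1 * rng.normal(size=n)  # earlier response at B, plus noise

# Normalize, correlate at all lags, and locate the peak
a0 = (a - a.mean()) / a.std()
b0 = (b - b.mean()) / b.std()
xcorr = np.correlate(a0, b0, mode="full") / n
best_lag = np.argmax(xcorr) - (n - 1)
print(best_lag)  # 5
```

    Real precursor studies must also guard against spurious peaks from shared seasonal or barometric trends, which this toy example omits.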

  4. The 1170 and 1202 CE Dead Sea Rift earthquakes and long-term magnitude distribution of the Dead Sea Fault zone

    USGS Publications Warehouse

    Hough, S.E.; Avni, R.

    2009-01-01

    In combination with the historical record, paleoseismic investigations have provided a record of large earthquakes in the Dead Sea Rift that extends back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of relationships developed in other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision; they illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historical record and show that it is consistent with a Gutenberg-Richter distribution with a b-value of 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results reveal that an earthquake of M7.8 is expected within the zone on average every 1000 years.
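
    The recurrence estimate follows directly from the Gutenberg-Richter relation. A small sketch (the a-value below is implied by the stated b = 1 and ~1000-year recurrence of M7.8; it is not quoted from the paper):

```python
# Gutenberg-Richter recurrence: annual rate N(>=M) = 10**(a - b*M), so the
# mean recurrence interval is T(M) = 1 / N(>=M) = 10**(b*M - a).
# Choosing a = 4.8 with b = 1 makes M7.8 recur every ~1000 years.
def recurrence_years(m, a=4.8, b=1.0):
    """Mean recurrence interval (years) for magnitude >= m."""
    return 10 ** (b * m - a)

print(round(recurrence_years(7.8)))  # 1000
print(round(recurrence_years(6.8)))  # 100: each magnitude unit is 10x more frequent
```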

  5. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    NASA Astrophysics Data System (ADS)

    Takeda, F.

    2004-12-01

    An earthquake (EQ) phenomenon shows evidence of deterministic chaos. A total of about 17,200 EQs with magnitude M >= 4 are collected from a catalogue of the Japan Meteorological Agency from 1983 to June 2004 for the region (24-48 deg N, 124-150 deg E). Sequencing the five properties of every EQ, namely latitude (LA), longitude (LN), depth (DP), inter-EQ time interval (IT), and magnitude (M), in EQ occurrence (event) order, we find that each of the five property series is deterministic chaos [1,2]. For example, the largest Lyapunov exponents are all positive, statistically distinct from those of the series surrogated by randomly shuffling the event order. Each property series in chronological event order i is given by D_a = [D_{a,1}, D_{a,2}, ..., D_{a,i-1}, D_{a,i}, D_{a,i+1}, ...] (1), where the index a stands for each property (a = LA, LN, DP, IT, and M). Each minimum embedding dimension (ED) of Eq. (1) is only five (which happens to be the number of our EQ properties). It is estimated by finding the ED at which the percentage of false nearest neighbors drops to a constant residual level; the residue indicates the contamination level of dynamical noise created by many other seismogenic variables. To reduce the residue of each series to zero, we take the moving average (denoted D_{a,m,w}) of each property element of Eq. (1) over w consecutive events, with the event index i running from m - w + 1 to m. To extract the rate of change in D_{a,m,w}, we use the second-order difference at an event separation s, defined as A_{a,m,w,s} = D_{a,m-2s,w} - 2 D_{a,m-s,w} + D_{a,m,w}. To extract the determinism in seismogenic evolutions (quiescence cycles) leading only to large EQs, we use the moving sum (CI-w) of the elements D_{a,i} (a = IT) over w consecutive events from m - w + 1 to m. With an appropriate choice of w (for example, w = 100), CI-w reveals deterministic measures that are absent in its surrogate. Thus the analysis of the deterministic D_{a,m,w}, A_{a,m,w,s}, and CI-w should enable us to predict the time, focus, and M
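
    The moving-average and second-difference operators defined in this abstract can be sketched numerically (window and separation values below are arbitrary choices for illustration):

```python
import numpy as np

def moving_average(d, w):
    """w-event moving average of a property series (D_{a,m,w} analogue)."""
    return np.convolve(d, np.ones(w) / w, mode="valid")

def second_difference(dw, s):
    """Second-order difference at event separation s:
    A[m] = Dw[m-2s] - 2*Dw[m-s] + Dw[m]."""
    return dw[2 * s:] - 2.0 * dw[s:-s] + dw[:-2 * s]

# Sanity check: a linear trend has zero second difference, so A isolates
# curvature (rate-of-change of the trend) in the smoothed property series.
d = np.arange(100, dtype=float)          # stand-in property series
a = second_difference(moving_average(d, 5), 3)
print(np.allclose(a, 0.0))  # True
```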

  6. Individual differences in electrophysiological responses to performance feedback predict AB magnitude.

    PubMed

    MaClean, Mary H; Arnell, Karen M

    2013-06-01

    The attentional blink (AB) is observed when report accuracy for a second target (T2) is reduced if T2 is presented within approximately 500 ms of a first target (T1), but accuracy is relatively unimpaired at longer T1-T2 separations. The AB is thought to represent a transient cost of attending to a target, and reliable individual differences have been observed in its magnitude. Some models of the AB have suggested that cognitive control contributes to production of the AB, such that greater cognitive control is associated with larger AB magnitudes. Performance-monitoring functions are thought to modulate the strength of cognitive control, and those functions are indexed by event-related potentials in response to both endogenous and exogenous performance evaluation. Here we examined whether individual differences in the amplitudes to internal and external response feedback predict individual AB magnitudes. We found that electrophysiological responses to externally provided performance feedback, measured in two different tasks, did predict individual differences in AB magnitude, such that greater feedback-related N2 amplitudes were associated with larger AB magnitudes, regardless of the valence of the feedback.

  7. The Ordered Network Structure and Prediction Summary for M≥7 Earthquakes in Xinjiang Region of China

    NASA Astrophysics Data System (ADS)

    Men, Ke-Pei; Zhao, Kai

    2014-12-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China, and its adjacent region since 1800. The main orderly values are 30 × k years (k = 1, 2, 3), 11-12 years, 41-43 years, 18-19 years, and 5-6 years. Guided by the information forecasting theory of Wen-Bo Weng and based on previous research results, we combine ordered network structure analysis with complex network technology, focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, and add new information to further optimize the network, constructing 2D and 3D ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region during the past 210 years. On this basis, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. A new prediction opinion is also presented: the next two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.

  8. Brief Communication: On the source characteristics and impacts of the magnitude 7.2 Bohol earthquake, Philippines

    NASA Astrophysics Data System (ADS)

    Lagmay, A. M. F.; Eco, R.

    2014-10-01

    A devastating earthquake struck Bohol, Philippines, on 15 October 2013. The earthquake originated at 12 km depth from an unmapped reverse fault, which manifested on the surface for several kilometers and with maximum vertical displacement of 3 m. The earthquake resulted in 222 fatalities with damage to infrastructure estimated at USD 52.06 million. Widespread landslides and sinkholes formed in the predominantly limestone region during the earthquake. These remain a significant threat to communities as destabilized hillside slopes, landslide-dammed rivers and incipient sinkholes are still vulnerable to collapse, triggered possibly by aftershocks and heavy rains in the upcoming months of November and December. The most recent fatal temblor originated from a previously unmapped fault, herein referred to as the Inabanga Fault. Like the hidden or previously unmapped faults responsible for the 2012 Negros and 2013 Bohol earthquakes, there may be more unidentified faults that need to be mapped through field and geophysical methods. This is necessary to mitigate the possible damaging effects of future earthquakes in the Philippines.

  9. Update of the Graizer-Kalkan ground-motion prediction equations for shallow crustal continental earthquakes

    USGS Publications Warehouse

    Graizer, Vladimir; Kalkan, Erol

    2015-01-01

    A ground-motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration and 5-percent damped pseudo spectral acceleration response ordinates of maximum horizontal component of randomly oriented ground motions was developed by Graizer and Kalkan (2007, 2009) to be used for seismic hazard analyses and engineering applications. This GMPE was derived from the greatly expanded Next Generation of Attenuation (NGA)-West1 database. In this study, Graizer and Kalkan’s GMPE is revised to include (1) an anelastic attenuation term as a function of quality factor (Q0) in order to capture regional differences in large-distance attenuation and (2) a new frequency-dependent sedimentary-basin scaling term as a function of depth to the 1.5-km/s shear-wave velocity isosurface to improve ground-motion predictions for sites on deep sedimentary basins. The new model (GK15), developed to be simple, is applicable to the western United States and other regions with shallow continental crust in active tectonic environments and may be used for earthquakes with moment magnitudes 5.0–8.0, distances 0–250 km, average shear-wave velocities 200–1,300 m/s, and spectral periods 0.01–5 s. Directivity effects are not explicitly modeled but are included through the variability of the data. Our aleatory variability model captures inter-event variability, which decreases with magnitude and increases with distance. The mixed-effects residuals analysis shows that the GK15 reveals no trend with respect to the independent parameters. The GK15 is a significant improvement over Graizer and Kalkan (2007, 2009), and provides a demonstrable, reliable description of ground-motion amplitudes recorded from shallow crustal earthquakes in active tectonic regions over a wide range of magnitudes, distances, and site conditions.

  10. Stress anomaly accompanying the 1979 Lytle Creek earthquake: implications for earthquake prediction.

    PubMed

    Clark, B R

    1981-01-02

    An unusual stress transient was recorded 15 kilometers from the epicenter of the Lytle Creek earthquake in southern California. It was observed at the recording site as an increased shear stress parallel to the fault surface and with the proper sense of shear to have triggered the earthquake. The anomaly began 2 to 4 weeks before the earthquake and lasted for 3 months.

  11. Numerical shake prediction for Earthquake Early Warning: data assimilation, real-time shake-mapping, and simulation of wave propagation

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Aoki, S.

    2014-12-01

    In many methods of present Earthquake Early Warning (EEW) systems, the hypocenter and magnitude are determined quickly and then the strengths of ground motions are predicted. The 2011 Tohoku earthquake (Mw 9.0), however, revealed technical issues with the conventional methods: under-prediction due to the large extent of the fault rupture, and over-prediction due to confusion of the system by multiple aftershocks occurring simultaneously. To address these issues, a new concept is proposed for EEW: applying a data assimilation technique, the present wavefield is estimated precisely in real time (real-time shake mapping) and the future wavefield is then predicted time-evolutionally using the physical process of seismic wave propagation. Information on hypocenter location and magnitude is not required, which is fundamentally different from the conventional method. In the proposed method, the data assimilation technique estimates the current spatial distribution of the wavefield, using not only actual observations but also the wavefield anticipated from one time step before. Real-time application of the data assimilation technique enables us to estimate the wavefield in real time, which corresponds to real-time shake mapping. Once the present situation is estimated precisely, we proceed to predict the future situation using a simulation of wave propagation. The proposed method is applied to the 2011 Tohoku earthquake (Mw 9.0) and the 2004 Mid-Niigata earthquake (Mw 6.7). The future wavefield is predicted precisely, and the prediction improves as the lead time shortens: for example, the error of the 10 s prediction is smaller than that of the 20 s prediction, and that of 5 s is smaller still. This method makes it possible to predict ground motion precisely even for a large extent of fault rupture or multiple simultaneous earthquakes. The proposed method is based on a simulation of the physical process from the precisely estimated present condition. This
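
    The assimilation idea, blending the propagated forecast with incoming station observations before predicting forward, can be sketched in one update step. This is a toy nudging scheme on a 1-D grid, not the operational algorithm:

```python
import numpy as np

def assimilate(forecast, obs, obs_idx, gain=0.5):
    """Nudge the wavefield forecast toward observations at instrumented
    grid points; unobserved points keep the forecast value. The gain
    (0..1) weights observations against the forecast (assumed value)."""
    analysis = forecast.copy()
    analysis[obs_idx] += gain * (obs - forecast[obs_idx])
    return analysis

grid = np.zeros(10)               # forecast amplitude on a 1-D grid
obs_idx = np.array([2, 5, 8])     # stations
obs = np.array([1.0, 0.8, 0.2])   # observed amplitudes

analysis = assimilate(grid, obs, obs_idx)
print(analysis[obs_idx])  # halfway between forecast (0) and observations
```

    In the full scheme the analysis field is then advanced by a wave-propagation simulation to produce the next forecast, and the cycle repeats each time step.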

  12. Earthquake prediction research at the Seismological Laboratory, California Institute of Technology

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Nevertheless, basic earthquake-related information has always been of consuming interest to the public and the media in this part of California (fig. 2). So it is not surprising that earthquake prediction continues to be a significant research program at the laboratory. Several of the current spectrum of projects related to prediction are discussed below.

  13. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    SciTech Connect

    Takeda, Fumihide; Takeo, Makoto

    2004-12-09

    We have developed a short-term deterministic earthquake (EQ) forecasting system similar to those used for typhoons and hurricanes, which has been under test operation at the website http://www.tec21.jp/ since June 2003. We use the focus and crust displacement data recently opened to the public by the Japanese seismograph and global positioning system (GPS) networks, respectively. Our system divides the forecasting area into five regional areas of Japan, each of which is about 5 deg by 5 deg. We have found that it can forecast the focus, date of occurrence, and magnitude (M) of an impending EQ (whose M is larger than about 6), all within narrow limits. We describe the system with two examples. One is the 2003/09/26 EQ of M 8 in the Hokkaido area, which is a hindcast. The other is a successful rollout of the most recent forecast, on the 2004/05/30 EQ of M 6.7 off the coast of the southern Kanto (Tokyo) area.

  14. Validation of a ground motion synthesis and prediction methodology for the 1988, M=6.0, Saguenay Earthquake

    SciTech Connect

    Hutchings, L.; Jarpe, S.; Kasameyer, P.; Foxall, W.

    1998-01-01

    We model the 1988, M=6.0, Saguenay earthquake, utilizing an approach that has been developed to predict strong ground motion. This approach involves developing a set of rupture scenarios based upon bounds on rupture parameters, which include rupture geometry, hypocenter, rupture roughness, rupture velocity, healing velocity (rise times), slip distribution, asperity size and location, and slip vector. A scenario here refers to specific values of these parameters for a hypothesized earthquake. Synthetic strong ground motions are then generated for each rupture scenario, and a sufficient number of scenarios are run to span the variability in strong ground motion due to the source uncertainties. By having a suite of rupture scenarios of hazardous earthquakes at a fixed magnitude and identifying the hazard to the site from the one-standard-deviation value of engineering parameters, we have introduced a probabilistic component into the deterministic hazard calculation. For this study we developed bounds on rupture scenarios from previous research on this earthquake. The time history closest to the observed ground motion was selected as a model for the Saguenay earthquake.
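
    The scenario-suite logic can be sketched as Monte Carlo sampling of rupture parameters mapped through a response function to an engineering parameter, then taking the one-standard-deviation level. The parameter bounds and the response function below are entirely hypothetical stand-ins for the waveform synthesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sample rupture parameters uniformly within assumed bounds
n_scenarios = 1000
rupture_velocity = rng.uniform(2.4, 3.0, n_scenarios)   # km/s
rise_time = rng.uniform(0.5, 1.5, n_scenarios)          # s
asperity_frac = rng.uniform(0.1, 0.4, n_scenarios)      # fraction of fault area

# Hypothetical response function standing in for full synthetic-seismogram
# generation: maps each scenario to a PGA-like engineering parameter.
pga = 0.2 * rupture_velocity * asperity_frac / rise_time

# Hazard level at one standard deviation above the scenario mean
one_sigma_level = pga.mean() + pga.std()
print(np.mean(pga < one_sigma_level) > 0.5)  # True: most scenarios fall below it
```

    The point of the one-sigma level is exactly this: it bounds the bulk of the scenario population, turning the deterministic suite into a probabilistic hazard statement.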

  15. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record to understand temporal trends in earthquake activity. It is important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on lessons from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large-volume turbidites. The form of the turbidite recurrence frequency distribution indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large-earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to proximity and availability of sediment. While mean turbidite recurrence is similar to that of the seismogenic El Asnam fault in Algeria, geochemical analysis reveals that not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 years ago. Given its similar appearance across a widespread area and correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment
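    A minimal check of the constant-hazard-rate claim is whether the coefficient of variation (CoV) of recurrence intervals is close to 1, as expected for exponentially distributed (Poissonian) inter-event times. The synthetic intervals below are illustrative, not data from the cores:

```python
import random
import statistics

def recurrence_cov(intervals):
    """Coefficient of variation of recurrence intervals.
    CoV ~ 1 indicates exponentially distributed intervals,
    i.e. a constant (Poissonian) hazard rate through time;
    CoV << 1 would indicate quasi-periodic recurrence."""
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Synthetic example: exponential inter-event times mimic a Poisson process
# with a mean recurrence of ~400 years (assumed value for illustration).
rng = random.Random(42)
poisson_like = [rng.expovariate(1 / 400.0) for _ in range(2000)]
cov = recurrence_cov(poisson_like)
```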

  16. Earthquake Forecasting Methodology Catalogue - A collection and comparison of the state-of-the-art in earthquake forecasting and prediction methodologies

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2015-04-01

    Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The methods have been categorized as time-independent, time-dependent, or hybrid, the last group comprising methods that use data beyond historical earthquake statistics. Such a categorization is needed to distinguish purely statistical approaches, for which historical earthquake data are the only direct data source, from algorithms that incorporate further information, e.g. spatial data on fault distributions, or that incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods that can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g. for the determination of the completeness magnitude, or whether the modified Omori law was used. Target temporal scales are identified, as well as the publication history. All these aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state-of-the-art.
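    One of the established methods catalogued here, estimating the Gutenberg-Richter b-value for events above the completeness magnitude, can be sketched with Aki's maximum-likelihood formula. The synthetic catalog below (true b = 1.0, completeness magnitude 2.0) is illustrative only:

```python
import math
import random

def aki_b_value(magnitudes, m_c):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965) for
    continuous magnitudes at or above the completeness magnitude m_c:
    b = log10(e) / (mean(M) - m_c)."""
    complete = [m for m in magnitudes if m >= m_c]
    mean_m = sum(complete) / len(complete)
    return math.log10(math.e) / (mean_m - m_c)

# Synthetic Gutenberg-Richter catalog: magnitudes above m_c = 2.0 follow
# an exponential distribution with rate b * ln(10) for b = 1.0.
rng = random.Random(1)
mags = [2.0 + rng.expovariate(math.log(10) * 1.0) for _ in range(5000)]
b = aki_b_value(mags, m_c=2.0)
```

For binned catalog magnitudes, a half-bin correction to m_c is commonly applied; it is omitted here because the synthetic magnitudes are continuous.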

  17. Pipeline experiment co-located with USGS Parkfield earthquake prediction project

    SciTech Connect

    Isenberg, J.; Richardson, E.

    1995-12-31

    A field experiment to investigate the response of buried pipelines to lateral offsets and traveling waves has been operational since June 1988 at the Owens' Pasture site near Parkfield, CA, where the US Geological Survey has predicted a M6 earthquake. Although the predicted earthquake has not yet occurred, the 1989 Loma Prieta earthquake and a 1992 M4.7 earthquake near Parkfield produced measurable response at the pipeline experiment. The present paper describes upgrades to the experiment, introduced after Loma Prieta, which performed successfully in the 1992 event.

  18. Homogenization and implementation of a 3D regional velocity model in Mexico for its application in moment tensor inversion of intermediate-magnitude earthquakes

    NASA Astrophysics Data System (ADS)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Caló, Marco

    2017-04-01

    Moment tensor inversions for intermediate and small earthquakes (M < 4.5) are challenging, as such events principally excite relatively short-period seismic waves that interact strongly with local heterogeneities. Incorporating detailed regional 3D velocity models permits obtaining realistic synthetic seismograms and recovering the seismic source parameters of these smaller events. Two 3D regional velocity models have recently been developed for Mexico using surface-wave and seismic-noise tomography (Spica et al., 2016; Gaite et al., 2015), which could be used to model the waveforms of intermediate-magnitude earthquakes in this region. These models are parameterized as layered velocity profiles, and for some of the profiles the velocity difference between two layers is considerable. The "jump" in velocities between two layers is inconvenient for some methods and algorithms that calculate synthetic waveforms, in particular for the method we are using, the spectral element method (SPECFEM3D GLOBE; Komatitsch and Tromp, 2000), when the mesh does not follow the layer boundaries. To make the velocity models more easily implemented in SPECFEM3D GLOBE, it is necessary to apply a homogenization algorithm (Capdeville et al., 2015) such that the (now anisotropic) layer velocities vary smoothly with depth. In this work, we apply a homogenization algorithm to the regional velocity models of Mexico in order to implement them in SPECFEM3D GLOBE, calculate synthetic waveforms for intermediate-magnitude earthquakes in Mexico, and invert them for the seismic moment tensor.
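    The velocity "jump" problem and its smoothing can be illustrated with a toy running mean over a layered shear-velocity profile. This is only a crude stand-in for the Capdeville et al. homogenization, which additionally yields effective anisotropic parameters; the profile values are invented for illustration:

```python
def smooth_profile(depths, vs, half_window):
    """Running-mean smoothing of a layered velocity profile: each depth's
    value is replaced by the mean over depths within +/- half_window,
    turning a sharp layer jump into a smooth depth variation."""
    smoothed = []
    for d in depths:
        window = [v for z, v in zip(depths, vs) if abs(z - d) <= half_window]
        smoothed.append(sum(window) / len(window))
    return smoothed

depths = list(range(0, 60, 2))                  # depth samples, km
vs = [3.2 if d < 30 else 4.5 for d in depths]   # step at a Moho-like interface
vs_smooth = smooth_profile(depths, vs, half_window=6)
```

After smoothing, the 1.3 km/s step is spread over the smoothing window, so a mesh that does not follow the original layer boundary no longer straddles a discontinuity.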

  19. Improved instrumental magnitude prediction expected from version 2 of the NASA SKY2000 master star catalog

    NASA Technical Reports Server (NTRS)

    Sande, C. B.; Brasoveanu, D.;