A probabilistic neural network for earthquake magnitude prediction.
Adeli, Hojjat; Panakkat, Ashif
2009-09-01
A probabilistic neural network (PNN) is presented for predicting the magnitude of the largest earthquake in a pre-defined future time period in a seismic region using eight mathematically computed parameters known as seismicity indicators. The indicators considered are the time elapsed during a particular number (n) of significant seismic events before the month in question, the slope of the Gutenberg-Richter inverse power law curve for the n events, the mean square deviation about the regression line based on the Gutenberg-Richter inverse power law for the n events, the average magnitude of the last n events, the difference between the observed maximum magnitude among the last n events and that expected through the Gutenberg-Richter relationship known as the magnitude deficit, the rate of square root of seismic energy released during the n events, the mean time or period between characteristic events, and the coefficient of variation of the mean time. Prediction accuracies of the model are evaluated using three different statistical measures: the probability of detection, the false alarm ratio, and the true skill score or R score. The PNN model is trained and tested using data for the Southern California region. The model yields good prediction accuracies for earthquakes of magnitude between 4.5 and 6.0. The PNN model presented in this paper complements the recurrent neural network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0. PMID:19502005
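Several of the eight indicators above follow directly from the Gutenberg-Richter relation log10 N(>=M) = a - b*M. A minimal sketch of how a few of them might be computed from a window of n events; the formulas and names used here are common textbook choices, not necessarily the authors' exact definitions:

```python
import numpy as np

def seismicity_indicators(mags, times):
    """Compute a few illustrative seismicity indicators for a window of
    n events (magnitudes `mags`, occurrence times `times` in consistent
    units). Definitions are standard approximations, not the paper's."""
    n = len(mags)
    t_elapsed = times[-1] - times[0]          # time spanned by the n events
    m_mean = np.mean(mags)                    # average magnitude
    # Gutenberg-Richter fit log10 N(>=M) = a - b*M by least squares
    m_sorted = np.sort(mags)
    cum_counts = np.arange(n, 0, -1)          # N(>= m_sorted[i])
    slope, a = np.polyfit(m_sorted, np.log10(cum_counts), 1)
    b = -slope
    # mean square deviation about the regression line
    residuals = np.log10(cum_counts) - (a + slope * m_sorted)
    eta = np.sum(residuals ** 2) / (n - 1)
    # magnitude deficit: observed max minus GR-expected max (where N(>=M) = 1)
    m_deficit = np.max(mags) - a / b
    # rate of square root of released seismic energy; log10 E = 11.8 + 1.5 M
    # implies sqrt(E) proportional to 10**(0.75 M) (proportionality only)
    dE_half = np.sum(10 ** (0.75 * mags)) / t_elapsed
    return {"T": t_elapsed, "b": b, "eta": eta,
            "M_mean": m_mean, "dM": m_deficit, "dE_half": dE_half}
```

On a synthetic catalog drawn from a Gutenberg-Richter distribution with b = 1, the fitted `b` returned by this sketch lands close to 1, which is a quick sanity check of the regression step.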
Are Earthquake Magnitudes Clustered?
Davidsen, Joern; Green, Adam
2011-03-11
The question of earthquake predictability is a long-standing and important challenge. Recent results [Phys. Rev. Lett. 98, 098501 (2007); ibid. 100, 038501 (2008)] have suggested that earthquake magnitudes are clustered, thus indicating that they are not independent, in contrast to what is typically assumed. Here, we present evidence that the observed magnitude correlations are to a large extent, if not entirely, an artifact due to the incompleteness of earthquake catalogs and the well-known modified Omori law. The latter leads to variations in the frequency-magnitude distribution if the distribution is constrained to those earthquakes that are close in space and time to the directly following event.
Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon
2016-01-01
This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year. PMID:26812351
NASA Astrophysics Data System (ADS)
Carpenter, N. S.; Payne, S. J.; Schafer, A. L.
2011-12-01
We recognize a discrepancy in magnitudes estimated for several Basin and Range faults in the Intermountain Seismic Belt, U.S.A. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths, Lseg, where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements, D, along these same segments. For self-similarity, empirical relationships (e.g. Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e. M ~ Lseg should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data, and the parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating Lseg with surface rupture length, SRL. Many large historical events produced secondary and/or sympathetic faulting (e.g. the 1983 Borah Peak, Idaho earthquake), which is included in the measurement of SRL used to derive the empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using Lseg as SRL leads to an underestimation of magnitude and to the M ~ Lseg and M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude, Mw, and length, where length is Lseg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw ~ Lseg results are strikingly consistent with Mw ~ D calculations using paleoearthquake data for
Neural network models for earthquake magnitude prediction using multiple seismicity indicators.
Panakkat, Ashif; Adeli, Hojjat
2007-02-01
Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall, the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While at present earthquake prediction cannot be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region. PMID:17393560
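The four verification measures named in the abstract all derive from a 2x2 contingency table of predicted versus observed events. A short sketch using the standard forecast-verification definitions (the paper may define the R score slightly differently):

```python
def forecast_scores(hits, misses, false_alarms, correct_negatives):
    """Contingency-table verification measures for binary event forecasts.
    hits: event predicted and observed; misses: observed but not predicted;
    false_alarms: predicted but not observed; correct_negatives: neither."""
    pod = hits / (hits + misses)                       # probability of detection
    far = false_alarms / (hits + false_alarms)         # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)     # frequency bias
    pofd = false_alarms / (false_alarms + correct_negatives)
    tss = pod - pofd                                   # true skill score (R score)
    return {"POD": pod, "FAR": far, "bias": bias, "TSS": tss}
```

For example, 8 hits, 2 misses, 4 false alarms, and 86 correct negatives give POD = 0.8 and a frequency bias of 1.2 (slight over-forecasting).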
Ma, Z.; Fu, Z.; Zhang, Y.; Wang, C.; Zhang, G.; Liu, D.
1989-01-01
Mainland China is situated at the eastern edge of the Eurasian seismic system and is the largest intra-continental region of shallow strong earthquakes in the world. Based on nine earthquakes with magnitudes ranging between 7.0 and 7.9, the book provides observational data and discusses successes and failures of earthquake prediction. Observations of various phenomena and seismic activity occurring before and after these individual earthquakes led to the establishment of some general characteristics valid for earthquake prediction.
NASA Astrophysics Data System (ADS)
Meier, M. A.; Heaton, T. H.; Clinton, J. F.
2015-12-01
The feasibility of Earthquake Early Warning (EEW) applications has revived the discussion on whether earthquake rupture development follows deterministic principles or not. If it does, it may be possible to predict final earthquake magnitudes while the rupture is still developing. EEW magnitude estimation schemes, most of which are based on 3-4 seconds of near-source p-wave data, have been shown to work well for small to moderate size earthquakes. In this magnitude range, the time window used is longer than the source durations of the events. Whether the magnitude estimation schemes also work for events in which the source duration exceeds the estimation time window, however, remains debated. In our study we have compiled an extensive high-quality data set of near-source seismic recordings. We search for waveform features that could be diagnostic of final event magnitudes in a predictive sense. We find that the onsets of large (M7+) events are statistically indistinguishable from those of medium sized events (M5.5-M7). Significant differences arise only once the medium size events terminate. This observation suggests that EEW relevant magnitude estimates are largely observational, rather than predictive, and that whether a medium size event becomes a large one is not determined at the rupture onset. As a consequence, early magnitude estimates for large events are minimum estimates, a fact that has to be taken into account in EEW alert messaging and response design.
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1991-01-01
The state of the art in earthquake prediction is discussed. Short-term prediction based on seismic precursors, changes in the ratio of compressional velocity to shear velocity, tilt and strain precursors, electromagnetic precursors, hydrologic phenomena, chemical monitors, and animal behavior is examined. Seismic hazard assessment is addressed, and the applications of dynamical systems to earthquake prediction are discussed.
An Energy Rate Magnitude for Large Earthquakes
NASA Astrophysics Data System (ADS)
Newman, A. V.; Convers, J. A.
2008-12-01
The ability to rapidly assess the approximate size of very large and destructive earthquakes is important for early hazard mitigation from both strong shaking and potential tsunami generation. Using a methodology to rapidly determine earthquake energy and duration using teleseismic high-frequency energy, we develop an adaptation to approximate the magnitude of a very large earthquake before the full duration of rupture can be measured at available teleseismic stations. We utilize available vertical component data to analyze the high-frequency energy growth between 0.5 and 2 Hz, minimizing the effect of later arrivals that are mostly attenuated in this range. Because events smaller than M~6.5 occur rapidly, this method is most adequate for larger events, whose rupture duration exceeds ~20 seconds. Using a catalog of about 200 large and great earthquakes, we compare the high-frequency energy rate (Ė_hf) to the total broadband energy (E_bb) to find the relationship log(Ė_hf)/log(E_bb) ≈ 0.85. Hence, combining this relation with the broadband energy magnitude (Me) [Choy and Boatwright, 1995] yields a new high-frequency energy-rate magnitude: M_Ė = (2/3) log10(Ė_hf)/0.85 - 2.9. Such an empirical approach can thus be used to obtain a reasonable assessment of an event magnitude from the initial estimate of energy growth, even before the arrival of the full direct-P rupture signal. For large shallow events thus far examined, M_Ė predicts the ultimate Me to within ±0.2 magnitude units. For fast-rupturing deep earthquakes M_Ė overpredicts, while for slow-rupturing tsunami earthquakes M_Ė underpredicts Me, likely due to material strength changes at the source rupture. We will report on the utility of this method in both research mode and in real-time scenarios when data availability is limited. Because the high-frequency energy is clearly discernable in real-time, this result suggests that the growth of energy can be used as a good initial indicator of the
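The empirical relation above reduces to a one-line estimator: combining log(Ė_hf) ≈ 0.85 log(E_bb) with the Choy-Boatwright energy magnitude Me = (2/3) log10(E) - 2.9 gives the energy-rate magnitude. A sketch, assuming Ė_hf is supplied in the same units used in the abstract's calibration:

```python
import math

def energy_rate_magnitude(e_hf_rate):
    """High-frequency energy-rate magnitude from the abstract's empirical
    relation log(E_hf_rate) ~= 0.85 * log(E_bb), substituted into the
    Choy-Boatwright energy magnitude Me = (2/3) * log10(E) - 2.9."""
    return (2.0 / 3.0) * math.log10(e_hf_rate) / 0.85 - 2.9
```

As a consistency check, if the broadband energy were E_bb = 10^16 (so Me = (2/3)*16 - 2.9), the relation implies Ė_hf = 10^(0.85*16), and the function recovers the same magnitude.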
Influence of Time and Space Correlations on Earthquake Magnitude
Lippiello, E.; Arcangelis, L. de; Godano, C.
2008-01-25
A crucial point in the debate on the feasibility of earthquake predictions is the dependence of an earthquake's magnitude on past seismicity. Indeed, while clustering in time and space is widely accepted, much more questionable is the existence of magnitude correlations. The standard approach generally assumes that magnitudes are independent and therefore in principle unpredictable. Here we show the existence of clustering in magnitude: earthquakes occur with higher probability close in time, space, and magnitude to previous events. More precisely, the next earthquake tends to have a magnitude similar to but smaller than the previous one. A dynamical scaling relation between magnitude, time, and space distances reproduces the complex pattern of magnitude, spatial, and temporal correlations observed in experimental seismic catalogs.
The Magnitude and Energy of Large Earthquakes
NASA Astrophysics Data System (ADS)
Purcaru, G.
2003-12-01
Several magnitudes were introduced to quantify large earthquakes better and more comprehensively than Ms: Mw (moment magnitude; Kanamori, 1977), ME (strain energy magnitude; Purcaru and Berckhemer, 1978), Mt (tsunami magnitude; Abe, 1979), Mm (mantle magnitude; Okal and Talandier, 1985), and Me (seismic energy magnitude; Choy and Boatwright, 1995). Although these magnitudes are still subject to different uncertainties, various kinds of earthquakes can now be better understood in terms of combinations of them. They can also be viewed as mappings of basic source parameters: seismic moment, strain energy, seismic energy, and stress drop, under certain assumptions or constraints. We studied a set of about 90 large earthquakes (shallow and deeper) that occurred in different tectonic regimes, with more reliable source parameters, and compared them in terms of the above magnitudes. We found large differences between the strain energy (mapped to ME) and seismic energy (mapped to Me), and between ME of events with about the same Mw. This confirms that no one-to-one correspondence exists between these magnitudes (Purcaru, 2002). One major cause of differences for "normal" earthquakes is the level of the stress drop over asperities which release and partition the strain energy. We quantify the energetic balance of earthquakes in terms of strain energy E_st and its components (fracture (E_g), friction (E_f), and seismic (E_s) energy) using an extended Hamilton's principle. The earthquakes are thrust-interplate, strike-slip, shallow in-slab, slow/tsunami, deep, and continental. The (scaled) strain energy equation we derived is E_st/M_0 = (1 + e_gs)(E_s/M_0), with e_gs = E_g/E_s, assuming complete stress drop, using the (static) stress drop variability, and that E_st and E_s are not in a one-to-one correspondence. With all uncertainties, our analysis reveals, for a given seismic moment, a large variation of earthquakes in terms of energies, even in the same seismic region. In view of these findings, for further understanding
Testing earthquake predictions
NASA Astrophysics Data System (ADS)
Luen, Brad; Stark, Philip B.
2008-01-01
Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: Seismicity is modeled as random. The null distribution of the number of successful predictions - or any other test statistic - is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000-2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions - a common "nonparametric" assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The P-value for the observed success rate is <0.001: The method successfully predicts about 5% of earthquakes, far better than 'chance' because the predictor exploits the clustering of earthquakes - occasional foreshocks - which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value.
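The permutation scheme described above is straightforward to sketch. The toy version below uses flat-earth (x, y) kilometre coordinates and omits the abstract's additional condition that no larger event occurred within 50 km in the preceding 21 days; both are simplifying assumptions for illustration only:

```python
import numpy as np

def success_count(times, mags, locs, window=21.0, radius=50.0, m_min=5.5):
    """Count events 'successfully predicted' by the toy rule in the abstract:
    after any M >= 5.5 event, predict an equal-or-larger event within 21 days
    and 50 km. `locs` is an (n, 2) array of (x, y) km coordinates, a
    flat-earth simplification for illustration."""
    order = np.argsort(times)
    t, m, xy = times[order], mags[order], locs[order]
    hits = 0
    for j in range(len(t)):
        if m[j] < m_min:
            continue
        for i in range(j):
            if (m[i] >= m_min and m[j] >= m[i]
                    and 0 < t[j] - t[i] <= window
                    and np.hypot(*(xy[j] - xy[i])) <= radius):
                hits += 1
                break
    return hits

def permutation_pvalue(times, mags, locs, n_perm=200, seed=0):
    """Null hypothesis: event times are exchangeable given magnitudes and
    locations, approximated by permuting times across events (as in the
    abstract). Returns the observed success count and a permutation P-value."""
    rng = np.random.default_rng(seed)
    observed = success_count(times, mags, locs)
    null = [success_count(rng.permutation(times), mags, locs)
            for _ in range(n_perm)]
    return observed, (1 + sum(n >= observed for n in null)) / (n_perm + 1)
```

The abstract's point is visible in this setup: real catalogs are clustered, so the observed success count beats the permuted (declustered) null even though the rule itself has no predictive insight.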
XU,J.; COSTANTINO,C.; HOFMAYER,C.; MURPHY,A.; KITADA,Y.
2003-08-17
As part of a verification test program for seismic analysis codes for NPP structures, the Nuclear Power Engineering Corporation (NUPEC) of Japan has conducted a series of field model test programs to ensure the adequacy of methodologies employed for seismic analyses of NPP structures. A collaborative program between the United States and Japan was developed to study seismic issues related to NPP applications. The US Nuclear Regulatory Commission (NRC) and its contractor, Brookhaven National Laboratory (BNL), are participating in this program to apply common analysis procedures to predict both free-field and soil-structure interaction (SSI) responses to recorded earthquake events, including embedment and dynamic cross interaction (DCI) effects. This paper describes the BNL effort to predict seismic responses of the large-scale realistic model structures for reactor and turbine buildings at the NUPEC test facility in northern Japan. The NUPEC test program has collected a large amount of recorded earthquake response data (both free-field and in-structure) from these test model structures. The BNL free-field analyses were performed with the CARES program while the SSI analyses were performed using the SASSI2000 computer code. The BNL analysis includes both embedded and excavated conditions, as well as the DCI effect. The BNL analysis results and their comparisons to the NUPEC recorded responses are presented in the paper.
Strong motion duration and earthquake magnitude relationships
Salmon, M.W.; Short, S.A.; Kennedy, R.P.
1992-06-01
Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report will concentrate on energy-based strong motion duration definitions.
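One widely used energy-based definition of the kind the report surveys is "significant duration": the time between fixed fractions of the accumulated Arias intensity of the record. A sketch with the common 5-75% bracket (the specific bracket and normalization are one standard choice among several, not necessarily the report's):

```python
import numpy as np

def significant_duration(acc, dt, lo=0.05, hi=0.75):
    """Significant duration of an acceleration time history `acc` sampled
    at interval `dt`: the time between the `lo` and `hi` fractions of the
    final cumulative squared acceleration (proportional to Arias intensity).
    The 5-75% bracket is one common energy-based definition."""
    ia = np.cumsum(np.asarray(acc, dtype=float) ** 2) * dt
    ia /= ia[-1]                          # normalize to [0, 1]
    i_lo = np.searchsorted(ia, lo)        # first sample reaching lo fraction
    i_hi = np.searchsorted(ia, hi)        # first sample reaching hi fraction
    return (i_hi - i_lo) * dt
```

For a record whose shaking energy is concentrated in a short burst, the significant duration is close to the burst length, which is exactly why such definitions discard the long low-amplitude coda the abstract mentions.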
The magnitude distribution of dynamically triggered earthquakes
NASA Astrophysics Data System (ADS)
Hernandez, Stephen
Large dynamic strains carried by seismic waves are known to trigger seismicity far from their source region. It is unknown, however, whether surface waves trigger only small earthquakes, or whether they can also trigger large, societally significant earthquakes. To address this question, we use a mixing model approach in which total seismicity is decomposed into two broad subclasses: "triggered" events initiated or advanced by far-field dynamic strains, and "untriggered" spontaneous events consisting of everything else. The b-value of a mixed data set, bMIX, is decomposed into a weighted sum of the b-values of its constituent components, bT and bU. For populations of earthquakes subjected to dynamic strain, the fraction of earthquakes that are likely triggered, fT, is estimated via inter-event time ratios and used to invert for bT. The confidence bounds on bT are estimated by multiple inversions of bootstrap resamplings of bMIX and fT. For Californian seismicity, the data are consistent with a single-parameter Gutenberg-Richter hypothesis governing the magnitudes of both triggered and untriggered earthquakes. Triggered earthquakes therefore seem just as likely to be societally significant as any other population of earthquakes.
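The inversion for bT can be sketched under one common assumption: if b-values are estimated with Aki's maximum-likelihood formula, the estimate for a pooled mixture is a harmonic (not arithmetic) combination of the component b-values, which inverts in closed form. The harmonic form below is an assumption consistent with Aki's estimator, not necessarily the exact decomposition used in this work:

```python
import math

LOG10E = math.log10(math.e)

def aki_b(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for magnitudes at or above the
    completeness magnitude m_c: b = log10(e) / mean(M - m_c)."""
    mean_excess = sum(m - m_c for m in mags) / len(mags)
    return LOG10E / mean_excess

def invert_triggered_b(b_mix, b_u, f_t):
    """Solve 1/b_mix = f_t/b_t + (1 - f_t)/b_u for the triggered-population
    b-value b_t, given the mixed b-value, the untriggered b-value, and the
    triggered fraction f_t (estimated via inter-event time ratios)."""
    return 1.0 / ((1.0 / b_mix - (1.0 - f_t) / b_u) / f_t)
```

In the thesis's workflow this inversion would be repeated over bootstrap resamples of bMIX and fT to get confidence bounds on bT.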
Extreme Magnitude Earthquakes and their Economical Consequences
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.
2011-12-01
The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the considered seismotectonic region of the world. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probabilities of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example of the application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.
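The two Monte Carlo steps (PEI from scenario intensity samples, PEDEC from a vulnerability function applied to those samples) reduce to empirical exceedance counts. A generic sketch, with a made-up piecewise-linear vulnerability function standing in for the "appropriate vulnerability functions" of the abstract:

```python
import numpy as np

def exceedance_curve(samples, thresholds):
    """Empirical probability of exceedance P(X > x0) from Monte Carlo
    samples, evaluated at each threshold."""
    s = np.asarray(samples, dtype=float)
    return np.array([(s > t).mean() for t in thresholds])

def loss_exceedance(intensity_samples, vulnerability, exposure, loss_levels):
    """PEDEC-style step: map simulated intensities to direct losses through
    a vulnerability function (mean damage ratio in [0, 1]) times the exposed
    value, then count exceedances at the requested loss levels."""
    losses = exposure * vulnerability(np.asarray(intensity_samples, dtype=float))
    return exceedance_curve(losses, loss_levels)
```

For instance, with a vulnerability ramp from intensity 4 to 9 and an exposure of 100 units, intensity samples [5, 6, 7, 9] map to losses [20, 40, 60, 100], giving exceedance probabilities 1.0 at a 10-unit loss level and 0.5 at 50 units.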
Precise Relative Earthquake Magnitudes from Cross Correlation
Cleveland, K. Michael; Ammon, Charles J.
2015-04-21
We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
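The "intercorrelation of all events" idea can be sketched as a joint least-squares problem: every event pair with a correlated waveform contributes one equation m_i - m_j = log10(A_i/A_j), and the whole system is solved at once instead of chaining through a reference event. The amplitude-ratio formulation and the zero-mean gauge below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def relative_magnitudes(log_amp_ratios):
    """Least-squares relative magnitudes from pairwise log10 amplitude
    ratios. `log_amp_ratios` is a dict {(i, j): log10(A_i / A_j)} obtained,
    e.g., from waveform cross correlation. Solves m_i - m_j = r_ij for all
    pairs jointly, with the gauge condition sum(m) = 0 (only differences
    are determined by the data)."""
    events = sorted({k for pair in log_amp_ratios for k in pair})
    pos = {e: i for i, e in enumerate(events)}
    rows, rhs = [], []
    for (i, j), r in log_amp_ratios.items():
        row = np.zeros(len(events))
        row[pos[i]], row[pos[j]] = 1.0, -1.0
        rows.append(row)
        rhs.append(r)
    rows.append(np.ones(len(events)))   # gauge: magnitudes sum to zero
    rhs.append(0.0)
    m, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return dict(zip(events, m))
```

Because every pair constrains the solution, one bad pairing is averaged down rather than propagated, which is the advantage over fixing a single reference event.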
Earthquake rate and magnitude distributions of great earthquakes for use in global forecasts
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.; Jackson, David D.
2016-04-01
We have obtained new results in the statistical analysis of global earthquake catalogs with special attention to the largest earthquakes, and we examined the statistical behavior of earthquake rate variations. These results can serve as an input for updating our recent earthquake forecast, known as the "Global Earthquake Activity Rate 1" model (GEAR1), which is based on past earthquakes and geodetic strain rates. The GEAR1 forecast is expressed as the rate density of all earthquakes above magnitude 5.8 within 70 km of sea level everywhere on earth at 0.1 by 0.1 degree resolution, and it is currently being tested by the Collaboratory for Study of Earthquake Predictability. The seismic component of the present model is based on a smoothed version of the Global Centroid Moment Tensor (GCMT) catalog from 1977 through 2013. The tectonic component is based on the Global Strain Rate Map, a "General Earthquake Model" (GEM) product. The forecast was optimized to fit the GCMT data from 2005 through 2012, but it also fit well the earthquake locations from 1918 to 1976 reported in the International Seismological Centre-Global Earthquake Model (ISC-GEM) global catalog of instrumental and pre-instrumental magnitude determinations. We have improved the recent forecast by optimizing the treatment of larger magnitudes and including a longer duration (1918-2011) ISC-GEM catalog of large earthquakes to estimate smoothed seismicity. We revised our estimates of upper magnitude limits, described as corner magnitudes, based on the massive earthquakes since 2004 and the seismic moment conservation principle. The new corner magnitude estimates are somewhat larger than but consistent with our previous estimates. For major subduction zones we find the best estimates of corner magnitude to be in the range 8.9 to 9.6 and consistent with a uniform average of 9.35. Statistical estimates tend to grow with time as larger earthquakes occur. However, by using the moment conservation principle that
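The corner magnitude enters through the tapered Gutenberg-Richter distribution used in this line of work for moment statistics. A sketch of its survival function in seismic-moment space, using the 9.35 corner value quoted above; the spectral slope beta and the Hanks-Kanamori moment conversion are standard choices assumed here, not values taken from the abstract:

```python
import math

def tapered_gr_ccdf(m, m_t=5.8, m_c=9.35, beta=0.67):
    """Tapered Gutenberg-Richter survival function in moment space:
    P(M >= M(m)) = (M_t / M)^beta * exp((M_t - M) / M_c),
    where M(m) is scalar seismic moment, m_t the threshold magnitude,
    m_c the corner magnitude, and beta ~ (2/3) b the spectral slope."""
    def moment(mw):                       # Hanks-Kanamori moment in N*m
        return 10 ** (1.5 * mw + 9.05)
    M, Mt, Mc = moment(m), moment(m_t), moment(m_c)
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)
```

Below the corner the curve is indistinguishable from a pure power law; the exponential taper only suppresses rates near and above m_c, which is why the corner estimate is controlled by the rare largest events.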
NASA Astrophysics Data System (ADS)
Boyd, O. S.; Cramer, C. H.
2013-12-01
We develop an intensity prediction equation (IPE) for the Central and Eastern United States, explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June 2000 and August 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2M + c3exp(λ) + c4λ, where M is moment magnitude and λ is the mean log hypocentral distance. Previous IPEs use a limited dataset of MMI, do not differentiate between MMI and CII data in the CEUS, and do not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities, where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that reports of a given intensity have hypocentral distances that are log-normally distributed, with the distribution modulated by population and the propensity of individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid-search method to solve for the fraction of the population that reports the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial
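The functional form above can be evaluated directly; a minimal Python sketch with placeholder coefficients c1..c4 (the abstract gives the form but not the fitted values):

```python
import math

# Placeholder coefficients; the abstract gives the functional form of the
# IPE but not the fitted values, so these are illustrative only.
C1, C2, C3, C4 = 1.0, 1.5, -0.002, -1.2

def predict_mmi(moment_mag, r_hyp_km):
    """Evaluate the IPE MMI = c1 + c2*M + c3*exp(lam) + c4*lam, where
    lam is the (natural) log hypocentral distance, so the c3 term is
    linear in distance and the c4 term is linear in log distance."""
    lam = math.log(r_hyp_km)
    return C1 + C2 * moment_mag + C3 * math.exp(lam) + C4 * lam
```

With coefficients of these signs, predicted intensity decreases with distance, which is the qualitative behaviour any fitted IPE must reproduce.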
Earthquake prediction comes of age
Lindth, A. (Office of Earthquakes, Volcanoes, and Engineering)
1990-02-01
In the last decade, scientists have begun to estimate the long-term probability of major earthquakes along the San Andreas fault. In 1985, the U.S. Geological Survey (USGS) issued the first official U.S. government earthquake prediction, based on research along a heavily instrumented 25-kilometer section of the fault in sparsely populated central California. Known as the Parkfield segment, this section of the San Andreas had experienced its last big earthquake, a magnitude 6, in 1966. Estimated probabilities of major quakes along the entire San Andreas, derived by a working group of California earthquake experts using new geologic data and careful analysis of past earthquakes, are reported.
NASA Astrophysics Data System (ADS)
Bora, Dipok K.
2016-06-01
In this study, we aim to improve the scaling between the moment magnitude (MW), local magnitude (ML), and duration magnitude (MD) for 162 earthquakes in the Shillong-Mikir plateau and its adjoining region of northeast India by extending the MW estimates to lower-magnitude earthquakes using spectral analysis of P-waves from vertical-component seismograms. The MW-ML and MW-MD relationships are determined by linear regression analysis. It is found that MW values are consistent with ML and MD within 0.1 and 0.2 magnitude units, respectively, in 90% of the cases. The scaling relationships investigated agree well with similar relationships from other seismogenic regions worldwide and elsewhere in northeast India.
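The regression step can be sketched numerically. Here both the assumed relation MW = 0.8·ML + 0.7 and the 0.1-unit scatter are illustrative stand-ins, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a 162-event dataset: ML values, and MW generated
# from an assumed linear relation MW = 0.8*ML + 0.7 plus observational
# scatter (both the relation and the scatter level are assumptions).
ml = rng.uniform(2.5, 5.5, 162)
mw = 0.8 * ml + 0.7 + rng.normal(0.0, 0.1, ml.size)

# Linear regression MW = a*ML + b, as in the abstract.
a, b = np.polyfit(ml, mw, 1)
residuals = mw - (a * ml + b)
frac_within_02 = np.mean(np.abs(residuals) < 0.2)
```

With scatter of this size, most events fall within 0.2 magnitude units of the fitted line, mirroring the kind of consistency statistic quoted in the abstract.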
Local magnitude scale for earthquakes in Turkey
NASA Astrophysics Data System (ADS)
Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.
2016-06-01
Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales are developed: whole country, East Turkey, and West Turkey, each including station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, horizontal amplitudes are on average larger than vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and the relationship is almost 1:1, indicating that the new scale gives reliable magnitudes for Turkey.
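The recommended vertical-to-horizontal correction amounts to adding log10 of the average H/V factor to the vertical log-amplitude; a minimal sketch:

```python
import math

H_OVER_V = 1.8  # average horizontal/vertical amplitude ratio from the abstract

def horizontal_equiv_log_amp(vertical_amp):
    """Measure the amplitude on the vertical channel and add the log of
    the average H/V scale factor to approximate the maximum horizontal
    log-amplitude, as the abstract recommends. Units cancel in the
    ratio, so any consistent amplitude unit works."""
    return math.log10(vertical_amp) + math.log10(H_OVER_V)
```

Because Ml is built from log-amplitudes, the correction is a constant additive term (log10 1.8 ≈ 0.26) rather than a multiplication of the magnitude itself.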
Earthquakes: Predicting the unpredictable?
Hough, S.E.
2005-01-01
The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: a moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988 but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.
Induced earthquake magnitudes are as large as (statistically) expected
NASA Astrophysics Data System (ADS)
van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran
2016-06-01
A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
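Test (1) can be illustrated by sampling an unbounded Gutenberg-Richter distribution; the b-value and completeness threshold below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
b, m_min = 1.0, 2.0
beta = b * np.log(10.0)

def max_mag(n):
    # Largest magnitude in an unbounded Gutenberg-Richter sample:
    # magnitudes above m_min are exponentially distributed with rate beta.
    return rng.exponential(1.0 / beta, n).max() + m_min

# The expected largest event grows with the log of the sample size,
# by roughly 1/b magnitude units per decade of event count.
med_100 = float(np.median([max_mag(100) for _ in range(500)]))
med_10000 = float(np.median([max_mag(10000) for _ in range(500)]))
```

Growing the catalogue a hundredfold raises the typical maximum by about 2/b magnitude units, which is the log-N scaling the abstract's first test checks against induced sequences.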
A note on evaluating VAN earthquake predictions
NASA Astrophysics Data System (ADS)
Tselentis, G.-Akis; Melis, Nicos S.
The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized, strict statistical laws while ignoring the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method is still shown to be a matter of a “chancy” association between precursors and earthquakes if we apply the same procedure proposed by Mulargia and Gasperini [1992] in evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, while magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.
The Earthquake Frequency-Magnitude Distribution Functional Shape
NASA Astrophysics Data System (ADS)
Mignan, A.
2012-04-01
Knowledge of the completeness magnitude Mc, the magnitude above which all earthquakes are detected, is a prerequisite to most seismicity analyses. Although computation of Mc is done routinely, different techniques often result in different values. Since an incorrect estimate can lead to under-sampling or, worse, to an erroneous estimate of the parameters of the Gutenberg-Richter (G-R) law, a better assessment of the deviation from the G-R law, and thus of earthquake detectability, is of paramount importance for correctly estimating Mc. This is especially true for refined mapping of seismicity parameters, such as in earthquake forecast models. The capacity of a seismic network to detect small earthquakes can be evaluated by investigating the functional shape of the earthquake Frequency-Magnitude Distribution (FMD). The non-cumulative FMD takes the form N(m) ∝ exp(-βm)q(m), where N(m) is the number of events of magnitude m, exp(-βm) the G-R law, and q(m) a probability function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature often observed in bulk FMDs. Recent results, however, show that this gradual curvature is potentially due to spatial heterogeneities in Mc, meaning that the functional shape of the elemental (local) FMD still has to be described. Based on preliminary observations, we propose an exponential detection function of the form q(m) = exp(κ(m-Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models (gradually curved and angular) are compared in Southern California and Nevada. We show that the angular FMD model better describes the elemental FMD and that the sum of elemental FMDs with different Mc(x,y) leads to the gradually curved FMD at the regional scale. We show that the proposed model (1) provides more robust estimates of Mc, (2) better estimates local b-values, and (3) gives insight into earthquake detectability properties by using seismicity as a proxy
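The proposed angular FMD model can be written down directly; a minimal sketch with illustrative parameter values:

```python
import math

def fmd_density(m, beta, m_c, kappa):
    """Non-cumulative FMD of the proposed angular model:
    N(m) proportional to exp(-beta*m) * q(m), with the detection
    function q(m) = exp(kappa*(m - m_c)) for m < m_c, else 1."""
    q = math.exp(kappa * (m - m_c)) if m < m_c else 1.0
    return math.exp(-beta * m) * q
```

For κ > β the density rises up to Mc and decays exponentially above it, so the distribution peaks exactly at Mc, giving the angular shape the abstract describes.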
The magnitude distribution of declustered earthquakes in Southern California
Knopoff, Leon
2000-01-01
The binned distribution densities of magnitudes in both the complete and the declustered catalogs of earthquakes in the Southern California region have two significantly different branches with crossover magnitude near M = 4.8. In the case of declustered earthquakes, the b-values on the two branches differ significantly from each other by a factor of about two. The absence of self-similarity across a broad range of magnitudes in the distribution of declustered earthquakes is an argument against the application of an assumption of scale-independence to models of main-shock earthquake occurrence, and in turn to the use of such models to justify the assertion that earthquakes are unpredictable. The presumption of scale-independence for complete local earthquake catalogs is attributable, not to a universal process of self-organization leading to future large earthquakes, but to the universality of the process that produces aftershocks, which dominate complete catalogs. PMID:11035770
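The b-values on each branch can be estimated with the standard Aki/Utsu maximum-likelihood formula, a standard tool rather than anything specific to this study; a sketch:

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    """Aki/Utsu maximum-likelihood b-value estimate for events at or
    above the completeness magnitude m_c; dm is the catalogue's
    magnitude binning (0 for continuous magnitudes)."""
    m = np.asarray(mags, float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
```

Applying such an estimator separately below and above a crossover magnitude is one way to quantify the factor-of-two difference in b-values between the two branches.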
Induced earthquake magnitudes are as large as (statistically) expected
NASA Astrophysics Data System (ADS)
van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.
2015-12-01
Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
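The deterministic GΔV cap can be turned into a magnitude with the standard Hanks-Kanamori moment-magnitude relation; a sketch assuming a typical crustal shear modulus of 30 GPa:

```python
import math

def mcgarr_max_mw(delta_v_m3, shear_modulus_pa=3.0e10):
    """McGarr (2014)-style deterministic cap: maximum seismic moment
    M0 = G * dV (in N*m), converted to moment magnitude with the
    standard Hanks-Kanamori relation Mw = (2/3) * (log10 M0 - 9.1)."""
    m0 = shear_modulus_pa * delta_v_m3
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

Under these assumptions, about 10^6 m^3 of injected fluid corresponds to a cap near Mw 4.9, and each tenfold increase in volume raises the cap by 2/3 of a magnitude unit, which is why the cap is only testable against probabilistic expectations above roughly M6.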
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-06-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
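For the stated setting, errors in both magnitudes with a known ratio of error variances, the closed-form slope is the classical Deming (errors-in-variables) solution; a sketch of that standard estimator, not necessarily the paper's own formulation:

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Closed-form errors-in-both-variables (Deming) fit of y = a*x + b,
    where delta = var(error in Y) / var(error in X) is the known ratio
    of error variances; delta = 1 gives orthogonal regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    a = ((syy - delta * sxx
          + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2))
         / (2.0 * sxy))
    return a, y.mean() - a * x.mean()
```

Unlike ordinary least squares, the fitted slope depends on delta, which is why the assumption that only the variance ratio is known matters for magnitude-conversion studies.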
ERIC Educational Resources Information Center
Roper, Paul J.; Roper, Jere Gerard
1974-01-01
Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)
Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes
NASA Astrophysics Data System (ADS)
Hill-Butler, Charley; Blackett, Matthew; Wright, Robert
2015-04-01
There are numerous reports of a spatial and temporal link between volcanic activity and high-magnitude seismic events. In fact, since 1950, all large-magnitude earthquakes have been followed by volcanic eruptions in the following year: 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7, and 2011 Japan M9.0. At a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005, following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, is limited: fewer than 50% of all volcanoes are monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has therefore provided researchers with an opportunity to quantify the timing, magnitude, and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable
NASA Astrophysics Data System (ADS)
Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John
2010-10-01
Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. Forty of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW is computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, stress drop, and the Wood-Anderson filter. For instance, it is shown that stress drop controls the saturation of the ML scale, with low stress drops (e.g., 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
Intermediate-term earthquake prediction.
Keilis-Borok, V I
1996-01-01
An earthquake of magnitude M and linear source dimension L(M) is preceded within a few years by certain patterns of seismicity in the magnitude range down to about (M - 3) in an area of linear dimension about 5L-10L. Prediction algorithms based on such patterns may allow one to predict approximately 80% of strong earthquakes, with alarms occupying altogether 20-30% of the time-space considered. An area of alarm can be narrowed down to 2L-3L when observations include lower magnitudes, down to about (M - 4). In spite of their limited accuracy, such predictions open the possibility of preventing considerable damage. The following findings may provide for further development of prediction methods: (i) long-range correlations in fault system dynamics and, accordingly, the large size of the areas over which different observed fields can be averaged and analyzed jointly, (ii) specific symptoms of an approaching strong earthquake, (iii) the partial similarity of these symptoms worldwide, (iv) the fact that some of them are not Earth specific: we probably encountered in seismicity the symptoms of instability common to a wide class of nonlinear systems. PMID:11607660
An empirical evolutionary magnitude estimation for earthquake early warning
NASA Astrophysics Data System (ADS)
Wu, Yih-Min; Chen, Da-Yi
2016-04-01
For an earthquake early warning (EEW) system, it is difficult to accurately estimate earthquake magnitude in the early nucleation stage of an earthquake because only a few stations have triggered and the recorded seismic waveforms are short. One feasible way to measure the size of an earthquake is to extract amplitude parameters within the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the rupture of the causative fault. Instead of adopting amplitude content in a fixed-length time window, which may underestimate the magnitude of large events, we propose a fast, robust, and non-saturating approach to estimate earthquake magnitudes. In this new method, the EEW system can initially give a lower-bound magnitude within a time window of a few seconds and then update the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for adopting amplitudes: one is the pure P-wave time window (PTW); the other is the whole-wave time window after the P-wave arrival (WTW). The peak displacement amplitude on the vertical component was measured in 1- to 10-s-long PTW and WTW, respectively. Linear regression analysis was implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude using earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation. In addition, large uncertainties exist in the 1-s time window. Therefore, for magnitude estimation we suggest that the EEW system progressively adopt peak displacement amplitudes from the 2- to 10-s WTW.
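The update logic can be sketched with a hypothetical attenuation relation of the common EEW form log10(Pd) = A + B·M + C·log10(R); the coefficients and amplitudes below are illustrative, not the study's fitted values:

```python
import math

# Hypothetical coefficients of an empirical relation of the kind used in
# EEW: log10(Pd) = A + B*M + C*log10(R), with peak displacement Pd in cm
# and hypocentral distance R in km. Illustrative values only.
A, B, C = -3.5, 0.7, -1.4

def magnitude_from_pd(pd_cm, r_km):
    """Invert the empirical peak-displacement relation for magnitude."""
    return (math.log10(pd_cm) - A - C * math.log10(r_km)) / B

# Extending the window lets the measured Pd grow for a large event, so the
# magnitude estimate is revised upward instead of saturating.
m_2s = magnitude_from_pd(0.05, 60.0)    # amplitude seen in a short window
m_10s = magnitude_from_pd(0.50, 60.0)   # amplitude after extending the window
```

Because the relation is linear in log10(Pd), a tenfold growth in peak displacement between the short and long windows raises the estimate by 1/B magnitude units.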
Scoring annual earthquake predictions in China
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang; Jiang, Changsheng
2012-02-01
The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions of no concern, because the predictions are made for arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate their performance. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using as the reference model a Poisson model that is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes, we evaluate the CEA predictions based on (1) a partial score that evaluates whether the alarmed regions are issued using information that differs from the reference model (knowledge of the average seismicity level) and (2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but their overall performance is close to that of the reference model.
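A common form of the gambling score's reward rule can be sketched as follows; the reference probabilities p0 would come from the Poisson/Gutenberg-Richter model, which is not reproduced here:

```python
def gambling_score(predictions):
    """One common form of the gambling score: the predictor stakes one
    point per prediction; a success against reference probability p0
    earns (1 - p0) / p0 points, while a failure loses the staked point.
    `predictions` is a sequence of (p0, success) pairs."""
    return sum((1.0 - p0) / p0 if success else -1.0
               for p0, success in predictions)
```

Riskier bets (small p0) pay more when they succeed, so a predictor who merely alarms high-seismicity regions, where the reference p0 is already large, gains little over the reference model.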
Multiscale mapping of completeness magnitude of earthquake catalogs
NASA Astrophysics Data System (ADS)
Vorobieva, Inessa; Narteau, Clement; Shebalin, Peter; Beauducel, François; Nercessian, Alexandre; Clouard, Valérie; Bouin, Marie-Paule
2013-04-01
We propose a multiscale method to map spatial variations in the completeness magnitude Mc of earthquake catalogs. The Mc value may vary significantly in space due to changes in seismic network density. Here we suggest a way to use only earthquake catalogs to separate small areas of higher network density (lower Mc) from larger areas of lower network density (higher Mc). We restrict the analysis of the frequency-magnitude distributions (FMDs) to limited magnitude ranges, thus allowing the FMD to deviate from log-linearity outside each range. We associate ranges of larger magnitudes with increasing areas for data selection, based on a constant average number of completely recorded earthquakes. Then, for each point in space, we document the earthquake frequency-magnitude distribution at all length scales within the corresponding earthquake magnitude ranges. High resolution of the Mc value is achieved through the determination of the smallest space-magnitude scale at which the Gutenberg-Richter law (i.e., an exponential decay) is verified. The multiscale procedure isolates the magnitude range that best matches the local seismicity and local recording capacity. Using artificial catalogs and earthquake catalogs of the Lesser Antilles arc, this Mc mapping method is shown to be efficient in regions with mixed types of seismicity, a variable density of epicenters, and various levels of registration.
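A simple single-scale baseline for Mc estimation, the maximum-curvature method, can be sketched as follows; this is a standard technique, not the authors' multiscale algorithm:

```python
import numpy as np

def mc_max_curvature(mags, dm=0.1, correction=0.2):
    """Maximum-curvature completeness estimate: take the most populated
    magnitude bin and add an empirical correction term (often +0.2),
    since the raw peak tends to underestimate Mc."""
    bins = np.round(np.asarray(mags, float) / dm) * dm
    centers, counts = np.unique(np.round(bins, 1), return_counts=True)
    return float(centers[np.argmax(counts)]) + correction
```

When detectability decays below the true Mc while the Gutenberg-Richter decay dominates above it, the most populated bin sits at Mc, so the estimate lands near Mc plus the correction.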
Vallée, Martin
2013-01-01
The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes. PMID:24126256
On Earthquake Prediction in Japan
UYEDA, Seiya
2013-01-01
Japan’s National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author’s view, are mainly interested in securing funds for seismology, under the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision has been further fortified by the 2011 M9 Tohoku Mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with private sectors. PMID:24213204
Correlating precursory declines in groundwater radon with earthquake magnitude.
Kuo, T
2014-01-01
Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to the five earthquakes of magnitude (Mw) 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the longitudinal valley fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. This correlation has proven useful for the early warning of large local earthquakes. In southern Taiwan, anomalous radon declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, a correlation between the observed radon minima and earthquake magnitude cannot yet be established. PMID:23550908
Earthquake Prediction is Coming
ERIC Educational Resources Information Center
MOSAIC, 1977
1977-01-01
Describes (1) several methods used in earthquake research, including P:S ratio velocity studies, dilatancy models; and (2) techniques for gathering base-line data for prediction using seismographs, tiltmeters, laser beams, magnetic field changes, folklore, animal behavior. The mysterious Palmdale (California) bulge is discussed. (CS)
On the macroseismic magnitudes of the largest Italian earthquakes
NASA Astrophysics Data System (ADS)
Tinti, S.; Vittori, T.; Mulargia, F.
1987-07-01
The macroseismic magnitudes MT of the largest Italian earthquakes (I0 ≥ VIII, MCS) have been computed by using the intensity-magnitude relationships recently assessed by the authors (1986) for the Italian region. The Progetto Finalizzato Geodinamica (PFG) catalog of the Italian earthquakes, covering the period 1000-1980 (Postpischl, 1985), is the source database and is reproduced in the Appendix: here the estimated values of MT are given side by side with the catalog macroseismic magnitudes MK, i.e. the magnitudes computed according to the Karnik laws (Karnik, 1969). The one-sigma errors ΔMT are also given for each earthquake. The basic aim of the paper is to provide a handy and useful tool for researchers involved in seismicity and seismic-risk studies on Italian territory.
Magnitude-frequency distribution of volcanic explosion earthquakes
NASA Astrophysics Data System (ADS)
Nishimura, Takeshi; Iguchi, Masato; Hendrasto, Mohammad; Aoyama, Hiroshi; Yamada, Taishi; Ripepe, Maurizio; Genco, Riccardo
2016-07-01
Magnitude-frequency distributions of volcanic explosion earthquakes that are associated with occurrences of vulcanian and strombolian eruptions, or gas burst activity, are examined at six active volcanoes. The magnitude-frequency distribution at Suwanosejima volcano, Japan, shows a power-law distribution, which implies self-similarity in the system, as is often observed in statistical characteristics of tectonic and volcanic earthquakes. On the other hand, the magnitude-frequency distributions at five other volcanoes, Sakurajima and Tokachi-dake in Japan, Semeru and Lokon in Indonesia, and Stromboli in Italy, are well explained by exponential distributions. The statistical features are considered to reflect source size, as characterized by a volcanic conduit or chamber. Earthquake generation processes associated with vulcanian, strombolian and gas burst events are different from those of eruptions ejecting large amounts of pyroclasts, since the magnitude-frequency distribution of the volcanic explosivity index is generally explained by the power law.
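The exponential-versus-power-law contrast drawn in this abstract is, in practice, a model-selection problem. Below is a minimal, hedged sketch on synthetic data (the sample values and rate parameter are invented for illustration, not taken from the study) comparing the two one-parameter families by maximum likelihood and AIC:

```python
import math
import random

random.seed(0)

# Synthetic event "sizes" (e.g., explosion-quake amplitudes above a
# detection threshold xmin), drawn here from an exponential law -- the
# form reported for five of the six volcanoes.
xmin = 1.0
sizes = [xmin + random.expovariate(1.5) for _ in range(2000)]
n = len(sizes)

# Exponential MLE: rate = 1 / (mean excess over xmin)
lam = n / sum(x - xmin for x in sizes)
ll_exp = sum(math.log(lam) - lam * (x - xmin) for x in sizes)

# Power-law (Pareto) MLE: alpha = 1 + n / sum(ln(x / xmin))
alpha = 1.0 + n / sum(math.log(x / xmin) for x in sizes)
ll_pow = sum(math.log((alpha - 1) / xmin) - alpha * math.log(x / xmin)
             for x in sizes)

# AIC = 2k - 2*logL with k = 1 for both families; lower is better
aic_exp, aic_pow = 2 - 2 * ll_exp, 2 - 2 * ll_pow
```

On exponentially generated sizes, AIC favours the exponential model, mirroring the discrimination the authors make between Suwanosejima and the other five volcanoes.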
Physics-based estimates of maximum magnitude of induced earthquakes
NASA Astrophysics Data System (ADS)
Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin
2016-04-01
In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based link between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture-mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations representative of the induced-seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).
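The McGarr (2014) relation mentioned at the end bounds the maximum seismic moment by the product of shear modulus and injected volume, M0 ≤ μΔV. A small sketch of that bound (the shear modulus here is an assumed typical crustal value, not a figure from this paper):

```python
import math

def mcgarr_mmax(delta_v_m3, mu_pa=3.0e10):
    """Upper-bound moment magnitude for injected volume delta_v_m3 (m^3),
    using M0 <= mu * dV and Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = mu_pa * delta_v_m3  # maximum seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Example: 1e5 m^3 injected, assumed crustal rigidity of 30 GPa
mw_bound = mcgarr_mmax(1.0e5)  # about Mw 4.25
```

The bound grows logarithmically with injected volume, which is exactly the kind of scaling the physics-based approach above is compared against.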
Estimation of the magnitudes and epicenters of Philippine historical earthquakes
NASA Astrophysics Data System (ADS)
Bautista, Maria Leonila P.; Oike, Kazuo
2000-02-01
The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. Data used were the earthquake reports of 86 recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by Intensities III to IX [ A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 16th to 19th centuries. More than 3000 events are catalogued, interpreted and their intensities determined by considering the possible effects of local site conditions, type of construction and the number and locations of existing towns to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts or with at least a minimum report of damage are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps augmented by information on recent seismicity and the location of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few, however, are found to lie along structures that showed little activity during the instrumented period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of
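A magnitude-felt-area relation of the kind calibrated here is an ordinary least-squares fit of magnitude against the logarithm of an isoseismal area. A hedged sketch with invented calibration pairs (these are illustrative numbers, not the Bautista and Oike dataset):

```python
import math

# Hypothetical (magnitude, A(VI) felt area in km^2) calibration pairs --
# invented for illustration only.
data = [(5.0, 2.0e3), (5.5, 5.0e3), (6.0, 1.3e4),
        (6.5, 3.2e4), (7.0, 8.0e4), (7.5, 2.0e5)]

xs = [math.log10(area) for _, area in data]
ys = [m for m, _ in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

def magnitude_from_area(area_km2):
    """Estimate the magnitude of a historical event from its A(VI) area."""
    return intercept + slope * math.log10(area_km2)
```

Once the line M = a + b·log10 A is fixed on instrumented events, applying `magnitude_from_area` to the isoseismal areas of historical events gives their magnitude estimates.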
Magnitude 8.1 Earthquake off the Solomon Islands
NASA Technical Reports Server (NTRS)
2007-01-01
On April 1, 2007, a magnitude 8.1 earthquake rattled the Solomon Islands, 2,145 kilometers (1,330 miles) northeast of Brisbane, Australia. Centered less than ten kilometers beneath the Earth's surface, the earthquake displaced enough water in the ocean above to trigger a small tsunami. Though officials were still assessing damage to remote island communities on April 3, Reuters reported that the earthquake and the tsunami killed an estimated 22 people and left as many as 5,409 homeless. The most serious damage occurred on the island of Gizo, northwest of the earthquake epicenter, where the tsunami damaged the hospital, schools, and hundreds of houses, said Reuters. This image, captured by the Landsat-7 satellite, shows the location of the earthquake epicenter in relation to the nearest islands in the Solomon Island group. Gizo is beyond the left edge of the image, but its triangular fringing coral reefs are shown in the upper left corner. Though dense rain forest hides volcanic features from view, the very shape of the islands testifies to the geologic activity of the region. The circular Kolombangara Island is the tip of a dormant volcano, and other circular volcanic peaks are visible in the image. The image also shows that the Solomon Islands run on a northwest-southeast axis parallel to the edge of the Pacific plate, the section of the Earth's crust that carries the Pacific Ocean and its islands. The earthquake occurred along the plate boundary, where the Australia/Woodlark/Solomon Sea plates slide beneath the denser Pacific plate. Friction between the sinking (subducting) plates and the overriding Pacific plate led to the large earthquake on April 1, said the United States Geological Survey (USGS) summary of the earthquake. Large earthquakes are common in the region, though the section of the plate that produced the April 1 earthquake had not caused any quakes of magnitude 7 or larger since the early 20th century, said the USGS.
Maximum Earthquake Magnitude Assessments by Japanese Government Committees (Invited)
NASA Astrophysics Data System (ADS)
Satake, K.
2013-12-01
earthquakes. The Nuclear Regulation Authority, established in 2012, makes independent decisions based on the latest scientific knowledge. It has assigned maximum credible earthquake magnitudes of 9.6 for the Nankai and Ryukyu troughs, 9.6 for the Kuril-Japan trench, and 9.2 for the Izu-Bonin trench.
The parkfield, california, earthquake prediction experiment.
Bakun, W H; Lindh, A G
1985-08-16
Five moderate (magnitude 6) earthquakes with similar features have occurred on the Parkfield section of the San Andreas fault in central California since 1857. The next moderate Parkfield earthquake is expected to occur before 1993. The Parkfield prediction experiment is designed to monitor the details of the final stages of the earthquake preparation process; observations and reports of seismicity and aseismic slip associated with the last moderate Parkfield earthquake in 1966 constitute much of the basis of the design of the experiment. PMID:17739363
The Strain Energy, Seismic Moment and Magnitudes of Large Earthquakes
NASA Astrophysics Data System (ADS)
Purcaru, G.
2004-12-01
The strain energy Est, as potential energy, released by an earthquake and the seismic moment Mo are two fundamental physical earthquake parameters. The earthquake rupture process "represents" the release of the accumulated Est. The moment Mo, first obtained in 1966 by Aki, revolutionized the quantification of earthquake size and led to the elimination of the limitations of the conventional magnitudes (originally ML, Richter, 1935): mb, Ms, m, MGR. Both Mo and Est, though not in a 1-to-1 correspondence, are uniform measures of size, although Est is presently less accurate than Mo. Est is partitioned into seismic (Es), fracture (Eg) and frictional (Ef) energy, and Ef is lost as frictional heat. The available energy is Est = Es + Eg (see Aki and Richards, 1980, and Kostrov and Das, 1988, for fundamentals on Mo and Est). Related to Mo, Est and Es, several modern magnitudes were defined under various assumptions: the moment magnitude Mw (Kanamori, 1977), strain energy magnitude ME (Purcaru and Berckhemer, 1978), tsunami magnitude Mt (Abe, 1979), mantle magnitude Mm (Okal and Talandier, 1987), seismic energy magnitude Me (Choy and Boatwright, 1995; Yanovskaya et al., 1996), and body-wave magnitude Mpw (Tsuboi et al., 1998). The available strain energy is Est = (1/(2μ)) Δσ Mo, where Δσ is the average stress drop, and ME = (2/3)(log Mo + log(Δσ/μ) − 12.1), with log Est = 11.8 + 1.5 ME. The estimation of Est was modified to include the Mo, Δσ and μ of predominant high-slip zones (asperities) to account for multiple events (Purcaru, 1997): Est = (1/2) Σi (1/μi) Mo,i Δσi, with Σi Mo,i = Mo. We derived the energy balance of Est, Es and Eg as Est/Mo = (1 + e(g,s)) Es/Mo, where e(g,s) = Eg/Es. We analyzed a set of about 90 large earthquakes and found that, depending on the goal, these magnitudes quantify the rupture process differently, thus providing complementary means of earthquake characterization. Results for some
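The two strain-energy expressions in this abstract are mutually consistent: the constant 11.8 differs from 12.1 by roughly log10(2) ≈ 0.3, which absorbs the factor 1/2. A numerical check in CGS units, with an assumed stress-drop-to-rigidity ratio chosen for illustration:

```python
import math

def strain_energy_magnitude(m0, stress_drop, mu):
    """M_E = (2/3) * (log10(M0) + log10(dsigma/mu) - 12.1), M0 in dyne*cm."""
    return (2.0 / 3.0) * (math.log10(m0) + math.log10(stress_drop / mu) - 12.1)

def strain_energy(m0, stress_drop, mu):
    """E_st = (1 / (2 mu)) * dsigma * M0, in ergs for CGS inputs."""
    return 0.5 * (stress_drop / mu) * m0

# Example: M0 = 1e27 dyne*cm, dsigma = 3e7 dyne/cm^2 (3 MPa),
# mu = 3e11 dyne/cm^2 (30 GPa), so dsigma/mu = 1e-4
me = strain_energy_magnitude(1e27, 3.0e7, 3.0e11)
est = strain_energy(1e27, 3.0e7, 3.0e11)

# Consistency with log10(E_st) = 11.8 + 1.5 * M_E (residual ~ 0.3 - log10 2)
consistent = abs(math.log10(est) - (11.8 + 1.5 * me)) < 0.01
```

For these inputs ME is about 7.27, showing how a moderate stress-drop-to-rigidity ratio maps a large moment into a strain-energy magnitude.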
Prediction of earthquake response spectra
Joyner, W.B.; Boore, David M.
1982-01-01
We have developed empirical equations for predicting earthquake response spectra in terms of magnitude, distance, and site conditions, using a two-stage regression method similar to the one we used previously for peak horizontal acceleration and velocity. We analyzed horizontal pseudo-velocity response at 5 percent damping for 64 records of 12 shallow earthquakes in Western North America, including the recent Coyote Lake and Imperial Valley, California, earthquakes. We developed predictive equations for 12 different periods between 0.1 and 4.0 s, both for the larger of two horizontal components and for the random horizontal component. The resulting spectra show amplification at soil sites compared to rock sites for periods greater than or equal to 0.3 s, with maximum amplification exceeding a factor of 2 at 2.0 s. For periods less than 0.3 s there is slight deamplification at the soil sites. These results are generally consistent with those of several earlier studies. A particularly significant aspect of the predicted spectra is the change of shape with magnitude (confirming earlier results by McGuire and by Trifunac and Anderson). This result indicates that the conventional practice of scaling a constant spectral shape by peak acceleration will not give accurate answers. The Newmark and Hall method of spectral scaling, using both peak acceleration and peak velocity, largely avoids this error. Comparison of our spectra with the Nuclear Regulatory Commission's Regulatory Guide 1.60 spectrum anchored at the same value at 0.1 s shows that the Regulatory Guide 1.60 spectrum is exceeded at soil sites for a magnitude of 7.5 at all distances for periods greater than about 0.5 s. Comparison of our spectra for soil sites with the corresponding ATC-3 curve of lateral design force coefficient for the highest seismic zone indicates that the ATC-3 curve is exceeded within about 7 km of a magnitude 6.5 earthquake and within about 15 km of a magnitude 7.5 event. The amount by
Can we test for the maximum possible earthquake magnitude?
NASA Astrophysics Data System (ADS)
Holschneider, M.; Zöller, G.; Clements, R.; Schorlemmer, D.
2014-03-01
We explore the concept of the maximum possible earthquake magnitude, M, in a region represented by an earthquake catalog, from the viewpoint of statistical testing. For this aim, we assume that earthquake magnitudes are independent events that follow a doubly truncated Gutenberg-Richter distribution and focus on the upper truncation point M. In earlier work, it has been shown that the value of M cannot be well constrained from earthquake catalogs alone. However, for two hypothesized values M and M', alternative statistical tests may address the question: Which value is more consistent with the data? In other words, is it possible to reject a hypothesized magnitude with acceptable errors of the first and second kind? The results for realistic settings indicate that either the error of the first kind or the error of the second kind is intolerably large. We conclude that it is essentially impossible to infer M by alternative testing with sufficient confidence from an earthquake catalog alone, even in regions like Japan with excellent data availability. These findings are also valid for frequency-magnitude distributions with different tail behavior, e.g., exponential tapering. Finally, we emphasize that additional data may only be useful to provide further constraints on M if they do not correlate with the earthquake catalog, i.e., if they have not been recorded in the same observational period. In particular, long-term geological assessments might be suitable to reduce the errors, while GPS measurements provide overall the same information as the catalogs.
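The core difficulty described above can be reproduced in a few lines: for a doubly truncated Gutenberg-Richter law, the log-likelihood is nearly flat in the upper truncation point once it exceeds the observed maximum. A sketch on a synthetic catalog (all parameter values are invented for illustration):

```python
import math
import random

random.seed(1)

def sample_truncated_gr(n, b, m_min, m_max):
    """Inverse-CDF sampling from a doubly truncated Gutenberg-Richter law."""
    beta = b * math.log(10.0)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return [m_min - math.log(1.0 - random.random() * c) / beta
            for _ in range(n)]

def log_likelihood(mags, b, m_min, m_max):
    """Log-likelihood of the magnitudes under the truncated GR density."""
    beta = b * math.log(10.0)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return sum(math.log(beta) - beta * (m - m_min) - math.log(c)
               for m in mags)

# Synthetic catalog: 5000 events, b = 1, true upper truncation at M = 8
cat = sample_truncated_gr(5000, 1.0, 4.0, 8.0)

# Log-likelihoods for two hypothesized upper bounds M = 8 and M' = 9
ll_8 = log_likelihood(cat, 1.0, 4.0, 8.0)
ll_9 = log_likelihood(cat, 1.0, 4.0, 9.0)
```

The likelihood gap between M = 8 and M' = 9 amounts to a fraction of one log-unit over 5000 events, so neither hypothesis can be rejected with tolerable errors of both kinds, which is the paper's point.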
Mark, Robert K.
1977-01-01
Correlation or linear regression estimates of earthquake magnitude from data on historical magnitude and length of surface rupture should be based upon the correct regression. For example, the regression of magnitude on the logarithm of the length of surface rupture L can be used to estimate magnitude, but the regression of log L on magnitude cannot. Regression estimates are most probable values, and estimates of maximum values require consideration of one-sided confidence limits.
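Mark's point is that regressing M on log L and algebraically inverting the regression of log L on M give different lines, and only the former is the least-squares estimator of magnitude. A sketch on synthetic data (the coefficients are invented placeholders, loosely in the style of published magnitude-length relations):

```python
import random

random.seed(2)

def ols(xs, ys):
    """Least-squares intercept and slope of y on x."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
        sum((x - xb) ** 2 for x in xs)
    return yb - b * xb, b

# Synthetic (magnitude, log10 rupture length) pairs with scatter
ms, logls = [], []
for _ in range(500):
    m = random.uniform(5.5, 8.0)
    ms.append(m)
    logls.append(-2.44 + 0.59 * m + random.gauss(0.0, 0.2))

# Correct for estimating M: regress magnitude on log length
_, b_direct = ols(logls, ms)

# Incorrect for that purpose: regress log length on magnitude, invert slope
_, b_wrong = ols(ms, logls)
b_inverted = 1.0 / b_wrong
```

Because the inverted slope equals the direct slope divided by r-squared, the two estimators coincide only for perfectly correlated data; with scatter, the inverted line systematically over-predicts magnitude at long ruptures.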
Estimating earthquake magnitudes from reported intensities in the central and eastern United States
Boyd, Oliver; Cramer, Chris H.
2014-01-01
A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.
Celsi, R.; Wolfinbarger, M.; Wald, D.
2005-01-01
The purpose of this research is to explore earthquake risk perceptions in California. Specifically, we examine the risk beliefs, feelings, and experiences of lay, professional, and expert individuals to explore how risk is perceived and how risk perceptions are formed relative to earthquakes. Our results indicate that individuals tend to perceptually underestimate the degree to which earthquake (EQ) events may affect them. This occurs in large part because individuals' personal felt experience of EQ events is generally overestimated relative to experienced magnitudes. An important finding is that individuals engage in a process of "cognitive anchoring" of their felt EQ experience towards the reported earthquake magnitude size. The anchoring effect is moderated by the degree to which individuals comprehend EQ magnitude measurement and EQ attenuation. Overall, the results of this research provide us with a deeper understanding of EQ risk perceptions, especially as they relate to individuals' understanding of EQ measurement and attenuation concepts. © 2005, Earthquake Engineering Research Institute.
The Road to Convergence in Earthquake Frequency-Magnitude Statistics
NASA Astrophysics Data System (ADS)
Naylor, M.; Bell, A. F.; Main, I. G.
2013-12-01
The Gutenberg-Richter frequency-magnitude relation is a fundamental empirical law of seismology, but its form remains uncertain for rare extreme events. Convergence trends can be diagnostic of the nature of an underlying distribution and its sampling even before convergence has occurred. We examine the evolution of an information-criterion metric applied to earthquake magnitude time series, in order to test whether the Gutenberg-Richter law can be rejected for various earthquake catalogues. This would imply that the catalogue is starting to sample roll-off in the tail, though it cannot yet identify the form of the roll-off. We compare bootstrapped synthetic Gutenberg-Richter and synthetic modified Gutenberg-Richter catalogues with the convergence trends observed in real earthquake data, e.g. the global CMT catalogue, Southern California and mining/geothermal data. Whilst convergence in the tail remains some way off, we show that the temporal evolution of model likelihoods and parameters for the frequency-magnitude distribution of the global Harvard Centroid Moment Tensor catalogue is inconsistent with an unbounded GR relation, despite it being the preferred model at the current time. Bell, A. F., M. Naylor, and I. G. Main (2013), Convergence of the frequency-size distribution of global earthquakes, Geophys. Res. Lett., 40, 2585-2589, doi:10.1002/grl.50416.
Regional moment: Magnitude relations for earthquakes and explosions
Patton, H.J.; Walter, W.R.
1993-02-19
The authors present Mo:mb relations using mb(Pn) and mb(Lg) for earthquakes and explosions occurring in tectonic and stable areas. The observations for mb(Pn) range from about 3 to 6 and show excellent separation between earthquakes and explosions on Mo:mb plots, independent of the magnitude. The scatter in Mo:mb observations for NTS explosions is small compared to the earthquake data. The Mo:mb(Lg) data for Soviet explosions overlay the observations for US explosions. These results, and the small scatter for NTS explosions, suggest weak dependence of Mo:mb relations on emplacement media. A simple theoretical model is developed which matches all these observations. The model uses scaling similarity and conservation of energy to provide a physical link between seismic moment and a broadband seismic magnitude. Three factors, radiation pattern, material property, and apparent stress, contribute to the separation between earthquakes and explosions. This theoretical separation is independent of broadband magnitude. For US explosions in different media, the material property and apparent stress contributions are shown to compensate for one another, supporting the observation that Mo:mb is nearly independent of source geology. 19 refs., 2 figs., 1 tab.
Hybrid Modelling of the Economical Consequences of Extreme Magnitude Earthquakes
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.
2013-05-01
A hybrid modelling methodology is proposed to estimate the probability of exceedance of the intensities of extreme-magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The hybrid modelling uses 3D seismic wave propagation (3DWP) combined with empirical Green's function (EGF) and neural network (NN) techniques in order to estimate the seismic hazard (PEIs) of plausible extreme-earthquake scenarios corresponding to synthetic seismic sources. The 3DWP modelling is achieved by using a 3D finite-difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK. The PEDEC are computed by using appropriate vulnerability functions combined with the scenario intensity samples and Monte Carlo simulation. The methodology is validated for Mw 8 subduction events, and we show examples of its application to estimating the hazard and the economic consequences of extreme Mw 8.5 subduction earthquake scenarios with seismic sources on the Mexican Pacific Coast. The results obtained with the proposed methodology, such as the PEDECs for the joint event "damage cost (C) - maximum ground intensity" and the conditional return period of C given that the maximum intensity exceeds a certain value, could be used by decision makers to allocate funds or to implement policies to mitigate the impact associated with the plausible occurrence of future extreme-magnitude earthquakes.
Radiocarbon test of earthquake magnitude at the Cascadia subduction zone
Atwater, B.F.; Stuiver, M.; Yamaguchi, D.K.
1991-01-01
THE Cascadia subduction zone, which extends along the northern Pacific coast of North America, might produce earthquakes of magnitude 8 or 9 ('great' earthquakes) even though it has not done so during the past 200 years of European observation 1-7. Much of the evidence for past Cascadia earthquakes comes from former meadows and forests that became tidal mudflats owing to abrupt tectonic subsidence in the past 5,000 years2,3,6,7. If due to a great earthquake, such subsidence should have extended along more than 100 km of the coast2. Here we investigate the extent of coastal subsidence that might have been caused by a single earthquake, through high-precision radiocarbon dating of coastal trees that abruptly subsided into the intertidal zone. The ages leave the great-earthquake hypothesis intact by limiting to a few decades the discordance, if any, in the most recent subsidence of two areas 55 km apart along the Washington coast. This subsidence probably occurred about 300 years ago.
In Brief: China shaken by magnitude 7.9 earthquake
NASA Astrophysics Data System (ADS)
Showstack, Randy
2008-05-01
A magnitude 7.9 earthquake that struck the eastern Sichuan region of China on 12 May 2008 at 0628 UTC has caused more than 22,000 fatalities as of press time, and Chinese government officials have indicated that this figure could increase to 50,000. The quake also caused severe damage, including landslides and cracks, to 391 mostly small dams, according to an Associated Press report that cited the Xinhua News Agency and CCTV news. China's Ministry of Water Resources has dispatched several work teams to quake-hit localities "to prevent dams that were damaged by the earthquake from bursting and endangering the lives of residents," the ministry stated.
Earthquake magnitude calculation without saturation from the scaling of peak ground displacement
NASA Astrophysics Data System (ADS)
Melgar, Diego; Crowell, Brendan W.; Geng, Jianghui; Allen, Richard M.; Bock, Yehuda; Riquelme, Sebastian; Hill, Emma M.; Protti, Marino; Ganas, Athanassios
2015-07-01
GPS instruments are noninertial and directly measure displacements with respect to a global reference frame, while inertial sensors are affected by systematic offsets—primarily tilting—that adversely impact integration to displacement. We study the magnitude scaling properties of peak ground displacement (PGD) from high-rate GPS networks at near-source to regional distances (~10-1000 km), from earthquakes between Mw6 and 9. We conclude that real-time GPS seismic waveforms can be used to rapidly determine magnitude, typically within the first minute of rupture initiation and in many cases before the rupture is complete. While slower than earthquake early warning methods that rely on the first few seconds of P wave arrival, our approach does not suffer from the saturation effects experienced with seismic sensors at large magnitudes. Rapid magnitude estimation is useful for generating rapid earthquake source models, tsunami prediction, and ground motion studies that require accurate information on long-period displacements.
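PGD scaling laws of this family take the form log10(PGD) = A + B·Mw + C·Mw·log10(R), which can be inverted in closed form for magnitude. A sketch; the coefficient values below are illustrative placeholders in the published functional form (PGD in cm, hypocentral distance R in km), not authoritative values from this paper:

```python
import math

# Placeholder coefficients for log10(PGD_cm) = A + B*Mw + C*Mw*log10(R_km);
# consult the paper's regression for values to use in practice.
A, B, C = -4.434, 1.047, -0.138

def magnitude_from_pgd(pgd_cm, r_km):
    """Closed-form inversion of the PGD scaling law for moment magnitude."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

# 1 m of peak ground displacement observed 100 km away implies a great event
mw_est = magnitude_from_pgd(100.0, 100.0)
```

Because displacement does not clip the way inertial records do, this estimate keeps growing with PGD instead of saturating, which is the advantage claimed in the abstract.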
Multifractal detrended fluctuation analysis of Pannonian earthquake magnitude series
NASA Astrophysics Data System (ADS)
Telesca, Luciano; Toth, Laszlo
2016-04-01
The multifractality of the series of magnitudes of the earthquakes that occurred in the Pannonian region from 2002 to 2012 has been investigated. The shallow (depth less than 40 km) and deep (depth larger than 70 km) seismic catalogues were analysed by using multifractal detrended fluctuation analysis. The shallow and deep catalogues are characterized by different multifractal properties: (i) the magnitudes of the shallow events are weakly persistent, while those of the deep ones are almost uncorrelated; (ii) the deep catalogue is more multifractal than the shallow one; (iii) the magnitudes of the deep catalogue are characterized by a right-skewed multifractal spectrum, while that of the shallow catalogue is rather symmetric; (iv) a direct relationship between the b-value of the Gutenberg-Richter law and the multifractality of the magnitudes is suggested.
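The monofractal core of the method, first-order detrended fluctuation analysis, fits in a few lines; the multifractal version generalizes the same fluctuation function with q-th order moments. A sketch on synthetic uncorrelated "magnitudes" (all values invented), for which the fluctuation exponent should come out near 0.5:

```python
import math
import random

random.seed(3)

def dfa(series, scales):
    """First-order DFA: RMS fluctuation F(s) of the linearly detrended
    profile (cumulative sum of the mean-centered series) at each scale."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)
    out = []
    for s in scales:
        n_seg = len(profile) // s
        var_sum = 0.0
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            xs = range(s)
            xb, yb = (s - 1) / 2.0, sum(seg) / s
            b = sum((x - xb) * (y - yb) for x, y in zip(xs, seg)) / \
                sum((x - xb) ** 2 for x in xs)
            a = yb - b * xb
            var_sum += sum((y - (a + b * x)) ** 2
                           for x, y in zip(xs, seg)) / s
        out.append(math.sqrt(var_sum / n_seg))
    return out

# Uncorrelated synthetic magnitudes, like the deep-catalogue behavior above
mags = [random.gauss(4.5, 0.4) for _ in range(4096)]
scales = [16, 32, 64, 128, 256]
fs = dfa(mags, scales)

# Slope of log F(s) vs log s approximates the fluctuation (Hurst) exponent
h = (math.log(fs[-1]) - math.log(fs[0])) / \
    (math.log(scales[-1]) - math.log(scales[0]))
```

Weakly persistent series (the shallow catalogue) would give an exponent above 0.5; uncorrelated ones stay near it.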
Strong ground motion prediction using virtual earthquakes.
Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C
2014-01-24
Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion. PMID:24458636
Nonlinear site response in medium magnitude earthquakes near Parkfield, California
Rubinstein, Justin L.
2011-01-01
Careful analysis of strong-motion recordings of 13 medium magnitude earthquakes (3.7 ≤ M ≤ 6.5) in the Parkfield, California, area shows that very modest levels of shaking (approximately 3.5% of the acceleration of gravity) can produce observable changes in site response. Specifically, I observe a drop and subsequent recovery of the resonant frequency at sites that are part of the USGS Parkfield dense seismograph array (UPSAR) and Turkey Flat array. While further work is necessary to fully eliminate other models, given that these frequency shifts correlate with the strength of shaking at the Turkey Flat array and only appear for the strongest shaking levels at UPSAR, the most plausible explanation for them is that they are a result of nonlinear site response. Assuming this to be true, the observation of nonlinear site response in small (M M 6.5 San Simeon earthquake and the 2004 M 6 Parkfield earthquake).
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in only 2.87% of trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward-predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
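The random-assignment null hypothesis can be checked with a binomial calculation or a short Monte Carlo. A sketch (the alarm space-time fraction below is an invented figure, not the one behind the 2.87% quoted above):

```python
import math
import random

random.seed(4)

def chance_success_prob(p_alarm, n_quakes, n_hits, trials=100000):
    """Monte Carlo: probability that at least n_hits of n_quakes fall inside
    randomly placed alarms occupying a fraction p_alarm of space-time."""
    count = 0
    for _ in range(trials):
        hits = sum(1 for _ in range(n_quakes) if random.random() < p_alarm)
        if hits >= n_hits:
            count += 1
    return count / trials

# Hypothetical: 40% of space-time on alarm, 8 of 10 earthquakes predicted
p_mc = chance_success_prob(0.40, 10, 8)

# Exact binomial tail probability for comparison
p_exact = sum(math.comb(10, k) * 0.4 ** k * 0.6 ** (10 - k)
              for k in range(8, 11))
```

Both give about 1.2% for these assumed numbers: a success this good rarely arises by chance, which is the logic behind the quoted 2.87% significance figure.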
Exaggerated Claims About Earthquake Predictions
NASA Astrophysics Data System (ADS)
Kafka, Alan L.; Ebel, John E.
2007-01-01
The perennial promise of successful earthquake prediction captures the imagination of a public hungry for certainty in an uncertain world. Yet, given the lack of any reliable method of predicting earthquakes [e.g., Geller, 1997; Kagan and Jackson, 1996; Evans, 1997], seismologists regularly have to explain news stories of a supposedly successful earthquake prediction when it is far from clear just how successful that prediction actually was. When journalists and public relations offices report the latest "great discovery" regarding the prediction of earthquakes, seismologists are left with the much less glamorous task of explaining to the public the gap between the claimed success and the sober reality that there is no scientifically proven method of predicting earthquakes.
Regional moment - Magnitude relations for earthquakes and explosions
NASA Astrophysics Data System (ADS)
Patton, Howard J.; Walter, William R.
1993-02-01
We present Mo:mb relations using mb(Pn) and mb(Lg) for earthquakes and explosions occurring in tectonic and stable areas. The observations for mb(Pn) range from about 3 to 6 and show excellent separation between earthquakes and explosions on Mo:mb plots, independent of the magnitude. The scatter in Mo:mb observations for NTS explosions is small compared to the earthquake data. The Mo:mb(Lg) data for Soviet explosions overlay the observations for U.S. explosions. These results, and the small scatter for NTS explosions, suggest weak dependence of Mo:mb relations on emplacement media. A simple theoretical model is developed which matches all these observations. The model uses scaling similarity and conservation of energy to provide a physical link between seismic moment and a broadband seismic magnitude. For U.S. explosions in different media, the material property and apparent stress contributions are shown to compensate for one another, supporting the observations that Mo:mb is nearly independent of source geology.
Does low magnitude earthquake ground shaking cause landslides?
NASA Astrophysics Data System (ADS)
Brain, Matthew; Rosser, Nick; Vann Jones, Emma; Tunstall, Neil
2015-04-01
Estimating the magnitude of coseismic landslide strain accumulation at both local and regional scales is a key goal in understanding earthquake-triggered landslide distributions and landscape evolution, and in undertaking seismic risk assessment. Research in this field has primarily been carried out using the 'Newmark sliding block method' to model landslide behaviour; downslope movement of the landslide mass occurs when seismic ground accelerations are sufficient to overcome shear resistance at the landslide shear surface. The Newmark method has the advantage of simplicity, requiring only limited information on material strength properties, landslide geometry and coseismic ground motion. However, the underlying conceptual model assumes that shear strength characteristics (friction angle and cohesion) calculated using conventional strain-controlled monotonic shear tests are valid under dynamic conditions, and that values describing shear strength do not change as landslide shear strain accumulates. Recent experimental work has begun to question these assumptions, highlighting, for example, the importance of shear strain rate and changes in shear strength properties following seismic loading. However, such studies typically focus on a single earthquake event that is of sufficient magnitude to cause permanent strain accumulation; by doing so, they do not consider the potential effects that multiple low-magnitude ground shaking events can have on material strength. Since such events are more common in nature relative to high-magnitude shaking events, it is important to constrain their geomorphic effectiveness. Using an experimental laboratory approach, we present results that address this key question. We used a bespoke geotechnical testing apparatus, the Dynamic Back-Pressured Shear Box (DynBPS), that uniquely permits more realistic simulation of earthquake ground-shaking conditions within a hillslope. We tested both cohesive and granular materials, both of which
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.
Predicting Predictable: Accuracy and Reliability of Earthquake Forecasts
NASA Astrophysics Data System (ADS)
Kossobokov, V. G.
2014-12-01
Earthquake forecast/prediction is an uncertain profession. The famous Gutenberg-Richter relationship limits the magnitude range of prediction to about one unit. Otherwise, the statistics of outcomes would be dominated by the smallest earthquakes and may be misleading when attributed to the largest earthquakes. Moreover, the intrinsic uncertainty of earthquake sizing allows self-deceptive picking of justification "just from below" the targeted magnitude range. This might be encouraging evidence but can by no means be a "helpful" additive to the statistics of the rigid testing that determines the reliability and efficiency of a forecast/prediction method. Usually, earthquake prediction is classified with respect to expectation time, while overlooking term-less identification of earthquake-prone areas as well as spatial accuracy. Forecasts are often made for a "cell" or "seismic region" whose area is not linked to the size of the target earthquakes. This is another potential source of a wrong choice in the parameterization of a forecast/prediction method and, eventually, of unsatisfactory performance in a real-time application. Summing up, prediction of the time and location of an earthquake of a certain magnitude range can be classified into the following categories (temporal accuracy in years; spatial accuracy in units of the source zone size L): long-term (10 years) and long-range (up to 100 L); intermediate-term (1 year) and middle-range (5-10 L); short-term (0.01-0.1 years) and narrow-range (2-3 L); immediate (0.001 years) and exact (1 L). Note that the variety of possible combinations is much larger than the usually considered "short-term exact" one. In principle, such an accurate statement about an anticipated seismic extreme might be futile due to the complexities of the Earth's lithosphere, its blocks-and-faults structure, and the evidently nonlinear dynamics of the seismic process. The observed scaling of source size and preparation zone with earthquake magnitude implies exponential scales for
Functional shape of the earthquake frequency-magnitude distribution and completeness magnitude
NASA Astrophysics Data System (ADS)
Mignan, A.
2012-08-01
We investigated the functional shape of the earthquake frequency-magnitude distribution (FMD) to identify its dependence on the completeness magnitude Mc. The FMD takes the form N(m) ∝ exp(-βm)q(m) where N(m) is the event number, m the magnitude, exp(-βm) the Gutenberg-Richter law and q(m) a detection function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature of bulk FMDs. Recent results however suggest that this gradual curvature is due to Mc heterogeneities, meaning that the functional shape of the elemental FMD has yet to be described. We propose a detection function of the form q(m) = exp(κ(m - Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models are compared in earthquake catalogs from Southern California and Nevada and in synthetic catalogs. We show that the angular FMD model better describes the elemental FMD and that the sum of elemental angular FMDs leads to the gradually curved bulk FMD. We propose an FMD shape ontology consisting of 5 categories depending on the Mc spatial distribution, from Mc constant to Mc highly heterogeneous: (I) Angular FMD, (II) Intermediary FMD, (III) Intermediary FMD with multiple maxima, (IV) Gradually curved FMD and (V) Gradually curved FMD with multiple maxima. We also demonstrate that the gradually curved FMD model overestimates Mc. This study provides new insights into earthquake detectability properties by using seismicity as a proxy and the means to accurately estimate Mc in any given volume.
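The two-regime detection function and the summation of elemental FMDs described above can be sketched numerically. Parameter values (b = 1, detection parameter k = 3, the set of Mc values) are illustrative assumptions, not fits from the paper:

```python
import numpy as np

def elemental_fmd(m, beta, kappa, mc):
    """Angular FMD model: N(m) proportional to exp(-beta*m) * q(m), with
    q(m) = exp(kappa*(m - mc)) below Mc and q(m) = 1 at or above Mc."""
    q = np.where(m < mc, np.exp(kappa * (m - mc)), 1.0)
    return np.exp(-beta * m) * q

m = np.arange(0.0, 6.0, 0.1)
beta = np.log(10) * 1.0   # Gutenberg-Richter b-value of 1
kappa = np.log(10) * 3.0  # detection parameter k = kappa/ln(10) = 3

# A single (elemental) FMD has an angular shape peaking exactly at Mc...
single = elemental_fmd(m, beta, kappa, mc=2.0)
# ...while summing elemental FMDs over heterogeneous Mc values yields the
# gradually curved bulk FMD commonly fit with a cumulative Normal q(m).
bulk = sum(elemental_fmd(m, beta, kappa, mc) for mc in (1.5, 2.0, 2.5, 3.0))
```

Because kappa exceeds beta, each elemental curve rises up to its Mc and then follows the Gutenberg-Richter decay, reproducing the angular shape; the heterogeneous-Mc sum smooths the corner.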
VLF study of low magnitude Earthquakes (4.5
NASA Astrophysics Data System (ADS)
Wolbang, Daniel; Biernat, Helfried; Schwingenschuh, Konrad; Eichelberger, Hans; Prattes, Gustav; Besser, Bruno; Boudjada, Mohammed; Rozhnoi, Alexander; Solovieva, Maria; Biagi, Pier Francesco; Friedrich, Martin
2014-05-01
In the course of the European VLF/LF radio receiver network (International Network for Frontier Research on Earthquake Precursors, INFREP), radio signals in the frequency range 10-50 kHz are received, continuously recorded (temporal resolution 20 seconds) and analyzed at the Graz/Austria node. The radio signals are generated by dedicated distributed transmitters and detected by INFREP receivers in Europe. If a signal crosses an earthquake preparation zone, we are in principle able to detect seismic activity, provided the signal-to-noise ratio is high enough. The requirements for detecting a seismic event with the radio link method are given by the magnitude M of the earthquake (EQ), the EQ preparation zone and the Fresnel zone. As pointed out by Rozhnoi et al. (2009), the VLF methods are suitable for earthquakes M>5.0. Furthermore, the VLF/LF radio link is disturbed only if it crosses the EQ preparation zone, as described by Molchanov et al. (2008). In the frame of this project I analyze low-seismicity EQs (M≤5.6) in south/eastern Europe in the period 2011-2013. My emphasis is on two seismic events with magnitudes 5.6 and 4.8 which we are not able to adequately characterize using our single-parameter VLF method. I perform a fine-structure analysis of the residua of various radio links crossing the area around these two EQs. Depending on the individual paths, not all radio links cross the EQ preparation zone directly, so a comparative study is possible. As a comparison I analyze with the same method the already well-characterized EQ of L'Aquila/Italy in 2009, with M=6.3, and radio links which cross the EQ preparation zone directly. In the course of this project we try to understand in more detail why it is so difficult to detect EQs with 4.5
NASA Astrophysics Data System (ADS)
Kagawa, T.; Irikura, K.; Some, P. G.; Miyake, H.; Sato, T.; Dan, K.; Matsu, S.
2005-12-01
We have studied differences in ground motion according to fault rupture type and magnitude. We found that three different earthquake categories have distinct ground motion characteristics. Somerville (2003) and Kagawa et al. (2004) found that the ground motion caused by subsurface rupture in the period range around one second is larger than predicted by empirical spectral attenuation relations (Abrahamson and Silva, 1997) for all earthquakes, but ground motion from earthquakes that rupture the surface is smaller in the same period range. We expand their study to smaller earthquakes and add several recent earthquakes. We began by dividing the earthquakes into four categories that are a combination of two classifications, i.e. defined and undefined fault, and surface and subsurface rupture earthquakes. Each category is divided into earthquakes larger and smaller than about Mw 6.5. Eventually, we classified the earthquakes into three groups: a) Surface rupture type: ground motion is smaller than average, especially in the period range around 1 second. b) Larger subsurface rupture type: ground motion is larger than average, especially in the period range around 1 second. c) Smaller subsurface rupture type: ground motion is larger than average, especially in the period range around 0.1 second. Subsurface rupture earthquakes with small magnitude occur in the deep portion of the seismogenic zone. Deep and high-stress asperities generate large ground motions in the short period range. They do not generate pulse-like ground motions, because the asperity is too small and deep to cause forward directivity effects, and because the radiation and propagation of ground motion at short periods may be too incoherent to allow the formation of a pulse. Larger subsurface rupture earthquakes have larger asperities that may span a large part of the width of the seismogenic zone, producing coherent directivity pulses with periods of 1 second or more. Kagawa et al. (2004) pointed out
Bakun, W.H.
2005-01-01
Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42MJMA - 0.00887Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^1/2, and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting-plate intensity attenuation model where intensity is equal to -8.33 + 2.19MJMA - 0.00550Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting-plate model. Using the subducting-plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
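The two attenuation models are straightforward to code once the slant distance Δh is in hand. This sketch uses only the coefficients quoted in the abstract; the reading of the garbled distance symbols as Δh = sqrt(Δ² + h²) follows from the abstract's own definition:

```python
import math

def ijma_shallow_crustal(mjma, delta_km, h_km):
    """Predicted JMA intensity for shallow crustal Honshu earthquakes:
    I = -1.89 + 1.42*M - 0.00887*Dh - 1.66*log10(Dh),
    with Dh = sqrt(delta^2 + h^2) the slant (hypocentral) distance in km."""
    dh = math.hypot(delta_km, h_km)
    return -1.89 + 1.42 * mjma - 0.00887 * dh - 1.66 * math.log10(dh)

def ijma_subducting_plate(mjma, delta_km, h_km):
    """Subducting-plate model: I = -8.33 + 2.19*M - 0.00550*Dh - 1.14*log10(Dh)."""
    dh = math.hypot(delta_km, h_km)
    return -8.33 + 2.19 * mjma - 0.00550 * dh - 1.14 * math.log10(dh)
```

Inverting either relation over a set of intensity assignments (grid-searching epicenter and magnitude) is the essence of the intensity-center method the abstract applies to the 1923 and 1855 events.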
Geochemical challenge to earthquake prediction.
Wakita, H
1996-01-01
The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented. PMID:11607665
Stein, Ross S.
2007-01-01
Summary: To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M=6.65 (A=537 km²).
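A hedged sketch of the bilinear magnitude-area scaling: the coefficients below are from the published Hanks and Bakun (2002) relation rather than this summary, and the 2007 update adjusts the constants slightly, so treat the exact values as illustrative.

```python
import math

def hanks_bakun_magnitude(area_km2):
    """Bilinear magnitude-area relation (Hanks & Bakun, 2002 coefficients):
    M = log10(A) + 3.98 for A <= 537 km^2, and
    M = (4/3)*log10(A) + 3.07 for larger rupture areas."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07
```

The two branches meet continuously at the A = 537 km² breakpoint (M near 6.7 with these coefficients; the summary quotes the slope change at M = 6.65 for the updated version).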
Magnitude-frequency relations for earthquakes using a statistical mechanical approach
Rundle, J.B.
1993-12-10
At very small magnitudes, observations indicate that the frequency of occurrence of earthquakes is significantly smaller than the frequency predicted by simple Gutenberg-Richter statistics. Previously, it has been suggested that the dearth of small events is related to a rapid rise in scattering and attenuation at high frequencies and the consequent inability to detect these events with standard arrays of seismometers. However, several recent studies have suggested that instrumentation cannot account for the entire effect and that the decline in frequency may be real. Working from this hypothesis, we derive a magnitude-frequency relation for very small earthquakes that is based upon the postulate that the system of moving plates can be treated as a system not too far removed from equilibrium. As a result, it is assumed that in the steady state, the probability P[E] that a segment of fault has a free energy E is proportional to the exponential of the free energy, P ∝ exp[-E/EN]. In equilibrium statistical mechanics this distribution is called the Boltzmann distribution. The probability weight EN is the space-time steady-state average of the free energy of the segment. Earthquakes are then treated as fluctuations in the free energy of the segments. With these assumptions, it is shown that magnitude-frequency relations can be obtained. For example, previous results obtained by the author can be recovered under the same assumptions as before, for intermediate and large events, the distinction being whether the event is of a linear dimension sufficient to extend the entire width of the brittle zone. Additionally, a magnitude-frequency relation is obtained that is in satisfactory agreement with the data at very small magnitudes. At these magnitudes, departures from frequencies predicted by Gutenberg-Richter statistics are found using a model that accounts for the finite thickness of the inelastic part of the fault zone.
Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong
2014-01-01
More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method previously applied to the MW 9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW 7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude. PMID:25346344
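The two early warning parameters can be sketched from a displacement trace. The definitions below follow the standard Pd/τc formulations in the EEW literature and are not necessarily the exact processing used in this paper (in particular, Pd is normally measured on a high-pass-filtered displacement record):

```python
import numpy as np

def early_warning_params(u, dt):
    """Pd and tau_c over a P-wave time window (PTW).

    Pd    : peak absolute displacement within the window.
    tau_c : 2*pi*sqrt(sum(u^2)/sum(udot^2)), a characteristic period that
            grows with event size until it saturates for great earthquakes.
    """
    udot = np.gradient(u, dt)
    r = np.sum(u ** 2) / np.sum(udot ** 2)
    return np.max(np.abs(u)), 2.0 * np.pi * np.sqrt(r)

# Synthetic check: for a pure sine wave, tau_c recovers the period 1/f.
dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = 0.02 * np.sin(2.0 * np.pi * 1.0 * t)  # 1 Hz, 2 cm amplitude
pd, tau_c = early_warning_params(u, dt)
```

Progressively lengthening the PTW, as the paper does, simply means recomputing these two numbers on a growing slice of `u`.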
Brief communication "The magnitude 7.2 Bohol earthquake, Philippines"
NASA Astrophysics Data System (ADS)
Lagmay, A. M. F.; Eco, R.
2014-03-01
A devastating earthquake struck Bohol, Philippines on 15 October 2013. The earthquake originated at 12 km depth from an unmapped reverse fault, which manifested on the surface for several kilometers with a maximum vertical displacement of 3 m. The earthquake resulted in 222 fatalities, with damage to infrastructure estimated at US$52.06 million. Widespread landslides and sinkholes formed in the predominantly limestone region during the earthquake. These remain a significant threat to communities, as destabilized hillside slopes, landslide-dammed rivers and incipient sinkholes are still vulnerable to collapse, possibly triggered by aftershocks and heavy rains in the upcoming months of November and December.
NASA Astrophysics Data System (ADS)
Maurer, J.; Segall, P.
2015-12-01
Understanding and predicting earthquake magnitudes from injection-induced seismicity is critically important for estimating hazard due to injection operations. A particular problem has been that the largest event often occurs post shut-in. A rigorous analysis would require modeling all stages of earthquake nucleation, propagation, and arrest, and not just initiation. We present a simple conceptual model for predicting the distribution of earthquake magnitudes during and following injection, building on the analysis of Segall & Lu (2015). The analysis requires several assumptions: (1) the distribution of source dimensions follows a Gutenberg-Richter distribution; (2) in environments where the background ratio of shear to effective normal stress is low, the size of induced events is limited by the volume perturbed by injection (e.g., Shapiro et al., 2013; McGarr, 2014), and (3) the perturbed volume can be approximated by diffusion in a homogeneous medium. Evidence for the second assumption comes from numerical studies that indicate the background ratio of shear to normal stress controls how far an earthquake rupture, once initiated, can grow (Dunham et al., 2011; Schmitt et al., submitted). We derive analytical expressions that give the rate of events of a given magnitude as the product of three terms: the time-dependent rate of nucleations, the probability of nucleating on a source of given size (from the Gutenberg-Richter distribution), and a time-dependent geometrical factor. We verify our results using simulations and demonstrate characteristics observed in real induced sequences, such as time-dependent b-values and the occurrence of the largest event post injection. We compare results to Segall & Lu (2015) as well as example datasets. Future work includes using 2D numerical simulations to test our results and assumptions; in particular, investigating how background shear stress and fault roughness control rupture extent.
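The conceptual model lends itself to a toy Monte Carlo: nucleation draws potential source sizes from a Gutenberg-Richter distribution, but an event is admitted only if its source radius fits inside the diffusion-controlled perturbed volume, which keeps growing after shut-in. All parameter values here (diffusivity, stress drop, b-value) are illustrative assumptions, not values from the paper:

```python
import math, random

random.seed(0)

D = 0.05   # hydraulic diffusivity, km^2/day (assumed)
b = 1.0    # Gutenberg-Richter b-value (assumed)
m_min = 0.0

def sample_gr_magnitude():
    # Inverse-CDF sampling of GR: P(M > m) = 10**(-b*(m - m_min))
    return m_min - math.log10(random.random()) / b

def source_radius_km(m):
    # Circular-crack scaling sketch: M0 = 10**(1.5*m + 9.1) N*m,
    # r = (7*M0 / (16*dsigma))**(1/3) with an assumed 3 MPa stress drop.
    m0 = 10 ** (1.5 * m + 9.1)
    dsigma = 3.0e6
    return ((7.0 * m0) / (16.0 * dsigma)) ** (1.0 / 3.0) / 1000.0

def admitted_magnitudes(t_days, n_nucleations):
    """Keep only nucleations whose rupture fits inside the pressure front
    r(t) = sqrt(4*D*t); later times therefore admit larger magnitudes."""
    r_front = math.sqrt(4.0 * D * t_days)
    mags = [sample_gr_magnitude() for _ in range(n_nucleations)]
    return [m for m in mags if source_radius_km(m) <= r_front]
```

Because the front radius grows with time, the magnitude cap rises even after injection stops, which is one intuition for the largest event occurring post shut-in.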
Bayesian Predictive Distribution for the Magnitude of the Largest Aftershock
NASA Astrophysics Data System (ADS)
Shcherbakov, R.
2014-12-01
Aftershock sequences, which follow large earthquakes, last hundreds of days and are characterized by well defined frequency-magnitude and spatio-temporal distributions. The largest aftershocks in a sequence constitute significant hazard and can inflict additional damage to infrastructure. Therefore, the estimation of the magnitude of possible largest aftershocks in a sequence is of high importance. In this work, we propose a statistical model based on Bayesian analysis and extreme value statistics to describe the distribution of magnitudes of the largest aftershocks in a sequence. We derive an analytical expression for a Bayesian predictive distribution function for the magnitude of the largest expected aftershock and compute the corresponding confidence intervals. We assume that the occurrence of aftershocks can be modeled, to a good approximation, by a non-homogeneous Poisson process with a temporal event rate given by the modified Omori law. We also assume that the frequency-magnitude statistics of aftershocks can be approximated by Gutenberg-Richter scaling. We apply our analysis to 19 prominent aftershock sequences, which occurred in the last 30 years, in order to compute the Bayesian predictive distributions and the corresponding confidence intervals. In the analysis, we use the information of the early aftershocks in the sequences (in the first 1, 10, and 30 days after the main shock) to estimate retrospectively the confidence intervals for the magnitude of the expected largest aftershocks. We demonstrate by analysing 19 past sequences that in many cases we are able to constrain the magnitudes of the largest aftershocks. For example, this includes the analysis of the Darfield (Christchurch) aftershock sequence. The proposed analysis can be used for the earthquake hazard assessment and forecasting associated with the occurrence of large aftershocks. The improvement in instrumental data associated with early aftershocks can greatly enhance the analysis and
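The frequentist ingredients of this analysis (a non-homogeneous Poisson process with modified Omori rate, plus Gutenberg-Richter magnitudes) can be sketched directly; the paper's Bayesian layer, which integrates these over parameter uncertainty, is omitted here, and the parameter values are illustrative:

```python
import math

def prob_max_below(m, T, K=250.0, c=0.05, p=1.1, b=1.0, mc=3.0):
    """P(largest aftershock magnitude <= m within T days) for a
    non-homogeneous Poisson process with modified Omori rate
    lambda(t) = K/(t+c)**p (events above completeness mc) and
    Gutenberg-Richter magnitudes P(M > m) = 10**(-b*(m-mc)).
    Illustrative parameter values; not a fit to any sequence."""
    # Expected number of aftershocks (>= mc) in [0, T]
    if abs(p - 1.0) < 1e-9:
        big_lambda = K * (math.log(T + c) - math.log(c))
    else:
        big_lambda = K * (c ** (1 - p) - (T + c) ** (1 - p)) / (p - 1)
    # Thin the process to events exceeding m; void probability of the
    # thinned process is the CDF of the sequence maximum.
    return math.exp(-big_lambda * 10 ** (-b * (m - mc)))
```

Confidence intervals for the largest expected aftershock follow by inverting this CDF at the desired probability levels.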
Earthquakes clustering based on the magnitude and the depths in Molluca Province
Wattimanela, H. J.; Pasaribu, U. S.; Indratno, S. W.; Puspito, A. N. T.
2015-12-22
In this paper, we present a model to classify the earthquakes that occurred in Molluca Province. We use the K-Means clustering method to classify the earthquakes based on their magnitude and depth. The result can be used for disaster mitigation and for designing evacuation routes in Molluca Province.
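A minimal version of the clustering step, run on synthetic (magnitude, depth) pairs standing in for the Molluca catalog, which this sketch does not have access to. Note the z-scoring: without it, depth in kilometers would dominate the Euclidean distance over magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic catalog: a shallow moderate-magnitude group and a deep
# larger-magnitude group (illustrative values only).
shallow = np.column_stack([rng.normal(4.5, 0.3, 60), rng.normal(15.0, 5.0, 60)])
deep = np.column_stack([rng.normal(5.5, 0.3, 60), rng.normal(150.0, 20.0, 60)])
catalog = np.vstack([shallow, deep])

def kmeans(x, k, iters=50):
    """Plain Lloyd's algorithm on z-scored features."""
    z = (x - x.mean(axis=0)) / x.std(axis=0)
    centers = z[rng.choice(len(z), k, replace=False)]
    for _ in range(iters):
        d = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([z[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(catalog, k=2)
```

With well-separated groups the two clusters recover the shallow/deep split, which is the kind of partition the paper proposes for mitigation planning.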
NASA Astrophysics Data System (ADS)
Fang, Rongxin; Shi, Chuang; Song, Weiwei; Wang, Guangxing; Liu, Jingnan
2014-05-01
For earthquake early warning (EEW) and emergency response, earthquake magnitude is the crucial parameter to be determined rapidly and correctly. However, a reliable and rapid measurement of the magnitude of an earthquake is a challenging problem, especially for large earthquakes (M>8). Here, the magnitude is determined from the GPS displacement waveform derived from real-time precise point positioning (PPP). The real-time PPP results are evaluated with an accuracy of 1 cm in the horizontal components and 2-3 cm in the vertical components, indicating that real-time PPP is capable of detecting seismic waves with amplitudes of 1 cm horizontally and 2-3 cm vertically at a confidence level of 95%. To estimate the magnitude, the unique information provided by the GPS displacement waveform is the horizontal peak displacement amplitude. We show that the empirical relation of Gutenberg (1945) between peak displacement and magnitude holds up to nearly magnitude 9.0 when displacements are measured with GPS. We tested the proposed method on three large earthquakes. For the 2010 Mw 7.2 El Mayor-Cucapah earthquake, our method provides a magnitude of M7.18±0.18. For the 2011 Mw 9.0 Tohoku-Oki earthquake the estimated magnitude is M8.74±0.06, and for the 2010 Mw 8.8 Maule earthquake the value is M8.7±0.1 after excluding some near-field stations. We therefore conclude that, depending on the availability of high-rate GPS observations, a robust magnitude of up to 9.0 for a point-source earthquake can be estimated within tens of seconds to a few minutes after an event using a few GPS stations close to the epicenter. Such a rapid magnitude estimate is feasible and could serve as a prerequisite for tsunami early warning, fast source inversion, and emergency response.
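A hedged sketch of magnitude estimation from GPS peak ground displacement (PGD). The functional form and coefficient values below follow published PGD-scaling studies and are assumptions for illustration, not the Gutenberg (1945) calibration used in this paper:

```python
import math

# Assumed empirical form: log10(PGD_cm) = A + B*M + C*M*log10(R_km),
# with PGD in cm and hypocentral distance R in km. Coefficients are
# illustrative placeholders in the style of PGD-scaling literature.
A, B, C = -5.013, 1.219, -0.178

def magnitude_from_pgd(pgd_cm, r_km):
    """Invert the scaling for magnitude given one station's PGD and distance."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))
```

In practice, single-station estimates from several near-field GPS stations are averaged, mirroring the multi-station approach that gives the paper its quoted uncertainties.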
Earthquake prediction: Simple methods for complex phenomena
NASA Astrophysics Data System (ADS)
Luen, Bradley
2010-09-01
Earthquake predictions are often either based on stochastic models, or tested using stochastic models. Tests of predictions often tacitly assume predictions do not depend on past seismicity, which is false. We construct a naive predictor that, following each large earthquake, predicts another large earthquake will occur nearby soon. Because this "automatic alarm" strategy exploits clustering, it succeeds beyond "chance" according to a test that holds the predictions fixed. Some researchers try to remove clustering from earthquake catalogs and model the remaining events. There have been claims that the declustered catalogs are Poisson on the basis of statistical tests we show to be weak. Better tests show that declustered catalogs are not Poisson. In fact, there is evidence that events in declustered catalogs do not have exchangeable times given the locations, a necessary condition for the Poisson. If seismicity followed a stochastic process, an optimal predictor would turn on an alarm when the conditional intensity is high. The Epidemic-Type Aftershock (ETAS) model is a popular point process model that includes clustering. It has many parameters, but is still a simplification of seismicity. Estimating the model is difficult, and estimated parameters often give a non-stationary model. Even if the model is ETAS, temporal predictions based on the ETAS conditional intensity are not much better than those of magnitude-dependent automatic (MDA) alarms, a much simpler strategy with only one parameter instead of five. For a catalog of Southern Californian seismicity, ETAS predictions again offer only slight improvement over MDA alarms.
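The MDA alarm idea can be sketched as follows: after every event, switch an alarm on for a duration that grows exponentially with the triggering magnitude. The dissertation's strategy has a single tunable parameter; this illustration exposes two assumed knobs (a base duration and a reference magnitude) for clarity, and the exact parameterization may differ:

```python
def mda_alarm_windows(catalog, tau_days=1.0, m_ref=4.0):
    """catalog: list of (t_days, magnitude) events.
    Each event opens an alarm of duration tau_days * 10**(m - m_ref);
    overlapping alarms are merged into disjoint intervals."""
    raw = [(t, t + tau_days * 10 ** (m - m_ref)) for t, m in catalog]
    raw.sort()
    merged = []
    for start, end in raw:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def hits(targets, windows):
    """Count target event times falling inside any alarm window."""
    return sum(any(s <= t <= e for s, e in windows) for t in targets)
```

Scoring a strategy then reduces to trading the hit count against the total alarm time, which is how such predictors are compared with ETAS-based alarms.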
Microearthquake networks and earthquake prediction
Lee, W.H.K.; Steward, S. W.
1979-01-01
A microearthquake network is a group of highly sensitive seismographic stations designed primarily to record local earthquakes of magnitude less than 3. Depending on the application, a microearthquake network will consist of several stations or as many as a few hundred. They are usually classified as either permanent or temporary. In a permanent network, the seismic signal from each station is telemetered to a central recording site to cut down on operating costs and to allow more efficient and up-to-date processing of the data. However, telemetering can restrict the choice of station sites because of the line-of-sight requirement for radio transmission or the need for telephone lines. Temporary networks are designed to be extremely portable and completely self-contained so that they can be deployed very quickly. They are most valuable for recording aftershocks of a major earthquake or for studies in remote areas.
NASA Astrophysics Data System (ADS)
Mignan, A.
2011-12-01
The capacity of a seismic network to detect small earthquakes can be evaluated by investigating the shape of the Frequency-Magnitude Distribution (FMD) of the resultant earthquake catalogue. The non-cumulative FMD takes the form N(m) ∝ exp(-βm)q(m) where N(m) is the number of events of magnitude m, exp(-βm) the Gutenberg-Richter law and q(m) a probability function. I propose an exponential detection function of the form q(m) = exp(κ(m-Mc)) for m < Mc with Mc the magnitude of completeness, magnitude at which N(m) is maximal. With Mc varying in space due to the heterogeneous distribution of seismic stations in a network, the bulk FMD of an earthquake catalogue corresponds to the sum of local FMDs with respective Mc(x,y), which leads to the gradual curvature of the bulk FMD below max(Mc(x,y)). More complicated FMD shapes are expected if the catalogue is derived from multiple network configurations. The model predictions are verified in the case of Southern California and Nevada. Only slight variations of the detection parameter k = κ/ln(10) are observed within a given region, with k = 3.84 ± 0.66 for Southern California and k = 2.84 ± 0.77 for Nevada, assuming Mc constant in 2° by 2° cells. Synthetic catalogues, which follow the exponential model, can reproduce reasonably well the FMDs observed for Southern California and Nevada by using only c. 15% of the total number of observed events. The proposed model has important implications in Mc mapping procedures and allows use of the full magnitude range for subsequent seismicity analyses.
Earthquake prediction with electromagnetic phenomena
NASA Astrophysics Data System (ADS)
Hayakawa, Masashi
2016-02-01
Short-term earthquake (EQ) prediction, defined as prospective prediction on a time scale of about one week, is considered one of the most important and urgent topics for humankind. If such short-term prediction were realized, casualties would be drastically reduced. Unlike conventional seismic measurement, we have proposed the use of electromagnetic phenomena as precursors to EQs, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reasons why we are interested in electromagnetics, the history of seismo-electromagnetics, ionospheric perturbations as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.
Dim prospects for earthquake prediction
NASA Astrophysics Data System (ADS)
Geller, Robert J.
I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].
Modified-Fibonacci-Dual-Lucas method for earthquake prediction
NASA Astrophysics Data System (ADS)
Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.
2015-06-01
The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events both locally and globally. Predicting the location of an earthquake's epicenter is one difficult challenge; the timing and magnitude are the others. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity, and the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each occurred earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (<6.5R) can be predicted with a very good accuracy window (±1 day). In this contribution we present a modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) and for <6.5R. The successes are counted for each one of the 86 earthquake seeds, and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with
Demographic factors predict magnitude of conditioned fear.
Rosenbaum, Blake L; Bui, Eric; Marin, Marie-France; Holt, Daphne J; Lasko, Natasha B; Pitman, Roger K; Orr, Scott P; Milad, Mohammed R
2015-10-01
There is substantial variability across individuals in the magnitudes of their skin conductance (SC) responses during the acquisition and extinction of conditioned fear. To manage this variability, subjects may be matched for demographic variables, such as age, gender and education. However, limited data exist addressing how much variability in conditioned SC responses is actually explained by these variables. The present study assessed the influence of age, gender and education on the SC responses of 222 subjects who underwent the same differential conditioning paradigm. The demographic variables were found to predict a small but significant amount of variability in conditioned responding during fear acquisition, but not fear extinction learning or extinction recall. A larger differential change in SC during acquisition was associated with more education. Older participants and women showed smaller differential SC during acquisition. Our findings support the need to consider age, gender and education when studying fear acquisition but not necessarily when examining fear extinction learning and recall. Variability in demographic factors across studies may partially explain the difficulty in reproducing some SC findings. PMID:26151498
NASA Astrophysics Data System (ADS)
Fang, Rongxin; Shi, Chuang; Song, Weiwei; Wang, Guangxing; Liu, Jingnan
2014-01-01
For earthquake and tsunami early warning and emergency response, earthquake magnitude is the crucial parameter to be determined rapidly and correctly. However, a reliable and rapid measurement of the magnitude of an earthquake is a challenging problem, especially for large earthquakes (M > 8). Here, the magnitude is determined from the GPS displacement waveform derived from real-time precise point positioning (RTPPP). RTPPP results are evaluated with an accuracy of 1 cm in the horizontal components and 2-3 cm in the vertical components, indicating that RTPPP is capable of detecting seismic waves with amplitudes of 1 cm horizontally and 2-3 cm vertically at a confidence level of 95 per cent. To estimate the magnitude, the unique information provided by the GPS displacement waveform is the horizontal peak displacement amplitude. We show that the empirical relation of Gutenberg (1945) between peak displacement and magnitude holds up to nearly magnitude 9.0 when displacements are measured with GPS. We tested the proposed method on three large earthquakes. For the 2010 Mw 7.2 El Mayor-Cucapah earthquake, our method provides a magnitude of M7.18 ± 0.18. For the 2011 Mw 9.0 Tohoku-oki earthquake the estimated magnitude is M8.74 ± 0.06, and for the 2010 Mw 8.8 Maule earthquake the value is M8.7 ± 0.1 after excluding some near-field stations. We therefore conclude that, depending on the availability of high-rate GPS observations, a robust magnitude of up to 9.0 for a point-source earthquake can be estimated within tens of seconds or a few minutes after an event using a few GPS stations close to the epicentre. Such a rapid magnitude estimate could serve as a prerequisite for tsunami early warning, fast source inversion and emergency response.
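The generic workflow behind a peak-displacement magnitude scale of this kind can be sketched as follows. Everything below is hypothetical: the calibration events, distances and fitted coefficients are invented for illustration, and the study itself relies on Gutenberg's (1945) empirical relation rather than this ad hoc fit.

```python
import numpy as np

# Illustrative only: calibrate a scale of the assumed form
#   log10(PD) = a*Mw + b*log10(R) + c
# on a few events, then invert it for magnitude. All data and
# coefficients are hypothetical.

# Hypothetical calibration set: (Mw, hypocentral distance km, peak disp. m)
events = np.array([
    [7.2, 100.0, 0.12],
    [8.8, 300.0, 0.80],
    [9.0, 150.0, 3.00],
    [7.9, 120.0, 0.35],
])
Mw, R, PD = events.T

# Least-squares fit of the scaling coefficients a, b, c.
A = np.column_stack([Mw, np.log10(R), np.ones_like(Mw)])
coef, *_ = np.linalg.lstsq(A, np.log10(PD), rcond=None)
a, b, c = coef

def magnitude_from_pd(pd_m, r_km):
    """Invert the fitted relation for moment magnitude."""
    return (np.log10(pd_m) - b * np.log10(r_km) - c) / a
```

With physically sensible data, a comes out positive (larger events produce larger displacements) and b negative (amplitude decays with distance).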
The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?
Kilb, Debi; Gomberg, J.
1999-01-01
We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental
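The bootstrap percentile logic described above can be sketched in a few lines. Here `estimate_magnitude` is a hypothetical stand-in for any of the three intensity-based techniques (Bakun-Wentworth, Boxer, MEEP), and the intensity data are invented; only the resampling and percentile machinery is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_magnitude(intensities):
    # Hypothetical proxy: magnitude grows with the mean assigned intensity.
    return 1.0 + 0.6 * np.mean(intensities)

# Invented intensity assignments for one historical earthquake.
intensities = np.array([5, 6, 6, 7, 7, 7, 8, 8, 9, 9], dtype=float)

# Resample the intensity data with replacement and recompute the
# magnitude estimate for each bootstrap sample.
boot = np.array([
    estimate_magnitude(rng.choice(intensities, size=intensities.size))
    for _ in range(2000)
])

m_preferred = np.median(boot)             # preferred magnitude
lo, hi = np.percentile(boot, [16, 84])    # bounds enclosing 68% of bootstraps
```

The same idea extends to locations: the preferred epicenter is the locus of maximum bootstrap spatial density, and confidence regions come from density contours.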
Gambling scores for earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang
2010-04-01
This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
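The fair-odds bookkeeping behind the gambling score can be illustrated with a toy example. The payout rule below is a simplified reading of the scheme just described, and all numbers are made up: the reference model assigns each alarm a probability p0, and fair odds pay bet·(1 − p0)/p0 on success, so a forecaster no better than the reference gains nothing on average.

```python
# Toy gambling-score bookkeeping: a forecaster bets reputation points on
# each alarm; the reference (house) model sets the fair odds.

def gambling_score_update(points, bet, p0, success):
    """Return reputation after one prediction is settled at fair odds.

    p0 is the reference model's probability of the predicted event.
    Expected gain at fair odds is p0*bet*(1-p0)/p0 - (1-p0)*bet = 0.
    """
    if success:
        return points + bet * (1.0 - p0) / p0
    return points - bet

points = 100.0
# Three alarms, each betting 1 point against a reference probability of 0.1:
for success in (True, False, True):
    points = gambling_score_update(points, 1.0, 0.1, success)
# Two successes gain 9 points each; one failure loses the 1-point bet.
```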
Efficiency test of earthquake prediction around Thessaloniki from electrotelluric precursors
NASA Astrophysics Data System (ADS)
Meyer, K.; Varotsos, P.; Alexopoulos, K.; Nomicos, K.
1985-11-01
Since the completion of the network in January 1983, the electric field of the earth has been continuously monitored at four sites near Thessaloniki, the capital of northern Greece. From the present study and from previous investigations using similar measurements in Greece, it is evident that transient changes of the electrotelluric field occur prior to earthquakes. The analysis of these electric forerunners leads in many cases to a successful prediction of the epicentral area, the magnitude and the time of the impending event. Predictions prior to regional earthquakes are issued and documented by telegram. From November 1983 until the end of May 1984 twelve earthquakes (ML > 3.5) occurred in the vicinity of Thessaloniki. Ten of these were predicted and warnings given by telegram, whereas two smaller seismic events were missed. Two additional predictions were unsuccessful. Independent of their magnitudes, predicted events took place within a time window of 6 hours to 6 days after the observation of the electrotelluric anomalies. The accuracy of the predicted epicenters in eight cases is better than 100 km, which corresponds roughly to the mean distance between the electric stations. Magnitude estimates deviate by less than 0.5 magnitude units from the seismically observed ones. Considering the two largest earthquakes, it is shown that the probability of making each of these predictions by chance is of the order of 10⁻².
Source time function properties indicate a strain drop independent of earthquake depth and magnitude
NASA Astrophysics Data System (ADS)
Vallee, Martin
2014-05-01
Movement of the tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, i.e. the ratio of seismic slip over the dimension of the ruptured fault. SCARDEC, a recently developed method, gives access to this information through the systematic determination of earthquake source time functions (STFs). STFs describe the integrated spatio-temporal history of the earthquake process, and their maximum value can be related to the amount of stress or strain released during the earthquake. Here I analyse all earthquakes with magnitudes greater than 6 occurring in the last 20 years, and thus provide a catalogue of 1700 STFs which sample all possible seismic depths. Analysis of this new database reveals that the strain drop remains on average the same for all earthquakes, independent of magnitude and depth. In other words, it is shown that, independent of the earthquake depth, magnitude 6 and larger earthquakes keep on average a similar ratio between seismic slip and dimension of the main slip patch. This invariance implies that deep earthquakes are even more similar than previously thought to their shallow counterparts, a puzzling finding as shallow and deep earthquakes should originate from different physical mechanisms. Concretely, the ratio between slip and patch dimension is on the order of 10⁻⁵-10⁻⁴, with extreme values only 8 times lower or larger at the 95% confidence interval. Besides the implications for mechanisms of deep earthquake generation, this limited variability has practical implications for realistic earthquake scenarios.
Prediction of earthquake-triggered landslide event sizes
NASA Astrophysics Data System (ADS)
Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy
2016-04-01
Seismically induced landslides are a major environmental effect of earthquakes, which may significantly contribute to related losses. Moreover, in paleoseismology landslide event sizes are an important proxy for the estimation of the intensity and magnitude of past earthquakes, thus allowing us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake event has occurred. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970) the combination and relative weight of the factors were calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked. One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important
The Magnitude 6.7 Northridge, California, Earthquake of January 17, 1994
NASA Technical Reports Server (NTRS)
Donnellan, A.
1994-01-01
The most damaging earthquake in the United States since 1906 struck northern Los Angeles on January 17, 1994. The magnitude 6.7 Northridge earthquake produced a maximum of more than 3 meters of reverse (up-dip) slip on a south-dipping thrust fault rooted under the San Fernando Valley and projecting north under the Santa Susana Mountains.
Earthquake prediction decision and risk matrix
NASA Astrophysics Data System (ADS)
Zou, Qi-Jia
1993-08-01
The issuance of an earthquake prediction inevitably causes widespread social responses. This paper discusses comprehensive decision-making for issuing earthquake predictions, taking social and economic costs into account. A method of matrix decision for earthquake prediction (MDEP), based on the risk matrix, is proposed. The goal of the decision is to find the best manner of issuing an earthquake prediction so as to minimize the total economic losses. The establishment and calculation of the risk matrix are discussed, and the decision results obtained with and without consideration of economic factors are compared through examples.
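The risk-matrix decision idea reduces to a minimal expected-loss computation. The loss values and event probability below are hypothetical, chosen only to show the mechanics: for each action, weight the losses under each outcome by the outcome probabilities and pick the action with the smallest expected loss.

```python
import numpy as np

# Toy risk matrix: loss[i][j] is the loss (arbitrary units) of action i
# under outcome j. Issuing a prediction costs evacuation/disruption either
# way but reduces casualties; not issuing is cheap unless the earthquake
# actually strikes. All numbers are hypothetical.

actions = ["issue prediction", "do not issue"]
loss = np.array([
    [30.0, 10.0],     # issue:       quake occurs / no quake
    [100.0, 0.0],     # do not issue: quake occurs / no quake
])

p_quake = 0.2                          # assumed probability of the event
p = np.array([p_quake, 1.0 - p_quake])

expected_loss = loss @ p               # expected loss for each action
best = actions[int(np.argmin(expected_loss))]
```

With these numbers, issuing costs 0.2·30 + 0.8·10 = 14 in expectation versus 0.2·100 = 20 for staying silent, so the matrix recommends issuing; a smaller p_quake would flip the decision.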
Chile2015: Lévy Flight and Long-Range Correlation Analysis of Earthquake Magnitudes in Chile
NASA Astrophysics Data System (ADS)
Beccar-Varela, Maria P.; Gonzalez-Huizar, Hector; Mariani, Maria C.; Serpa, Laura F.; Tweneboah, Osei K.
2016-07-01
The stochastic Truncated Lévy Flight model and detrended fluctuation analysis (DFA) are used to investigate the temporal distribution of earthquake magnitudes in Chile. We show that the Lévy Flight is appropriate for modeling the time series of earthquake magnitudes. Furthermore, DFA shows that these events present memory effects, suggesting that the magnitude of impending earthquakes depends on the magnitudes of previous earthquakes. Based on this dependency, we use a non-linear regression to estimate the magnitude of the 2015 M8.3 Illapel earthquake from the magnitudes of the previous events.
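A minimal DFA implementation illustrates the memory test used above. It is applied here to synthetic uncorrelated magnitudes (a toy exponential, Gutenberg-Richter-like series), so the scaling exponent should come out near 0.5; exponents above 0.5 would indicate the persistence reported for the Chilean catalogue.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis; returns exponent alpha."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # Slope of log F(s) versus log s is the DFA exponent alpha.
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(42)
mags = 4.0 + rng.exponential(0.5, size=4096)   # toy uncorrelated magnitudes
alpha = dfa(mags, scales=[16, 32, 64, 128, 256])
```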
Quantitative Earthquake Prediction on Global and Regional Scales
Kossobokov, Vladimir G.
2006-03-23
The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate classification, comparison and optimization of earthquake predictions. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at the cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and
The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event
Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.
2003-01-01
The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.
NASA Astrophysics Data System (ADS)
Wang, Z.; Hu, C.
2012-12-01
Maximum magnitude and recurrence interval of the large earthquakes are key parameters for seismic hazard assessment in the central and eastern United States. Determination of these two parameters is quite difficult in the region, however. For example, the estimated maximum magnitudes of the 1811-12 New Madrid sequence are in the range of M6.6 to M8.2, whereas the estimated recurrence intervals are in the range of about 500 to several thousand years. These large variations of maximum magnitude and recurrence interval for the large earthquakes lead to significant variation of estimated seismic hazards in the central and eastern United States. There are several approaches being used to estimate the magnitudes and recurrence intervals, such as historical intensity analysis, geodetic data analysis, and paleo-seismic investigation. We will discuss the approaches that are currently being used to estimate maximum magnitude and recurrence interval of the large earthquakes in the central United States.
A geometric frequency-magnitude scaling transition: Measuring b = 1.5 for large earthquakes
NASA Astrophysics Data System (ADS)
Yoder, Mark R.; Holliday, James R.; Turcotte, Donald L.; Rundle, John B.
2012-04-01
We identify two distinct scaling regimes in the frequency-magnitude distribution of global earthquakes. Specifically, we measure the scaling exponent b = 1.0 for "small" earthquakes with 5.5 < m < 7.6 and b = 1.5 for "large" earthquakes with 7.6 < m < 9.0. This transition, at mt = 7.6, can be explained by geometric constraints on the rupture. In conjunction with supporting literature, this corroborates theories in favor of fully self-similar and magnitude-independent earthquake physics. We also show that the scaling behavior and abrupt transition between the scaling regimes imply that earthquake ruptures have compact shapes and smooth rupture-fronts.
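b-values such as those quoted above are commonly measured with the Aki/Utsu maximum-likelihood estimator, b = log10(e) / (mean(m) − Mc) for magnitudes above the completeness cut Mc. The sketch below applies it to a synthetic Gutenberg-Richter catalogue (simulated data, not the global catalogue used in the study) and recovers the input b.

```python
import numpy as np

def b_value(mags, mc):
    """Maximum-likelihood b-value (Aki 1965); magnitudes assumed >= mc."""
    return np.log10(np.e) / (np.mean(mags) - mc)

rng = np.random.default_rng(1)

# Synthetic catalogue with b = 1: Gutenberg-Richter magnitudes above a
# cut Mc are exponentially distributed with mean Mc + log10(e)/b.
mc, b_true = 5.5, 1.0
mags = mc + rng.exponential(np.log10(np.e) / b_true, size=50_000)

b_hat = b_value(mags, mc)   # should recover a value close to 1.0
```

Measuring b separately above and below a candidate transition magnitude such as mt = 7.6 is one way to exhibit the two regimes the abstract describes.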
Maximum earthquake magnitudes along different sections of the North Anatolian fault zone
NASA Astrophysics Data System (ADS)
Bohnhoff, Marco; Martínez-Garzón, Patricia; Bulut, Fatih; Stierle, Eva; Ben-Zion, Yehuda
2016-04-01
Constraining the maximum likely magnitude of future earthquakes on continental transform faults has fundamental consequences for the expected seismic hazard. Since the recurrence time for those earthquakes is typically longer than a century, such estimates rely primarily on well-documented historical earthquake catalogs, when available. Here we discuss the maximum observed earthquake magnitudes along different sections of the North Anatolian Fault Zone (NAFZ) in relation to the age of the fault activity, cumulative offset, slip rate and maximum length of coherent fault segments. The findings are based on a newly compiled catalog of historical earthquakes in the region, using the extensive literary sources that exist owing to the long civilization record. We find that the largest M7.8-8.0 earthquakes are exclusively observed along the older eastern part of the NAFZ that also has longer coherent fault segments. In contrast, the maximum observed events on the younger western part where the fault branches into two or more strands are smaller. No first-order relations between maximum magnitudes and fault offset or slip rates are found. The results suggest that the maximum expected earthquake magnitude in the densely populated Marmara-Istanbul region would probably not exceed M7.5. The findings are consistent with available knowledge for the San Andreas Fault and Dead Sea Transform, and can help in estimating hazard potential associated with different sections of large transform faults.
NASA Astrophysics Data System (ADS)
Suarez, G.; Jiménez, G.
2013-12-01
Two large earthquakes occurred in the Trans-Mexican Volcanic Belt (TMVB) in the twentieth century: a Mw 6.9 earthquake near the town of Acambay in 1912, and a Mw 6.4 event near the city of Jalapa in 1920. Both events took place in the crust and reflect the tectonic deformation of the TMVB. In addition to these two instrumental earthquakes, the historical record in Mexico, which spans approximately the past 450 years, contains a large volume of macroseismic information suggesting the presence of crustal earthquakes similar to those that took place in 1912 and 1920. The catalog of macroseismic data in Mexico was carefully reviewed, searching for crustal events in the TMVB. In total, twelve potential earthquakes were identified. The data were geo-referenced, an intensity was assigned on the Modified Mercalli Scale (MMS), and events were collated based on the dates reported by the references. The method developed by Bakun and Wentworth (1997) was used to estimate the magnitude and epicentral location of these historical earthquakes. Because only two instrumental earthquakes of similar magnitudes exist, it was not possible to construct an attenuation calibration curve of magnitude versus distance; instead, several published attenuation curves were used. The calibration curve determined for California yielded the best results for both magnitude and epicentral location for the twentieth-century events. Using this calibration curve, the magnitude and location of several historical events were determined. Our results indicate that over the past 450 years, at least six earthquakes larger than magnitude M 6 have occurred in the TMVB. Three of these, the earthquakes of 1568, 1858 and 1875, appear to have magnitudes larger than M 7. Furthermore, the distribution of these historical earthquakes spans the TMVB in its entirety and is not restricted to specific areas. The presence of these relatively large, crustal events that take place near the
NASA Astrophysics Data System (ADS)
Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay
2016-06-01
Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief in the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bound by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and the anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.
The magnitude 6.7 Northridge, California, earthquake of 17 January 1994
Jones, L.; Aki, K.; Boore, D.; Celebi, M.; Donnellan, A.; Hall, J.; Harris, R.; Hauksson, E.; Heaton, T.; Hough, S.; Hudnut, K.; Hutton, K.; Johnston, M.; Joyner, W.; Kanamori, H.; Marshall, G.; Michael, A.; Mori, J.; Murray, M.; Ponti, D.; Reasenberg, P.; Schwartz, D.; Seeber, L.; Shakal, A.; Simpson, R.; Thio, H.; Tinsley, J.; Todorovska, M.; Trifunac, M.; Wald, D.; Zoback, M.L.
1994-01-01
The most costly American earthquake since 1906 struck Los Angeles on 17 January 1994. The magnitude 6.7 Northridge earthquake resulted from more than 3 meters of reverse slip on a 15-kilometer-long south-dipping thrust fault that raised the Santa Susana mountains by as much as 70 centimeters. The fault appears to be truncated by the fault that broke in the 1971 San Fernando earthquake at a depth of 8 kilometers. Of these two events, the Northridge earthquake caused many times more damage, primarily because its causative fault is directly under the city. Many types of structures were damaged, but the fracture of welds in steel-frame buildings was the greatest surprise. The Northridge earthquake emphasizes the hazard posed to Los Angeles by concealed thrust faults and the potential for strong ground shaking in moderate earthquakes.
Earthquake Prediction: Is It Better Not to Know?
ERIC Educational Resources Information Center
MOSAIC, 1977
1977-01-01
Discusses economic, social and political consequences of earthquake prediction. Reviews impact of prediction on China's recent (February, 1975) earthquake. Diagrams a chain of likely economic consequences from predicting an earthquake. (CS)
Coseismic and postseismic slip of the 2011 magnitude-9 Tohoku-Oki earthquake.
Ozawa, Shinzaburo; Nishimura, Takuya; Suito, Hisashi; Kobayashi, Tomokazu; Tobita, Mikio; Imakiire, Tetsuro
2011-07-21
Most large earthquakes occur along an oceanic trench, where an oceanic plate subducts beneath a continental plate. Massive earthquakes with a moment magnitude, Mw, of nine have been known to occur in only a few areas, including Chile, Alaska, Kamchatka and Sumatra. No historical records exist of a Mw = 9 earthquake along the Japan trench, where the Pacific plate subducts beneath the Okhotsk plate, with the possible exception of the AD 869 Jogan earthquake, the magnitude of which has not been well constrained. However, the strain accumulation rate estimated there from recent geodetic observations is much higher than the average strain rate released in previous interplate earthquakes. This finding raises the question of how such areas release the accumulated strain. A megathrust earthquake with Mw = 9.0 (hereafter referred to as the Tohoku-Oki earthquake) occurred on 11 March 2011, rupturing the plate boundary off the Pacific coast of northeastern Japan. Here we report the distributions of the coseismic slip and postseismic slip as determined from ground displacement detected using a network based on the Global Positioning System. The coseismic slip area extends approximately 400 km along the Japan trench, matching the area of the pre-seismic locked zone. The afterslip has begun to overlap the coseismic slip area and extends into the surrounding region. In particular, the afterslip area reached a depth of approximately 100 km, with Mw = 8.3, on 25 March 2011. Because the Tohoku-Oki earthquake released the strain accumulated for several hundred years, the paradox of the strain budget imbalance may be partly resolved. This earthquake reminds us of the potential for Mw ≈ 9 earthquakes to occur along other trench systems, even if no past evidence of such events exists. Therefore, it is imperative that strain accumulation be monitored using a space geodetic technique to assess earthquake potential. PMID:21677648
NASA Astrophysics Data System (ADS)
Ellsworth, W. L.
2015-12-01
Earthquake activity in the central United States has increased dramatically since 2009, principally driven by injection of wastewater coproduced with oil and gas. The elevation of pore pressure from the collective influence of many disposal wells has created an unintended experiment that probes both the state of stress and the architecture of the fluid plumbing and fault systems through the earthquakes it induces. These earthquakes primarily release tectonic stress rather than accommodation stresses from injection. Results to date suggest that the aggregated magnitude-frequency distribution (MFD) of these earthquakes differs from that of natural tectonic earthquakes in the same region, for which the b-value is ~1.0. In Kansas, Oklahoma and Texas alone, more than 1100 earthquakes Mw ≥ 3 occurred between January 2014 and June 2015, but only 32 were Mw ≥ 4 and none were as large as Mw 5. Why is this so? Either the b-value is high (> 1.5) or the MFD deviates from log-linear form at large magnitude. Where catalogs from local networks are available, such as in southern Kansas, b-values are normal (~1.0) for small magnitude events (M < 3). The deficit in larger-magnitude events could be an artifact of a short observation period, or could reflect a decreased potential for large earthquakes. According to the prevailing paradigm, injection will induce an earthquake when (1) the pressure change encounters a preexisting fault favorably oriented in the tectonic stress field; and (2) the pore-pressure perturbation at the hypocenter is sufficient to overcome the frictional strength of the fault. Most induced earthquakes occur where the injection pressure has attenuated to a small fraction of the seismic stress drop, implying that the nucleation point was highly stressed. The population statistics of faults satisfying (1) could be the cause of this MFD if there are many small faults (dimension < 1 km) and few large ones in a critically stressed crust.
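The b-value arithmetic behind the "Why is this so?" question can be reproduced from the counts the abstract gives. The sketch below (not the author's analysis) uses the standard Gutenberg-Richter form log10 N(>=M) = a - b*M:

```python
import math

# Aggregate counts reported for Kansas, Oklahoma and Texas,
# January 2014 - June 2015 (numbers taken from the abstract).
n_m3 = 1100   # earthquakes with Mw >= 3
n_m4 = 32     # earthquakes with Mw >= 4

# Gutenberg-Richter: log10 N(>=M) = a - b*M, so
# N(>=4)/N(>=3) = 10**(-b), i.e. b = log10(N3/N4).
b_implied = math.log10(n_m3 / n_m4)
print(f"implied b-value: {b_implied:.2f}")   # comes out above 1.5

# For comparison, a 'normal' b = 1.0 would predict far more Mw >= 4 events:
expected_m4 = n_m3 * 10 ** (-1.0)
print(f"expected Mw>=4 count if b = 1.0: {expected_m4:.0f}")
```

With b = 1.0 roughly 110 events of Mw ≥ 4 would be expected against the 32 observed, which is exactly the tension between a high b-value and a non-log-linear MFD that the abstract raises.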
NASA Astrophysics Data System (ADS)
Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.
2004-12-01
The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
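A minimal version of the lab's comparison between interevent-time histograms and a Poisson model can be simulated directly; the event rate and sample count below are arbitrary illustrative choices, not the class data:

```python
import math
import random

random.seed(0)

# Simulate a Poisson process of events at mean rate lam (events/day); the
# interevent times of such a process follow an exponential distribution,
# the comparison the lab draws between earthquakes and "blockquakes".
lam = 0.5
interevent = [random.expovariate(lam) for _ in range(10000)]

mean_dt = sum(interevent) / len(interevent)
print(f"mean interevent time: {mean_dt:.2f} days (theory: {1 / lam:.2f})")

# For an exponential distribution, the fraction of gaps longer than t
# should be exp(-lam * t):
t = 4.0
frac = sum(dt > t for dt in interevent) / len(interevent)
print(f"P(gap > {t} d): {frac:.3f} vs exp(-lam*t) = {math.exp(-lam * t):.3f}")
```

Binning `interevent` into a histogram and overlaying lam*exp(-lam*t) reproduces the exponential fit the students saw for global earthquake data.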
The energy-magnitude scaling law for Ms ≤ 5.5 earthquakes
NASA Astrophysics Data System (ADS)
Wang, Jeen-Hwa
2015-04-01
The scaling law of seismic radiated energy, Es, versus surface-wave magnitude, Ms, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with Ms > 5.5. In this review study, we examine whether this law is valid for 0 < Ms ≤ 5.5, using earthquakes occurring in different regions. A comparison of the data points of log(Es) versus Ms with Gutenberg and Richter's law leads to the conclusion that the law remains valid for earthquakes with 0 < Ms ≤ 5.5.
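For readers who want to reproduce the comparison, the Gutenberg-Richter (1956) energy-magnitude relation, log10(Es) = 11.8 + 1.5 Ms with Es in erg, can be sketched as:

```python
import math

def radiated_energy_erg(ms: float) -> float:
    """Seismic radiated energy (erg) from surface-wave magnitude via the
    Gutenberg-Richter (1956) relation log10(Es) = 11.8 + 1.5 * Ms."""
    return 10 ** (11.8 + 1.5 * ms)

# The law is log-linear: each unit of Ms multiplies Es by 10**1.5 ~ 31.6,
# which is what the log(Es)-versus-Ms comparison in the review tests.
for ms in (1.0, 3.0, 5.5):
    print(f"Ms {ms}: Es ~ {radiated_energy_erg(ms):.3e} erg")
```

Plotting log10 of observed energies against Ms and checking that the points fall on a line of slope 1.5 and intercept 11.8 is the essence of the test described in the abstract.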
How to assess magnitudes of paleo-earthquakes from multiple observations
NASA Astrophysics Data System (ADS)
Hintersberger, Esther; Decker, Kurt
2016-04-01
An important aspect of fault characterisation for seismic hazard assessment is the magnitude of paleo-earthquakes. Especially in regions of low or moderate seismicity, paleo-magnitudes are normally much larger than those of historical earthquakes and therefore provide essential information about the seismic potential and expected maximum magnitudes of a certain region. In general, these paleo-earthquake magnitudes are based either on surface rupture length or on surface displacement observed at trenching sites. Several well-established correlations make it possible to link the observed surface displacement to a certain magnitude. However, the combination of more than one observation is still rare and not well established. We present here a method, based on a probabilistic approach proposed by Biasi and Weldon (2006), to combine several observations in order to better constrain the possible magnitude range of a paleo-earthquake. Extrapolating the approach of Biasi and Weldon (2006), the single-observation probability density functions (PDFs) are assumed to be independent of each other. Following this line, the common PDF for all observed surface displacements generated by one earthquake is the product of all single-displacement PDFs. In order to test our method, we use surface displacement data for modern earthquakes whose magnitudes have been determined from instrumental records. For randomly selected "observations", we calculated the associated PDFs for each "observation point". We then combined the PDFs into one common PDF for an increasing number of "observations". Plotting the most probable magnitudes against the number of combined "observations", the resulting range of most probable magnitudes is very close to the magnitude derived by instrumental methods. Testing our method with real trenching observations, we used the results of a paleoseismological investigation within the Vienna Pull-Apart Basin (Austria), where three trenches were opened along the normal
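The product-of-PDFs idea can be illustrated with a short sketch. The displacement-magnitude regression and scatter value below are illustrative stand-ins (a Wells-Coppersmith style relation), not the regressions used in the study:

```python
import math

def displacement_pdf(mags, observed_slip_m, sigma=0.3):
    """Unnormalized Gaussian PDF over magnitude for one measured surface
    displacement, assuming (hypothetically) a Wells-Coppersmith style
    relation M = 6.93 + 0.82 * log10(D) with scatter sigma."""
    m_pred = 6.93 + 0.82 * math.log10(observed_slip_m)
    return [math.exp(-0.5 * ((m - m_pred) / sigma) ** 2) for m in mags]

def combine(pdfs):
    """Product of independent single-observation PDFs, renormalized --
    the combination rule extrapolated from Biasi and Weldon (2006)."""
    combined = [math.prod(vals) for vals in zip(*pdfs)]
    total = sum(combined)
    return [v / total for v in combined]

mags = [5.0 + 0.01 * i for i in range(301)]   # magnitude grid 5.0 .. 8.0
slips = [1.2, 0.8, 1.5]                       # hypothetical slips (m) at three sites
post = combine([displacement_pdf(mags, d) for d in slips])
best = mags[post.index(max(post))]
print(f"most probable magnitude: {best:.2f}")
```

Adding more simulated "observations" to `slips` narrows the combined PDF, which is the convergence behaviour the abstract describes.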
Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake
NASA Astrophysics Data System (ADS)
Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.
2011-12-01
It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience who may not necessarily be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using "snapshot" static images to represent earthquake data is becoming obsolete, and the favored way to explain complex wave propagation inside the solid earth and interactions among earthquakes is now visualization that includes auditory information. Here, we convert seismic data into visualizations that include sounds, a process known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc). Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not
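The core audification step, replaying seismic samples at an audio frame rate, needs no special software. This sketch (not the authors' MATLAB package) writes a hypothetical 100 Hz seismogram as a WAV file using only the Python standard library:

```python
import math
import struct
import wave

def audify(samples, native_rate_hz, speedup, path):
    """Write a seismogram as a WAV file, 'audifying' it by replaying the
    native-rate samples at native_rate_hz * speedup, which shifts sub-audio
    seismic frequencies into the audible band."""
    peak = max(abs(s) for s in samples) or 1.0
    pcm = [int(32767 * s / peak) for s in samples]      # normalize to 16-bit
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(int(native_rate_hz * speedup))   # the audification step
        w.writeframes(struct.pack(f"<{len(pcm)}h", *pcm))

# Hypothetical 100 Hz seismogram: a decaying 1 Hz wavelet, 60 s long.
sr = 100.0
sig = [math.exp(-t / 20.0) * math.sin(2 * math.pi * 1.0 * t)
       for t in (i / sr for i in range(int(60 * sr)))]
audify(sig, sr, speedup=200, path="quake.wav")   # 1 Hz plays back at 200 Hz
```

A speedup of 200 turns a one-minute, 1 Hz signal into a 0.3-second, 200 Hz tone; real workflows choose the speedup so the dominant seismic frequencies land in a comfortable listening range.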
Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587-1996)
NASA Astrophysics Data System (ADS)
Beauval, Céline; Yepes, Hugo; Bakun, William H.; Egred, José; Alvarado, Alexandra; Singaucho, Juan-Carlos
2010-06-01
The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower magnitude but shallower and potentially more destructive earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (~2.5 million inhabitants). A total population of ~6 million inhabitants currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory are not available for periods earlier than 1990 (the beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied in order to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587-1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). The intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity >=VI) and 117 (Riobamba, 1797, Intensity >=III). The bootstrap resampling technique is coupled with the B&W method to derive geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extension of the area delineating the intensity centre location at the 67 per cent confidence level (+/-1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching IX, X and XI degrees. Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding
A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes
Pasyanos, M E
2009-11-19
This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional Mo or regional Mw. Conceptually, this method combines an earthquake source model with an attenuation model and geometrical spreading, which account for the propagation, in order to utilize regional phase amplitudes of any phase and frequency. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined from sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional Mo to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find the magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields a more meaningful quantity, the seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
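The final moment-to-magnitude step described here is the standard Hanks-Kanamori conversion. In the sketch below the per-phase moment values are hypothetical, and averaging in log space is one plausible reading of "averaged to robustly determine this parameter":

```python
import math

def moment_to_mw(m0_nm: float) -> float:
    """Hanks-Kanamori: Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def average_mw(moment_estimates_nm):
    """Average independent per-phase moment estimates in log space, then
    convert once to Mw -- a sketch of combining many phase/frequency
    source terms for robustness."""
    log_m0 = sum(math.log10(m) for m in moment_estimates_nm) / len(moment_estimates_nm)
    return moment_to_mw(10 ** log_m0)

# Hypothetical per-phase moment estimates (N*m) for one event:
estimates = [3.2e17, 2.8e17, 3.6e17, 3.0e17]
print(f"regional Mw ~ {average_mw(estimates):.2f}")
```

Averaging in log space keeps a single outlying amplitude from dominating the result, which is one reason multi-phase moment averaging tends to be more stable than a single-phase magnitude.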
Predicting the endpoints of earthquake ruptures.
Wesnousky, Steven G
2006-11-16
The active fault traces on which earthquakes occur are generally not continuous, and are commonly composed of segments that are separated by discontinuities that appear as steps in map-view. Stress concentrations resulting from slip at such discontinuities may slow or stop rupture propagation and hence play a controlling role in limiting the length of earthquake rupture. Here I examine the mapped surface rupture traces of 22 historical strike-slip earthquakes with rupture lengths ranging between 10 and 420 km. I show that about two-thirds of the endpoints of strike-slip earthquake ruptures are associated with fault steps or the termini of active fault traces, and that there exists a limiting dimension of fault step (3-4 km) above which earthquake ruptures do not propagate and below which rupture propagation ceases only about 40 per cent of the time. The results are of practical importance to seismic hazard analysis where effort is spent attempting to place limits on the probable length of future earthquakes on mapped active faults. Physical insight to the dynamics of the earthquake rupture process is further gained with the observation that the limiting dimension appears to be largely independent of the earthquake rupture length. It follows that the magnitude of stress changes and the volume affected by those stress changes at the driving edge of laterally propagating ruptures are largely similar and invariable during the rupture process regardless of the distance an event has propagated or will propagate. PMID:17108963
NASA Astrophysics Data System (ADS)
Rong, Y.; Bird, P.; Jackson, D. D.
2016-04-01
The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models, which were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. The SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from the tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicates that the authors of SHARE were aware of the potentially higher seismic activity in these zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from the tectonic moment rate, but lower than what the historical data show. For the other two
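The strain-rate-to-moment-rate step can be sketched with a Kostrov-style summation. The shear modulus, seismogenic thickness, zone size and strain rate below are illustrative assumptions, not SHARE or Global Strain Rate Map values:

```python
import math

MU = 3.0e10   # shear modulus (Pa), a standard crustal value
H = 15e3      # assumed seismogenic thickness (m)

def tectonic_moment_rate(strain_rate_per_yr: float, area_m2: float) -> float:
    """Kostrov-style conversion: moment rate (N*m/yr) ~ 2*mu*H*A*strain rate."""
    return 2.0 * MU * H * area_m2 * strain_rate_per_yr

def rate_of_events(moment_rate: float, mag: float) -> float:
    """Events per year of magnitude `mag` needed to release `moment_rate`,
    in the simplified case where all moment goes into events of one size."""
    m0 = 10 ** (1.5 * mag + 9.1)   # Hanks-Kanamori moment (N*m)
    return moment_rate / m0

# Hypothetical zone group: 300 x 300 km, strain rate 3e-9 per year.
mdot = tectonic_moment_rate(3e-9, 300e3 * 300e3)
r7 = rate_of_events(mdot, 7.0)
print(f"tectonic moment rate: {mdot:.2e} N*m/yr")
print(f"implied M7 recurrence: {1 / r7:.0f} yr")
```

Comparing a rate inferred this way against a catalog-derived seismic moment rate is, in simplified form, the consistency check the study applies to each group of SHARE zones.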
Implications of fault constitutive properties for earthquake prediction.
Dieterich, J H; Kilgore, B
1996-01-01
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666
Bakun, W.H.; Johnston, A.C.; Hopper, M.G.
2003-01-01
We use 28 calibration events (3.7 ≤ M ≤ 7.3) from Texas to the Grand Banks, Newfoundland, to develop a Modified Mercalli intensity (MMI) model and associated site corrections for estimating source parameters of historical earthquakes in eastern North America. The model, MMI = 1.41 + 1.68 × M − 0.00345 × Δ − 2.08 × log(Δ), where Δ is the distance in kilometers from the epicenter and M is moment magnitude, provides unbiased estimates of M and its uncertainty and, if site corrections are used, of source location. The model can be used for the analysis of historical earthquakes with only a few MMI assignments. We use this model, MMI site corrections, and Bakun and Wentworth's (1997) technique to estimate M and the epicenter for three important historical earthquakes. The intensity magnitude MI is 6.1 for the 18 November 1755 earthquake near Cape Ann, Massachusetts; 6.0 for the 5 January 1843 earthquake near Marked Tree, Arkansas; and 6.0 for the 31 October 1895 earthquake. The 1895 event probably occurred in southern Illinois, about 100 km north of the site of significant ground failure effects near Charleston, Missouri.
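The quoted model can be applied directly. The grid search below is a simplified stand-in for the Bakun-Wentworth (1997) trial-epicenter procedure, and the intensity assignments are hypothetical:

```python
import math

def predicted_mmi(m: float, dist_km: float) -> float:
    """The eastern North America model quoted in the abstract:
    MMI = 1.41 + 1.68*M - 0.00345*dist - 2.08*log10(dist)."""
    return 1.41 + 1.68 * m - 0.00345 * dist_km - 2.08 * math.log10(dist_km)

def intensity_magnitude(observations):
    """Least-squares grid search for the M that best fits a set of
    (MMI, epicentral distance) assignments -- a simplified sketch of the
    Bakun-Wentworth approach (no site corrections, fixed epicenter)."""
    best_m, best_misfit = None, float("inf")
    for m in [3.0 + 0.01 * i for i in range(501)]:   # M grid 3.0 .. 8.0
        misfit = sum((mmi - predicted_mmi(m, d)) ** 2 for mmi, d in observations)
        if misfit < best_misfit:
            best_m, best_misfit = m, misfit
    return best_m

# Hypothetical MMI assignments as (intensity, epicentral distance km):
obs = [(7.0, 30.0), (6.0, 90.0), (5.0, 200.0), (4.0, 400.0)]
print(f"intensity magnitude MI ~ {intensity_magnitude(obs):.2f}")
```

The full method also grid-searches over trial epicenters and applies per-site corrections, which is how it recovers the source location as well as M from only a few assignments.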
Mori, J.; Abercrombie, R.E.
1997-01-01
Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more smaller earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
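The "18 times more likely" statement follows from the Gutenberg-Richter form. The b-values below are illustrative choices picked to show how the ratio arises, not the depth-dependent values the paper reports:

```python
# Under Gutenberg-Richter statistics, the probability that an M2
# initiation grows to >= M5.5 scales as 10**(-b * (5.5 - 2.0)).
def growth_probability(b: float, m_start: float = 2.0, m_target: float = 5.5) -> float:
    return 10 ** (-b * (m_target - m_start))

b_shallow = 1.0    # illustrative b at 0-3 km depth
b_deep = 0.64      # illustrative lower b at 9-12 km depth

ratio = growth_probability(b_deep) / growth_probability(b_shallow)
print(f"deep initiation is ~{ratio:.0f}x more likely to grow to M >= 5.5")
```

Because the magnitude gap is 3.5 units, even a modest b-value difference (here 0.36) compounds into an order-of-magnitude difference in growth probability, which is how a depth-dependent b produces the factor of ~18.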
Fault-Zone Maturity Defines Maximum Earthquake Magnitude: The case of the North Anatolian Fault Zone
NASA Astrophysics Data System (ADS)
Bohnhoff, Marco; Bulut, Fatih; Stierle, Eva; Martinez-Garzon, Patricia; Ben-Zion, Yehuda
2015-04-01
Estimating the maximum likely magnitude of future earthquakes on transform faults near large metropolitan areas has fundamental consequences for the expected hazard. Here we show that the maximum earthquakes on different sections of the North Anatolian Fault Zone (NAFZ) scale with the duration of fault zone activity, cumulative offset and length of individual fault segments. The findings are based on a compiled catalogue of historical earthquakes in the region, using the extensive literary sources that exist due to the long civilization record. We find that the largest earthquakes (M~8) are exclusively observed along the well-developed part of the fault zone in the east. In contrast, the western part is still in a juvenile or transitional stage with historical earthquakes not exceeding M=7.4. This limits the current seismic hazard to NW Turkey and its largest regional population and economical center Istanbul. Our findings for the NAFZ are consistent with data from the two other major transform faults, the San Andreas fault in California and the Dead Sea Transform in the Middle East. The results indicate that maximum earthquake magnitudes generally scale with fault-zone evolution.
Earthquake source inversion of tsunami runup prediction
NASA Astrophysics Data System (ADS)
Sekar, Anusha
Our goal is to study two inverse problems: using seismic data to invert for earthquake parameters and using tide gauge data to invert for earthquake parameters. We focus on the feasibility of using a combination of these inverse problems to improve tsunami runup prediction. A considerable part of the thesis is devoted to studying the seismic forward operator and its modeling using immersed interface methods. We develop an immersed interface method for solving the variable coefficient advection equation in one dimension with a propagating singularity and prove a convergence result for this method. We also prove a convergence result for the one-dimensional acoustic system of partial differential equations solved using immersed interface methods with internal boundary conditions. Such systems form the building blocks of the numerical model for the earthquake. For a simple earthquake-tsunami model, we observe a variety of possibilities in the recovery of the earthquake parameters and tsunami runup prediction. In some cases the data are insufficient either to invert for the earthquake parameters or to predict the runup. When more data are added, we are able to resolve the earthquake parameters with enough accuracy to predict the runup. We expect that this variety will be true in a real world three dimensional geometry as well.
Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake.
Hill, D P; Reasenberg, P A; Michael, A; Arabaz, W J; Beroza, G; Brumbaugh, D; Brune, J N; Castro, R; Davis, S; Depolo, D; Ellsworth, W L; Gomberg, J; Harmsen, S; House, L; Jackson, S M; Johnston, M J; Jones, L; Keller, R; Malone, S; Munguia, L; Nava, S; Pechmann, J C; Sanford, A; Simpson, R W; Smith, R B; Stark, M; Stickney, M; Vidal, A; Walter, S; Wong, V; Zollweg, J
1993-06-11
The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma). PMID:17810202
NASA Technical Reports Server (NTRS)
Rubin, C. M.
1996-01-01
Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.
NASA Astrophysics Data System (ADS)
Psimoulis, Panos; Dalguer, Luis; Houlie, Nicolas; Zhang, Youbing; Clinton, John; Rothacher, Markus; Giardini, Domenico
2013-04-01
The development of GNSS technology, with the potential of high-rate (up to 100 Hz) GNSS (GPS, GLONASS, Galileo, Compass) records, allows the monitoring of seismic ground motions. In this study we show the potential of estimating the earthquake magnitude (Mw) and the fault geometry parameters (slip, depth, length, rake, dip, strike) during the propagation of seismic waves, based on high-rate GPS network data and using a non-linear inversion algorithm. The examined area is the Valais (South-West Switzerland), where a permanent GPS network of 15 stations (COGEAR and AGNES GPS networks) is operational and where an earthquake of Mw ≈ 6 is possible approximately every 80 years. We test our methodology using synthetic events of magnitude 6.0-6.5 with normal-faulting mechanisms, consistent with most fault mechanisms of the area, for both surface and buried rupture. The epicentres are located in the Valais close to the epicentres of previous historical earthquakes. For each earthquake, synthetic seismic data (velocity records) were produced for 15 sites corresponding to the current GPS network sites in Valais. The synthetic seismic data were integrated into displacement time-series. By jointly using these time-series with the Bernese GNSS Software 5.1 (modified), 10 Hz sampling rate GPS records were generated, assuming noise with peak-to-peak amplitudes of ±1 cm and ±3 cm for the horizontal and vertical components, respectively. The GPS records were processed into kinematic time series, from which the seismic displacements were derived and inverted for the magnitude and the fault geometry parameters. The inversion results indicate that it is possible to estimate both the earthquake magnitude and the fault geometry parameters in real time (~10 seconds after the fault rupture). The accuracy of the results depends on the geometry of the GPS network and on the position of the earthquake epicentre.
Stein, Ross S.
2008-01-01
The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M=6.5 portion of data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from that used in Working Group (2003).
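The bilinear magnitude-area scaling described above can be sketched numerically. The coefficients below are back-derived from the abstract's own anchor point (slope change at Mw = 6.65, A = 537 km², with slope 1 below the break and 4/3 above it, consistent with area- versus length-scaling of slip); they are illustrative and are not the exact published Hanks and Bakun (2007) coefficients.

```python
import math

A_BREAK = 537.0   # km^2, slope-change area quoted in the text
M_BREAK = 6.65    # Mw at the break, as quoted in the text

def mw_from_area(area_km2: float) -> float:
    """Bilinear magnitude-area relation (illustrative coefficients).

    Slope 1 below the break (area-scaling of slip), slope 4/3 above it
    (length-scaling); the two branches are pinned to meet at
    (A_BREAK, M_BREAK) so the relation is continuous.
    """
    d = math.log10(area_km2) - math.log10(A_BREAK)
    if area_km2 <= A_BREAK:
        return M_BREAK + d
    return M_BREAK + (4.0 / 3.0) * d
```

Pinning both branches to the same break point is what makes the relation usable in a hazard model: a small change in inferred fault area never produces a jump in moment-magnitude.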
Stress Conditions at the Subduction Zone Inferred from Differential Earthquake Magnitudes
NASA Astrophysics Data System (ADS)
Choy, G. L.; Kirby, S. H.
2011-12-01
Moment magnitude MW and energy magnitude Me describe physically different aspects of the size of an earthquake. Me, being derived from radiated energy ES, is a measure of the seismic potential for damage. MW, being derived from seismic moment Mo, is a measure of the final static displacement of an earthquake. We examine the systematics of thrust earthquakes across the subduction zone environment by deriving differential magnitudes ΔM, where ΔM = Me - MW, for more than 1700 large shallow earthquakes (depth < 70 km) that occurred from 1987 to 2010. Although Me may vary by as much as 1 magnitude unit for any given MW, the scatter is not random. Most subduction thrust earthquakes located within a narrow zone at the top surface of the Wadati-Benioff zone (which are interpreted as events on the slab interface) have ΔM < -0.30, a value much lower than the global average of -0.17. Of these interface events, the subset of large earthquakes (MW > 7.0) with anomalously low energy radiation (i.e., ΔM < -0.50) has been associated with the class of tsunamigenic events known as slow earthquakes. However, anomalously low radiated energy was also found for more than 308 earthquakes of smaller magnitude (MW < 7.0). The locations of these low-energy earthquakes do not correlate with locations of known slow tsunami earthquakes. On the other hand, anomalously high energy radiation (ΔM > 0.0) was found in 163 thrust events (only 12% of all subduction events). These earthquakes typically occur in high-deformation zones that are intraslab, intracrustal, or downdip of obliquely convergent plate boundaries. Of these high-energy events, a subset of intraslab events was found that caused local tsunami wave heights. Apparent stress τa can be related to differential magnitude by ΔM = (2/3)[log(τa/μ) + 4.7], with μ being the shear modulus. As specific tectonic settings seem to have characteristic differential magnitudes, the relative stress conditions can be inferred.
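The apparent-stress relation quoted at the end of the abstract, ΔM = (2/3)[log10(τa/μ) + 4.7], is directly computable. A minimal sketch follows; the default shear modulus of 3 × 10^10 Pa is a conventional crustal value assumed here, not a number from the abstract.

```python
import math

def delta_m(apparent_stress_pa: float, shear_modulus_pa: float = 3.0e10) -> float:
    """Differential magnitude ΔM = Me - MW from apparent stress.

    Implements ΔM = (2/3) * [log10(τa/μ) + 4.7], the relation quoted
    in the abstract. μ defaults to a conventional crustal shear
    modulus of 3e10 Pa (an assumption, not from the source).
    """
    return (2.0 / 3.0) * (math.log10(apparent_stress_pa / shear_modulus_pa) + 4.7)
```

With μ = 3 × 10^10 Pa, the global-average ΔM of -0.17 cited in the abstract corresponds to an apparent stress of roughly 0.33 MPa, a plausible magnitude for subduction thrust events.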
Magnitude Problems in Historical Earthquake Catalogs and Their Impact on Seismic Hazard Assessment
NASA Astrophysics Data System (ADS)
Rong, Y.; Mahdyiar, M.; Shen-Tu, B.; Shabestari, K.; Guin, J.
2010-12-01
A reliable historical earthquake catalog is a critical component of any regional seismic hazard analysis. In Europe, a number of historical earthquake catalogs have been compiled and used in constructing national or regional seismic hazard maps, for instance, the Switzerland ECOS catalog by the Swiss Seismological Service (2002), the Italy CPTI catalog by the CPTI Working Group (2004), the Greece catalog by Papazachos et al. (2007), the CENEC (central, northern and northwestern Europe) catalog by Grünthal et al. (2009), the Turkey catalog by Kalafat et al. (2007), and the GSHAP catalog by the Global Seismic Hazard Assessment Program (1999). These catalogs spatially overlap with each other to a large extent and employ a uniform magnitude scale (Mw). A careful review of these catalogs has revealed significant magnitude problems which can substantially impact regional seismic hazard assessment: 1) Magnitudes for the same earthquakes in different catalogs are discrepant. Such discrepancies are mainly driven by the different regression relationships used to convert other magnitude scales or intensity into Mw. One of the consequences is that the magnitudes of many events in one catalog are systematically biased higher or lower with respect to those in another catalog. For example, the magnitudes of large historical earthquakes in the Italy CPTI catalog are systematically higher than those in the Switzerland ECOS catalog. 2) An abnormally high frequency of large-magnitude events is observed for some time periods in which intensities are the main available data. This phenomenon is observed in the Italy CPTI catalog for the period 1870 to 1930 and may be due to biased conversion from intensity to magnitude. 3) A systematic bias in magnitude results in biased estimates of the a- and b-values of the Gutenberg-Richter magnitude-frequency relationship. It also affects the determination of upper-bound magnitudes for various seismic source zones. All of these issues can lead to skewed seismic hazard results, or inconsistent
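Point 3) above concerns the Gutenberg-Richter a- and b-values. A standard way to estimate b from a catalog is Aki's (1965) maximum-likelihood estimator, sketched below without the magnitude-binning correction; it makes the bias mechanism concrete, since b depends only on the mean magnitude above the completeness threshold, so a magnitude conversion that shifts events relative to that threshold shifts b.

```python
import math

def b_value(magnitudes, m_c: float) -> float:
    """Aki (1965) maximum-likelihood b-value estimate.

    b = log10(e) / (mean(M) - Mc) over events with M >= Mc.
    Simplified sketch: no binning correction, no uncertainty estimate.
    """
    above = [m for m in magnitudes if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_c)
```

A uniform shift applied to both the magnitudes and the completeness threshold leaves b unchanged (only the a-value moves), whereas a conversion bias applied to magnitudes alone changes the mean excess above Mc and therefore biases b, the effect the abstract describes.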
Foreshocks Are Not Predictive of Future Earthquake Size
NASA Astrophysics Data System (ADS)
Page, M. T.; Felzer, K. R.; Michael, A. J.
2014-12-01
The standard model for the origin of foreshocks is that they are earthquakes that trigger aftershocks larger than themselves (Reasenberg and Jones, 1989). This can be formally expressed in terms of a cascade model. In this model, aftershock magnitudes follow the Gutenberg-Richter magnitude-frequency distribution, regardless of the size of the triggering earthquake, and aftershock timing and productivity follow Omori-Utsu scaling. An alternative hypothesis is that foreshocks are triggered incidentally by a nucleation process, such as pre-slip, that scales with mainshock size. If this were the case, foreshocks would potentially have predictive power of the mainshock magnitude. A number of predictions can be made from the cascade model, including the fraction of earthquakes that are foreshocks to larger events, the distribution of differences between foreshock and mainshock magnitudes, and the distribution of time lags between foreshocks and mainshocks. The last should follow the inverse Omori law, which will cause the appearance of an accelerating seismicity rate if multiple foreshock sequences are stacked (Helmstetter and Sornette, 2003). All of these predictions are consistent with observations (Helmstetter and Sornette, 2003; Felzer et al. 2004). If foreshocks were to scale with mainshock size, this would be strong evidence against the cascade model. Recently, Bouchon et al. (2013) claimed that the expected acceleration in stacked foreshock sequences before interplate earthquakes is higher prior to M≥6.5 mainshocks than smaller mainshocks. Our re-analysis fails to support the statistical significance of their results. In particular, we find that their catalogs are not complete to the level assumed, and their ETAS model underestimates inverse Omori behavior. To conclude, seismicity data to date is consistent with the hypothesis that the nucleation process is the same for earthquakes of all sizes.
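Under the cascade model described above, aftershock magnitudes follow the Gutenberg-Richter distribution regardless of the trigger's size, so the probability that a triggered event exceeds its trigger by at least ΔM magnitude units is simply 10^(-b·ΔM). A minimal sketch, with b = 1 assumed as a typical value:

```python
def p_exceed(delta_m_units: float, b: float = 1.0) -> float:
    """P(triggered event magnitude >= trigger magnitude + delta).

    Follows from the Gutenberg-Richter law: exceedance probability
    decays as 10**(-b * delta). b = 1 is a typical assumed value.
    """
    return 10 ** (-b * delta_m_units)
```

This is why foreshocks are expected to be rare but not special in the cascade model: roughly 1% of triggered events exceed their trigger by two magnitude units, with no dependence on the trigger's absolute size.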
Fast determination of earthquake magnitude and fault extent from real-time P-wave recordings
NASA Astrophysics Data System (ADS)
Colombelli, Simona; Zollo, Aldo
2015-08-01
This work is aimed at the automatic and fast characterization of the extended earthquake source, through the progressive measurement of the P-wave displacement amplitude along the recorded seismograms. We propose a straightforward methodology to quickly characterize the earthquake magnitude and the expected length of the rupture, and to provide an approximate estimate of the average stress drop to be used for Earthquake Early Warning and rapid response purposes. We test the methodology over a wide distance and magnitude range using a massive Japanese accelerogram data set. Our estimates of moment magnitude, source duration/length and stress drop are consistent with those obtained by using other techniques and analysing the whole seismic waveform. In particular, the retrieved source parameters follow a self-similar, constant stress-drop scaling (median stress drop = 0.71 MPa). For the M 9.0, 2011 Tohoku-Oki event, both magnitude and length are underestimated, due to the limited available P-wave time window (PTW) and to the low-frequency cut-off of the analysed data. We show that, in a simulated real-time mode, about 1-2 seconds would be required for the source parameter determination of M 4-5 events, 3-10 seconds for M 6-7 events, and 30-40 s for M 8-8.5 events. The proposed method can also provide a rapid evaluation of the average slip on the fault plane, which can be used as an additional discriminant for the tsunami potential associated with large-magnitude earthquakes occurring offshore.
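The constant stress-drop scaling reported above links moment and source size. A hedged sketch of that link, using two standard relations that are not necessarily the authors' exact formulation: the Hanks-Kanamori moment magnitude, M0 = 10^(1.5·Mw + 9.1) N·m, and the Eshelby circular-crack stress drop, Δσ = (7/16)·M0/r³.

```python
import math

def moment_from_mw(mw: float) -> float:
    """Seismic moment in N*m (Hanks-Kanamori convention)."""
    return 10 ** (1.5 * mw + 9.1)

def source_radius(m0_nm: float, stress_drop_pa: float) -> float:
    """Circular-crack source radius in meters.

    Inverts the Eshelby relation stress_drop = (7/16) * M0 / r**3.
    A sketch under self-similar, constant stress-drop scaling.
    """
    return (7.0 * m0_nm / (16.0 * stress_drop_pa)) ** (1.0 / 3.0)
```

With the paper's median stress drop of 0.71 MPa, an Mw 6.0 event maps to a source radius of roughly 9 km, the order of magnitude the P-wave time-window approach must resolve within a few seconds.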
Tullis, T E
1996-01-01
The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668
Which data provide the most useful information about maximum earthquake magnitudes?
NASA Astrophysics Data System (ADS)
Zoeller, G.; Holschneider, M.
2013-12-01
In recent publications, it has been shown that earthquake catalogs are useful for estimating the maximum expected earthquake magnitude in a future time horizon Tf. However, earthquake catalogs alone do not allow estimation of the maximum possible magnitude M (Tf = ∞) in a study area. Therefore, we focus on the question of which data might be helpful to constrain M. Assuming a doubly-truncated Gutenberg-Richter law and independent events, optimal estimates of M depend solely on the largest observed magnitude μ, regardless of all the other details in the catalog. For other models of the frequency-magnitude relation, this result holds approximately. We show that the maximum observed magnitude μT in a known time interval T in the past provides the most powerful information on M in terms of the smallest confidence intervals. However, if high levels of confidence are required, the upper bound of the confidence interval may diverge. Geological or tectonic data, e.g. strain rates, might be helpful if μT is not available; but these quantities can only serve as proxies for μT and will always lead to a higher degree of uncertainty and, therefore, to larger confidence intervals of M.
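The divergence of the upper confidence bound can be demonstrated with a simple frequentist construction (this is the author's-point-in-miniature, not the paper's exact estimator): under a doubly-truncated Gutenberg-Richter law with n independent events, the upper bound M_up solves P(max magnitude ≤ μ | M_up)^... = α, and when even M → ∞ cannot push that probability below α, no finite bound exists.

```python
import math

def gr_cdf(m: float, b: float, m0: float, m_max: float) -> float:
    """CDF of the doubly-truncated Gutenberg-Richter law on [m0, m_max]."""
    num = 1.0 - 10.0 ** (-b * (m - m0))
    den = 1.0 - 10.0 ** (-b * (m_max - m0))
    return num / den

def upper_bound_mmax(mu, n, b=1.0, m0=4.0, alpha=0.05, m_cap=40.0):
    """Upper confidence bound M_up solving gr_cdf(mu | M_up)**n = alpha.

    gr_cdf(mu | M)**n is P(largest of n events <= mu) and decreases
    in M. If it stays above alpha even for an effectively unbounded
    M (m_cap), the bound diverges and math.inf is returned.
    """
    if gr_cdf(mu, b, m0, m_cap) ** n > alpha:
        return math.inf
    lo, hi = mu, m_cap
    for _ in range(200):  # bisection on the monotone function
        mid = 0.5 * (lo + hi)
        if gr_cdf(mu, b, m0, mid) ** n > alpha:
            lo = mid
        else:
            hi = mid
    return hi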
Seismicity dynamics and earthquake predictability
NASA Astrophysics Data System (ADS)
Sobolev, G. A.
2011-02-01
Many factors complicate earthquake sequences, including the heterogeneity and self-similarity of the geological medium, the hierarchical structure of faults and stresses, and small-scale variations in the stresses from different sources. A seismic process is a type of nonlinear dissipative system demonstrating opposing trends towards order and chaos. Transitions from equilibrium to unstable equilibrium and local dynamic instability appear when there is an inflow of energy; reverse transitions appear when energy is dissipating. Several metastable areas of different scales exist in the seismically active region before an earthquake. Some earthquakes are preceded by precursory phenomena of different scales in space and time. These include long-term activation, seismic quiescence, foreshocks in the broad and narrow sense, hidden periodical vibrations, effects of the synchronization of seismic activity, and others. Such phenomena indicate that the dynamic system of the lithosphere is moving to a new state: catastrophe. A number of examples of medium-term and short-term precursors are shown in this paper. However, no precursors identified to date are clear and unambiguous: the percentage of missed targets and false alarms is high. Weak fluctuations from external and internal sources play a great role on the eve of an earthquake, and the occurrence time of the future event depends on the collective behavior of triggers. The main task is to improve the methods of metastable-zone detection and probabilistic forecasting.
Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating M with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of MS on rupture area did not result in a marked improvement
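The study's conclusion is that ordinary least squares is the appropriate regression model for M against log rupture length. A minimal OLS sketch of that form follows; the coefficients in the test are synthetic and illustrative, not Bonilla et al.'s published values.

```python
def ols(x, y):
    """Ordinary least squares fit y = intercept + slope * x.

    The regression form argued for in the study, applied to
    M versus log10(rupture length); coefficients used in any
    example are hypothetical, not the paper's.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope  # (intercept, slope)
```

Because the scatter is dominated by stochastic variance in the earthquake process rather than by measurement error, the simple OLS estimate is preferred over errors-in-variables models here.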
Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude
Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.
2016-01-01
We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body‐wave onset and the arrival time of the peak high‐frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5 ≤ Mw ≤ 7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high‐frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high‐frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower‐frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high‐frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
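The proportionality Mw ∝ 2 log Top implies an estimator of the form MTop = 2·log10(Top) + c, where c is a calibration constant fitted to regional data. The value of c below is purely illustrative (the abstract does not give it); what the sketch pins down is the slope of 2 magnitude units per decade of Top.

```python
import math

def m_top(top_seconds: float, c: float = 5.5) -> float:
    """Magnitude estimate from peak high-frequency arrival time Top.

    MTop = 2 * log10(Top) + c. The slope of 2 is the theoretical
    value if Top scales with source dimension under constant stress
    drop; c = 5.5 is a hypothetical calibration constant.
    """
    return 2.0 * math.log10(top_seconds) + c
```

In an early-warning context the estimate grows as the record lengthens, since Top cannot be measured until the peak has arrived; the rms residual of about 0.5 magnitude units quoted above bounds the precision of any such single-station estimate.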
Practical approaches to earthquake prediction and warning
NASA Astrophysics Data System (ADS)
Kisslinger, Carl
1984-04-01
The title chosen for this renewal of the U.S.-Japan prediction seminar series reflects optimism, perhaps more widespread in Japan than in the United States, that research on earthquake prediction has progressed to a stage at which it is appropriate to begin testing operational forecast systems. This is not to suggest that American researchers do not recognize very substantial gains in understanding earthquake processes and earthquake recurrence, but rather that we are at the point of initiating pilot prediction experiments rather than asserting that we are prepared to start making earthquake predictions in a routine mode.For the sixth time since 1964, with support from the National Science Foundation and the Japan Society for the Promotion of Science, as well as substantial support from the U.S. Geological Survey (U.S.G.S.) for participation of a good representation of its own scientists, earthquake specialists from the two countries came together on November 7-11, 1983, to review progress of the recent past and share ideas about promising directions for future efforts. If one counts the 1980 Ewing symposium on prediction, sponsored by Lamont-Doherty Geological Observatory, which, though multinational, served the same purpose, one finds a continuity in these interchanges that has made them especially productive and stimulating for both scientific communities. The conveners this time were Chris Scholz, Lamont-Doherty, for the United States and Tsuneji Rikitake, Nihon University, for Japan.
Analytical Conditions for Compact Earthquake Prediction Approaches
NASA Astrophysics Data System (ADS)
Sengor, T.
2009-04-01
The atmosphere and ionosphere contain non-uniform electric charge and current distributions during earthquake activity. These charges and currents move irregularly in advance of a future earthquake. The electromagnetic characteristics of the region over the earth therefore change into domains in which irregular transport of non-uniform electric charge is observed, so this paper studies the electromagnetism of a plasma that moves irregularly and contains non-uniform charge distributions. Such charge distributions are called irregular and non-uniform plasmas. An irregular, non-uniform plasma is called a seismo-plasma if it corresponds to a real earthquake activity that will actually occur. Some signals involving the above-mentioned coupling effects generate analytical conditions giving the predictability of seismic processes [1]-[5]. These conditions are discussed in this paper. References: [1] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes," IUGG Perugia 2007. [2] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes for Marmara Sea earthquakes," EGU 2008. [3] T. Sengor, "On the exact interaction mechanism of electromagnetically generated phenomena with significant earthquakes and the observations related the exact predictions before the significant earthquakes at July 1999-May 2000 period," Helsinki Univ. Tech. Electrom. Lab. Rept. 368, May 2001. [4] T. Sengor, "The Observational Findings Before The Great Earthquakes Of December 2004 And The Mechanism Extraction From Associated Electromagnetic Phenomena," Book of XXVIIIth URSI GA 2005, pp. 191, EGH.9 (01443) and Proceedings 2005 CD, New Delhi, India, Oct. 23-29, 2005. [5] T. Sengor, "The interaction mechanism among electromagnetic phenomena and geophysical-seismic-ionospheric phenomena with extraction for exact earthquake prediction genetics," 10
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal- or reverse-faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8. Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction. J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
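The significance levels quoted above follow from a simple null model (a plausible reading of the alarm-volume construction, not necessarily the authors' exact test): if earthquakes fell into alarms by chance with probability equal to the alarm's space-time fraction, the number of successes would be binomial, and the p-value is the binomial upper tail.

```python
import math

def p_value(successes: int, trials: int, alarm_fraction: float) -> float:
    """P(X >= successes) for X ~ Binomial(trials, alarm_fraction).

    Null hypothesis: each earthquake independently lands inside the
    alarm region with probability equal to its space-time fraction.
    """
    return sum(
        math.comb(trials, k)
        * alarm_fraction ** k
        * (1.0 - alarm_fraction) ** (trials - k)
        for k in range(successes, trials + 1)
    )
```

Plugging in the abstract's numbers: 5 of 5 magnitude-8 events inside 36% of the space-time volume gives a p-value of about 0.006 (significance beyond 99%), while 10 of 19 magnitude-7.5+ events inside 40% gives roughly 0.19 (significance near 81%), matching the figures reported.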
Seismomagnetic observation during the 8 July 1986 magnitude 5.9 North Palm Springs earthquake
Johnston, M.J.S.; Mueller, R.J.
1987-01-01
A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period.
Lahr, John C.
1999-01-01
This report provides Fortran source code and program manuals for HYPOELLIPSE, a computer program for determining hypocenters and magnitudes of near regional earthquakes and the ellipsoids that enclose the 68-percent confidence volumes of the computed hypocenters. HYPOELLIPSE was developed to meet the needs of U.S. Geological Survey (USGS) scientists studying crustal and sub-crustal earthquakes recorded by a sparse regional seismograph network. The program was extended to locate hypocenters of volcanic earthquakes recorded by seismographs distributed on and around the volcanic edifice, at elevations above and below the hypocenter. HYPOELLIPSE was used to locate events recorded by the USGS southern Alaska seismograph network from October 1971 to the early 1990s. Both UNIX and PC/DOS versions of the source code of the program are provided along with sample runs.
The role of the Federal government in the Parkfield earthquake prediction experiment
Filson, J.R.
1988-01-01
Earthquake prediction research in the United States is carried out under the aegis of the National Earthquake Hazards Reduction Act of 1977. One of the objectives of the act is "the implementation in all areas of high or moderate seismic risk, of a system (including personnel and procedures) for predicting damaging earthquakes and for identifying, evaluating, and accurately characterizing seismic hazards." Among the four Federal agencies working under the 1977 act, the U.S. Geological Survey (USGS) is responsible for earthquake prediction research and technological implementation. The USGS has adopted a goal that is stated quite simply: predict the time, place, and magnitude of damaging earthquakes. The Parkfield earthquake prediction experiment represents the most concentrated and visible effort to date to test progress toward this goal.
NASA Astrophysics Data System (ADS)
Hawthorne, J. C.; Simons, M.
2013-12-01
The recurrence intervals of repeating earthquakes raise the possibility that much of the slip associated with small earthquakes is aseismic. To test this hypothesis, we examine the co- and post-seismic strain changes associated with Mc 2 to 4 earthquakes on the San Andreas Fault. We consider several thousand events that occurred near USGS strainmeter SJT, at the northern end of the creeping section. Most of the strain changes associated with these events are below the noise level on a single record, so we bin the earthquakes into 3 to 5 groups according to their magnitude. We then invert for an average time history of strain per seismic moment for each group. The seismic moment M0 is assumed to scale as 10^(β·Mc), where Mc is the preferred magnitude in the NCSN catalog, and β is between 1.1 and 1.6. We try several approaches to account for the spatial pattern of strain, but we focus on the ε_(E-N) strain component (east extension minus north extension) because it is the most robust to model. Each of the estimated strain time series displays a step at the time of the earthquakes. The ratio of the strain step to seismic moment is larger for the bin with smaller events. If we assume that M0 ~ 10^(1.5·Mc), the ratio increases by a factor of 3 to 5 per unit decrease in Mc. This increase in strain per moment would imply that most of the slip within an hour of small events is aseismic. For instance, the aseismic moment of a Mc 2 earthquake would be at least 5 to 10 times the seismic moment. However, much of the variation in strain per seismic moment is eliminated for a smaller but still plausible value of β. If M0 ~ 10^(1.2·Mc), the strain per moment increases by about a factor of 2 per unit decrease in Mc.
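The sensitivity to β can be made concrete. A sketch assuming the Hanks-Kanamori form M0 = 10^(β·Mc + 9.1) N·m (β = 1.5 recovers standard moment magnitude; the constant 9.1 is that convention's value, not a number from the abstract):

```python
def moment(mc: float, beta: float = 1.5, const: float = 9.1) -> float:
    """Seismic moment in N*m, assuming M0 = 10**(beta * Mc + const).

    beta = 1.5 with const = 9.1 is the Hanks-Kanamori moment-magnitude
    convention; smaller beta values (down to 1.1) compress the moment
    range of a catalog, as considered in the study.
    """
    return 10 ** (beta * mc + const)
```

Lowering β from 1.5 to 1.2 reduces the inferred moment of an Mc 4 event by a factor of 10^1.2 ≈ 16 while reducing an Mc 2 event's moment by only 10^0.6 ≈ 4, so the strain-per-moment trend across magnitude bins flattens, which is exactly the β-dependence of the conclusion described above.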
Intermediate- and long-term earthquake prediction.
Sykes, L R
1996-04-30
Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study. PMID:11607658
Bakun, W.H.; Scotti, O.
2006-01-01
Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with epicentral distance Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors. Journal compilation © 2006 RAS.
Earthquake frequency-magnitude distribution and fractal dimension in mainland Southeast Asia
NASA Astrophysics Data System (ADS)
Pailoplee, Santi; Choowong, Montri
2014-12-01
The 2004 Sumatra and 2011 Tohoku earthquakes highlighted the need for a more accurate understanding of earthquake characteristics in both regions. In this study, both the a and b values of the frequency-magnitude distribution (FMD) and the fractal dimension (D_C) were investigated simultaneously for 13 seismic source zones recognized in mainland Southeast Asia (MLSEA). Using the completeness earthquake dataset, the calculated values of b and D_C were found to imply variations in seismotectonic stress. The D_C-b and D_C-(a/b) relationships were investigated to categorize the level of earthquake hazard of individual seismic source zones. The calibration curves illustrate a negative correlation between the D_C and b values (D_C = 2.80 - 1.22b) and a positive correlation between the D_C and a/b ratios (D_C = 0.27(a/b) - 0.01), with similar regression coefficients (R² = 0.65 to 0.68) for both regressions. According to the obtained relationships, the Hsenwi-Nanting and Red River fault zones revealed low stress accumulation. Conversely, the Sumatra-Andaman interplate and intraslab zones, the Andaman Basin, and the Sumatra fault zone were defined as high-tectonic-stress regions that may pose risks of generating large earthquakes in the future.
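The quantities entering these regressions can be illustrated with a standard b-value estimate. The sketch below uses the common maximum-likelihood estimator of Aki (1965) on a synthetic Gutenberg-Richter catalog, then applies the abstract's empirical D_C-b relation; the estimator and the synthetic catalog are illustrative assumptions, not necessarily the authors' procedure.

```python
import math
import random

def b_value_aki(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for continuous magnitudes
    at or above the completeness magnitude m_c."""
    above = [m for m in mags if m >= m_c]
    mean_excess = sum(above) / len(above) - m_c
    return math.log10(math.e) / mean_excess

# Synthetic Gutenberg-Richter catalog with a true b-value of 1.0:
# magnitude excesses above m_c are exponential with rate b * ln(10).
random.seed(42)
true_b, m_c = 1.0, 4.0
mags = [m_c + random.expovariate(true_b * math.log(10)) for _ in range(20000)]

b = b_value_aki(mags, m_c)
d_c = 2.80 - 1.22 * b  # empirical D_C-b relation quoted in the abstract
print(f"estimated b = {b:.2f}, implied D_C = {d_c:.2f}")
```

With b near 1, the quoted relation implies D_C ≈ 1.6; lower b (higher relative stress, by the usual interpretation) maps to higher D_C.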
A moment-tensor catalog for intermediate magnitude earthquakes in Mexico
NASA Astrophysics Data System (ADS)
Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Martínez-Peláez, Liliana; Franco, Sara; Iglesias Mendoza, Arturo
2016-04-01
Located among five tectonic plates, Mexico is one of the world's most seismically active regions. Earthquake focal mechanisms provide important information on the active tectonics. A widespread technique for estimating the earthquake magnitude and focal mechanism is inversion for the moment tensor, obtained by minimizing a misfit function that measures the difference between synthetic and observed seismograms. An important element in the estimation of the moment tensor is an appropriate velocity model, which allows for the calculation of accurate Green's functions, so that the differences between observed and synthetic seismograms are due to the source of the earthquake rather than the velocity model. However, calculating accurate synthetic seismograms becomes progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes (M > 5.0) excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these events, using 1D velocity models to compute Green's functions works well, and they are well characterized by the seismic moment tensors reported in global catalogs (e.g., USGS fast moment tensor solutions and GCMT). The opposite occurs for small and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle. Accurately modeling the Green's functions for the smaller events in a large heterogeneous area requires 3D or regionalized 1D models. To obtain a rapid estimate of earthquake magnitude, the National Seismological Survey in Mexico (Servicio Sismológico Nacional, SSN) automatically calculates seismic moment tensors for events in Mexican territory (Franco et al., 2002; Nolasco-Carteño, 2006). However, for intermediate-magnitude and small earthquakes the signal-to-noise ratio is low at many of the seismic stations, and without careful selection and filtering of the data, obtaining a stable focal mechanism
NASA Astrophysics Data System (ADS)
Decker, Kurt; Beidinger, Andreas; Hintersberger, Esther
2010-05-01
normal faults splaying from the strike-slip system appear to be an important factor controlling fault segmentation. In order to assess MCE magnitudes for this complex tectonic setting against the background of earthquake data spanning only 500 yrs (i.e., shorter than the expected recurrence times of the strongest earthquakes), we choose a deterministic approach using a 3D fault model that quantifies the lengths and areas of potential rupture zones. The model accounts for kinematic fault segmentation. Fault surfaces of strike-slip segments vary from 55 km² to more than 400 km²; those of the normal splay faults range from 100 to 300 km². Empirical relations confirm that these areas are sufficiently large to create earthquakes with M = 6.0-6.5. The possibility of even stronger events caused by multi-segment ruptures, however, cannot be excluded at present. The estimated MCE magnitudes are generally in line with newly obtained paleoseismological information from one of the splay faults of the VBTF (Markgrafneusiedl Fault). Preliminary data reveal that single slip events on this fault show surface displacements of up to 20 cm, compatible with earthquake magnitudes M ≥ 6. Archaeoseismological data indicating an M ~6.0-6.3 earthquake at the Lassee strike-slip segment further support the validity of our approach.
NASA Astrophysics Data System (ADS)
Barton, D. J.; Foulger, G. R.; Henderson, J. R.; Julian, B. R.
1999-08-01
Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.
New data about small-magnitude earthquakes of the ultraslow-spreading Gakkel Ridge, Arctic Ocean
NASA Astrophysics Data System (ADS)
Morozov, Alexey N.; Vaganova, Natalya V.; Ivanova, Ekaterina V.; Konechnaya, Yana V.; Fedorenko, Irina V.; Mikhaylova, Yana A.
2016-01-01
At the present time, detailed bathymetric, gravimetric, magnetometric, petrological, and seismic (mb > 4) data are available for the Gakkel Ridge. However, so far not enough information has been obtained on the distribution of small-magnitude earthquakes (or microearthquakes) within the ridge area, due to the absence of a suitable observation system. With the ZFI seismic station (80.8° N, 47.7° E), operating since 2011 on the Franz Josef Land archipelago, we can now register small-magnitude earthquakes down to ML 1.5 within the Gakkel Ridge area. This article presents the results and analysis of ZFI station seismic monitoring for the period from December 2011 to January 2015. In order to improve the accuracy of earthquake epicenter locations, velocity models and regional seismic-phase travel times have been calculated for spreading ridges in the Euro-Arctic region. The Gakkel Ridge is seismically active, despite having the lowest spreading velocity among global mid-ocean ridges. Quiet periods alternate with periods of higher seismic activity. Earthquake epicenters are unevenly spread across the area. Most of the epicenters fall within the Sparsely Magmatic Zone (SMZ), more specifically in the area between 1.5° E and 19.0° E. We hypothesize that the concentration of earthquakes in the SMZ segment can be explained by the amagmatic character of spreading there. The structure of this part of the ridge is characterized by the prevalence of tectonic processes, rather than magmatic or metamorphic ones.
Stress drop in the sources of intermediate-magnitude earthquakes in northern Tien Shan
NASA Astrophysics Data System (ADS)
Sycheva, N. A.; Bogomolov, L. M.
2014-05-01
The paper is devoted to estimating the dynamic parameters of 14 earthquakes of intermediate magnitude (energy class 11 to 14) that occurred in the northern Tien Shan. To obtain estimates of these parameters, including the stress drop, which could then be applied in crustal stress reconstruction by the technique suggested by Yu.L. Rebetsky (Schmidt Institute of Physics of the Earth, Russian Academy of Sciences), we improved the algorithms and programs for calculating the spectra of the seismograms. The updated codes account for site responses and spectral transformations during the propagation of seismic waves through the medium (the effect of finite Q-factor). By applying the new approach to the analysis of seismograms recorded by the KNET seismic network, we calculated the source radii (Brune radius), scalar seismic moment, and stress drop (release) for the 14 earthquakes studied. The analysis revealed scatter in the source radii and stress drop even among earthquakes of almost identical energy class. The stress drop for different earthquakes ranges from 1 to 75 bar. We also determined the focal mechanisms and the stress regime of the Earth's crust. It is worth noting that during the period considered, strong seismic events with energy class above 14 were absent within the segment covered by the KNET stations.
Suárez, Gerardo; Hough, Susan E.
2008-01-01
The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.
Earthquake Magnitude: A Teaching Module for the Spreadsheets Across the Curriculum Initiative
NASA Astrophysics Data System (ADS)
Wetzel, L. R.; Vacher, H. L.
2006-12-01
Spreadsheets Across the Curriculum (SSAC) is a library of computer-based activities designed to reinforce or teach quantitative-literacy or mathematics concepts and skills in context. Each activity (called a "module" in the SSAC project) consists of a PowerPoint presentation with embedded Excel spreadsheets. Each module focuses on one or more problems for students to solve. Each student works through a presentation, thinks about the in-context problem, figures out how to solve it mathematically, and builds the spreadsheets to calculate and examine answers. The emphasis is on mathematical problem solving. The intention is for the in-context problems to span the entire range of subjects where quantitative thinking, number sense, and math non-anxiety are relevant. The self-contained modules aim to teach quantitative concepts and skills in a wide variety of disciplines (e.g., health care, finance, biology, and geology). For example, in the Earthquake Magnitude module students create spreadsheets and graphs to explore earthquake magnitude scales, wave amplitude, and energy release. In particular, students realize that earthquake magnitude scales are logarithmic. Because each step in magnitude represents a 10-fold increase in wave amplitude and approximately a 30-fold increase in energy release, large earthquakes are much more powerful than small earthquakes. The module has been used as laboratory and take-home exercises in small structural geology and solid-earth geophysics courses with upper-level undergraduates. Anonymous pre- and post-tests assessed students' familiarity with Excel as well as other quantitative skills. The SSAC library consists of 27 modules created by a community of educators who met for one-week "module-making workshops" in Olympia, Washington, in July of 2005 and 2006. The educators designed the modules at the workshops both to use in their own classrooms and to make available for others to adopt and adapt at other locations and in other classes.
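The arithmetic the module walks students through can be mirrored in a few lines. This sketch uses the standard relations the abstract paraphrases (10-fold amplitude per magnitude unit; radiated energy growing as 10^(1.5M), per Gutenberg-Richter); it is not the module's actual spreadsheet.

```python
def amplitude_ratio(dm):
    """Ground-motion amplitude ratio for a magnitude difference dm."""
    return 10.0 ** dm

def energy_ratio(dm):
    """Radiated-energy ratio: log10(E) scales as 1.5*M (Gutenberg-Richter)."""
    return 10.0 ** (1.5 * dm)

# One magnitude unit: 10x the amplitude, ~32x the energy
print(amplitude_ratio(1.0))         # -> 10.0
print(round(energy_ratio(1.0), 1))  # -> 31.6

# Two units (e.g. M7 vs. M5): 100x the amplitude, 1000x the energy
print(round(energy_ratio(2.0)))     # -> 1000
```

The factor 10^1.5 ≈ 31.6 is the "approximately 30-fold increase in energy release" per magnitude step mentioned in the abstract.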
Instrumental magnitude constraints for the 1889 Chilik and the 1887 Verny earthquake, Central Asia
NASA Astrophysics Data System (ADS)
Krueger, Frank; Kulikova, Galina; Landgraf, Angela
2016-04-01
A series of four large earthquakes hit the continental collision region north of Lake Issyk Kul in the years 1885, 1887, 1889 and 1911, with magnitudes above 6.9. The largest event was the Chilik earthquake of July 11, 1889, with M 8.3 based on macroseismic intensities, recently confirmed by Bindi et al. (2013). Despite the existence of several juvenile fault scarps in the epicentral region, no on-scale through-going surface rupture has been located. A rupture length of ~200 km and slip of ~10 m are expected for M 8.3 (Blaser et al., 2010). The lack of highly concentrated epicentral intensities requires a hypocenter depth of 40 km, in the lower crust. Comparison of late coda envelope amplitudes of modern events in Central Asia, recorded at stations in northern Germany, with a reproduction of the Rebeur-Paschwitz pendulum seismogram recorded at Wilhelmshaven yields a magnitude estimate of Mw 8.0-8.5. Amplitude comparison of long-period surface waves measured on magnetograms at two British geomagnetic observatories favors a magnitude of Mw 8.0. Both can be made consistent if a station site factor of 2-4 is applied for the Wilhelmshaven station (for which indications exist). A truly deep centroid depth (h > 40 km) is unlikely (from coda amplitude scaling), and a shallow rupture of appropriate length has until now not been discovered. Both arguments point to a possible lower-crust contribution to the seismic moment. Magnetogram amplitudes for the June 8, 1887, Verny earthquake point to a magnitude of M ~7.5-7.6 (preliminary).
New methods for predicting the magnitude of sunspot maximum
NASA Technical Reports Server (NTRS)
Brown, G. M.
1979-01-01
Three new and independent methods of predicting the magnitude of a forthcoming sunspot maximum are suggested. The longest lead time is given by the first method, which is based on a terrestrial parameter measured during the declining phase of the preceding cycle. The second method, with only slightly shorter foreknowledge, is based on an interplanetary parameter derived around the commencement of the cycle in question (sunspot minimum). The third method, giving the shortest prediction lead time, is based entirely on solar parameters measured during the initial progress of the cycle in question. Applied to forecast the magnitude of the next maximum (Cycle 21), all three methods agree in predicting that it is likely to be very similar to that of Cycle 18.
Finding the Shadows: Local Variations in the Stress Field due to Large Magnitude Earthquakes
NASA Astrophysics Data System (ADS)
Latimer, C.; Tiampo, K.; Rundle, J.
2009-05-01
Stress shadows, regions of static stress decrease associated with large-magnitude earthquakes, have typically been described through several characteristics or parameters such as location, duration, and size. These features can provide information about the physics of the earthquake itself, as static stress changes depend on the following parameters: the regional stress orientations, the coefficient of friction, and the depth of interest (King et al., 1994). Areas of stress decrease, associated with a decrease in the seismicity rate, while potentially stable in nature, have been difficult to identify in regions with high rates of background seismicity (Felzer and Brodsky, 2005; Hardebeck et al., 1998). In order to obtain information about these stress shadows, we determine their characteristics using the Pattern Informatics (PI) method (Tiampo et al., 2002; Tiampo et al., 2006). The PI method is an objective measure of seismicity rate changes that can be used to locate areas of increase and/or decrease relative to the regional background rate. The latter define the stress shadows for the earthquake of interest, as seismicity rate changes and stress changes are related (Dieterich et al., 1992; Tiampo et al., 2006). Using the data from the PI method, we can invert for the parameters of the modeled half-space using a genetic algorithm inversion technique. Stress changes are calculated using Coulomb stress change theory (King et al., 1994), and the Coulomb 3 program is used as the forward model (Lin and Stein, 2004; Toda et al., 2005). Changes in the regional stress orientation (using PI results from before and after the earthquake) are of the greatest interest, as this is the main factor controlling the pattern of the Coulomb stress changes resulting from any given earthquake. Changes in the orientation can lead to conclusions about the local stress field around the earthquake and fault. The depth of interest and the coefficient of friction both
NASA Astrophysics Data System (ADS)
Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.
2015-09-01
We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single-ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power law than exponential distributions, although they exhibit very high b-values (≥ ~5). We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (MW < 1.5, MW ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by
A radon detector for earthquake prediction
NASA Astrophysics Data System (ADS)
Dacey, James
2010-04-01
Recent events in Haiti and Chile remind us of the devastation that can be wrought by an earthquake, especially when it strikes without warning. For centuries, people living in seismically active regions have reported a number of strange occurrences immediately prior to a quake, including unexpected weather phenomena and even unusual behaviour among animals. In more recent times, some scientists have suggested other precursors, such as sporadic bursts of electromagnetic radiation from the fault zone. Unfortunately, none of these suggestions has led to a robust, scientific method for earthquake prediction. Now, however, a group of physicists, led by physics Nobel laureate Georges Charpak, has developed a new detector that could measure one of the more testable earthquake precursors - the suggestion that radon gas is released from fault zones prior to earth slipping, writes James Dacey.
Triggered slip on the Calaveras fault during the magnitude 7.1 Loma Prieta, California, earthquake
McClellan, P.H.; Hay, E.A.
1990-07-01
After the magnitude (M) 7.1 Loma Prieta earthquake on the San Andreas fault the authors inspected selected sites along the Calaveras fault for evidence of recent surface displacement. In two areas along the Calaveras fault they documented recent right-lateral offsets of cultural features by at least 5 mm within zones of recognized historical creep. The areas are in the city of Hollister and at Highway 152 near San Felipe Lake, located approximately 25 km southeast and 18 km northeast, respectively, of the nearest part of the San Andreas rupture zone. On the basis of geologic evidence the times of the displacement events are constrained to within days or hours of the Loma Prieta mainshock. They conclude that this earthquake on the San Andreas fault triggered surface rupture along at least a 17-km-long segment of the Calaveras fault. These geologic observations extend evidence of triggered slip from instrument stations within this zone of Calaveras fault rupture.
NASA Astrophysics Data System (ADS)
Mojarab, Masoud; Kossobokov, Vladimir; Memarian, Hossein; Zare, Mehdi
2015-07-01
On 23 October 2011, an M7.3 earthquake near the Turkish city of Van killed more than 600 people, injured over 4000, and left about 60,000 homeless. It demolished hundreds of buildings and caused great damage to thousands of others in Van, Ercis, Muradiye, and Çaldıran. The earthquake's epicenter is located about 70 km from that of a preceding M7.3 earthquake that occurred in November 1976, destroyed several villages near the Turkey-Iran border, and killed thousands of people. This study, by means of retrospective application of the M8 algorithm, checks whether the 2011 Van earthquake could have been predicted. The algorithm is based on pattern recognition of Times of Increased Probability (TIP) of a target earthquake from the transient seismic sequence at lower magnitude ranges in a Circle of Investigation (CI). Specifically, we applied a modified M8 algorithm, adjusted to the rather low level of earthquake detection in the region, following three different approaches to determining seismic transients. In the first approach, CI centers are distributed on intersections of morphostructural lineaments recognized as prone to magnitude 7+ earthquakes. In the second approach, centers of CIs are distributed on local extremes of the seismic density distribution, and in the third approach, CI centers are distributed uniformly on the nodes of a 1° × 1° grid. According to the results of the M8 algorithm application, the 2011 Van earthquake could have been predicted by any of the three approaches. We note that it is possible to consider the intersection of TIPs instead of their union to improve the certainty of the prediction results. Our study confirms the applicability of a modified version of the M8 algorithm for predicting earthquakes on the Iranian-Turkish plateau, as well as for mitigating damage in seismic events, in which pattern recognition algorithms may play an important role.
NASA Astrophysics Data System (ADS)
Hilbert-Wolf, Hannah; Roberts, Eric
2015-04-01
-dkm-scale clastic injection dykes. Our documentation provides evidence for M 6-7.5+ Late Pleistocene earthquakes, similar to the M7.4 earthquake at the same location in 1910, extending the record of large-magnitude earthquakes beyond the last century. Our study not only expands the database of seismogenic sedimentary structures, but also attests to repeated, large-magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms the crust is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift System and other tectonically active, developing regions.
76 FR 19123 - National Earthquake Prediction Evaluation Council (NEPEC)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
....S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S... Earthquake Prediction Evaluation Council (NEPEC) will hold a 1-day meeting on April 16, 2011. The meeting... the Director of the U.S. Geological Survey on proposed earthquake predictions, on the completeness...
NASA Astrophysics Data System (ADS)
Liu, Jing; Shao, Yanxiu; Xie, Kejia; Klinger, Yann; Lei, Zhongsheng; Yuan, Daoyang
2013-04-01
The active left-lateral Haiyuan fault is one of the major continental strike-slip faults in the Tibetan Plateau. The last large earthquake to occur on the fault was the great 1920 M~8 Haiyuan earthquake, with a 230-km-long surface rupture and maximum surface slip of 11 m (Zhang et al., 1987). Much less is known about its earthquake recurrence behavior. We present preliminary results of a paleoseismic study at the Salt Lake site, at a shortcut pull-apart basin, within the section that broke in 1920. 3D excavation at the site exposed 7 m of fine-grained, layered stratigraphy and ample evidence of 6-7 paleoseismic events. AMS dating of charcoal fragments constrains the events to the past 3600 years. Of these, the youngest 3-4 events are recorded in the top 2.5 m of distinctive thinly layered stratigraphy of alternating reddish well-sorted granule sand and light gray silty fine sand. The section has been deposited since ~1550 A.D., suggesting 3-4 events occurred during the past 400 years and an average recurrence interval of less than 150 years, surprisingly short for the Haiyuan fault, which has a slip rate of arguably ~10 mm/yr or less. A comparison of the paleoseismic record with the historical earthquake record is possible for the Haiyuan area, a region with written accounts of earthquake effects dating back to 1000 A.D. Between 1600 A.D. and the present, each of the four paleoseismic events can be correlated to one historically recorded event, within the uncertainties of the paleoseismic age ranges. Nonetheless, these events are definitely not 1920-type large earthquakes, because their shaking effects were recorded only locally, rather than regionally. More and more studies show that M5 to 6 events are capable of causing ground deformation. Our results indicate that it can be misleading to simply use the time between consecutive events as the recurrence interval at a single paleoseismic site, without information on event size. Mixed events of different magnitudes in the
Influence of weak motion data to magnitude dependence of PGA prediction model in Austria
NASA Astrophysics Data System (ADS)
Jia, Yan
2015-04-01
Data recorded by the STS2 sensors of the Austrian Seismic Network were differentiated and used to derive the PGA prediction model for Austria (Jia and Lenhardt, 2010). Before using this model in our hazard assessment and real-time shakemap, it is necessary to validate it and understand it in depth. In this paper, the influence of weak-motion data on the magnitude dependence of our prediction model was studied. In addition, spatial PGA residuals between measurements and predictions were investigated. In total, 127 earthquakes with magnitudes between 3 and 5.4 were used to derive the PGA prediction model published in 2011. Unfortunately, 90% of the PGA measurements used came from events with magnitudes smaller than 4; only ten of the earthquakes have magnitudes larger than 4, which is the magnitude range most important for hazard assessment. In this investigation, the 127 earthquakes were divided into two groups: the first includes only events with magnitudes smaller than 4, while the second contains earthquakes with magnitudes larger than 4. Using the same model form as for the 2011 PGA attenuation estimate, the coefficients of the model were inverted from the measurements in each group and compared with those based on the complete data set. The group of weak earthquakes returned results differing only slightly from those for all 127 events, while the group of strong earthquakes (ml > 4) gave a stronger magnitude dependence than the model published in 2011. The distance coefficients remained nearly unchanged for all three inversions. As a second step, spatial PGA residuals between the measurements and the predictions of our model were investigated. As explained in Jia and Lenhardt (2013), there are differences in site amplification between western and eastern Austria. For a fair comparison, the residuals were normalized for each station before the investigation. Then normalized
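The two-group coefficient inversion described in this record can be sketched with a generic attenuation regression. The functional form log10(PGA) = a + b·M + c·log10(R), the coefficient values, and the synthetic catalogue below are illustrative assumptions; the actual Austrian model of Jia and Lenhardt is not reproduced in the abstract.

```python
import numpy as np

def fit_pga_model(mag, dist, pga):
    """Least-squares fit of log10(PGA) = a + b*M + c*log10(R).

    This generic attenuation form is assumed for illustration; the
    published model may use a different parameterization.
    """
    G = np.column_stack([np.ones_like(mag), mag, np.log10(dist)])
    coef, *_ = np.linalg.lstsq(G, np.log10(pga), rcond=None)
    return coef  # a, b (magnitude dependence), c (distance decay)

# Synthetic catalogue with true coefficients a=-1.5, b=0.5, c=-1.2
rng = np.random.default_rng(0)
mag = rng.uniform(3.0, 5.4, 500)
dist = rng.uniform(10.0, 300.0, 500)
log_pga = -1.5 + 0.5 * mag - 1.2 * np.log10(dist) + rng.normal(0, 0.05, 500)
pga = 10.0 ** log_pga

# Invert for the full set and for the two magnitude groups, as in the study
a_all, b_all, c_all = fit_pga_model(mag, dist, pga)
weak = mag < 4.0
a_w, b_w, c_w = fit_pga_model(mag[weak], dist[weak], pga[weak])
a_s, b_s, c_s = fit_pga_model(mag[~weak], dist[~weak], pga[~weak])
```

Comparing the magnitude coefficient b across the three inversions mimics the test of magnitude dependence; the distance coefficient c should stay nearly unchanged, as reported.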
NASA Astrophysics Data System (ADS)
Morales-Esteban, A.; Martínez-Álvarez, F.; Reyes, J.
2013-05-01
A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only a few very recent studies have applied them to earthquake prediction. Two kinds of predictions are provided in this study: a) the probability of an earthquake of magnitude equal to or larger than a preset threshold occurring within the next 7 days; and b) the probability of an earthquake within a limited magnitude interval occurring during the next 7 days. First, the physical fundamentals of earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the chosen configuration is justified. The ANNs were then trained for both areas, the Alborán Sea and the Western Azores-Gibraltar fault, and tested for a period immediately following the training period. Statistical tests are provided showing meaningful results. Finally, the ANNs were compared with other well-known classifiers, showing quantitatively and qualitatively better results. The authors expect that these results will encourage further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; ANNs are particularly well suited to earthquake prediction; geophysical information modeling soil behavior is used as the ANNs' input data; successful analysis of one region with large seismic activity.
NASA Astrophysics Data System (ADS)
Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.
2012-12-01
The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula, and Shikoku by borehole strainmeters carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, estimated from only several minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic-wave data. The moment magnitude can be estimated about 6 minutes after the origin time, 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7; by contrast, the preliminary magnitude based on seismic waves, announced by the Japan Meteorological Agency just after the earthquake, was 7.9. Coseismic strain steps are generally considered less reliable than seismic-wave and GPS data. However, our results show that coseismic strain steps observed by carefully installed and monitored borehole strainmeters are reliable enough to determine earthquake magnitude precisely and rapidly. Several methods are now being proposed to estimate the magnitude of a great earthquake earlier, in order to reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.
ERIC Educational Resources Information Center
Walter, Edward J.
1977-01-01
Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)
Is It Possible to Predict Strong Earthquakes?
NASA Astrophysics Data System (ADS)
Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.
2015-07-01
The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013), recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency components of both time series. These peaks correspond to local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of recent scientific reports confirming that rodents sense imminent earthquakes and because of the population-genetic model of Kirschvink (Bull. Seismol. Soc. Am. 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional observations of abnormal animal behavior, enables one to apply the standard "input-sensor-response" approach to determine which input signals trigger specific seismic-escape brain activity responses.
NASA Technical Reports Server (NTRS)
Suteau, A. M.; Whitcomb, J. H.
1977-01-01
A relationship was found between the seismic moment, M0, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming that the end of the coda is composed of backscattered surface waves due to lateral heterogeneity in the shallow crust, following Aki. Using the linear relationship between the logarithm of M0 and the local Richter magnitude ML, a relationship between ML and t was found. This relationship was used to calculate a coda magnitude MC, which was compared to ML for Southern California earthquakes that occurred during the period from 1972 to 1975.
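Chaining a linear log(M0)-ML relation with a moment-duration relation yields a coda magnitude of logarithmic form in duration. The sketch below assumes the simple form MC = a0 + a1·log10(t); the coefficients are hypothetical placeholders, not the values calibrated by Suteau and Whitcomb for Southern California.

```python
import math

def coda_magnitude(t_seconds, a0=-0.87, a1=2.0):
    """Coda magnitude from total signal duration t (seconds).

    The form M_C = a0 + a1*log10(t) follows from chaining a linear
    log(M0)-M_L relation with a moment-duration relation. The default
    coefficients here are illustrative placeholders only.
    """
    return a0 + a1 * math.log10(t_seconds)

# With these placeholder coefficients, a 100-s coda gives
# M_C = -0.87 + 2.0 * 2 = 3.13
```

In practice such relations are calibrated per region, often with an added epicentral-distance term, by regressing ML against log-duration for a reference catalogue.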
The 2009 earthquake, magnitude mb 4.8, in the Pantanal Wetlands, west-central Brazil.
Dias, Fábio L; Assumpção, Marcelo; Facincani, Edna M; França, George S; Assine, Mario L; Paranhos, Antônio C; Gamarra, Roberto M
2016-09-01
The main goal of this paper is to characterize the Coxim earthquake that occurred on June 15, 2009 in the Pantanal Basin and to discuss the relationship between its faulting mechanism and the Transbrasiliano Lineament. The earthquake had a maximum intensity of MM V, causing damage to farm houses, and was felt in several surrounding cities, including Campo Grande and Goiânia. The event had a magnitude of mb 4.8 and a depth of 6 km, i.e., it occurred in the upper crust, within the basement and 5 km below the Cenozoic sedimentary cover. The mechanism, a thrust fault with lateral motion, was obtained from P-wave first-motion polarities and confirmed by regional waveform modelling. The two nodal planes have orientations (strike/dip) of 300°/55° and 180°/55°, and the orientation of the P-axis is approximately NE-SW. The results are similar to those for the Pantanal earthquake of 1964, with mb 5.4 and a NE-SW compressional axis. Both events show that the Pantanal Basin is a seismically active area under compressional stress. The focal mechanisms of the 1964 and 2009 events have no nodal plane that could be directly associated with the main SW-NE trending Transbrasiliano system, indicating that a direct link between the Transbrasiliano and the seismicity in the Pantanal Basin is improbable. PMID:27580359
NASA Astrophysics Data System (ADS)
Dempsey, David; Suckale, Jenny; Huang, Yihe
2016-05-01
Probabilistic seismic hazard assessment for induced seismicity depends on reliable estimates of the locations, rate, and magnitude-frequency properties of earthquake sequences. The purpose of this paper is to investigate how variations in these properties emerge from interactions between an evolving fluid pressure distribution and the mechanics of rupture on heterogeneous faults. We use an earthquake sequence model, developed in the first part of this two-part series, that computes pore pressure evolution, hypocenter locations, and rupture lengths for earthquakes triggered on 1-D faults with spatially correlated shear stress. We first consider characteristic features that emerge from a range of generic injection scenarios and then focus on the 2010-2011 sequence of earthquakes linked to wastewater disposal into two wells near the towns of Guy and Greenbrier, Arkansas. Simulations indicate that one reason for an increase of the Gutenberg-Richter b value for induced earthquakes is the different rates of reduction of static and residual strength as fluid pressure rises, which promotes fault rupture at lower stress than for equivalent tectonic events. Further, the b value is shown to decrease with time (the induced-seismicity analog of b-value reduction toward the end of the seismic cycle) and to be higher on faults with lower initial shear stress. This suggests that faults in the same stress field that have different orientations, and therefore different levels of resolved shear stress, should exhibit seismicity with different b values. A deficit of large-magnitude events is noted when injection occurs directly onto a fault, and this is shown to depend on the geometry of the pressure plume. Finally, we develop models of the Guy-Greenbrier sequence that approximately capture the onset, rise and fall, and southwest migration of seismicity on the Guy-Greenbrier fault. Constrained by the migration rate, we estimate the permeability of a 10 m thick critically stressed basement
On the earthquake predictability of fault interaction models
Marzocchi, W; Melini, D
2014-01-01
Space-time clustering is the most striking departure of the large-earthquake occurrence process from randomness. These clusters are usually described ex post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems rooted in the physics of fault interaction through Coulomb stress changes, this kind of modeling often does not significantly increase earthquake predictability. The predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643
Spatial variations in the frequency-magnitude distribution of earthquakes at Mount Pinatubo volcano
Sanchez, J.J.; McNutt, S.R.; Power, J.A.; Wyss, M.
2004-01-01
The frequency-magnitude distribution of earthquakes, measured by the b-value, is mapped in two and three dimensions at Mount Pinatubo, Philippines, to a depth of 14 km below the summit. We analyzed 1406 well-located earthquakes with magnitudes MD ≥ 0.73, recorded from late June through August 1991, using the maximum likelihood method. We found that b-values are higher than normal (b = 1.0) and range between b = 1.0 and b = 1.8. The computed b-values are lower in the areas adjacent to and west-southwest of the vent, whereas two prominent regions of anomalously high b-values (b ≥ 1.7) are resolved: one located 2 km northeast of the vent between 0 and 4 km depth, and a second located 5 km southeast of the vent below 8 km depth. The statistical differences between selected regions of low and high b-values are established at the 99% confidence level. The high b-value anomalies are spatially well correlated with low-velocity anomalies derived from earlier P-wave travel-time tomography studies. Our dataset was not suitable for analyzing changes in b-values as a function of time. We infer that the high b-value anomalies around Mount Pinatubo are regions of increased crack density and/or high pore pressure, related to the presence of nearby magma bodies.
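The maximum likelihood b-value estimation named here is the standard Aki-Utsu estimator. A minimal sketch, with a synthetic Gutenberg-Richter catalogue for illustration (not the authors' mapping code, which also grids the volume and tests significance):

```python
import math, random

def b_value_mle(mags, m_c, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= m_c.

    dm is the magnitude binning width; subtracting dm/2 corrects for
    binning (Utsu's correction). Shown as a sketch of the standard
    estimator, not the authors' exact implementation.
    """
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

random.seed(1)
# Synthetic catalogue with true b = 1.3 above completeness Mc = 0.7
beta = 1.3 * math.log(10.0)
cat = [0.7 + random.expovariate(beta) for _ in range(5000)]
b_hat = b_value_mle(cat, 0.7, dm=0.0)  # close to 1.3
```

Mapping b in space, as in the study, amounts to applying this estimator to the events nearest each grid node and comparing regions at a stated confidence level.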
NASA Astrophysics Data System (ADS)
Lin, Min; Zhao, Gang; Wang, Gang
2015-12-01
In this study, recurrence plot (RP) and recurrence quantification analysis (RQA) techniques are applied to a magnitude time series composed of seismic events that occurred in the California region. Using bootstrap techniques, we provide a statistical test of the RQA measures for detecting dynamical transitions. We find different patterns in the RPs of the magnitude time series before and after the M6.1 Joshua Tree earthquake. The RQA measures of determinism (DET) and laminarity (LAM), which quantify order, also show peculiar behavior at given confidence levels. DET and LAM values of the recurrence-based complexity measure increase significantly to large values at the main shock and then gradually recover to small values afterwards. The main shock and its aftershock sequence trigger a temporary growth in the order and complexity of the deterministic structure in the RP of seismic activity. This implies that the onset of a strong earthquake is reflected in a sharp, large, simultaneous change in the RQA measures.
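The RP and the DET measure can be sketched compactly. This is a reference implementation of a one-dimensional recurrence plot and the standard determinism measure, for illustration only; the embedding, threshold selection, LAM, and the bootstrap significance test used in the study are omitted.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 if |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def determinism(R, lmin=2):
    """DET: fraction of recurrence points lying on diagonal lines of
    length >= lmin, excluding the main diagonal (a standard RQA measure).
    """
    n = R.shape[0]
    in_lines = 0
    total = 0
    for k in range(1, n):                 # each upper off-diagonal
        diag = np.diagonal(R, offset=k)
        total += 2 * int(diag.sum())      # the plot is symmetric
        run = 0
        for v in list(diag) + [0]:        # sentinel flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += 2 * run
                run = 0
    return in_lines / total if total else 0.0
```

A perfectly periodic series yields DET = 1, while an irregular series with few repeated values yields DET near 0, which is the kind of contrast the study reports across the main shock.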
77 FR 53225 - National Earthquake Prediction Evaluation Council (NEPEC)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-31
... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: Department of the... National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1\\1/2\\ day meeting on September 17 and 18, 2012, at the U.S. Geological Survey National Earthquake Information Center (NEIC),...
78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-30
... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey, Interior. ACTION: Notice of meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake... proposed earthquake predictions, on the completeness and scientific validity of the available data...
The late Professor Takahiro Hagiwara: His career with earthquake prediction
NASA Astrophysics Data System (ADS)
Ohtake, Masakazu
2004-08-01
Takahiro Hagiwara, Professor Emeritus of the University of Tokyo, was born in 1908, and passed away in 1999. His name is inseparably tied with earthquake prediction, especially as the founder of the earthquake prediction program of Japan, and as a distinguished leader of earthquake prediction research in the world. This short article describes the career of Prof. Hagiwara focusing on his contribution to earthquake prediction research. I also sketch his activities in the development of instruments, and the multi-disciplinary observation of the Matsushiro earthquake swarm to show the starting point of his scientific strategy: good observation.
Verros, G. D.; Latsos, T.; Liolios, C.; Anagnostou, K. E.
2009-08-13
A comprehensive mathematical model for the correlation of geological phenomena such as earthquake magnitude with geochemical measurements is presented in this work. This model is validated against measurements, well established in the literature, of ²²⁰Rn/²²²Rn in the fumarolic gases of Nisyros Island, Aegean Sea, Greece. It is believed that this model may be further used to develop a generalized methodology for the prediction of geological phenomena such as earthquakes and volcanic eruptions in the vicinity of Nisyros Island.
NASA Astrophysics Data System (ADS)
Weiser, Deborah Anne
Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic earthquakes at a few geothermal fields in California. I use three techniques to assess whether there are induced earthquakes in California geothermal fields; three sites show clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville, and little to no evidence for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and to address uncertainties through simulations. I test whether an earthquake catalog may be bounded by an upper magnitude limit, and whether the earthquake record during the pumping period is consistent with the past earthquake record, or whether injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for the eight states where induced earthquakes are of concern. Unlike that from tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified; I discuss direct and indirect mitigation practices. Finally, I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.
Location and local magnitude of the Tocopilla earthquake sequence of Northern Chile
NASA Astrophysics Data System (ADS)
Fuenzalida, A.; Lancieri, M.; Madariaga, R. I.; Sobiesiak, M.
2010-12-01
The Northern Chile gap is generally considered to be the site of the next megathrust event in Chile. The Tocopilla earthquake of 14 November 2007 (Mw 7.8) and its aftershock series broke the southern end of this gap. The Tocopilla event ruptured a narrow strip 120 km long, with a width estimated at 30 km (Peyrat et al.; Delouis et al., 2009). The aftershock sequence includes five large thrust events with magnitudes greater than 6. The main aftershock, of Mw 6.7, occurred on November 15 at 15:06 (UTC) seaward of the Mejillones Peninsula. One month later, on December 16, 2007, a strong (Mw 6.8) intraplate event with a slab-push mechanism occurred near the bottom of the rupture zone. These events represent a unique opportunity for the study of earthquakes in Northern Chile because of the quantity and quality of available data. In the epicentral area, the IPOC network was deployed by GFZ, CNRS/INSU, and DGF before the main event. This is a digital, continuously recording network equipped with both strong-motion and broadband instruments. On 29 November 2007 a second network, named "Task Force" (TF), was deployed by GFZ to study the aftershocks. This dense network, installed near the Mejillones Peninsula, is composed of 20 short-period instruments. The slab-push event of 16 December 2007 occurred in the middle of the area covered by the TF network. Aftershocks were detected using an automatic procedure and manually revised in order to pick P and S arrivals. In the 14-28 November period, we detected 635 events recorded by the IPOC network; a further 552 events were detected between 29 November and 16 December, before the slab-push event, using the TF network. The events were located with a vertically layered velocity model (Husen et al., 1999) using the NLLoc software of Lomax et al. From the broadband data we estimated the moment magnitude from the displacement spectra of the events. From the short-period instruments we evaluated local magnitudes using the
NASA Astrophysics Data System (ADS)
Lin, J.-Y.; Sibuet, J.-C.; Lee, C.-S.; Hsu, S.-K.; Klingelhoefer, F.
2007-04-01
The relations between the frequency of occurrence and the magnitude of earthquakes are established in the southern Okinawa Trough for 2823 relocated earthquakes recorded during a passive ocean bottom seismometer experiment. Three high b-value areas are identified: (1) an area offshore of the Ilan Plain, south of the andesitic Kueishantao Island, from a depth of 50 km to the surface, thereby confirming the subduction component of the island andesites; (2) a body lying along the 123.3°E meridian at depths ranging from 0 to 50 km that may reflect high-temperature inflow rising from a slab tear; (3) a cylindrical body about 15 km in diameter beneath the Cross Backarc Volcanic Trail, at depths ranging from 0 to 15 km. This anomaly might be related to the presence of a magma chamber at the base of the crust, already evidenced by tomographic and geochemical results. The high b-values are generally linked to magmatic and geothermal activity, although most of the seismicity is linked to normal faulting processes in the southern Okinawa Trough.
Prediction of Earthquakes by Lunar Cycles
NASA Astrophysics Data System (ADS)
Rodriguez, G.
2007-05-01
Prediction of Earthquakes by Lunar Cycles. Author: Guillermo Rodriguez Rodriguez. Affiliation: geophysicist and astrophysicist, retired. I have presented this idea at many meetings of the EGS, UGS, and IUGG (1995), from 1980, 1982, and 1983 onward, and at AGU 2002 (Washington) and 2003 (Nice). I use three levels of approximation in time. First, earthquakes happen on the same day of the year every 18 or 19 years (the Saros cycle), sometimes in the same place and sometimes very far away; at other times of the year the cycle can be 14, 26, or 32 years, or multiples of 18.61 years, especially 55, 93, 150, 224, and 300 years. This gives the day of the year. Second, over the cycle of one lunation (days from the date of the new moon), the great earthquakes happen at different intervals of days in successive lunations (approximately one month apart), as can be seen in the enclosed graphic. This gives the day of the month. Third, I have found that approximately every 28 days earthquakes repeat at approximately the same hour and minute, and at the same longitude and latitude, including the small ones. This is very important because we could then propose only the precaution of waiting in the streets or squares, although sometimes the cycles can be longer or shorter. This is my particular way of applying the scientific method. As a consequence of the first and second principles, one can look for correlations between years separated by cycles of the first type, for example 1984 and 2002 or 2003 and consecutive years, including 2007. For 30 years I have examined these dates. I sense the pattern intuitively but cannot yet express it in scientific formalism.
An evaluation of the seismic- window theory for earthquake prediction.
McNutt, M.; Heaton, T.H.
1981-01-01
Reports studies designed to determine whether earthquakes in the San Francisco Bay area respond to a fortnightly fluctuation in tidal amplitude. It does not appear that the tide is capable of triggering earthquakes, and in particular the seismic window theory fails as a relevant method of earthquake prediction. -J.Clayton
InSAR constraints on the kinematics and magnitude of the 2001 Bhuj earthquake
NASA Astrophysics Data System (ADS)
Schmidt, D.; Bürgmann, R.
2005-12-01
The Mw 7.6 Bhuj intraplate event occurred along a blind thrust within the Kutch Rift basin of western India in January 2001. The lack of any surface rupture and the limited geodetic data have made it difficult to place the event on a known fault and constrain its source parameters. Moment tensor solutions and aftershock relocations indicate that the earthquake was a reverse event along an east-west striking, south-dipping fault. In an effort to image the surface deformation, we have processed a total of 9 interferograms that span the coseismic event. Interferometry has proven difficult for the region because of technical difficulties experienced by the ERS satellite around the time of the earthquake and because of low coherence. The stabilization of orbital control by the European Space Agency beginning in 2002 has allowed us to interfere more recent SAR data with pre-earthquake data; therefore, all available interferograms of the event include the first year of any postseismic deformation. The source region is characterized by broad floodplains interrupted by isolated highlands. Coherence is limited to the highlands, and no data are available directly over the epicenter. Using the InSAR data along two descending tracks and one ascending track, we perform a gridded search for the optimal source parameters of the earthquake. The deformation pattern is modeled assuming uniform slip on an elastic dislocation. Since the highland regions are discontinuous, the coherent InSAR phase is isolated in several individual patches. For each iteration of the gridded search algorithm, we optimize the fit to the data by solving for the number of 2π phase cycles between coherent patches and the orbital gradient across each interferogram. Since the look angle varies across a SAR scene, a variable unit vector is calculated for each track. Inversion results place the center of the fault plane at 70.33°E/23.42°N at a depth of 21 km, and are consistent with the strike and dip
An earthquake-like magnitude-frequency distribution of slow slip in northern Cascadia
NASA Astrophysics Data System (ADS)
Wech, Aaron G.; Creager, Kenneth C.; Houston, Heidi; Vidale, John E.
2010-11-01
Major episodic tremor and slip (ETS) events with Mw 6.4 to 6.7 repeat every 15 ± 2 months within the Cascadia subduction zone under the Olympic Peninsula. Although these major ETS events are observed to release strain, smaller "tremor swarms" without detectable geodetic deformation are more frequent. An automatic search from 2006-2009 reveals 20,000 five-minute windows containing tremor, which cluster in space and time into 96 tremor swarms. The 93 inter-ETS tremor swarms account for 45% of the total duration of tremor detection during the last three ETS cycles. The number of tremor swarms, N, exceeding duration τ follows a power-law distribution N ∝ τ^(-0.66). If duration is proportional to moment release, the slip inferred from these swarms follows a standard Gutenberg-Richter logarithmic frequency-magnitude relation, with the major ETS events and smaller inter-ETS swarms lying on the same trend. This relationship implies that 1) inter-ETS slip is fundamentally similar to the major events, just smaller and more frequent; and 2) despite fundamental differences in moment-duration scaling, the slow-slip magnitude-frequency distribution is the same as that of normal earthquakes, with a b-value of 1.
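The scaling N ∝ τ^(-0.66) can be estimated with a standard maximum-likelihood (Hill-type) estimator for the survival-function exponent of a power law. The estimator and the synthetic swarm durations below are illustrative; the authors' fitting procedure may differ.

```python
import math, random

def powerlaw_exponent(durations, tau_min):
    """Maximum-likelihood (Hill) estimator for the exponent alpha in
    N(>tau) ∝ tau^(-alpha), using only events with tau >= tau_min.

    Standard estimator for a Pareto tail, shown as a sketch of the
    scaling analysis; not the authors' code.
    """
    t = [x for x in durations if x >= tau_min]
    return len(t) / sum(math.log(x / tau_min) for x in t)

random.seed(2)
# Synthetic swarm durations with true exponent 0.66 above tau_min = 5 min,
# drawn by inverse-CDF sampling: tau = tau_min * U^(-1/alpha)
alpha_true = 0.66
taus = [5.0 * random.random() ** (-1.0 / alpha_true) for _ in range(4000)]
alpha_hat = powerlaw_exponent(taus, 5.0)  # close to 0.66
```

If duration is taken as a proxy for moment, an exponent fitted this way maps directly onto the slope of the Gutenberg-Richter frequency-magnitude relation quoted in the abstract.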
Material contrast does not predict earthquake rupture propagation direction
Harris, R.A.; Day, S.M.
2005-01-01
Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.
Magnitude Uncertainty and Ground Motion Simulations of the 1811-1812 New Madrid Earthquake Sequence
NASA Astrophysics Data System (ADS)
Ramirez Guzman, L.; Graves, R. W.; Olsen, K. B.; Boyd, O. S.; Hartzell, S.; Ni, S.; Somerville, P. G.; Williams, R. A.; Zhong, J.
2011-12-01
We present a study of a set of three-dimensional earthquake simulation scenarios in the New Madrid Seismic Zone (NMSZ). This is a collaboration among three simulation groups with different numerical modeling approaches and computational capabilities. The study area covers a portion of the Central United States (~400,000 km²) centered on the New Madrid seismic zone, which includes several metropolitan areas such as Memphis, TN and St. Louis, MO. We computed synthetic seismograms to a frequency of 1 Hz using a regional 3D velocity model (Ramirez-Guzman et al., 2010), two different kinematic source generation approaches (Graves et al., 2010; Liu et al., 2006), and one methodology in which sources were generated using dynamic rupture simulations (Olsen et al., 2009). The set of 21 hypothetical earthquakes included different magnitudes (Mw 7, 7.6 and 7.7) and epicenters for two faults associated with the seismicity trends in the NMSZ: the Axial (Cottonwood Grove) and Reelfoot faults. Broadband synthetic seismograms were generated by combining high-frequency synthetics computed in a one-dimensional velocity model with the low-frequency motions at a crossover frequency of 1 Hz. Our analysis indicates that about 3 to 6 million people living near the fault ruptures would experience Mercalli intensities from VI to VIII if events similar to those of the early nineteenth century occurred today. In addition, the analysis demonstrates the importance of 3D geologic structures, such as the Reelfoot Rift and the Mississippi Embayment, which can channel and focus the radiated wave energy, and of rupture directivity effects, which can strongly amplify motions in the forward direction of the ruptures. Both effects have a significant impact on the pattern and level of the simulated intensities, which suggests an increased uncertainty in the magnitude estimates of the 1811-1812 sequence based only on historic intensity reports. We conclude that additional constraints such as
Current affairs in earthquake prediction in Japan
NASA Astrophysics Data System (ADS)
Uyeda, Seiya
2015-12-01
As of mid-2014, the main organizations of the earthquake (EQ hereafter) prediction program, including the Seismological Society of Japan (SSJ) and the MEXT Headquarters for EQ Research Promotion, hold the official position that they neither can nor want to make any short-term prediction. This is an extraordinary stance for responsible authorities when the nation, after the devastating 2011 M9 Tohoku EQ, most urgently needs whatever information may exist on forthcoming EQs. Japan's national project for EQ prediction started in 1965 but has had no success, mainly because of the failure to capture precursors. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this stance was further fortified by the 2011 M9 Tohoku mega-quake. This paper tries to explain how this situation came about and suggests that it may in fact be a legitimate one that should have come long ago. Substantial positive changes are now taking place: some promising signs are arising even from cooperation between researchers and the private sector, and there is a move to establish an "EQ Prediction Society of Japan". From now on, maintaining high scientific standards in EQ prediction will be of crucial importance.
A Study of Low-Frequency Earthquake Magnitudes in Northern Vancouver Island
NASA Astrophysics Data System (ADS)
Chuang, L. Y.; Bostock, M. G.
2015-12-01
Tectonic tremor and low frequency earthquakes (LFE) have been extensively studied in recent years in northern Washington and southern Vancouver Island (VI). However, far less attention has been directed to northern VI, where the behavior of tremor and LFEs is less well documented. We investigate LFE properties in this latter region by assembling templates using data from the POLARIS-NVI and Sea-JADE experiments. The POLARIS-NVI experiment comprised 27 broadband seismometers arranged along two mutually perpendicular arms with an aperture of ~60 km centered near station WOS (lat. 50.16, lon. -126.57). It recorded two ETS events, in June 2006 and May 2007, each with duration less than a week. For these two episodes, we constructed 68 independent, high signal-to-noise ratio LFE templates representing spatially distinct asperities on the plate boundary in NVI, along with a catalogue of more than 30,000 detections. A second template set is being prepared from the complementary 2014 Sea-JADE data. The precisely located LFE templates represent simple direct P-waves and S-waves at many stations, thereby enabling magnitude estimation of individual detections. After correcting for radiation pattern, 1-D geometrical spreading, attenuation and free-surface magnification, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single LFE template. LFE magnitudes range up to 2.54 and, as in southern VI, are characterized by high b-values (b~8). In addition, we will quantify LFE moment-duration scaling and compare with southern Vancouver Island, where LFE moments appear to be controlled by slip, largely independent of fault area.
NASA Astrophysics Data System (ADS)
Schellart, W. P.; Rawlinson, N.
2013-12-01
The maximum earthquake magnitude recorded for subduction zone plate boundaries varies considerably on Earth, with some subduction zone segments producing giant subduction zone thrust earthquakes (e.g. Chile, Alaska, Sumatra-Andaman, Japan) and others producing relatively small earthquakes (e.g. Mariana, Scotia). Here we show how such variability might depend on various subduction zone parameters. We present 24 physical parameters that characterize these subduction zones in terms of their geometry, kinematics, geology and dynamics. We have investigated correlations between these parameters and the maximum recorded moment magnitude (MW) for subduction zone segments in the period 1900-June 2012. The investigations were done for one dataset using a geological subduction zone segmentation (44 segments) and for two datasets (rupture zone dataset and epicenter dataset) using a 200 km segmentation (241 segments). All linear correlations for the rupture zone dataset and the epicenter dataset (|R| = 0.00-0.30) and for the geological dataset (|R| = 0.02-0.51) are negligible to low, indicating that even for the highest correlation the best-fit regression line can only explain 26% of the variance. A comparative investigation of the observed ranges of the physical parameters for subduction segments with MW > 8.5 and the observed ranges for all subduction segments gives more useful insight into the spatial distribution of giant subduction thrust earthquakes. For segments with MW > 8.5 distinct (narrow) ranges are observed for several parameters, most notably the trench-normal overriding plate deformation rate (vOPD⊥, i.e. the relative velocity between forearc and stable far-field backarc), trench-normal absolute trench rollback velocity (vT⊥), subduction partitioning ratio (vSP⊥/vS⊥, the fraction of the subduction velocity that is accommodated by subducting plate motion), subduction thrust dip angle (δST), subduction thrust curvature (CST), and trench curvature angle (
NASA Astrophysics Data System (ADS)
Weatherill, G. A.; Pagani, M.; Garcia, J.
2016-09-01
The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
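The core operation the abstract describes, building an empirical model to convert a catalogue's native magnitude scale into a target scale such as Mw, can be sketched in a few lines. This is a minimal illustration, not the actual GEM/ISC toolkit: the simple linear model and the paired magnitude data below are assumptions.

```python
import numpy as np

def fit_conversion(m_native, m_target):
    """Least-squares linear model m_target ~ a * m_native + b.

    Illustrative only: real harmonization tools also support piecewise
    and orthogonal-distance regressions, and propagate uncertainties.
    """
    a, b = np.polyfit(m_native, m_target, 1)
    return a, b

def harmonize(m_native, a, b):
    """Convert native-scale magnitudes into the target (e.g. Mw) scale."""
    return a * np.asarray(m_native) + b

# Hypothetical paired surface-wave (Ms) and moment (Mw) magnitudes
# for common events identified in two bulletins.
ms = np.array([5.0, 5.5, 6.0, 6.5, 7.0])
mw = np.array([5.3, 5.7, 6.1, 6.5, 6.9])
a, b = fit_conversion(ms, mw)
mw_est = harmonize([5.8], a, b)  # converted magnitude for a new event
```

In practice the conversion would be fitted per recording network and per magnitude scale, as the tools described in the abstract do.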
NASA Astrophysics Data System (ADS)
Mogi, Kiyoo
1984-11-01
The temporal variation in precursory ground tilt prior to the 1944 Tonankai (Japan) earthquake, which is a great thrust-type earthquake along the Nankai Trough, is discussed using the analysis of data from repeated surveys along short-distance leveling routes. Sato (1970) pointed out that an anomalous tilt occurred one day before the earthquake at Kakegawa near the northern end of the focal region of the earthquake. From the analysis of additional leveling data, Sato's result is re-examined and the temporal change in the ground tilt is deduced for the period of about ten days beginning six days before the earthquake. A remarkable precursory tilt started two or three days before the earthquake. The direction of the precursory tilt was up towards the south (uplift on the southern Nankai Trough side), but the coseismic tilt was up towards the southeast, perpendicular to the strike of the main thrust fault of the Tonankai earthquake. The postseismic tilt was probably opposite of the coseismic tilt. The preseismic tilt is attributed to precursory slip on part of the main fault. If similar precursory deformation occurs before a future earthquake expected to occur in the adjacent Tokai region, the deformation may help predict the time of the Tokai earthquake.
NASA Astrophysics Data System (ADS)
Noda, S.; Yamamoto, S.
2014-12-01
In order for earthquake early warning (EEW) to be effective, rapid determination of magnitude (M) is important. At present, no method can accurately determine M for extremely large events (ELE) in an EEW context, although a number of methods have been suggested. To address this problem, we use a simple approach based on the observation that the time difference (Top) from the onset of the body wave to the arrival of its peak acceleration amplitude scales with M. To test this approach, we first use 15,172 accelerograms of regional earthquakes (mostly M4-7 events) from K-NET. In this step Top is defined from the S wave; the S onsets are calculated by adding theoretical S-P times to manually picked P onsets. We confirm that log Top correlates strongly with Mw, especially in the higher frequency band (>2 Hz), and the RMS of the residuals between Mw and the M estimated in this step is less than 0.5. For the 2011 Tohoku earthquake, M is estimated to be 9.01 at 150 seconds after the initiation of the event. To increase the number of ELE data, we then add teleseismic high-frequency P-wave records to the analysis. Based on the results of various back-projection analyses, we consider teleseismic P waves to contain information on the entire rupture process. BHZ-channel data of the Global Seismographic Network for 24 events are used in this step: 2-4 Hz data from stations in the epicentral distance range 30-85 degrees, following the method of Hara [2007], with all P onsets manually picked. Top obtained from the teleseismic data correlates well with Mw, complementing the regional results. We conclude that the proposed approach is quite useful for estimating reliable M for EEW, even for extremely large events.
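The regional step of the method amounts to a linear regression of Mw on log Top. A minimal sketch under that assumption; the (Top, Mw) pairs and hence the fitted coefficients below are invented for illustration, not the authors' values.

```python
import numpy as np

def fit_top_scaling(top_seconds, mw):
    """Fit Mw ~ c0 + c1 * log10(Top) by least squares.

    Illustrative of the scaling reported in the abstract; the data
    passed in here are hypothetical.
    """
    return np.polyfit(np.log10(top_seconds), mw, 1)  # [c1, c0]

def estimate_mw(top_seconds, coeffs):
    """Magnitude estimate from an observed Top (seconds)."""
    c1, c0 = coeffs
    return c1 * np.log10(top_seconds) + c0

# Hypothetical pairs consistent with "log Top scales with M":
# longer time-to-peak for larger events.
top = np.array([2.0, 5.0, 12.0, 40.0, 150.0])
mw = np.array([4.5, 5.2, 5.9, 6.8, 7.8])
coeffs = fit_top_scaling(top, mw)
m_hat = estimate_mw(12.0, coeffs)
```

The appeal for EEW is that Top is available as soon as the peak amplitude has passed, without waiting for the full rupture duration at every station.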
Campbell, K.W.
1989-01-01
One hundred and ninety free-field accelerograms recorded on deep soil (>10 m deep) were used to study the near-source scaling characteristics of peak horizontal acceleration for 91 earthquakes (2.5 ≤ ML ≤ 5.0) located primarily in California. An analysis of residuals based on an additional 171 near-source accelerograms from 75 earthquakes indicated that accelerograms recorded in building basements sited on deep soil have 30 per cent lower accelerations, and that free-field accelerograms recorded on shallow soil (≤10 m deep) have 82 per cent higher accelerations than free-field accelerograms recorded on deep soil. An analysis of residuals based on 27 selected strong-motion recordings from 19 earthquakes in Eastern North America indicated that near-source accelerations associated with frequencies less than about 25 Hz are consistent with predictions based on attenuation relationships derived from California.
NASA Astrophysics Data System (ADS)
Ampuero, J. P.; Cappa, F.; Galis, M.; Mai, P. M.
2015-12-01
The assessment of earthquake hazard induced by fluid injection or withdrawal could be advanced by understanding what controls the maximum magnitude of induced seismicity (Mmax) and the conditions leading to aseismic instead of seismic slip. This is particularly critical for the viability of renewable energy extraction through engineered geothermal systems, which aim at enhancing permeability through controlled fault slip. Existing empirical relations and models for Mmax lack a link between rupture size and the characteristics of the triggering stress perturbation based on earthquake physics. We aim at filling this gap by extending results on the nucleation and arrest of dynamic rupture. We previously derived theoretical relations based on fracture mechanics between properties of overstressed nucleation regions (size, shape and overstress level), the ability of dynamic ruptures to either stop spontaneously or run away, and the final size of stopping ruptures. We verified these relations by comparison to 3D dynamic rupture simulations under slip-weakening friction and to laboratory experiments of frictional sliding nucleated by localized stresses. Here, we extend these results to the induced seismicity context by considering the effect of pressure perturbations resulting from fluid injection, evaluated by hydromechanical modeling. We address the following question: given the amplitude and spatial extent of a fluid pressure perturbation, background stress and fracture energy on a fault, does a nucleated rupture stop spontaneously at some distance from the pressure perturbation region or does it grow away until it reaches the limits of the fault? We present fracture mechanics predictions of the rupture arrest length in this context, and compare them to results of 3D dynamic rupture simulations. We also conduct a systematic study of the effect of localized fluid pressure perturbations on faults governed by rate-and-state friction. We investigate whether injection
Reprint of: "Demographic factors predict magnitude of conditioned fear".
Rosenbaum, Blake L; Bui, Eric; Marin, Marie-France; Holt, Daphne J; Lasko, Natasha B; Pitman, Roger K; Orr, Scott P; Milad, Mohammed R
2015-12-01
There is substantial variability across individuals in the magnitudes of their skin conductance (SC) responses during the acquisition and extinction of conditioned fear. To manage this variability, subjects may be matched for demographic variables, such as age, gender and education. However, limited data exist addressing how much variability in conditioned SC responses is actually explained by these variables. The present study assessed the influence of age, gender and education on the SC responses of 222 subjects who underwent the same differential conditioning paradigm. The demographic variables were found to predict a small but significant amount of variability in conditioned responding during fear acquisition, but not fear extinction learning or extinction recall. A larger differential change in SC during acquisition was associated with more education. Older participants and women showed smaller differential SC during acquisition. Our findings support the need to consider age, gender and education when studying fear acquisition but not necessarily when examining fear extinction learning and recall. Variability in demographic factors across studies may partially explain the difficulty in reproducing some SC findings. PMID:26608179
Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.
2012-06-20
A new approach to determining magnitude from the displacement amplitude (A), epicentral distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude is commonly determined from teleseismic surface waves with periods greater than 200 seconds, or from teleseismic P waves in the 10-60 second period range. In this research, a technique is developed to determine the displacement amplitude and the duration of high-frequency radiation from near-field records. The duration of high-frequency radiation is taken as half the period of the P wave on the displacement seismogram. This is because the rupture process of a near earthquake is very complex: the P wave mixes with other waves (S waves) before the duration runs out, so the end of the P wave is difficult to separate or determine. Applying the method to 68 earthquakes recorded at station CISI, Garut, West Java, the following relationship is obtained: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km and t in seconds. Moment magnitudes from this new approach are quite reliable and faster to compute, which is useful for early warning.
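The reported relationship can be applied directly once A, Δ and t are measured. A sketch: base-10 logarithms are assumed (the usual convention for magnitude scales), and the input values below are hypothetical.

```python
import math

def magnitude_from_record(A_m, delta_km, t_s):
    """Empirical Mw from the relation reported for station CISI:

        Mw = 0.78 log(A) + 0.83 log(Delta) + 0.69 log(t) + 6.46

    with A in metres, Delta (epicentral distance) in km and t (duration
    of high-frequency radiation) in seconds.  Base-10 logs assumed.
    """
    return (0.78 * math.log10(A_m)
            + 0.83 * math.log10(delta_km)
            + 0.69 * math.log10(t_s)
            + 6.46)

# Hypothetical record: 1e-4 m displacement amplitude at 100 km, t = 8 s.
mw = magnitude_from_record(1e-4, 100.0, 8.0)
```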
The 2011 magnitude 9.0 Tohoku-Oki earthquake: mosaicking the megathrust from seconds to centuries.
Simons, Mark; Minson, Sarah E; Sladen, Anthony; Ortega, Francisco; Jiang, Junle; Owen, Susan E; Meng, Lingsen; Ampuero, Jean-Paul; Wei, Shengji; Chu, Risheng; Helmberger, Donald V; Kanamori, Hiroo; Hetland, Eric; Moore, Angelyn W; Webb, Frank H
2011-06-17
Geophysical observations from the 2011 moment magnitude (M(w)) 9.0 Tohoku-Oki, Japan earthquake allow exploration of a rare large event along a subduction megathrust. Models for this event indicate that the distribution of coseismic fault slip exceeded 50 meters in places. Sources of high-frequency seismic waves delineate the edges of the deepest portions of coseismic slip and do not simply correlate with the locations of peak slip. Relative to the M(w) 8.8 2010 Maule, Chile earthquake, the Tohoku-Oki earthquake was deficient in high-frequency seismic radiation--a difference that we attribute to its relatively shallow depth. Estimates of total fault slip and surface secular strain accumulation on millennial time scales suggest the need to consider the potential for a future large earthquake just south of this event. PMID:21596953
NASA Astrophysics Data System (ADS)
Gentili, Stefania; Di Giovambattista, Rita
2016-04-01
In this study, we propose an analysis of the earthquake clusters that occurred in Italy from 1980 to 2015. In particular, given a strong earthquake, we are interested in identifying statistical clues that forecast whether a subsequent strong earthquake will follow. We apply a pattern recognition approach to verify possible precursors of a following strong earthquake. Part of the analysis is based on observation of the cluster during the first hours/days after the first large event; the features adopted include, among others, the number of earthquakes, the radiated energy and the equivalent source area. The other part is based on characteristics of the first strong earthquake, such as its magnitude, depth, focal mechanism and the tectonic position of the source zone. The location of the cluster within the Italian territory is of particular interest. To characterize the precursors depending on cluster type, we used decision trees as classifiers on each precursor separately. The performance of the classification is tested by the leave-one-out method. The analysis is done using different time spans after the first strong earthquake, in order to simulate the increase of information available as time passes during a seismic cluster. Performance is assessed in terms of precision, recall and goodness of the individual classifiers, and the ROC graph is shown.
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
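The testing procedure described, comparing observed prediction success against success on simulated catalogs, can be sketched for the simplest null model: a homogeneous Poisson catalog with no clustering. As the abstract warns, this yields optimistic significance levels when real seismicity clusters; the rates, alarm windows and hit counts below are hypothetical.

```python
import random

def poisson_catalog(rate_per_day, n_days, rng):
    """Simulate event days under a homogeneous Poisson model (no clustering)."""
    return [d for d in range(n_days) if rng.random() < rate_per_day]

def hits(event_days, alarm_days):
    """Number of events falling inside alarm windows."""
    alarms = set(alarm_days)
    return sum(1 for d in event_days if d in alarms)

def significance(observed_hits, alarm_days, rate, n_days, n_sims=2000, seed=1):
    """Fraction of simulated catalogs matching or beating the observed
    hit count; an optimistic bound if real seismicity is clustered."""
    rng = random.Random(seed)
    better = sum(
        1 for _ in range(n_sims)
        if hits(poisson_catalog(rate, n_days, rng), alarm_days) >= observed_hits
    )
    return better / n_sims

# Hypothetical test: 30 alarm days in a 1000-day window, rate 0.01/day,
# and 5 real events all landing inside alarms.
alarms = list(range(100, 130))
p = significance(observed_hits=5, alarm_days=alarms, rate=0.01, n_days=1000)
```

A clustered null model (e.g. ETAS) would concentrate simulated events in bursts, making high hit counts more likely by chance and raising the p-value, which is exactly the effect the paper quantifies.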
NASA Astrophysics Data System (ADS)
Chen, K.-P.; Tsai, Y.-B.; Chang, W.-Y.
2013-10-01
The results of Wyss et al. (2000) indicate that future mainshocks can be expected along zones characterized by low b values. In this study we combine Benioff strain with Global Positioning System (GPS) data to estimate the probability of future Mw ≥ 6.0 earthquakes for a grid covering Taiwan. An approach similar to the maximum likelihood method was used to estimate the Gutenberg-Richter parameters a and b. The two parameters were then used to estimate the probability of future earthquakes of Mw ≥ 6.0 for each of the 391 grid cells (grid interval = 0.1°) covering Taiwan. The method shows a high probability of earthquakes in western Taiwan along a zone that extends from Taichung southward to Nantou, Chiayi, Tainan and Kaohsiung. In eastern Taiwan, a high-probability zone also extends from Ilan southward to Hualian and Taitung. These zones are characterized by high earthquake entropy, high maximum shear strain rates, and paths of low b values. A relation between entropy and maximum shear strain rate is also obtained; it indicates that the maximum shear strain rate is about 4.0 times the entropy. The results of this study should be of interest to city planners, especially those concerned with earthquake preparedness, and to earthquake insurers when drawing up basic premiums.
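The Gutenberg-Richter step can be illustrated with the standard Aki (1965) maximum-likelihood b-value estimate and a Poisson extrapolation to Mw ≥ 6.0. This is a generic sketch, not the authors' exact grid procedure, and the catalog below is synthetic (chosen so the maximum-likelihood estimate is exactly b = 1).

```python
import math

def aki_utsu_b(mags, m_c):
    """Maximum-likelihood b-value (Aki 1965) for magnitudes cut at m_c."""
    return math.log10(math.e) / (sum(mags) / len(mags) - m_c)

def prob_large_event(mags, m_c, years, m_target, horizon_years):
    """Poisson probability of at least one event with M >= m_target
    within the horizon, extrapolating Gutenberg-Richter from the
    observed rate of M >= m_c events."""
    b = aki_utsu_b(mags, m_c)
    rate_mc = len(mags) / years                         # annual rate, M >= m_c
    rate_t = rate_mc * 10 ** (-b * (m_target - m_c))    # G-R extrapolation
    return 1.0 - math.exp(-rate_t * horizon_years), b

# Degenerate synthetic catalog: 100 events of M >= 4.0 in 50 years,
# with mean magnitude 4.0 + log10(e) so that b = 1 exactly.
mags = [4.0 + 0.4342944819] * 100
p_30yr, b = prob_large_event(mags, m_c=4.0, years=50,
                             m_target=6.0, horizon_years=30)
```

With b = 1 and 2 events/yr above M 4.0, the extrapolated rate of Mw ≥ 6.0 events is 0.02/yr, giving a 30-year probability of 1 - exp(-0.6) ≈ 0.45.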
Upper-plate controls on co-seismic slip in the 2011 magnitude 9.0 Tohoku-oki earthquake
NASA Astrophysics Data System (ADS)
Bassett, Dan; Sandwell, David T.; Fialko, Yuri; Watts, Anthony B.
2016-03-01
The March 2011 Tohoku-oki earthquake was only the second giant (moment magnitude Mw ≥ 9.0) earthquake to occur in the last 50 years and is the most recent to be recorded using modern geophysical techniques. Available data place high-resolution constraints on the kinematics of earthquake rupture, which have challenged prior knowledge about how much a fault can slip in a single earthquake and the seismic potential of a partially coupled megathrust interface. But it is not clear what physical or structural characteristics controlled either the rupture extent or the amplitude of slip in this earthquake. Here we use residual topography and gravity anomalies to constrain the geological structure of the overthrusting (upper) plate offshore northeast Japan. These data reveal an abrupt southwest-northeast-striking boundary in upper-plate structure, across which gravity modelling indicates a south-to-north increase in the density of rocks overlying the megathrust of 150-200 kilograms per cubic metre. We suggest that this boundary represents the offshore continuation of the Median Tectonic Line, which onshore juxtaposes geological terranes composed of granite batholiths (in the north) and accretionary complexes (in the south). The megathrust north of the Median Tectonic Line is interseismically locked, has a history of large earthquakes (18 with Mw > 7 since 1896) and produced peak slip exceeding 40 metres in the Tohoku-oki earthquake. In contrast, the megathrust south of this boundary has higher rates of interseismic creep, has not generated an earthquake with MJ > 7 (local magnitude estimated by the Japan Meteorological Agency) since 1923, and experienced relatively minor (if any) co-seismic slip in 2011. We propose that the structure and frictional properties of the overthrusting plate control megathrust coupling and seismogenic behaviour in northeast Japan.
NASA Astrophysics Data System (ADS)
Tzanis, A.; Vallianatos, F.
2012-04-01
the G-R law predicts, but also to the interevent time and distance by means of well-defined power laws. We also demonstrate that interevent time and distance are not independent of each other but are interrelated by means of well-defined power laws. We argue that these relationships are universal and valid for both local and regional tectonic grains and seismicity patterns. Eventually, we argue that the four-dimensional hypercube formed by the joint distribution of earthquake frequency, magnitude, interevent time and interevent distance comprises a generalized distribution of the G-R type which epitomizes the temporal and spatial interdependence of earthquake activity, consistent with expectation for a stationary or evolutionary critical system. Finally, we attempt to discuss the emerging generalized frequency distribution in terms of non-extensive statistical physics. Acknowledgments. This work was partly supported by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project "Integrated understanding of Seismicity, using innovative methodologies of Fracture Mechanics along with Earthquake and Non-Extensive Statistical Physics - Application to the geodynamic system of the Hellenic Arc - SEISMO FEAR HELLARC".
Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)
NASA Astrophysics Data System (ADS)
Can, Ceren; Ergun, Gul; Gokceoglu, Candan
2014-09-01
Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, from which earthquake parameters such as the peak ground acceleration expected in the focused area can be determined. In an earthquake-prone area, identification of seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss along with an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), an area of importance because of its geographic location: Bilecik is in close proximity to the North Anatolian Fault Zone and is situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes that occurred within a 100 km radius centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
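The forecasting machinery of a Poisson hidden Markov model rests on the forward algorithm: filter the state probabilities from the observed annual counts, then propagate one step ahead to get the expected frequency for the next year. A minimal two-state sketch; the transition matrix, state rates and counts below are invented, and parameter estimation (e.g. via EM) is omitted.

```python
import math

def poisson_pmf(k, lam):
    """P(count = k) under a Poisson distribution with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def forward_filter(counts, trans, rates, init):
    """Forward algorithm for a Poisson HMM.

    trans[i][j]: P(state j at t+1 | state i at t); rates[i]: Poisson
    rate of annual counts in state i.  Returns filtered state probs."""
    n = len(rates)
    alpha = [p * poisson_pmf(counts[0], r) for p, r in zip(init, rates)]
    s = sum(alpha); alpha = [a / s for a in alpha]
    for k in counts[1:]:
        pred = [sum(alpha[i] * trans[i][j] for i in range(n)) for j in range(n)]
        alpha = [pred[j] * poisson_pmf(k, rates[j]) for j in range(n)]
        s = sum(alpha); alpha = [a / s for a in alpha]
    return alpha

def forecast_rate(alpha, trans, rates):
    """Expected annual event frequency for the next year."""
    n = len(rates)
    pred = [sum(alpha[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return sum(p * r for p, r in zip(pred, rates))

# Hypothetical 2-state model: quiet (2 events/yr) vs active (8 events/yr).
trans = [[0.9, 0.1], [0.2, 0.8]]
rates = [2.0, 8.0]
counts = [1, 2, 3, 9, 7]          # made-up annual M >= 4 counts
alpha = forward_filter(counts, trans, rates, init=[0.5, 0.5])
lam_next = forecast_rate(alpha, trans, rates)
```

After the recent high counts the filter places most probability on the active state, so the one-step forecast lies near, but below, the active-state rate.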
User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes
Klein, Fred W.
2002-01-01
Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user's guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). The current report documents the expanded Y2000 version and supersedes the earlier documents. It serves as a detailed user's guide to the current version running on unix and VAX-alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of
An updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized M w magnitudes
NASA Astrophysics Data System (ADS)
Chang, Wen-Yen; Chen, Kuei-Pao; Tsai, Yi-Ben
2016-03-01
The main goal of this study was to develop an updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized Mw magnitudes that are compatible with the Harvard Mw. We hope that such a catalog of earthquakes will provide a fundamental database for definitive studies of the distribution of earthquakes in Taiwan as a function of space, time, and magnitude, as well as for realistic assessments of seismic hazards in Taiwan. In this study, for completeness and consistency, we start with a previously published catalog of earthquakes from 1900 to 2006 with homogenized Mw magnitudes. We update the earthquake data through 2014 and supplement the database with 188 additional events for the time period of 1900-1935 that were found in the literature. The additional data lowered the magnitude threshold of the catalog from Mw 5.5 to Mw 5.0. The broadband-based Harvard Mw, United States Geological Survey (USGS) M, and Broadband Array in Taiwan for Seismology (BATS) Mw are preferred in this study. Accordingly, we use empirical relationships with the Harvard Mw to transform our old converted Mw values to new converted Mw values and to transform the original BATS Mw values to converted BATS Mw values. For individual events, the adopted Mw is chosen in the following order: Harvard Mw > USGS M > converted BATS Mw > new converted Mw. Finally, we discover that use of the adopted Mw removes a data gap at magnitudes greater than or equal to 5.0 in the original catalog during 1985-1991. The new catalog is now complete for Mw ≥ 5.0 and significantly improves the quality of data for definitive study of seismicity patterns, as well as for realistic assessment of seismic hazards in Taiwan.
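The stated priority order for the adopted Mw can be sketched as a simple selection function (a minimal illustration only; the function and parameter names are hypothetical, not taken from the catalog itself):

```python
def adopted_mw(harvard_mw=None, usgs_m=None,
               bats_mw_converted=None, new_converted_mw=None):
    """Return the first available magnitude in the paper's stated priority:
    Harvard Mw > USGS M > converted BATS Mw > new converted Mw."""
    for mag in (harvard_mw, usgs_m, bats_mw_converted, new_converted_mw):
        if mag is not None:
            return mag
    return None  # no magnitude available for this event
```

For example, an event with only a USGS M of 6.1 and a new converted Mw of 5.9 would be assigned 6.1, since USGS M outranks the converted values.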
Applications of the gambling score in evaluating earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2010-05-01
This study presents a new method, namely the gambling score, for evaluating the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines according to a fair rule how many reputation points the forecaster gains if he succeeds, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
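A minimal sketch of the fair-payoff bookkeeping described above, assuming the reference model assigns probability p0 to each predicted outcome (the variable names and the per-bet representation are mine, not from the paper):

```python
def gambling_score(bets):
    """Accumulate reputation-point gains and losses for a list of bets.

    Each bet is (r, p0, success): r reputation points staked, p0 the
    reference-model ("house") probability of the predicted outcome, and
    success whether the prediction came true.  The payoff r * (1 - p0) / p0
    is 'fair' in that its expectation under the reference model is zero:
    p0 * r * (1 - p0) / p0 - (1 - p0) * r = 0.
    """
    total = 0.0
    for r, p0, success in bets:
        if success:
            total += r * (1.0 - p0) / p0  # gain grows as the house odds shrink
        else:
            total -= r                    # staked points are forfeited
    return total
```

A successful prediction of an event the reference model considered unlikely (small p0) earns far more than one the model already expected, which is how the score "compensates for the risk that the forecaster has taken."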
Gambling score in earthquake prediction analysis
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.
2011-03-01
The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the parametrization of the GS and use the M8 prediction algorithm to illustrate the difficulties of the new approach in the analysis of prediction significance. We show that the level of significance depends strongly (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events, because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.
Time-predictable model applicability for earthquake occurrence in northeast India and vicinity
NASA Astrophysics Data System (ADS)
Panthi, A.; Shanker, D.; Singh, H. N.; Kumar, A.; Paudyal, H.
2011-03-01
Northeast India and its vicinity is one of the most seismically active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model, using earthquake data as reported in the catalogues of the National Geophysical Data Center and the National Earthquake Information Center of the United States Geological Survey, and in the book prepared by Gupta et al. (1986), for the period 1906-2008. Events having a surface-wave magnitude of Ms ≥ 5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified from the observed clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the magnitude of the preceding mainshock (Mp) and not on that of the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form LogT = cMp + a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The values of the parameters "c" and "a" are estimated to be 0.21 and 0.35 in northeast India and its adjoining regions. The lower-than-average value of "c" implies that earthquake occurrence in this region differs from that at plate boundaries. The results derived can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
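As a worked illustration of the fitted relation, using the paper's estimates c = 0.21 and a = 0.35 (and assuming T is measured in years, which the abstract does not state):

```python
def repeat_time(mp, c=0.21, a=0.35):
    """Expected repeat time T from log10(T) = c * Mp + a
    (time-predictable model fit for northeast India)."""
    return 10.0 ** (c * mp + a)

# A preceding mainshock of Mp 7.0 gives T = 10**(0.21*7.0 + 0.35) ≈ 66,
# while Mp 6.0 gives T ≈ 41: larger preceding events imply longer waits.
```

Note that under the time-predictable model only the preceding magnitude Mp enters the relation; the magnitude of the following event Mf plays no role.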
NASA Astrophysics Data System (ADS)
Moussa, Hesham Hussein Mohamed
2008-10-01
Teleseismic broadband P-wave seismograms from the May 1990 southern Sudan and the December 2005 Lake Tanganyika earthquakes, in the western branch of the East African Rift System, recorded at different azimuths, have been investigated on the basis of magnitude spectra. The two earthquakes are the largest shocks in the East African Rift System and its extension in southern Sudan. Focal mechanism solutions along with geological evidence suggest that the first event represents a complex style of deformation at the intersection of the northern branch of the western branch of the East African Rift and the Aswa Shear Zone, while the second one represents the current tensional stress on the East African Rift. The maximum average spectral magnitude for the first event is determined to be 6.79 at 4 s period, compared to 6.33 at 4 s period for the second event. The other source parameters for the two earthquakes were also estimated. The first event had a seismic moment over four times that of the second one. The two events radiated from patches of faults having radii of 13.05 and 7.85 km, respectively. The average displacement and stress drop are estimated to be 0.56 m and 1.65 MPa for the first event and 0.43 m and 2.20 MPa for the second one. The source parameters that describe the inhomogeneity of the fault are also determined from the magnitude spectra. These additional parameters are complexity, asperity radius, displacement across the asperity and ambient stress drop. Both events produced moderate rupture complexity. Compared to the second event, the first event is characterized by relatively higher complexity, a lower average stress drop and a higher ambient stress. A reasonable explanation for the variations in these parameters may be variation in the strength of the seismogenic fault, which provides the relations between the different source parameters. The values of the stress drops and the ambient stresses estimated for both events indicate that these earthquakes are of interplate type
Four Examples of Short-Term and Imminent Prediction of Earthquakes
NASA Astrophysics Data System (ADS)
Zeng, Zuoxun; Liu, Genshen; Wu, Dabin; Sibgatulin, Victor
2014-05-01
We show here four examples of short-term and imminent prediction of earthquakes in China last year: the Nima earthquake (Ms 5.2), the Minxian earthquake (Ms 6.6), the Nantou earthquake (Ms 6.7) and the Dujiangyan earthquake (Ms 4.1). Imminent prediction of the Nima earthquake (Ms 5.2): Based on a comprehensive analysis of the prediction of Victor Sibgatulin using natural electromagnetic pulse anomalies, the prediction of Song Song and Song Kefu using observation of a precursory halo, and an observation of the locations of degasification of the earth in Naqu, Tibet by Zeng Zuoxun himself, the first author predicted an earthquake of around Ms 6 within 10 days in the area of the degasification point (31.5N, 89.0E) at 0:54 on May 8th, 2013. He supplied another degasification point (31N, 86E) for the epicenter prediction at 8:34 of the same day. At 18:54:30 on May 15th, 2013, an earthquake of Ms 5.2 occurred in Nima County, Naqu, China. Imminent prediction of the Minxian earthquake (Ms 6.6): At 7:45 on July 22nd, 2013, an earthquake of magnitude Ms 6.6 occurred at the border between Minxian and Zhangxian of Dingxi City (34.5N, 104.2E), Gansu province. We review the imminent prediction process and basis for this earthquake using the fingerprint method. Nine- or fifteen-channel anomalous component-time curves can be output from the SW monitor for earthquake precursors. These components include geomagnetism, geoelectricity, crust stresses, resonance, and crust inclination. When we compress the time axis, the output curves become distinct geometric images. The precursor images differ for earthquakes in different regions; alike or similar images correspond to earthquakes in a certain region. According to seven years of observation of the precursor images and their corresponding earthquakes, we usually obtain the fingerprint 6 days before the corresponding earthquakes. The magnitude prediction needs the comparison between the amplitudes of the fingerprints from the same
Hilbert-Wolf, Hannah Louise; Roberts, Eric M
2015-01-01
In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic 'megablock complex' that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m-wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions. PMID:26042601
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed-recurrence and fixed-slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
76 FR 69761 - National Earthquake Prediction Evaluation Council (NEPEC)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-09
U.S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC). AGENCY: U.S. Geological Survey. ACTION: Notice of Meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake... Government. The Council shall advise the Director of the U.S. Geological Survey on proposed...
Estimating the magnitude of prediction uncertainties for the APLE model
Technology Transfer Automated Retrieval System (TEKTRAN)
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...
Dynamic triggering of low magnitude earthquakes in the Middle American Subduction Zone
NASA Astrophysics Data System (ADS)
Escudero, C. R.; Velasco, A. A.
2010-12-01
We analyze global and Middle American Subduction Zone (MASZ) seismicity from 1998 to 2008 to quantify the effects of transient stresses at teleseismic distances. We use the Bulletin of the International Seismological Centre Catalog (ISCCD) published by the Incorporated Research Institutions for Seismology (IRIS). To identify MASZ seismicity changes due to distant, large (Mw > 7) earthquakes, we first identify local earthquakes that occurred before and after the mainshocks. We then group the local earthquakes within a cluster radius of 75 to 200 km. We obtain statistics based on characteristics of both the mainshocks and the local earthquake clusters, such as local cluster-mainshock azimuth, mainshock focal mechanism, and the location of local earthquake clusters within the MASZ. Due to lateral variations of the dip of the subducted oceanic plate, we divide the Mexican subduction zone into four segments. We then apply the Paired Samples Statistical Test (PSST) to the sorted data to identify an increase, a decrease, or neither in the local seismicity associated with distant large earthquakes. We identify dynamic triggering in all MASZ segments produced by large earthquakes arriving from specific azimuths, as well as a decrease in some cases. We find no dependence of seismicity changes on the mainshock focal mechanism.
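The abstract does not give the PSST formula; a common paired-samples statistic for matched before/after observations is the paired t statistic, sketched below as a pure illustration (the authors' exact test may differ):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic for matched before/after observations,
    e.g. event counts per cluster before and after a distant mainshock."""
    assert len(before) == len(after) and len(before) > 1
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical counts in four clusters before/after a mainshock:
# paired_t([3, 5, 2, 6], [4, 6, 4, 7]) == 5.0
```

A large positive t suggests a systematic increase after the mainshocks (dynamic triggering); a large negative t suggests a decrease, the other outcome the study reports for some segments.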
NASA Astrophysics Data System (ADS)
Batac, Rene C.
2016-02-01
The aftershock records of the magnitude 7.1 earthquake that hit the island of Bohol in the central Philippines on 15 October 2013 are investigated in light of previous results for the Philippines using historical earthquakes. Statistics of interevent distances and interevent times between successive aftershocks recorded for the whole month of October 2013 show marked differences from those of historical earthquakes from two Philippine catalogues of varying periods and completeness levels. In particular, the distributions closely follow only the regimes of the historical distributions that were previously attributed to strong spatio-temporal correlations. The results therefore suggest that these correlated regimes, which emerged naturally from the analyses, are strongly dominated by the clustering of aftershock events.
The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China
NASA Astrophysics Data System (ADS)
Chen, Q.; Wang, K.
2009-12-01
Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in this densely populated region of the country and with the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue an imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save lives in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less damage
Geotechnical effects of the 2015 magnitude 7.8 Gorkha, Nepal, earthquake and aftershocks
Moss, Robb E S; Thompson, Eric; Kieffer, D Scott; Tiwari, Binod; Hashash, Youssef M A; Acharya, Indra; Adhikari, Basanta; Asimaki, Domniki; Clahan, Kevin B.; Collins, Brian D.; Dahal, Sachindra; Jibson, Randall W.; Khadka, Diwakar; Macdonald, Amy; Madugo, Chris L M; Mason, H Benjamin; Pehlivan, Menzer; Rayamajhi, Deepak; Uprety, Sital
2015-01-01
This article summarizes the geotechnical effects of the 25 April 2015 M 7.8 Gorkha, Nepal, earthquake and aftershocks, as documented by a reconnaissance team that undertook a broad engineering and scientific assessment of the damage and collected perishable data for future analysis. Brief descriptions are provided of ground shaking, surface fault rupture, landsliding, soil failure, and infrastructure performance. The goal of this reconnaissance effort, led by Geotechnical Extreme Events Reconnaissance, is to learn from earthquakes and mitigate hazards in future earthquakes.
NASA Astrophysics Data System (ADS)
Davis, C. A.; Keilis-Borok, V. I.; Kossobokov, V. G.; Soloviev, A.
2012-12-01
There was a missed opportunity for implementing important disaster preparedness measures following an earthquake prediction that was announced as an alarm in mid-2001. This intermediate-term middle-range prediction initiated a chain of alarms that successfully detected the time, region, and magnitude range of the magnitude 9.0 March 11, 2011 Great East Japan Earthquake. The prediction chains were made using an algorithm called M8; this is the latest of many predictions tested worldwide for more than 25 years, the results of which show at least a 70% success rate. The earthquake detection could have been used to implement measures and improve earthquake preparedness in advance; unfortunately this was not done, in part due to the prediction's limited distribution and the lack of application of existing methods for using intermediate-term predictions to make decisions for taking action. The resulting earthquake and induced tsunami caused tremendous devastation in northeast Japan. Methods that were known in advance of the prediction, and further advanced during the prediction timeframe, are presented in a scenario describing how the 2001 prediction might have been used to reduce significant damage, including damage to the Fukushima nuclear power plant, and to show that prudent, cost-effective actions can be taken if the prediction certainty is known, even if it is not high. The purpose of this presentation is to show how prediction information can be strategically used to enhance disaster preparedness and reduce future impacts from the world's largest earthquakes.
Earthquake prediction: the interaction of public policy and science.
Jones, L M
1996-01-01
Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656
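The "more accurate than a random distribution" criterion can be illustrated with a simple Bayesian probability-gain calculation (an illustrative sketch only; the rates used below are invented, not from the paper):

```python
def probability_gain(p_eq, p_precursor_given_eq, p_precursor):
    """P(eq | precursor) / P(eq) via Bayes' rule: the factor by which
    observing the precursor sharpens the hazard estimate over the
    background (random-distribution) rate."""
    p_eq_given_precursor = p_precursor_given_eq * p_eq / p_precursor
    return p_eq_given_precursor / p_eq

# Suppose a background monthly earthquake probability of 1%, a precursor
# observed before 50% of earthquakes but present in 5% of all months:
# the gain is 0.5 / 0.05 = 10, i.e. a useful predictor in Jones's sense.
```

A phenomenon that is merely causally related but whose occurrence rate matches the background (gain near 1) would qualify as a causal phenomenon but not as a predictor under this classification.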
Earthquake ground-motion prediction equations for eastern North America
Atkinson, G.M.; Boore, D.M.
2006-01-01
New earthquake ground-motion relations for hard-rock and soil sites in eastern North America (ENA), including estimates of their aleatory uncertainty (variability), have been developed based on a stochastic finite-fault model. The model incorporates new information obtained from ENA seismographic data gathered over the past 10 years, including three-component broadband data that provide new information on ENA source and path effects. Our new prediction equations are similar to the previous ground-motion prediction equations of Atkinson and Boore (1995), which were based on a stochastic point-source model. The main difference is that high-frequency amplitudes (f ≥ 5 Hz) are less than previously predicted (by about a factor of 1.6 within 100 km), because of a slightly lower average stress parameter (140 bars versus 180 bars) and a steeper near-source attenuation. At frequencies less than 5 Hz, the predicted ground motions from the new equations are generally within 25% of those predicted by Atkinson and Boore (1995). The prediction equations agree well with available ENA ground-motion data, as evidenced by near-zero average residuals (within a factor of 1.2) for all frequencies and the lack of any significant residual trends with distance. However, there is a tendency toward positive residuals for moderate events at high frequencies in the distance range from 30 to 100 km (by as much as a factor of 2). This indicates epistemic uncertainty in the prediction model. The positive residuals for moderate events at < 100 km could be eliminated by an increased stress parameter, at the cost of producing negative residuals in other magnitude-distance ranges; adjustment factors to the equations are provided that may be used to model this effect.
Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i
Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.
2013-01-01
Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible origin to generate rare moderate-magnitude VTs at Kīlauea by reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.
Rizza, M.; Ritz, J.-F.; Braucher, R.; Vassallo, R.; Prentice, C.; Mahan, S.; McGill, S.; Chauvet, A.; Marco, S.; Todbileg, M.; Demberel, S.; Bourles, D.
2011-01-01
We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to East these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans, particularly well preserved in the arid environment of the Gobi region, allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr-1 along the WIB and EIB segments and ~0.5 mm yr-1 along the NIB segment. These variations are consistent with the restraining-bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78-7.95. Combining our slip rate estimates and the slip distribution per event, we also determine a mean recurrence interval of ~2500-5200 yr for past
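The moment-magnitude re-estimate above can be reproduced in outline with the standard relation Mw = (2/3) log10(M0) − 6.07 for M0 in N·m; the rigidity, fault width, and mean slip used below are illustrative assumptions, not values from the paper:

```python
import math

def moment_magnitude(rigidity_pa, length_m, width_m, mean_slip_m):
    """Mw from seismic moment M0 = mu * A * D (Hanks-Kanamori scale)."""
    m0 = rigidity_pa * length_m * width_m * mean_slip_m  # seismic moment, N*m
    return (2.0 / 3.0) * math.log10(m0) - 6.07

# 260-km rupture length, with an assumed 20-km seismogenic width,
# 3.5-m mean slip (roughly between the 5.2 m and 2.0 m offsets above),
# and rigidity mu = 30 GPa: Mw ≈ 7.75, near the paper's 7.78-7.95 range.
mw = moment_magnitude(3.0e10, 260e3, 20e3, 3.5)
```

Because Mw depends only logarithmically on M0, even a factor-of-two uncertainty in slip or width moves the magnitude by only about 0.2 units, which is why offset measurements of this kind can usefully constrain the 1957 magnitude.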
Anderson, Dale N; Bonner, Jessie L; Stroujkova, Anastasia; Shumway, Robert
2009-01-01
Our objective is to improve seismic event screening using the properties of surface waves. We are accomplishing this through (1) the development of a Love-wave magnitude formula that is complementary to the Russell (2006) formula for Rayleigh waves and (2) quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. We have applied the Ms(VMAX) analysis (Bonner et al., 2006) using both Love and Rayleigh waves to events in the Middle East and the Korean Peninsula. For the Middle East dataset, consisting of approximately 100 events, the Love-wave Ms(VMAX) is greater than the Rayleigh-wave Ms(VMAX) estimated for individual stations for the majority of the events and azimuths, with the exception of the measurements for the smaller events from European stations to the northeast. It is unclear whether these smaller events suffer from magnitude bias for the Love waves or whether the paths, which include the Caspian and Mediterranean, have variable attenuation for Love and Rayleigh waves. For the Korean Peninsula, we have estimated Rayleigh- and Love-wave magnitudes for 31 earthquakes and two nuclear explosions, including the 25 May 2009 event. For 25 of the earthquakes, the network-averaged Love-wave magnitude is larger than the Rayleigh-wave estimate. For the 2009 nuclear explosion, the Love-wave Ms(VMAX) was 3.1 while the Rayleigh-wave magnitude was 3.6. We are also utilizing the potential of observed variances in Ms estimates that differ significantly between earthquake and explosion populations. We have considered two possible methods for incorporating unequal variances into the discrimination problem and compared the performance of various approaches on a population of 73 western United States earthquakes and 131 Nevada Test Site explosions. The approach proposes replacing the Ms component by Ms + a*σ, where σ denotes the interstation standard deviation obtained from the
The marine-geological fingerprint of the 2011 Magnitude 9 Tohoku-oki earthquake
NASA Astrophysics Data System (ADS)
Strasser, M.; Ikehara, K.; Usami, K.; Kanamatsu, T.; McHugh, C. M.
2015-12-01
The 2011 Tohoku-oki earthquake was the first great subduction-zone earthquake for which the entire rupture sequence was recorded by offshore geophysical, seismological and geodetic instruments, and for which sediment re-suspension and re-deposition were directly documented across the entire margin. Furthermore, the resulting tsunami and the subsequent tragic accident at the Fukushima nuclear power station introduced short-lived radionuclides that can be used as tracers in the natural offshore sedimentary systems. Here we summarize current knowledge of the 2011 event beds in the offshore environment and integrate data from offshore instruments with sedimentological, geochemical and physical-property data from core samples to report various types of event deposits resulting from earthquake-triggered submarine landslides, downslope sediment transport by turbidity currents, surficial sediment remobilization from the agitation and resuspension of unconsolidated surface sediments by earthquake ground motion, and tsunami-induced sediment transport from shallow waters to the deep sea. The rapidly growing data set from offshore Tohoku further allows discussion of (i) what we can learn from this well-documented event for submarine paleoseismology in general and (ii) the potential of using the geological record of the Japan Trench to reconstruct a long-term history of great subduction-zone earthquakes.
NASA Astrophysics Data System (ADS)
Hecker, S.; Schwartz, D. P.
2015-12-01
Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 cal yr BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely prehistoric, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near-field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic-hazard estimates of the maximum-magnitude background earthquake (an earthquake not associated with a known fault) for normal faults in the Intermountain West.
NASA Astrophysics Data System (ADS)
Lomax, Anthony; Michelini, Alberto
2009-01-01
We present a duration-amplitude procedure for rapid determination of a moment magnitude, Mwpd, for large earthquakes using P-wave recordings at teleseismic distances. Mwpd can be obtained within 20 min or less after the event origin time, as the required data are currently available in near real time. The procedure determines apparent source durations, T0, from high-frequency P-wave records, and estimates moments through integration of broad-band displacement waveforms over the interval tP to tP + T0, where tP is the P-arrival time. We apply the duration-amplitude methodology to 79 recent, large earthquakes (global centroid-moment-tensor magnitude Mw(CMT) 6.6-9.3) with diverse source types. The results show that a scaling of the moment estimates for interplate thrust and possibly tsunami earthquakes is necessary to best match Mw(CMT). With this scaling, Mwpd matches Mw(CMT) typically within +/-0.2 magnitude units, with a standard deviation of σ = 0.11, equaling or outperforming other approaches to rapid magnitude determination. Furthermore, Mwpd does not exhibit saturation; that is, for the largest events, Mwpd does not systematically underestimate Mw(CMT). The obtained durations and duration-amplitude moments allow rapid estimation of an energy-to-moment parameter Θ* used for identification of tsunami earthquakes. Our results show that Θ* <= -5.7 is an appropriate cut-off for this identification, but also show that neither Θ* nor Mw is a good indicator of tsunamigenic events in general. For these events, we find that a reliable indicator is simply that the duration T0 is greater than about 50 s. The explicit use of the source duration for integration of displacement seismograms, the moment scaling and other characteristics of the duration-amplitude methodology make it an extension of the widely used Mwp rapid magnitude procedure. The need for a moment scaling for interplate thrust and possibly tsunami earthquakes may have important implications for the source
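Two of the quantities above follow standard relations that can be sketched directly: the moment magnitude from a seismic moment, Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m, and the two tsunami-earthquake indicators quoted in the abstract (Θ* <= -5.7 and T0 above about 50 s). This is an illustrative sketch, not the authors' Mwpd code:

```python
import math

def mw_from_moment(m0_newton_meters):
    """Standard moment-magnitude relation, M0 in N·m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def tsunami_earthquake_flags(theta_star, duration_s):
    """The two indicators discussed above: the energy-to-moment
    cut-off theta* <= -5.7 and a source duration T0 above ~50 s."""
    return {"theta_flag": theta_star <= -5.7,
            "duration_flag": duration_s > 50.0}

print(round(mw_from_moment(4e21), 1))          # 8.3
print(tsunami_earthquake_flags(-5.9, 120.0))   # both flags True
```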
Database of potential sources for earthquakes larger than magnitude 6 in Northern California
Working Group on Northern California Earthquake Potential
1996-01-01
The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate-tectonic and VLBI models with local geologic slip rates, geodetic strain rates, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflicts in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.
NASA Astrophysics Data System (ADS)
Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.
2012-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their models' performance, and started the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently being evaluated under the official CSEP suite of tests for forecast performance. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. In the 1-day testing class, all models passed all of CSEP's evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models: all models performed well in magnitude forecasting, but the observed spatial distribution is rarely consistent with most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region, and the testing center is improving the evaluation system for the 1-day class so that forecasting and testing are completed within one day. The special issue of 1st part titled Earthquake Forecast
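As a rough illustration of the kind of consistency testing CSEP performs, a simplified two-sided Poisson number test (N-test) can be sketched: the observed earthquake count is compared against the Poisson distribution implied by a forecast's total expected rate. The α level below follows the commonly published form of the N-test; this is not the testing center's actual evaluation code:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def n_test(observed, forecast_rate, alpha=0.05):
    """Simplified CSEP-style number test: the forecast passes if the
    observed count falls in neither tail of the implied Poisson
    distribution at the alpha level."""
    delta1 = 1.0 - poisson_cdf(observed - 1, forecast_rate)  # P(X >= observed)
    delta2 = poisson_cdf(observed, forecast_rate)            # P(X <= observed)
    return delta1 > alpha / 2 and delta2 > alpha / 2

print(n_test(observed=9, forecast_rate=10.0))   # consistent count
print(n_test(observed=30, forecast_rate=10.0))  # far too many events
```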
Gomberg, Joan; Sherrod, Brian; Weaver, Craig; Frankel, Art
2010-01-01
The U.S. Geological Survey and cooperating scientists have recently assessed the effects of a magnitude 7.1 earthquake on the Tacoma Fault Zone in Pierce County, Washington. A quake of comparable magnitude struck the southern Puget Sound region about 1,100 years ago, and similar earthquakes are almost certain to occur in the future. The region is now home to hundreds of thousands of people, who would be at risk from the shaking, liquefaction, landsliding, and tsunamis caused by such an earthquake. The modeled effects of this scenario earthquake will help emergency planners and residents of the region prepare for future quakes.
NASA Astrophysics Data System (ADS)
Yaghmaei-Sabegh, Saman
2015-10-01
This paper presents the development of new and simple empirical models for predicting the frequency content of ground-motion records, addressing the limitations on the usable magnitude range of previous studies. Three period values are used in the analysis to describe the frequency content of earthquake ground motions: the average spectral period (Tavg), the mean period (Tm), and the smoothed spectral predominant period (T0). The proposed models predict these scalar indicators as functions of magnitude, closest site-to-source distance, and local site condition. Three site classes (rock, stiff soil, and soft soil) have been considered in the analysis. The results of the proposed relationships have been compared with those of other published models. It is found that the resulting regression equations can be used to predict scalar frequency-content estimators over a wide range of magnitudes, including magnitudes below 5.5.
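One of the three indicators, the mean period Tm, is conventionally computed from the Fourier amplitude spectrum as Tm = Σ(Ci²/fi) / ΣCi² over roughly 0.25-20 Hz (the definition usually attributed to Rathje et al.). A minimal sketch, with a hypothetical three-line spectrum:

```python
def mean_period(spectrum, fmin=0.25, fmax=20.0):
    """Tm = sum(C^2/f) / sum(C^2) over (frequency_hz, amplitude)
    pairs within the fmin-fmax band."""
    band = [(f, c) for f, c in spectrum if fmin <= f <= fmax]
    num = sum(c * c / f for f, c in band)
    den = sum(c * c for f, c in band)
    return num / den

# Hypothetical spectrum dominated by a 2 Hz peak -> Tm near 0.5 s
spec = [(1.0, 0.2), (2.0, 1.0), (4.0, 0.3)]
print(round(mean_period(spec), 2))  # 0.5
```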
Prentice, Carol S.; Rizza, M.; Ritz, J.F.; Baucher, R.; Vassallo, R.; Mahan, S.
2011-01-01
We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to east these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans, particularly well preserved in the arid environment of the Gobi region, allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm/yr along the WIB and EIB segments and ~0.5 mm/yr along the NIB segment. These variations are consistent with the restraining-bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78-7.95. Combining our slip-rate estimates and the slip distribution per event, we also determined a mean recurrence interval of ~2500
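The magnitude re-estimate can be illustrated with the standard relations M0 = μ·A·D and Mw = (2/3)(log10 M0 - 9.1). The rupture width, mean slip, and rigidity below are illustrative assumptions, not values from the abstract; only the 260 km rupture length comes from the text:

```python
import math

def moment_magnitude(length_km, width_km, mean_slip_m, rigidity_pa=3.0e10):
    """Mw from the seismic moment M0 = mu * A * D (M0 in N·m)."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = rigidity_pa * area_m2 * mean_slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# 260 km rupture length from the abstract; 20 km seismogenic width
# and 3.5 m mean slip are assumed for illustration:
print(round(moment_magnitude(260.0, 20.0, 3.5), 2))  # ~7.76
```

Values of this order are close to the Mw 7.78-7.95 range quoted above; modestly larger assumed width or mean slip moves the estimate into that range.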
Strong ground motion prediction for southwestern China from small earthquake records
NASA Astrophysics Data System (ADS)
Tao, Z. R.; Tao, X. X.; Cui, A. P.
2015-09-01
For regions lacking strong ground-motion records, a method is developed to predict strong ground motion from small-earthquake records of local broadband digital seismic networks. The Sichuan and Yunnan regions of southwestern China are selected as targets. Five regional source and crustal-medium parameters are inverted by a micro-genetic algorithm, and these parameters are then used to predict strong ground motion for moment magnitudes (Mw) 5.0, 6.0 and 7.0. Compared with observed strong ground-motion data, most of the predicted results pass well through the cluster of data points, except the Mw 7.0 case in the Sichuan region, which shows obviously slower attenuation. As a further application, the result is adopted in probabilistic seismic hazard assessment (PSHA) and in near-field strong ground-motion synthesis for the Wenchuan earthquake.