Science.gov

Sample records for earthquake magnitude prediction

  1. A probabilistic neural network for earthquake magnitude prediction.

    PubMed

    Adeli, Hojjat; Panakkat, Ashif

    2009-09-01

    A probabilistic neural network (PNN) is presented for predicting the magnitude of the largest earthquake in a pre-defined future time period in a seismic region, using eight mathematically computed parameters known as seismicity indicators. The indicators considered are the time elapsed during a particular number (n) of significant seismic events before the month in question, the slope of the Gutenberg-Richter inverse power law curve for the n events, the mean square deviation about the regression line based on the Gutenberg-Richter inverse power law for the n events, the average magnitude of the last n events, the difference between the observed maximum magnitude among the last n events and that expected through the Gutenberg-Richter relationship (known as the magnitude deficit), the rate of the square root of the seismic energy released during the n events, the mean time or period between characteristic events, and the coefficient of variation of the mean time. Prediction accuracies of the model are evaluated using three different statistical measures: the probability of detection, the false alarm ratio, and the true skill score or R score. The PNN model is trained and tested using data for the Southern California region. The model yields good prediction accuracies for earthquakes of magnitude between 4.5 and 6.0. The PNN model presented in this paper complements the recurrent neural network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0. PMID:19502005
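Two of the seismicity indicators named in the abstract, the Gutenberg-Richter slope (b-value) with its regression scatter and the magnitude deficit, can be sketched for a toy catalog. This is an illustrative reconstruction, not the authors' code: the function names, the least-squares formulation on cumulative counts, and the toy magnitudes are all assumptions.

```python
import math

def gutenberg_richter_fit(magnitudes):
    """Least-squares fit of the Gutenberg-Richter inverse power law
    log10 N(>=M) = a - b*M over the last n events. Returns (a, b, mse),
    where mse is the mean square deviation about the regression line."""
    mags = sorted(magnitudes)
    n = len(mags)
    xs = mags
    ys = [math.log10(n - i) for i in range(n)]  # cumulative counts N(>=M)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = -sxy / sxx                      # G-R slope is negative, so b > 0
    a = mean_y + b * mean_x
    mse = sum((y - (a - b * x)) ** 2 for x, y in zip(xs, ys)) / n
    return a, b, mse

def magnitude_deficit(magnitudes, a, b):
    """Observed maximum magnitude minus the G-R expectation, taking the
    expected maximum as the magnitude where the fitted count N(>=M) = 1."""
    return max(magnitudes) - a / b

mags = [4.5, 4.6, 4.8, 5.0, 5.1, 5.3, 5.6, 6.0]  # toy event list
a, b, mse = gutenberg_richter_fit(mags)
deficit = magnitude_deficit(mags, a, b)
```

A maximum-likelihood estimate of b is the other common choice; the least-squares fit is used here only because the abstract explicitly mentions the regression line and its mean square deviation.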

  2. Are Earthquake Magnitudes Clustered?

    SciTech Connect

    Davidsen, Joern; Green, Adam

    2011-03-11

    The question of earthquake predictability is a long-standing and important challenge. Recent results [Phys. Rev. Lett. 98, 098501 (2007); ibid. 100, 038501 (2008)] have suggested that earthquake magnitudes are clustered, thus indicating that they are not independent, in contrast to what is typically assumed. Here, we present evidence that the observed magnitude correlations are to a large extent, if not entirely, an artifact due to the incompleteness of earthquake catalogs and the well-known modified Omori law. The latter leads to variations in the frequency-magnitude distribution if the distribution is constrained to those earthquakes that are close in space and time to the directly following event.

  3. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries.

    PubMed

    Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year, based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing, while the previous annual records are used for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year. PMID:26812351
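The binary prediction target described in this abstract (will next year's maximum magnitude exceed the region's median yearly maximum?) can be sketched as follows. The function name and the median convention are illustrative, not taken from the paper.

```python
def label_years(yearly_max):
    """Binary prediction target sketched from the abstract: for each
    year, does the region's maximum magnitude exceed the median of its
    yearly maxima? Names and the median convention are illustrative."""
    mags = sorted(yearly_max)
    n = len(mags)
    median = (mags[n // 2] if n % 2
              else 0.5 * (mags[n // 2 - 1] + mags[n // 2]))
    return [m > median for m in yearly_max]

# toy region with five yearly maxima; the median is 4.6
print(label_years([4.1, 5.0, 3.8, 4.6, 5.2]))  # → [False, True, False, False, True]
```

With 28 years of data per region, the paper's evaluation would hold out the last five labels for testing and train on the rest.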

  5. Toward Reconciling Magnitude Discrepancies Estimated from Paleoearthquake Data: A New Approach for Predicting Earthquake Magnitudes from Fault Segment Lengths

    NASA Astrophysics Data System (ADS)

    Carpenter, N. S.; Payne, S. J.; Schafer, A. L.

    2011-12-01

    We recognize a discrepancy in magnitudes estimated for several Basin and Range faults in the Intermountain Seismic Belt, U.S.A. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths, Lseg, where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements, D, along these same segments. For self-similarity, empirical relationships (e.g., Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e., M ~ Lseg should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data, and parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating Lseg with surface rupture length, SRL. Many large historical events yielded secondary and/or sympathetic faulting (e.g., the 1983 Borah Peak, Idaho, earthquake), which is included in the measurement of SRL used to derive the empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using Lseg as SRL leads to an underestimation of magnitude, and hence to the discrepancy between M ~ Lseg and M ~ D. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude, Mw, and length, where length is Lseg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more.
The preliminary Mw ~ Lseg results are strikingly consistent with Mw ~ D calculations using paleoearthquake data for

  6. Neural network models for earthquake magnitude prediction using multiple seismicity indicators.

    PubMed

    Panakkat, Ashif; Adeli, Hojjat

    2007-02-01

    Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall, the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While earthquake prediction cannot at present be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region. PMID:17393560
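The four contingency-table measures named in this abstract can be computed directly from hit and miss counts. This sketch uses one standard set of definitions (with the true skill score taken as POD minus the probability of false detection), which may differ in detail from the paper's R score; the counts are hypothetical.

```python
def forecast_scores(hits, false_alarms, misses, correct_negatives):
    """Contingency-table measures used to evaluate the predictions:
    probability of detection (POD), false alarm ratio (FAR), frequency
    bias, and a true skill score computed as POD minus the probability
    of false detection (one standard definition of the R score)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod, far, bias, pod - pofd

pod, far, bias, r = forecast_scores(hits=8, false_alarms=2,
                                    misses=2, correct_negatives=28)
print(pod, far, bias, round(r, 3))  # → 0.8 0.2 1.0 0.733
```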

  7. Earthquake prediction

    SciTech Connect

    Ma, Z.; Fu, Z.; Zhang, Y.; Wang, C.; Zhang, G.; Liu, D.

    1989-01-01

    Mainland China is situated at the eastern edge of the Eurasian seismic system and is the largest intra-continental region of shallow strong earthquakes in the world. Based on nine earthquakes with magnitudes ranging between 7.0 and 7.9, the book provides observational data and discusses successes and failures of earthquake prediction. Observations of various phenomena and seismic activity occurring before and after these individual earthquakes led to the establishment of some general characteristics valid for earthquake prediction.

  8. Near-Source Recordings of Small and Large Earthquakes: Magnitude Predictability only for Medium and Small Events

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2015-12-01

    The feasibility of Earthquake Early Warning (EEW) applications has revived the discussion on whether earthquake rupture development follows deterministic principles or not. If it does, it may be possible to predict final earthquake magnitudes while the rupture is still developing. EEW magnitude estimation schemes, most of which are based on 3-4 seconds of near-source p-wave data, have been shown to work well for small to moderate size earthquakes. In this magnitude range, the time window used is longer than the source durations of the events. Whether the magnitude estimation schemes also work for events whose source duration exceeds the estimation time window, however, remains debated. In our study we have compiled an extensive high-quality data set of near-source seismic recordings. We search for waveform features that could be diagnostic of final event magnitudes in a predictive sense. We find that the onsets of large (M7+) events are statistically indistinguishable from those of medium sized events (M5.5-M7). Significant differences arise only once the medium size events terminate. This observation suggests that EEW-relevant magnitude estimates are largely observational, rather than predictive, and that whether a medium size event becomes a large one is not determined at the rupture onset. As a consequence, early magnitude estimates for large events are minimum estimates, a fact that has to be taken into account in EEW alert messaging and response design.

  9. Earthquake prediction

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1991-01-01

    The state of the art in earthquake prediction is discussed. Short-term prediction based on seismic precursors, changes in the ratio of compressional velocity to shear velocity, tilt and strain precursors, electromagnetic precursors, hydrologic phenomena, chemical monitors, and animal behavior is examined. Seismic hazard assessment is addressed, and the applications of dynamical systems to earthquake prediction are discussed.

  10. An Energy Rate Magnitude for Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Newman, A. V.; Convers, J. A.

    2008-12-01

    The ability to rapidly assess the approximate size of very large and destructive earthquakes is important for early hazard mitigation from both strong shaking and potential tsunami generation. Using a methodology to rapidly determine earthquake energy and duration using teleseismic high-frequency energy, we develop an adaptation to approximate the magnitude of a very large earthquake before the full duration of rupture can be measured at available teleseismic stations. We utilize available vertical component data to analyze the high-frequency energy growth between 0.5 and 2 Hz, minimizing the effect of later arrivals that are mostly attenuated in this range. Because events smaller than M~6.5 occur rapidly, this method is most adequate for larger events, whose rupture duration exceeds ~20 seconds. Using a catalog of about 200 large and great earthquakes we compare the high-frequency energy rate (Edot_hf) to the total broadband energy (E_bb) to find the relationship log(Edot_hf)/log(E_bb) ≈ 0.85. Hence, combining this relation with the broadband energy magnitude (Me) [Choy and Boatwright, 1995] yields a new high-frequency energy rate magnitude: M_Edot = (2/3) log10(Edot_hf)/0.85 - 2.9. Such an empirical approach can thus be used to obtain a reasonable assessment of an event magnitude from the initial estimate of energy growth, even before the arrival of the full direct-P rupture signal. For large shallow events thus far examined, M_Edot predicts the ultimate Me to within ±0.2 units of M. For fast-rupturing deep earthquakes M_Edot overpredicts, while for slow-rupturing tsunami earthquakes M_Edot underpredicts Me, likely due to material strength changes at the source rupture. We will report on the utility of this method in both research mode and in real-time scenarios when data availability is limited. Because the high-frequency energy is clearly discernable in real-time, this result suggests that the growth of energy can be used as a good initial indicator of the
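The energy-rate magnitude relation, as reconstructed from this abstract, can be evaluated directly. The input units and the sample energy rate below are assumptions for illustration, not values from the paper.

```python
import math

def energy_rate_magnitude(e_hf_rate):
    """High-frequency energy-rate magnitude, as reconstructed from the
    abstract: M_Edot = (2/3) * log10(Edot_hf) / 0.85 - 2.9. The input is
    the high-frequency (0.5-2 Hz) energy rate; the units (here J/s) are
    an assumption, since the abstract does not state them."""
    return (2.0 / 3.0) * math.log10(e_hf_rate) / 0.85 - 2.9

# a hypothetical energy rate of 1e13 J/s
print(round(energy_rate_magnitude(1e13), 2))  # → 7.3
```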

  11. Influence of Time and Space Correlations on Earthquake Magnitude

    SciTech Connect

    Lippiello, E.; Arcangelis, L. de; Godano, C.

    2008-01-25

    A crucial point in the debate on the feasibility of earthquake prediction is the dependence of an earthquake's magnitude on past seismicity. Indeed, while clustering in time and space is widely accepted, the existence of magnitude correlations is much more questionable. The standard approach generally assumes that magnitudes are independent and therefore in principle unpredictable. Here we show the existence of clustering in magnitude: earthquakes occur with higher probability close in time, space, and magnitude to previous events. More precisely, the next earthquake tends to have a magnitude similar to, but smaller than, the previous one. A dynamical scaling relation between magnitude, time, and space distances reproduces the complex pattern of magnitude, spatial, and temporal correlations observed in experimental seismic catalogs.

  12. The Magnitude and Energy of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2003-12-01

    Several magnitudes were introduced to quantify large earthquakes better and more comprehensively than Ms: Mw (moment magnitude; Kanamori, 1977), ME (strain energy magnitude; Purcaru and Berckhemer, 1978), Mt (tsunami magnitude; Abe, 1979), Mm (mantle magnitude; Okal and Talandier, 1985), and Me (seismic energy magnitude; Choy and Boatwright, 1995). Although these magnitudes are still subject to different uncertainties, various kinds of earthquakes can now be better understood in terms of combinations of them. They can also be viewed as mappings of basic source parameters: seismic moment, strain energy, seismic energy, and stress drop, under certain assumptions or constraints. We studied a set of about 90 large earthquakes (shallow and deeper) that occurred in different tectonic regimes, with more reliable source parameters, and compared them in terms of the above magnitudes. We found large differences between the strain energy (mapped to ME) and seismic energy (mapped to Me), and between the ME of events with about the same Mw. This confirms that no one-to-one correspondence exists between these magnitudes (Purcaru, 2002). One major cause of differences for "normal" earthquakes is the level of the stress drop over asperities, which release and partition the strain energy. We quantify the energetic balance of earthquakes in terms of strain energy Est and its components (fracture energy Eg, friction energy Ef, and seismic energy Es) using an extended Hamilton's principle. The earthquakes are thrust-interplate, strike-slip, shallow in-slab, slow/tsunami, deep, and continental. The (scaled) strain energy equation we derived is Est/M0 = (1 + e(g,s))(Es/M0), where e(g,s) = Eg/Es, assuming complete stress drop, using the (static) stress-drop variability, and that Est and Es are not in a one-to-one correspondence. With all uncertainties, our analysis reveals, for a given seismic moment, a large variation of earthquakes in terms of energies, even in the same seismic region.
In view of these, for further understanding

  13. Testing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Luen, Brad; Stark, Philip B.

    2008-01-01

    Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: Seismicity is modeled as random. The null distribution of the number of successful predictions - or any other test statistic - is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000-2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions - a common "nonparametric" assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The P-value for the observed success rate is <0.001: The method successfully predicts about 5% of earthquakes, far better than 'chance' because the predictor exploits the clustering of earthquakes - occasional foreshocks - which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value.
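The permutation scheme described here, holding magnitudes and locations fixed while shuffling event times and recomputing the success count, can be sketched as below. The helper names are illustrative, and the 21-day / 50-km success rule itself is left to the caller.

```python
import random

def permutation_pvalue(times, mags, locs, success_fn, n_perm=999, seed=0):
    """Permutation test sketched from the abstract: hold magnitudes and
    locations fixed, shuffle event times, and compare the observed
    success count to its permutation distribution. success_fn(times,
    mags, locs) must count successful predictions for one ordering;
    the 21-day / 50-km rule itself is not implemented here."""
    rng = random.Random(seed)
    observed = success_fn(times, mags, locs)
    at_least_as_good = 0
    for _ in range(n_perm):
        shuffled = times[:]
        rng.shuffle(shuffled)
        if success_fn(shuffled, mags, locs) >= observed:
            at_least_as_good += 1
    # add-one smoothing keeps the P-value strictly positive
    return (at_least_as_good + 1) / (n_perm + 1)

# trivial demo: a success function that never succeeds gives P = 1.0
print(permutation_pvalue([1, 2, 3], [5.0, 5.5, 6.0], [0, 0, 0],
                         lambda t, m, l: 0, n_perm=99))  # → 1.0
```

As the abstract argues, this conditioning on the predictions sets a low hurdle for significance when the predictions themselves exploit clustering.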

  14. BNL PREDICTION OF NUPECS FIELD MODEL TESTS OF NPP STRUCTURES SUBJECT TO SMALL TO MODERATE MAGNITUDE EARTHQUAKES.

    SciTech Connect

    XU,J.; COSTANTINO,C.; HOFMAYER,C.; MURPHY,A.; KITADA,Y.

    2003-08-17

    As part of a verification test program for seismic analysis codes for NPP structures, the Nuclear Power Engineering Corporation (NUPEC) of Japan has conducted a series of field model test programs to ensure the adequacy of methodologies employed for seismic analyses of NPP structures. A collaborative program between the United States and Japan was developed to study seismic issues related to NPP applications. The US Nuclear Regulatory Commission (NRC) and its contractor, Brookhaven National Laboratory (BNL), are participating in this program to apply common analysis procedures to predict both free-field and soil-structure interaction (SSI) responses to recorded earthquake events, including embedment and dynamic cross interaction (DCI) effects. This paper describes the BNL effort to predict seismic responses of the large-scale realistic model structures for reactor and turbine buildings at the NUPEC test facility in northern Japan. The NUPEC test program has collected a large amount of recorded earthquake response data (both free-field and in-structure) from these test model structures. The BNL free-field analyses were performed with the CARES program, while the SSI analyses were performed using the SASSI2000 computer code. The BNL analysis includes both embedded and excavated conditions, as well as the DCI effect. The BNL analysis results and their comparisons to the NUPEC recorded responses are presented in the paper.

  16. Strong motion duration and earthquake magnitude relationships

    SciTech Connect

    Salmon, M.W.; Short, S.A.; Kennedy, R.P.

    1992-06-01

    Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report will concentrate on energy-based strong motion duration definitions.
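One widely used energy-based definition, the significant duration between the 5% and 95% levels of cumulative squared acceleration (an Arias-type measure), can be sketched as follows. This is an illustrative choice; the report may adopt a different power-based definition.

```python
def significant_duration(accel, dt, lo=0.05, hi=0.95):
    """Significant duration between the 5% and 95% levels of cumulative
    squared acceleration (an Arias-type, energy-based definition).
    accel is a list of acceleration samples; dt is the sample interval."""
    cum, total = [], 0.0
    for a in accel:
        total += a * a * dt
        cum.append(total)
    t_lo = next(i * dt for i, c in enumerate(cum) if c >= lo * total)
    t_hi = next(i * dt for i, c in enumerate(cum) if c >= hi * total)
    return t_hi - t_lo

# quiet-strong-quiet toy record sampled at 1 s
record = [0.0] * 10 + [1.0] * 10 + [0.0] * 10
print(significant_duration(record, dt=1.0))  # → 9.0
```

Note how the quiet head and tail of the record, which contribute almost no energy, are excluded, which is exactly the motivation given in the abstract.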

  17. The magnitude distribution of dynamically triggered earthquakes

    NASA Astrophysics Data System (ADS)

    Hernandez, Stephen

    Large dynamic strains carried by seismic waves are known to trigger seismicity far from their source region. It is unknown, however, whether surface waves trigger only small earthquakes, or whether they can also trigger large, societally significant earthquakes. To address this question, we use a mixing model approach in which total seismicity is decomposed into two broad subclasses: "triggered" events initiated or advanced by far-field dynamic strains, and "untriggered" spontaneous events consisting of everything else. The b-value of a mixed data set, bMIX, is decomposed into a weighted sum of the b-values of its constituent components, bT and bU. For populations of earthquakes subjected to dynamic strain, the fraction of earthquakes that are likely triggered, fT, is estimated via inter-event time ratios and used to invert for bT. The confidence bounds on bT are estimated by multiple inversions of bootstrap resamplings of bMIX and fT. For Californian seismicity, the data are consistent with a single-parameter Gutenberg-Richter hypothesis governing the magnitudes of both triggered and untriggered earthquakes. Triggered earthquakes therefore seem just as likely to be societally significant as any other population of earthquakes.
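The mixing decomposition described here can be inverted for the triggered-population b-value. The sketch below pairs that inversion with the standard Aki/Utsu maximum-likelihood b-value estimator; both are conventional choices assumed for illustration, not taken from the thesis.

```python
import math

def b_value(mags, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value with the standard binning
    correction; a conventional estimator assumed here, not necessarily
    the one used in the thesis."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

def invert_b_triggered(b_mix, b_untriggered, f_t):
    """Solve the weighted-sum decomposition stated in the abstract,
    b_MIX = f_T*b_T + (1 - f_T)*b_U, for the triggered-population
    b-value b_T (the thesis's exact weighting may differ)."""
    return (b_mix - (1.0 - f_t) * b_untriggered) / f_t

b_mix = b_value([5.0, 5.2, 5.4], m_min=5.0)   # toy mixed catalog
b_t = invert_b_triggered(b_mix, b_untriggered=b_mix, f_t=0.3)
```

When bT and bU agree, as in this degenerate demo, the inversion returns the mixed b-value unchanged, which matches the abstract's single-parameter Gutenberg-Richter conclusion.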

  18. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example of the application of the methodology to Mexico City, for the potential occurrence of extreme Mw 8.5 subduction earthquakes, is presented.
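The core of the PEI computation, estimating an exceedance probability from an enlarged sample of scenario intensities, reduces to a Monte Carlo count. This sketch resamples a handful of hypothetical intensity values and omits the 3D wave-propagation modeling entirely.

```python
import random

def prob_exceedance(intensity_samples, threshold, n_draws=20000, seed=1):
    """Monte Carlo estimate of a probability of exceedance (PEI-style):
    resample a small set of scenario intensities many times and count
    how often the threshold is exceeded. Illustrative only; the paper's
    samples come from 3D wave-propagation runs on supercomputers."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_draws)
               if rng.choice(intensity_samples) > threshold)
    return hits / n_draws

# two of four hypothetical scenario intensities exceed 0.8, so the
# estimate should land near 0.5
p = prob_exceedance([0.1, 0.2, 0.9, 1.1], 0.8)
```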

  19. Precise Relative Earthquake Magnitudes from Cross Correlation

    DOE PAGES Beta

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
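The idea of intercorrelating all events, rather than pairing each against a single reference event, can be sketched as a least-squares problem on pairwise log amplitude ratios. The solver and the zero-mean constraint below are illustrative choices, not the authors' implementation.

```python
def relative_magnitudes(pair_ratios, n_events, n_sweeps=500):
    """Least-squares relative magnitudes from pairwise log amplitude
    ratios, intercorrelating every event pair instead of pairing against
    a single reference event. pair_ratios maps (i, j) -> log10(A_i/A_j).
    Solved by Gauss-Seidel sweeps on the normal equations; the zero-mean
    constraint fixes the otherwise arbitrary baseline."""
    m = [0.0] * n_events
    for _ in range(n_sweeps):
        for k in range(n_events):
            total, cnt = 0.0, 0
            for (i, j), r in pair_ratios.items():
                if i == k:          # equation m_k - m_j = r_kj
                    total += m[j] + r
                    cnt += 1
                elif j == k:        # equation m_i - m_k = r_ik
                    total += m[i] - r
                    cnt += 1
            if cnt:
                m[k] = total / cnt
        mean = sum(m) / n_events
        m = [x - mean for x in m]   # impose the zero-mean constraint
    return m

# three events whose true relative sizes are 0.3, -0.1, -0.2
rel = relative_magnitudes({(0, 1): 0.4, (0, 2): 0.5, (1, 2): 0.1}, 3)
print([round(x, 3) for x in rel])  # → [0.3, -0.1, -0.2]
```

Because only differences are constrained, an external catalog (here, GCMT in the paper) is still needed to pin the absolute magnitude scale.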

  20. Earthquake rate and magnitude distributions of great earthquakes for use in global forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2016-04-01

    We have obtained new results in the statistical analysis of global earthquake catalogs with special attention to the largest earthquakes, and we examined the statistical behavior of earthquake rate variations. These results can serve as an input for updating our recent earthquake forecast, known as the "Global Earthquake Activity Rate 1" model (GEAR1), which is based on past earthquakes and geodetic strain rates. The GEAR1 forecast is expressed as the rate density of all earthquakes above magnitude 5.8 within 70 km of sea level everywhere on earth at 0.1 by 0.1 degree resolution, and it is currently being tested by the Collaboratory for Study of Earthquake Predictability. The seismic component of the present model is based on a smoothed version of the Global Centroid Moment Tensor (GCMT) catalog from 1977 through 2013. The tectonic component is based on the Global Strain Rate Map, a "General Earthquake Model" (GEM) product. The forecast was optimized to fit the GCMT data from 2005 through 2012, but it also fit well the earthquake locations from 1918 to 1976 reported in the International Seismological Centre-Global Earthquake Model (ISC-GEM) global catalog of instrumental and pre-instrumental magnitude determinations. We have improved the recent forecast by optimizing the treatment of larger magnitudes and including a longer duration (1918-2011) ISC-GEM catalog of large earthquakes to estimate smoothed seismicity. We revised our estimates of upper magnitude limits, described as corner magnitudes, based on the massive earthquakes since 2004 and the seismic moment conservation principle. The new corner magnitude estimates are somewhat larger than but consistent with our previous estimates. For major subduction zones we find the best estimates of corner magnitude to be in the range 8.9 to 9.6 and consistent with a uniform average of 9.35. Statistical estimates tend to grow with time as larger earthquakes occur. 
However, by using the moment conservation principle that

  2. A new macroseismic intensity prediction equation and magnitude estimates of the 1811-1812 New Madrid and 1886 Charleston, South Carolina, earthquakes

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2013-12-01

    We develop an intensity prediction equation (IPE) for the Central and Eastern United States (CEUS), explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June 2000 and August 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2M + c3exp(λ) + c4λ, where M is moment magnitude and λ is the mean log hypocentral distance. Previous IPEs use a limited dataset of MMI, do not differentiate between MMI and CII data in the CEUS, and do not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities, where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that the reports of a given intensity have hypocentral distances that are log-normally distributed, the distribution of which is modulated by population and the propensity for individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid-search method to solve for the fraction of the population that reports the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial
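    As a concrete reading of the functional form, the sketch below evaluates an IPE of this shape. The coefficient values are hypothetical placeholders chosen only to give a plausible decay with distance and growth with magnitude; they are not the fitted CEUS coefficients.

```python
import math

# IPE of the form MMI = c1 + c2*M + c3*exp(lam) + c4*lam, where lam is
# the (natural) log hypocentral distance. All coefficients below are
# hypothetical placeholders, not the values fitted by the authors.
def predict_mmi(M, r_hyp_km, c1=1.7, c2=1.0, c3=-0.0008, c4=-0.8):
    lam = math.log(r_hyp_km)
    return c1 + c2 * M + c3 * math.exp(lam) + c4 * lam
```

    Note that, because λ is a log distance, the c3·exp(λ) term is effectively a linear-in-distance attenuation term and c4·λ a geometric-spreading term.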

  3. Earthquake prediction comes of age

    SciTech Connect

    Lindth, A. (Office of Earthquakes, Volcanoes, and Engineering)

    1990-02-01

    In the last decade, scientists have begun to estimate the long-term probability of major earthquakes along the San Andreas fault. In 1985, the U.S. Geological Survey (USGS) issued the first official U.S. government earthquake prediction, based on research along a heavily instrumented 25-kilometer section of the fault in sparsely populated central California. Known as the Parkfield segment, this section of the San Andreas had experienced its last big earthquake, a magnitude 6, in 1966. A working group of California earthquake experts, using new geologic data and careful analysis of past earthquakes, has estimated probabilities of major quakes along the entire San Andreas; their estimates are reported here.

  4. Scaling relations of moment magnitude, local magnitude, and duration magnitude for earthquakes originated in northeast India

    NASA Astrophysics Data System (ADS)

    Bora, Dipok K.

    2016-06-01

    In this study, we aim to improve the scaling between the moment magnitude (MW), local magnitude (ML), and duration magnitude (MD) for 162 earthquakes in the Shillong-Mikir plateau and its adjoining region of northeast India by extending the MW estimates to lower-magnitude earthquakes using spectral analysis of P-waves from vertical-component seismograms. The MW-ML and MW-MD relationships are determined by linear regression analysis. It is found that MW values are consistent with ML and MD within 0.1 and 0.2 magnitude units, respectively, in 90% of the cases. The scaling relationships investigated comply well with similar relationships in other regions of the world and in other seismogenic areas of northeast India.
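    The regression step can be sketched on synthetic data as follows; the 162-event catalogue, the underlying relation, and the scatter level are all assumptions for illustration, not the fitted northeast-India values.

```python
import numpy as np

# Hypothetical Mw-Ml pairs: the true relation (0.94, 0.24) and the
# 0.1-unit scatter are illustrative assumptions.
rng = np.random.default_rng(0)
ml = rng.uniform(2.0, 5.5, 162)                  # 162 events, as in the study
mw = 0.94 * ml + 0.24 + rng.normal(0.0, 0.1, ml.size)

slope, intercept = np.polyfit(ml, mw, 1)         # ordinary least squares
residuals = mw - (slope * ml + intercept)
frac_within_02 = np.mean(np.abs(residuals) < 0.2)
```

    With scatter of this size, most events fall within 0.2 magnitude units of the fitted line, mirroring the consistency statistic quoted in the abstract.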

  5. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2016-06-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale suffers from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
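    A calibration of this kind amounts to a joint least-squares inversion for event magnitudes, attenuation terms, and station corrections. The sketch below sets one up on synthetic amplitudes; the model form and every constant are assumptions for illustration, not the published Turkish scale.

```python
import numpy as np

# Joint inversion behind an Ml calibration with station corrections:
# for amplitude A_ij (event i, station j), the assumed model is
#   log10(A_ij) = Ml_i - n*log10(r_ij) - k*r_ij - S_j,
# solved by least squares for all Ml_i, the attenuation terms n and k,
# and the corrections S_j (constrained to sum to zero).
rng = np.random.default_rng(5)
n_ev, n_st = 30, 5
ml_true = rng.uniform(3.5, 6.0, n_ev)
s_true = np.array([0.10, -0.05, 0.00, 0.08, -0.13])   # sums to zero
r = rng.uniform(30.0, 300.0, (n_ev, n_st))            # hypocentral km
log_a = (ml_true[:, None] - 1.1 * np.log10(r) - 0.002 * r
         - s_true[None, :] + rng.normal(0.0, 0.03, (n_ev, n_st)))

rows, data = [], []
for i in range(n_ev):
    for j in range(n_st):
        row = np.zeros(n_ev + 2 + n_st)
        row[i] = 1.0                      # event magnitude Ml_i
        row[n_ev] = -np.log10(r[i, j])    # geometric spreading n
        row[n_ev + 1] = -r[i, j]          # anelastic attenuation k
        row[n_ev + 2 + j] = -1.0          # station correction S_j
        rows.append(row)
        data.append(log_a[i, j])
constraint = np.zeros(n_ev + 2 + n_st)
constraint[n_ev + 2:] = 100.0             # heavily weighted sum(S) = 0
G = np.vstack(rows + [constraint])
d = np.array(data + [0.0])
m, *_ = np.linalg.lstsq(G, d, rcond=None)
n_fit, k_fit = m[n_ev], m[n_ev + 1]
```

    The zero-sum constraint on the station terms removes the trade-off between a constant shift in all magnitudes and a constant shift in all station corrections.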

  6. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, S.E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: A moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  7. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
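    Test (1) can be illustrated with a short simulation: under an unbounded Gutenberg-Richter distribution, the expected largest magnitude in a catalogue of N events grows linearly with log10(N). The b-value of 1 and zero completeness threshold below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def largest_magnitude(n_events, b=1.0, m0=0.0):
    # G-R magnitudes above m0 are exponentially distributed with
    # rate beta = b * ln(10)
    mags = m0 + rng.exponential(1.0 / (b * np.log(10)), n_events)
    return mags.max()

# average largest event over many synthetic catalogues
mean_max_1e2 = np.mean([largest_magnitude(100) for _ in range(500)])
mean_max_1e4 = np.mean([largest_magnitude(10_000) for _ in range(500)])
# a hundredfold increase in catalogue size raises the expected
# maximum by about two magnitude units (for b = 1)
```

    This is the sense in which the largest observed induced event can be "exactly as large as expected" without implying any intrinsic magnitude cap.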

  8. A note on evaluating VAN earthquake predictions

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-Akis; Melis, Nicos S.

    The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized strict statistical laws and ignore the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method is still shown to be a matter of a "chancy" association between precursors and earthquakes if we apply the same procedure proposed by Mulargia and Gasperini [1992] for evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, while magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.

  9. The Earthquake Frequency-Magnitude Distribution Functional Shape

    NASA Astrophysics Data System (ADS)

    Mignan, A.

    2012-04-01

    Knowledge of the completeness magnitude Mc, the magnitude above which all earthquakes are detected, is a prerequisite to most seismicity analyses. Although computation of Mc is done routinely, different techniques often result in different values. Since an incorrect estimate can lead to under-sampling or, worse, to an erroneous estimate of the parameters of the Gutenberg-Richter (G-R) law, a better assessment of the deviation from the G-R law, and thus of earthquake detectability, is of paramount importance for correctly estimating Mc. This is especially true for refined mapping of seismicity parameters, such as in earthquake forecast models. The capacity of a seismic network to detect small earthquakes can be evaluated by investigating the functional shape of the earthquake Frequency-Magnitude Distribution (FMD). The non-cumulative FMD takes the form N(m) ∝ exp(-βm)q(m), where N(m) is the number of events of magnitude m, exp(-βm) the G-R law, and q(m) a probability function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature often observed in bulk FMDs. Recent results, however, show that this gradual curvature is potentially due to spatial heterogeneities in Mc, meaning that the functional shape of the elemental (local) FMD still has to be described. Based on preliminary observations, we propose an exponential detection function of the form q(m) = exp(κ(m - Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models (gradually curved and angular) are compared in Southern California and Nevada. We show that the angular FMD model better describes the elemental FMD and that the sum of elemental FMDs with different Mc(x,y) leads to the gradually curved FMD at the regional scale. We show that the proposed model (1) provides more robust estimates of Mc, (2) better estimates local b-values, and (3) gives insight into earthquake detectability properties by using seismicity as a proxy
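    The proposed angular model can be written down directly. With κ > β the non-cumulative FMD increases up to Mc and then follows the G-R decay; the parameter values below (β = ln 10, κ = 3, Mc = 2) are arbitrary choices for the sketch.

```python
import numpy as np

def fmd_density(m, beta, kappa, mc):
    """Non-cumulative FMD N(m) ∝ exp(-beta*m) * q(m) with the angular
    detection function q(m) = exp(kappa*(m - mc)) below mc, 1 above."""
    m = np.asarray(m, dtype=float)
    q = np.where(m < mc, np.exp(kappa * (m - mc)), 1.0)
    return np.exp(-beta * m) * q

m = np.linspace(0.0, 6.0, 601)
dens = fmd_density(m, beta=np.log(10), kappa=3.0, mc=2.0)
peak = m[np.argmax(dens)]   # the angular FMD peaks exactly at Mc
```

    In log-linear axes both branches are straight lines meeting at Mc, which is the "angular shape" the abstract contrasts with the gradually curved bulk FMD.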

  10. The magnitude distribution of declustered earthquakes in Southern California

    PubMed Central

    Knopoff, Leon

    2000-01-01

    The binned distribution densities of magnitudes in both the complete and the declustered catalogs of earthquakes in the Southern California region have two significantly different branches with a crossover magnitude near M = 4.8. In the case of declustered earthquakes, the b-values on the two branches differ significantly from each other by a factor of about two. The absence of self-similarity across a broad range of magnitudes in the distribution of declustered earthquakes is an argument against the application of an assumption of scale-independence to models of main-shock earthquake occurrence, and in turn to the use of such models to justify the assertion that earthquakes are unpredictable. The presumption of scale-independence for complete local earthquake catalogs is attributable, not to a universal process of self-organization leading to future large earthquakes, but to the universality of the process that produces aftershocks, which dominate complete catalogs. PMID:11035770

  11. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.

  12. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-06-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of the error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed-form solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.

  13. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of the error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed-form solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
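    For the homoscedastic case, the errors-in-variables slope has a well-known closed form (often called Deming regression). The sketch below implements that textbook formula on synthetic magnitudes; it is not the paper's derivation or its new least-squares method.

```python
import numpy as np

def deming_fit(X, Y, delta=1.0):
    """Errors-in-variables fit of y = a*x + b when both magnitudes are
    noisy and only delta = var(err_Y)/var(err_X) is known
    (homoscedastic case). Standard closed-form Deming slope."""
    mx, my = X.mean(), Y.mean()
    sxx = np.sum((X - mx) ** 2)
    syy = np.sum((Y - my) ** 2)
    sxy = np.sum((X - mx) * (Y - my))
    a = ((syy - delta * sxx
          + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2))
         / (2.0 * sxy))
    b = my - a * mx
    return a, b

# synthetic check: both magnitudes carry errors of equal variance
rng = np.random.default_rng(2)
x_true = rng.uniform(4.0, 8.0, 2000)
X = x_true + rng.normal(0.0, 0.1, 2000)
Y = 1.2 * x_true - 0.5 + rng.normal(0.0, 0.1, 2000)
a, b = deming_fit(X, Y, delta=1.0)
```

    Unlike ordinary least squares, this estimator is not biased toward zero slope by the noise in X, which is why errors-in-variables regression matters for magnitude conversions.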

  14. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  15. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high-magnitude seismic events. In fact, since 1950, all large-magnitude earthquakes have been followed by volcanic eruptions in the following year: 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7, and 2011 Japan M9.0. At a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005, following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, is hampered by a number of disparities: fewer than 50% of all volcanoes are monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has therefore provided researchers with an opportunity to quantify the timing, magnitude, and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable

  16. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    NASA Astrophysics Data System (ADS)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. Forty of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values of between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, stress drop, and the Wood-Anderson filter. For instance, it is shown that the stress drop controls the saturation of the ML scale, with low stress drops (e.g. 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.

  17. Intermediate-term earthquake prediction.

    PubMed Central

    Keilis-Borok, V I

    1996-01-01

    An earthquake of magnitude M and linear source dimension L(M) is preceded within a few years by certain patterns of seismicity in the magnitude range down to about (M - 3) in an area of linear dimension about 5L-10L. Prediction algorithms based on such patterns may allow one to predict approximately 80% of strong earthquakes with alarms occupying altogether 20-30% of the time-space considered. An area of alarm can be narrowed down to 2L-3L when observations include lower magnitudes, down to about (M - 4). In spite of their limited accuracy, such predictions open a possibility to prevent considerable damage. The following findings may provide for further development of prediction methods: (i) long-range correlations in fault system dynamics and accordingly large size of the areas over which different observed fields could be averaged and analyzed jointly, (ii) specific symptoms of an approaching strong earthquake, (iii) the partial similarity of these symptoms worldwide, (iv) the fact that some of them are not Earth specific: we probably encountered in seismicity the symptoms of instability common for a wide class of nonlinear systems. PMID:11607660

  18. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, it is difficult to accurately estimate earthquake magnitude in the early nucleation stage of an earthquake occurrence, because only a few stations are triggered and the recorded seismic waveforms are short. One feasible method to measure the size of an earthquake is to extract amplitude parameters within the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the whole rupture of the causative fault. Instead of adopting amplitude contents in a fixed-length time window, which may underestimate magnitude for large-magnitude events, we propose a fast, robust, and unsaturated approach to estimate earthquake magnitudes. In this new method, the EEW system can initially give a lower-bound magnitude in a time window of a few seconds and then update the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for adopting amplitudes: one is the pure P-wave time window (PTW); the other is the whole-wave time window after the P-wave arrival (WTW). The peak displacement amplitudes in the vertical component were adopted from 1- to 10-s length PTW and WTW, respectively. Linear regression analyses were implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude, using the earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation. In addition, large uncertainties exist in the 1-s time window. Therefore, for magnitude estimation we suggest the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTW.
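    The regression-and-inversion step can be sketched as follows. The attenuation form log10(Pd) = A + B·M + C·log10(R) and all coefficients and data below are illustrative assumptions, not the relations fitted to the 1993-2012 Taiwanese records.

```python
import numpy as np

# Relate the log of peak displacement (Pd) measured after the P arrival
# to magnitude M and hypocentral distance R, then invert for M. The
# synthetic "catalogue" below is generated from an assumed relation.
rng = np.random.default_rng(3)
n = 400
M = rng.uniform(5.5, 8.0, n)
R = rng.uniform(20.0, 150.0, n)                 # hypocentral distance, km
log_pd = -3.5 + 0.8 * M - 1.4 * np.log10(R) + rng.normal(0.0, 0.2, n)

G = np.column_stack([np.ones(n), M, np.log10(R)])
(A, B, C), *_ = np.linalg.lstsq(G, log_pd, rcond=None)

def estimate_magnitude(pd, r_km):
    # invert the fitted relation; in an EEW setting this would be
    # re-evaluated as the time window (and hence Pd) grows
    return (np.log10(pd) - A - C * np.log10(r_km)) / B
```

    Because Pd grows as the window lengthens for large ruptures, re-running the inversion on successive windows yields the evolving, initially lower-bound magnitude described above.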

  19. Scoring annual earthquake predictions in China

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Jiang, Changsheng

    2012-02-01

    The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions that are of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these earthquake predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using the Poisson model, which is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes as the reference model, we evaluate the CEA predictions based on (1) a partial score that evaluates whether the alarmed regions are issued based on information that differs from the reference model (knowledge of the average seismicity level) and (2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but their overall performance is close to that of the reference model.
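    The gambling-score bookkeeping can be sketched as follows: for each alarmed region the predictor stakes one point against the reference model. This is a simplified illustration of the idea, not the CEA evaluation code.

```python
# For each alarm, a successful prediction earns (1 - p0) / p0, where p0
# is the probability of the target event under the reference model, so
# riskier successful bets earn more; a failed alarm loses the one-point
# stake. Under the reference model itself the expected score is zero:
# p0 * (1 - p0)/p0 - (1 - p0) * 1 = 0.
def gambling_score(bets):
    """bets: iterable of (p0, occurred) pairs, one per alarmed region."""
    score = 0.0
    for p0, occurred in bets:
        score += (1.0 - p0) / p0 if occurred else -1.0
    return score
```

    A significantly positive total score therefore indicates information beyond the reference model's average seismicity level.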

  20. Multiscale mapping of completeness magnitude of earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Vorobieva, Inessa; Narteau, Clement; Shebalin, Peter; Beauducel, François; Nercessian, Alexandre; Clouard, Valérie; Bouin, Marie-Paule

    2013-04-01

    We propose a multiscale method to map spatial variations in the completeness magnitude Mc of earthquake catalogs. The Mc value may vary significantly in space due to changes in seismic network density. Here we suggest a way to use only earthquake catalogs to separate small areas of higher network density (lower Mc) from larger areas of lower network density (higher Mc). We restrict the analysis of the frequency-magnitude distributions (FMDs) to limited magnitude ranges, thus allowing the FMD to deviate from log-linearity outside each range. We associate ranges of larger magnitudes with increasing areas for data selection, based on a constant average number of completely recorded earthquakes. Then, for each point in space, we document the earthquake frequency-magnitude distribution at all length scales within the corresponding earthquake magnitude ranges. High resolution of the Mc value is achieved through the determination of the smallest space-magnitude scale at which the Gutenberg-Richter law (i.e. an exponential decay) is verified. The multiscale procedure isolates the magnitude range that best matches the local seismicity and local recording capacity. Using artificial catalogs and earthquake catalogs of the Lesser Antilles arc, this Mc mapping method is shown to be efficient in regions with mixed types of seismicity, a variable density of epicenters, and various levels of registration.
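    As background for the problem such methods address, here is a minimal sketch of one classical single-scale Mc estimate on a synthetic catalogue; the G-R b-value of 1 and the logistic detection curve are assumptions for illustration.

```python
import numpy as np

# Maximum-curvature Mc estimate: take the modal bin of the
# non-cumulative FMD as Mc. Note that this proxy recovers roughly the
# 50%-detection level and is known to underestimate the true
# completeness magnitude; multiscale procedures are designed to do
# better, but this shows the basic FMD-based idea.
rng = np.random.default_rng(4)
mags = rng.exponential(1.0 / np.log(10), 50_000)        # G-R, b = 1
p_detect = 1.0 / (1.0 + np.exp(-(mags - 1.5) / 0.2))    # network detection
detected = mags[rng.random(mags.size) < p_detect]

counts, edges = np.histogram(detected, bins=np.arange(0.0, 6.05, 0.1))
mc_maxc = edges[np.argmax(counts)] + 0.05               # modal bin centre
```

    Below the modal bin the catalogue is visibly incomplete (counts fall off), while above it the binned counts decay exponentially as the G-R law predicts.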

  1. Source time function properties indicate a strain drop independent of earthquake depth and magnitude.

    PubMed

    Vallée, Martin

    2013-01-01

    The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes. PMID:24126256

  2. On Earthquake Prediction in Japan

    PubMed Central

    UYEDA, Seiya

    2013-01-01

    Japan's National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author's view, are mainly interested in securing funds for seismology on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision was further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with private sectors. PMID:24213204

  4. Correlating precursory declines in groundwater radon with earthquake magnitude.

    PubMed

    Kuo, T

    2014-01-01

    Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to the five earthquakes of magnitude (Mw) 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the longitudinal valley fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. This correlation has proven useful for providing early warning of large local earthquakes. In southern Taiwan, anomalous radon declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, a correlation between the observed radon minima and the earthquake magnitude has not yet been established. PMID:23550908

  5. Earthquake Prediction is Coming

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Describes (1) several methods used in earthquake research, including P:S velocity ratio studies and dilatancy models; and (2) techniques for gathering baseline data for prediction using seismographs, tiltmeters, laser beams, magnetic field changes, folklore, and animal behavior. The mysterious Palmdale (California) bulge is discussed. (CS)

  6. On the macroseismic magnitudes of the largest Italian earthquakes

    NASA Astrophysics Data System (ADS)

    Tinti, S.; Vittori, T.; Mulargia, F.

    1987-07-01

    The macroseismic magnitudes MT of the largest Italian earthquakes (I0 ⩾ VIII, MCS) have been computed by using the intensity-magnitude relationships recently assessed by the authors (1986) for the Italian region. The Progetto Finalizzato Geodinamica (PFG) catalog of Italian earthquakes, covering the period 1000-1980 (Postpischl, 1985), is the source database and is reproduced in the Appendix: there the estimated values of MT are given side by side with the catalog macroseismic magnitudes MK, i.e., the magnitudes computed according to the Karnik laws (Karnik, 1969). The one-sigma errors ΔMT are also given for each earthquake. The basic aim of the paper is to provide a handy and useful tool for researchers involved in seismicity and seismic-risk studies on Italian territory.

  7. Magnitude-frequency distribution of volcanic explosion earthquakes

    NASA Astrophysics Data System (ADS)

    Nishimura, Takeshi; Iguchi, Masato; Hendrasto, Mohammad; Aoyama, Hiroshi; Yamada, Taishi; Ripepe, Maurizio; Genco, Riccardo

    2016-07-01

    Magnitude-frequency distributions of volcanic explosion earthquakes that are associated with occurrences of vulcanian and strombolian eruptions, or gas burst activity, are examined at six active volcanoes. The magnitude-frequency distribution at Suwanosejima volcano, Japan, shows a power-law distribution, which implies self-similarity in the system, as is often observed in statistical characteristics of tectonic and volcanic earthquakes. On the other hand, the magnitude-frequency distributions at five other volcanoes, Sakurajima and Tokachi-dake in Japan, Semeru and Lokon in Indonesia, and Stromboli in Italy, are well explained by exponential distributions. The statistical features are considered to reflect source size, as characterized by a volcanic conduit or chamber. Earthquake generation processes associated with vulcanian, strombolian and gas burst events are different from those of eruptions ejecting large amounts of pyroclasts, since the magnitude-frequency distribution of the volcanic explosivity index is generally explained by the power law.
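
The power-law versus exponential distinction drawn in this abstract can be made quantitative by fitting both forms to a set of event sizes by maximum likelihood and comparing log-likelihoods. A minimal sketch, not the authors' code; the function names and synthetic data are illustrative:

```python
import numpy as np

def fit_exponential(sizes, smin):
    """MLE rate for an exponential tail, p(s) = lam * exp(-lam*(s - smin))."""
    lam = 1.0 / np.mean(sizes - smin)
    loglik = np.sum(np.log(lam) - lam * (sizes - smin))
    return lam, loglik

def fit_power_law(sizes, smin):
    """Hill/Clauset-style MLE exponent for p(s) ∝ s**(-alpha), s >= smin > 0."""
    n = len(sizes)
    alpha = 1.0 + n / np.sum(np.log(sizes / smin))
    loglik = n * np.log((alpha - 1.0) / smin) - alpha * np.sum(np.log(sizes / smin))
    return alpha, loglik
```

With both models having one free parameter, the larger log-likelihood directly indicates the better-fitting family (an information criterion gives the same ranking here).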

  8. Physics-based estimates of maximum magnitude of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin

    2016-04-01

    In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based link between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations representative of the induced-seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters, and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).
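
The McGarr (2014) relation mentioned above bounds the seismic moment by the product of shear modulus and net injected volume, M0 ≤ G·ΔV. A hedged sketch of that empirical baseline; the shear modulus value is a typical crustal assumption, not taken from this abstract:

```python
import math

SHEAR_MODULUS = 3.0e10  # Pa; typical crustal rigidity, an illustrative assumption

def mcgarr_max_magnitude(injected_volume_m3):
    """Upper-bound moment magnitude from M0_max = G * dV (McGarr, 2014),
    converted to moment magnitude via Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = SHEAR_MODULUS * injected_volume_m3   # maximum seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

For example, injecting 1e5 m^3 bounds Mw at roughly 4.25 under these assumptions; the abstract's point is that dynamic-rupture physics predicts a different scaling with ΔV than this linear moment bound.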

  9. Estimation of the magnitudes and epicenters of Philippine historical earthquakes

    NASA Astrophysics Data System (ADS)

    Bautista, Maria Leonila P.; Oike, Kazuo

    2000-02-01

    The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. Data used were the earthquake reports of 86 recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by Intensities III to IX [A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 16th to 19th centuries. More than 3000 events are catalogued, interpreted and their intensities determined by considering the possible effects of local site conditions, type of construction, and the number and locations of existing towns to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts or with at least a minimum report of damage are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps, augmented by information on recent seismicity and the locations of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few are found to lie along structures that have shown little activity during the instrumented period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of

  10. Magnitude 8.1 Earthquake off the Solomon Islands

    NASA Technical Reports Server (NTRS)

    2007-01-01

    On April 1, 2007, a magnitude 8.1 earthquake rattled the Solomon Islands, 2,145 kilometers (1,330 miles) northeast of Brisbane, Australia. Centered less than ten kilometers beneath the Earth's surface, the earthquake displaced enough water in the ocean above to trigger a small tsunami. Though officials were still assessing damage to remote island communities on April 3, Reuters reported that the earthquake and the tsunami killed an estimated 22 people and left as many as 5,409 homeless. The most serious damage occurred on the island of Gizo, northwest of the earthquake epicenter, where the tsunami damaged the hospital, schools, and hundreds of houses, said Reuters. This image, captured by the Landsat-7 satellite, shows the location of the earthquake epicenter in relation to the nearest islands in the Solomon Island group. Gizo is beyond the left edge of the image, but its triangular fringing coral reefs are shown in the upper left corner. Though dense rain forest hides volcanic features from view, the very shape of the islands testifies to the geologic activity of the region. The circular Kolombangara Island is the tip of a dormant volcano, and other circular volcanic peaks are visible in the image. The image also shows that the Solomon Islands run on a northwest-southeast axis parallel to the edge of the Pacific plate, the section of the Earth's crust that carries the Pacific Ocean and its islands. The earthquake occurred along the plate boundary, where the Australia/Woodlark/Solomon Sea plates slide beneath the denser Pacific plate. Friction between the sinking (subducting) plates and the overriding Pacific plate led to the large earthquake on April 1, said the United States Geological Survey (USGS) summary of the earthquake. Large earthquakes are common in the region, though the section of the plate that produced the April 1 earthquake had not caused any quakes of magnitude 7 or larger since the early 20th century, said the USGS.

  11. Maximum Earthquake Magnitude Assessments by Japanese Government Committees (Invited)

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2013-12-01

    earthquakes. The Nuclear Regulation Authority, established in 2012, makes independent decisions based on the latest scientific knowledge. They assigned maximum credible earthquake magnitudes of 9.6 for the Nankai and Ryukyu troughs, 9.6 for the Kuril-Japan trench, and 9.2 for the Izu-Bonin trench.

  12. The Parkfield, California, earthquake prediction experiment.

    PubMed

    Bakun, W H; Lindh, A G

    1985-08-16

    Five moderate (magnitude 6) earthquakes with similar features have occurred on the Parkfield section of the San Andreas fault in central California since 1857. The next moderate Parkfield earthquake is expected to occur before 1993. The Parkfield prediction experiment is designed to monitor the details of the final stages of the earthquake preparation process; observations and reports of seismicity and aseismic slip associated with the last moderate Parkfield earthquake in 1966 constitute much of the basis of the design of the experiment. PMID:17739363

  13. The Strain Energy, Seismic Moment and Magnitudes of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2004-12-01

    The strain energy Est, as potential energy, released by an earthquake and the seismic moment Mo are two fundamental physical earthquake parameters. The earthquake rupture process ``represents'' the release of the accumulated Est. The moment Mo, first obtained in 1966 by Aki, revolutionized the quantification of earthquake size and led to the elimination of the limitations of the conventional magnitudes (originally ML, Richter, 1930): mb, Ms, m, MGR. Both Mo and Est, though not in a 1-to-1 correspondence, are uniform measures of size, although Est is presently less accurate than Mo. Est is partitioned into seismic (Es), fracture (Eg) and frictional (Ef) energy, and Ef is lost as frictional heat. The available strain energy is Est = Es + Eg (see Aki and Richards (1980) and Kostrov and Das (1988) for fundamentals on Mo and Est). Related to Mo, Est and Es, several modern magnitudes were defined under various assumptions: the moment magnitude Mw (Kanamori, 1977), strain energy magnitude ME (Purcaru and Berckhemer, 1978), tsunami magnitude Mt (Abe, 1979), mantle magnitude Mm (Okal and Talandier, 1987), seismic energy magnitude Me (Choy and Boatwright, 1995; Yanovskaya et al., 1996), and body-wave magnitude Mpw (Tsuboi et al., 1998). The available Est = (1/2μ)Δσ·Mo, where Δσ is the average stress drop, and ME = (2/3)(log Mo + log(Δσ/μ) - 12.1), with log Est = 11.8 + 1.5 ME. The estimation of Est was modified to include the Mo, Δσ and μ of predominant high-slip zones (asperities) to account for multiple events (Purcaru, 1997): Est = (1/2) Σi (1/μi) Mo,i Δσi, with Σi Mo,i = Mo. We derived the energy balance of Est, Es and Eg as Est/Mo = (1 + e(g,s)) Es/Mo, where e(g,s) = Eg/Es. We analyzed a set of about 90 large earthquakes and found that, depending on the goal, these magnitudes quantify the rupture process differently, thus providing complementary means of earthquake characterization. Results for some

  14. Prediction of earthquake response spectra

    USGS Publications Warehouse

    Joyner, W.B.; Boore, David M.

    1982-01-01

    We have developed empirical equations for predicting earthquake response spectra in terms of magnitude, distance, and site conditions, using a two-stage regression method similar to the one we used previously for peak horizontal acceleration and velocity. We analyzed horizontal pseudo-velocity response at 5 percent damping for 64 records of 12 shallow earthquakes in Western North America, including the recent Coyote Lake and Imperial Valley, California, earthquakes. We developed predictive equations for 12 different periods between 0.1 and 4.0 s, both for the larger of two horizontal components and for the random horizontal component. The resulting spectra show amplification at soil sites compared to rock sites for periods greater than or equal to 0.3 s, with maximum amplification exceeding a factor of 2 at 2.0 s. For periods less than 0.3 s there is slight deamplification at the soil sites. These results are generally consistent with those of several earlier studies. A particularly significant aspect of the predicted spectra is the change of shape with magnitude (confirming earlier results by McGuire and by Trifunac and Anderson). This result indicates that the conventional practice of scaling a constant spectral shape by peak acceleration will not give accurate answers. The Newmark and Hall method of spectral scaling, using both peak acceleration and peak velocity, largely avoids this error. Comparison of our spectra with the Nuclear Regulatory Commission's Regulatory Guide 1.60 spectrum anchored at the same value at 0.1 s shows that the Regulatory Guide 1.60 spectrum is exceeded at soil sites for a magnitude of 7.5 at all distances for periods greater than about 0.5 s. Comparison of our spectra for soil sites with the corresponding ATC-3 curve of lateral design force coefficient for the highest seismic zone indicates that the ATC-3 curve is exceeded within about 7 km of a magnitude 6.5 earthquake and within about 15 km of a magnitude 7.5 event. The amount by

  15. Can we test for the maximum possible earthquake magnitude?

    NASA Astrophysics Data System (ADS)

    Holschneider, M.; Zöller, G.; Clements, R.; Schorlemmer, D.

    2014-03-01

    We explore the concept of the maximum possible earthquake magnitude, M, in a region represented by an earthquake catalog from the viewpoint of statistical testing. For this aim, we assume that earthquake magnitudes are independent events that follow a doubly truncated Gutenberg-Richter distribution and focus on the upper truncation M. In earlier work, it has been shown that the value of M cannot be well constrained from earthquake catalogs alone. However, for two hypothesized values M and M', alternative statistical tests may address the question: Which value is more consistent with the data? In other words, is it possible to reject a hypothesized magnitude with acceptable errors, i.e., acceptable errors of the first and second kind? The results for realistic settings indicate that either the error of the first kind or the error of the second kind is intolerably large. We conclude that it is essentially impossible to infer M in terms of alternative testing with sufficient confidence from an earthquake catalog alone, even in regions like Japan with excellent data availability. These findings are also valid for frequency-magnitude distributions with different tail behavior, e.g., exponential tapering. Finally, we emphasize that different data may only be useful to provide additional constraints for M if they do not correlate with the earthquake catalog, i.e., if they have not been recorded in the same observational period. In particular, long-term geological assessments might be suitable to reduce the errors, while GPS measurements provide overall the same information as the catalogs.
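
The alternative-testing idea can be sketched numerically: under a doubly truncated Gutenberg-Richter model, two hypothesized upper bounds differ only through the normalization constant, so for catalogs whose magnitudes sit well below both bounds the log-likelihoods are nearly indistinguishable. A minimal illustration (not the authors' code; the simulated catalog and parameter choices are assumptions):

```python
import numpy as np

def truncated_gr_loglik(mags, m0, mmax, beta):
    """Log-likelihood of magnitudes under a doubly truncated
    Gutenberg-Richter (exponential) model on [m0, mmax]."""
    mags = np.asarray(mags)
    if np.any(mags > mmax) or np.any(mags < m0):
        return -np.inf                       # data outside the support rule the model out
    norm = 1.0 - np.exp(-beta * (mmax - m0))
    return float(np.sum(np.log(beta) - beta * (mags - m0) - np.log(norm)))
```

Comparing, say, hypothesized bounds of 7.5 and 9.5 on a catalog truncated near magnitude 7 yields a log-likelihood difference well below one unit, which is the paper's point: neither hypothesis can be rejected with useful error rates.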

  16. Application of linear statistical models of earthquake magnitude versus fault length in estimating maximum expectable earthquakes

    USGS Publications Warehouse

    Mark, Robert K.

    1977-01-01

    Correlation or linear regression estimates of earthquake magnitude from data on historical magnitude and length of surface rupture should be based upon the correct regression. For example, the regression of magnitude on the logarithm of the length of surface rupture L can be used to estimate magnitude, but the regression of log L on magnitude cannot. Regression estimates are most probable values, and estimates of maximum values require consideration of one-sided confidence limits.
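
The point about choosing the correct regression can be illustrated with synthetic (hypothetical) data: regressing log L on magnitude and inverting that line gives a systematically steeper slope than the direct regression of magnitude on log L whenever there is scatter, so the two are not interchangeable for prediction. A sketch, with an assumed underlying relation:

```python
import numpy as np

rng = np.random.default_rng(42)
logL = rng.uniform(0.5, 2.5, 200)                 # log10 surface rupture length, km
M = 5.0 + 1.0 * logL + rng.normal(0.0, 0.3, 200)  # assumed relation plus scatter

# Correct for predicting M from rupture length: regress M on log L.
b_M_on_L, a_M_on_L = np.polyfit(logL, M, 1)

# Incorrect for that purpose: regress log L on M, then invert the fitted line.
b_L_on_M, a_L_on_M = np.polyfit(M, logL, 1)
slope_inverted = 1.0 / b_L_on_M                   # steeper whenever scatter > 0
```

The inverted slope exceeds the direct one because ordinary least squares attenuates the slope of whichever variable carries the noise; only the regression of magnitude on log L minimizes the magnitude prediction error.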

  17. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  18. The effects of earthquake measurement concepts and magnitude anchoring on individuals' perceptions of earthquake risk

    USGS Publications Warehouse

    Celsi, R.; Wolfinbarger, M.; Wald, D.

    2005-01-01

    The purpose of this research is to explore earthquake risk perceptions in California. Specifically, we examine the risk beliefs, feelings, and experiences of lay, professional, and expert individuals to explore how risk is perceived and how risk perceptions are formed relative to earthquakes. Our results indicate that individuals tend to perceptually underestimate the degree to which earthquake (EQ) events may affect them. This occurs in large part because individuals' personal felt experience of EQ events is generally overestimated relative to experienced magnitudes. An important finding is that individuals engage in a process of "cognitive anchoring" of their felt EQ experience towards the reported earthquake magnitude size. The anchoring effect is moderated by the degree to which individuals comprehend EQ magnitude measurement and EQ attenuation. Overall, the results of this research provide us with a deeper understanding of EQ risk perceptions, especially as they relate to individuals' understanding of EQ measurement and attenuation concepts. © 2005, Earthquake Engineering Research Institute.

  19. The Road to Convergence in Earthquake Frequency-Magnitude Statistics

    NASA Astrophysics Data System (ADS)

    Naylor, M.; Bell, A. F.; Main, I. G.

    2013-12-01

    The Gutenberg-Richter frequency-magnitude relation is a fundamental empirical law of seismology, but its form remains uncertain for rare extreme events. Convergence trends can be diagnostic of the nature of an underlying distribution and its sampling even before convergence has occurred. We examine the evolution of an information criteria metric applied to earthquake magnitude time series, in order to test whether the Gutenberg-Richter law can be rejected for various earthquake catalogues. Rejection would imply that the catalogue is starting to sample roll-off in the tail, though it cannot yet identify the form of the roll-off. We compare bootstrapped synthetic Gutenberg-Richter and synthetic modified Gutenberg-Richter catalogues with the convergence trends observed in real earthquake data, e.g., the global CMT catalogue, Southern California, and mining/geothermal data. Whilst convergence in the tail remains some way off, we show that the temporal evolution of model likelihoods and parameters for the frequency-magnitude distribution of the global Harvard Centroid Moment Tensor catalogue is inconsistent with an unbounded GR relation, despite it being the preferred model at the current time. Bell, A. F., M. Naylor, and I. G. Main (2013), Convergence of the frequency-size distribution of global earthquakes, Geophys. Res. Lett., 40, 2585-2589, doi:10.1002/grl.50416.
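
The model-selection logic behind such convergence tests can be sketched simply: an upper-truncated alternative always fits at least as well as the unbounded Gutenberg-Richter model, but an information criterion such as AIC penalizes its extra parameter, so the unbounded model stays preferred until the likelihood gain exceeds the penalty. A hedged illustration, not the authors' method; for simplicity the slope beta is held at its unbounded MLE in both models:

```python
import numpy as np

def aic_unbounded_vs_truncated(mags, m0):
    """AIC for an unbounded GR (exponential) model versus a doubly truncated
    alternative whose upper bound sits at its MLE, max(mags)."""
    mags = np.asarray(mags)
    n = len(mags)
    beta = 1.0 / np.mean(mags - m0)                    # MLE of the GR slope
    ll_gr = n * np.log(beta) - beta * np.sum(mags - m0)
    norm = 1.0 - np.exp(-beta * (np.max(mags) - m0))   # truncation at the sample maximum
    ll_tr = ll_gr - n * np.log(norm)                   # truncated model fits no worse
    return 2 * 1 - 2 * ll_gr, 2 * 2 - 2 * ll_tr        # (AIC_unbounded, AIC_truncated)
```

For a catalog drawn from an unbounded exponential, the truncated model's likelihood advantage is typically of order one log unit, comparable to the AIC penalty, which is why the preferred model can flip back and forth as the catalog grows.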

  20. Hybrid Modelling of the Economical Consequences of Extreme Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2013-05-01

    A hybrid modelling methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economical consequences (PEDEC). The hybrid modelling uses 3D seismic wave propagation (3DWP) combined with empirical Green function (EGF) and neural network (NN) techniques in order to estimate the seismic hazard (PEIs) of extreme (plausible) earthquake scenarios corresponding to synthetic seismic sources. The 3DWP modelling is achieved by using a 3D finite difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK. The PEDEC are computed by using appropriate vulnerability functions combined with the scenario intensity samples and Monte Carlo simulation. The methodology is validated for Mw 8 subduction events, and examples of its application are shown for the estimation of the hazard and the economical consequences of extreme Mw 8.5 subduction earthquake scenarios with seismic sources on the Mexican Pacific Coast. The results obtained with the proposed methodology, such as the PEDECs in terms of the joint event "damage cost (C) - maximum ground intensities" and the conditional return period of C given that the maximum intensity exceeds a certain value, could be used by decision makers to allocate funds or to implement policies to mitigate the impact associated with the plausible occurrence of future extreme magnitude earthquakes.

  1. Regional moment: Magnitude relations for earthquakes and explosions

    SciTech Connect

    Patton, H.J.; Walter, W.R. )

    1993-02-19

    The authors present Mo:mb relations using mb(Pn) and mb(Lg) for earthquakes and explosions occurring in tectonic and stable areas. The observations for mb(Pn) range from about 3 to 6 and show excellent separation between earthquakes and explosions on Mo:mb plots, independent of the magnitude. The scatter in Mo:mb observations for NTS explosions is small compared to the earthquake data. The Mo:mb(Lg) data for Soviet explosions overlay the observations for US explosions. These results, and the small scatter for NTS explosions, suggest weak dependence of Mo:mb relations on emplacement media. A simple theoretical model is developed which matches all these observations. The model uses scaling similarity and conservation of energy to provide a physical link between seismic moment and a broadband seismic magnitude. Three factors, radiation pattern, material property, and apparent stress, contribute to the separation between earthquakes and explosions. This theoretical separation is independent of broadband magnitude. For US explosions in different media, the material property and apparent stress contributions are shown to compensate for one another, supporting the observations that Mo:mb is nearly independent of source geology. 19 refs., 2 figs., 1 tab.

  2. Radiocarbon test of earthquake magnitude at the Cascadia subduction zone

    USGS Publications Warehouse

    Atwater, B.F.; Stuiver, M.; Yamaguchi, D.K.

    1991-01-01

    The Cascadia subduction zone, which extends along the northern Pacific coast of North America, might produce earthquakes of magnitude 8 or 9 ('great' earthquakes) even though it has not done so during the past 200 years of European observation [1-7]. Much of the evidence for past Cascadia earthquakes comes from former meadows and forests that became tidal mudflats owing to abrupt tectonic subsidence in the past 5,000 years [2,3,6,7]. If due to a great earthquake, such subsidence should have extended along more than 100 km of the coast [2]. Here we investigate the extent of coastal subsidence that might have been caused by a single earthquake, through high-precision radiocarbon dating of coastal trees that abruptly subsided into the intertidal zone. The ages leave the great-earthquake hypothesis intact by limiting to a few decades the discordance, if any, in the most recent subsidence of two areas 55 km apart along the Washington coast. This subsidence probably occurred about 300 years ago.

  3. In Brief: China shaken by magnitude 7.9 earthquake

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2008-05-01

    A magnitude 7.9 earthquake that struck the eastern Sichuan region of China on 12 May 2008 at 0628 UTC has caused more than 22,000 fatalities as of press time, and Chinese government officials have indicated that this figure could increase to 50,000. The quake also caused severe damage, including landslides and cracks to 391 mostly small dams, according to an Associated Press report that cited the Xinhua News Agency and CCTV news. China's Ministry of Water Resources has dispatched several work teams to quake-hit localities ``to prevent dams that were damaged by the earthquake from bursting and endangering the lives of residents,'' the ministry stated.

  4. Earthquake magnitude calculation without saturation from the scaling of peak ground displacement

    NASA Astrophysics Data System (ADS)

    Melgar, Diego; Crowell, Brendan W.; Geng, Jianghui; Allen, Richard M.; Bock, Yehuda; Riquelme, Sebastian; Hill, Emma M.; Protti, Marino; Ganas, Athanassios

    2015-07-01

    GPS instruments are noninertial and directly measure displacements with respect to a global reference frame, while inertial sensors are affected by systematic offsets—primarily tilting—that adversely impact integration to displacement. We study the magnitude scaling properties of peak ground displacement (PGD) from high-rate GPS networks at near-source to regional distances (~10-1000 km), from earthquakes between Mw6 and 9. We conclude that real-time GPS seismic waveforms can be used to rapidly determine magnitude, typically within the first minute of rupture initiation and in many cases before the rupture is complete. While slower than earthquake early warning methods that rely on the first few seconds of P wave arrival, our approach does not suffer from the saturation effects experienced with seismic sensors at large magnitudes. Rapid magnitude estimation is useful for generating rapid earthquake source models, tsunami prediction, and ground motion studies that require accurate information on long-period displacements.
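
Melgar et al.'s PGD scaling law has the form log10(PGD) = A + B·Mw + C·Mw·log10(R), which can be inverted algebraically for magnitude once peak displacement and hypocentral distance are measured. A sketch of that inversion; the coefficient values below are placeholders illustrating the published functional form, not authoritative:

```python
import math

# Illustrative coefficients (assumed, PGD in cm, distance in km) for the
# scaling-law form used by Melgar et al. (2015).
A, B, C = -4.434, 1.047, -0.138

def magnitude_from_pgd(pgd_cm, hypo_dist_km):
    """Invert log10(PGD) = A + B*Mw + C*Mw*log10(R) to estimate Mw from an
    observed peak ground displacement at a given hypocentral distance."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(hypo_dist_km))
```

Because PGD keeps growing with moment rather than saturating like short-period seismic amplitudes, this inversion remains well behaved up to the largest magnitudes, which is the property the abstract exploits.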

  5. Multifractal detrended fluctuation analysis of Pannonian earthquake magnitude series

    NASA Astrophysics Data System (ADS)

    Telesca, Luciano; Toth, Laszlo

    2016-04-01

    The multifractality of the series of magnitudes of the earthquakes that occurred in the Pannonian region from 2002 to 2012 has been investigated. The shallow (depth less than 40 km) and deep (depth larger than 70 km) seismic catalogues were analysed by using multifractal detrended fluctuation analysis. The shallow and deep catalogues are characterized by different multifractal properties: (i) the magnitudes of the shallow events are weakly persistent, while those of the deep ones are almost uncorrelated; (ii) the deep catalogue is more multifractal than the shallow one; (iii) the magnitudes of the deep catalogue are characterized by a right-skewed multifractal spectrum, while that of the shallow catalogue is rather symmetric; (iv) a direct relationship between the b-value of the Gutenberg-Richter law and the multifractality of the magnitudes is suggested.

  6. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion. PMID:24458636

  7. Nonlinear site response in medium magnitude earthquakes near Parkfield, California

    USGS Publications Warehouse

    Rubinstein, Justin L.

    2011-01-01

    Careful analysis of strong-motion recordings of 13 medium magnitude earthquakes (3.7 ≤ M ≤ 6.5) in the Parkfield, California, area shows that very modest levels of shaking (approximately 3.5% of the acceleration of gravity) can produce observable changes in site response. Specifically, I observe a drop and subsequent recovery of the resonant frequency at sites that are part of the USGS Parkfield dense seismograph array (UPSAR) and Turkey Flat array. While further work is necessary to fully eliminate other models, given that these frequency shifts correlate with the strength of shaking at the Turkey Flat array and only appear for the strongest shaking levels at UPSAR, the most plausible explanation for them is that they are a result of nonlinear site response. Assuming this to be true, the observation of nonlinear site response in small and medium magnitude events suggests that this behavior is more pervasive than previously believed, extending well below the shaking levels of the largest events studied (the 2003 M 6.5 San Simeon earthquake and the 2004 M 6.0 Parkfield earthquake).

  8. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
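The quoted null hypothesis can be illustrated with a simple binomial tail calculation: if alarms covered a fraction p of the monitored space-time volume and target earthquakes fell at random, the chance of catching at least k of n targets is a binomial probability. A hedged sketch; the coverage fraction below is purely illustrative, not the value used in the actual M8 test:

```python
from math import comb

def chance_of_hits(p, n, k):
    """P(at least k of n target earthquakes fall inside random alarms)
    when alarms cover a fraction p of space-time (binomial null model)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# With alarms covering ~40% of space-time (illustrative figure only),
# catching 8 of 10 targets by chance alone is already improbable:
print(f"{chance_of_hits(0.4, 10, 8):.4f}")   # -> 0.0123
```

A small tail probability under the null is what lets a retroactive success count (here, eight of ten) be interpreted as better than chance.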

  9. Exaggerated Claims About Earthquake Predictions

    NASA Astrophysics Data System (ADS)

    Kafka, Alan L.; Ebel, John E.

    2007-01-01

    The perennial promise of successful earthquake prediction captures the imagination of a public hungry for certainty in an uncertain world. Yet, given the lack of any reliable method of predicting earthquakes [e.g., Geller, 1997; Kagan and Jackson, 1996; Evans, 1997], seismologists regularly have to explain news stories of a supposedly successful earthquake prediction when it is far from clear just how successful that prediction actually was. When journalists and public relations offices report the latest `great discovery' regarding the prediction of earthquakes, seismologists are left with the much less glamorous task of explaining to the public the gap between the claimed success and the sober reality that there is no scientifically proven method of predicting earthquakes.

  10. Regional moment - Magnitude relations for earthquakes and explosions

    NASA Astrophysics Data System (ADS)

    Patton, Howard J.; Walter, William R.

    1993-02-01

    We present M0:mb relations using mb(Pn) and mb(Lg) for earthquakes and explosions occurring in tectonic and stable areas. The observations for mb(Pn) range from about 3 to 6 and show excellent separation between earthquakes and explosions on M0:mb plots, independent of the magnitude. The scatter in M0:mb observations for NTS explosions is small compared to the earthquake data. The M0:mb(Lg) data for Soviet explosions overlay the observations for U.S. explosions. These results, and the small scatter for NTS explosions, suggest weak dependence of M0:mb relations on emplacement media. A simple theoretical model is developed which matches all these observations. The model uses scaling similarity and conservation of energy to provide a physical link between seismic moment and a broadband seismic magnitude. For U.S. explosions in different media, the material property and apparent stress contributions are shown to compensate for one another, supporting the observation that M0:mb is nearly independent of source geology.

  11. Does low magnitude earthquake ground shaking cause landslides?

    NASA Astrophysics Data System (ADS)

    Brain, Matthew; Rosser, Nick; Vann Jones, Emma; Tunstall, Neil

    2015-04-01

    Estimating the magnitude of coseismic landslide strain accumulation at both local and regional scales is a key goal in understanding earthquake-triggered landslide distributions and landscape evolution, and in undertaking seismic risk assessment. Research in this field has primarily been carried out using the 'Newmark sliding block method' to model landslide behaviour; downslope movement of the landslide mass occurs when seismic ground accelerations are sufficient to overcome shear resistance at the landslide shear surface. The Newmark method has the advantage of simplicity, requiring only limited information on material strength properties, landslide geometry and coseismic ground motion. However, the underlying conceptual model assumes that shear strength characteristics (friction angle and cohesion) calculated using conventional strain-controlled monotonic shear tests are valid under dynamic conditions, and that values describing shear strength do not change as landslide shear strain accumulates. Recent experimental work has begun to question these assumptions, highlighting, for example, the importance of shear strain rate and changes in shear strength properties following seismic loading. However, such studies typically focus on a single earthquake event that is of sufficient magnitude to cause permanent strain accumulation; by doing so, they do not consider the potential effects that multiple low-magnitude ground shaking events can have on material strength. Since such events are more common in nature relative to high-magnitude shaking events, it is important to constrain their geomorphic effectiveness. Using an experimental laboratory approach, we present results that address this key question. We used a bespoke geotechnical testing apparatus, the Dynamic Back-Pressured Shear Box (DynBPS), that uniquely permits more realistic simulation of earthquake ground-shaking conditions within a hillslope. We tested both cohesive and granular materials, both of which
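The Newmark sliding block method referenced above can be sketched in a few lines: the block begins to slide when ground acceleration exceeds the critical (yield) acceleration, accumulates downslope displacement while its relative velocity is positive, and re-locks when that velocity returns to zero. A minimal one-directional implementation with a synthetic input record; both the record and the yield acceleration are illustrative, not data from this study:

```python
import math

def newmark_displacement(accel_g, dt, a_crit_g):
    """Cumulative downslope displacement (m) of a rigid sliding block.
    accel_g: ground acceleration time series in units of g
    dt: sample interval (s); a_crit_g: critical (yield) acceleration in g
    """
    g = 9.81
    v = 0.0   # relative sliding velocity (m/s)
    d = 0.0   # accumulated downslope displacement (m)
    for a in accel_g:
        if v > 0 or a > a_crit_g:
            # block slides: relative acceleration is driving minus resisting
            v += (a - a_crit_g) * g * dt
            if v < 0:          # block re-locks when velocity reaches zero
                v = 0.0
            d += v * dt
    return d

# Synthetic 2 Hz sine pulse, 0.3 g peak, against a 0.1 g yield acceleration
dt = 0.005
record = [0.3 * math.sin(2 * math.pi * 2 * t * dt) for t in range(400)]
print(f"{newmark_displacement(record, dt, 0.1):.3f} m")
```

The simplicity the abstract mentions is visible here: only the acceleration record and a single strength parameter are needed, which is precisely the assumption the experimental work above calls into question.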

  12. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  13. Predicting Predictable: Accuracy and Reliability of Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2014-12-01

    Earthquake forecast/prediction is an uncertain profession. The famous Gutenberg-Richter relationship limits the magnitude range of prediction to about one unit. Otherwise, the statistics of outcomes would be related to the smallest earthquakes and may be misleading when attributed to the largest earthquakes. Moreover, the intrinsic uncertainty of earthquake sizing allows self-deceptive picking of justification "just from below" the targeted magnitude range. This might be important encouraging evidence but, by no means, can be a "helpful" additive to the statistics of a rigid test that determines the reliability and efficiency of a forecast/prediction method. Usually, earthquake prediction is classified with respect to expectation time while overlooking term-less identification of earthquake-prone areas, as well as spatial accuracy. The forecasts are often made for a "cell" or "seismic region" whose area is not linked to the size of target earthquakes. This might be another source for making a wrong choice in the parameterization of a forecast/prediction method and, eventually, for unsatisfactory performance in a real-time application. Summing up, prediction of the time and location of an earthquake of a certain magnitude range can be classified into the following categories of accuracy.
    Temporal accuracy (in years): long-term, 10; intermediate-term, 1; short-term, 0.01-0.1; immediate, 0.001.
    Spatial accuracy (in units of source zone size L): long-range, up to 100; middle-range, 5-10; narrow-range, 2-3; exact, 1.
    Note that the variety of possible combinations is much larger than the usually considered "short-term exact" one. In principle, such an accurate statement about an anticipated seismic extreme might be futile due to the complexities of the Earth's lithosphere, its blocks-and-faults structure, and the evidently nonlinear dynamics of the seismic process. The observed scaling of source size and preparation zone with earthquake magnitude implies exponential scales for

  14. Functional shape of the earthquake frequency-magnitude distribution and completeness magnitude

    NASA Astrophysics Data System (ADS)

    Mignan, A.

    2012-08-01

    We investigated the functional shape of the earthquake frequency-magnitude distribution (FMD) to identify its dependence on the completeness magnitude Mc. The FMD takes the form N(m) ∝ exp(-βm)q(m) where N(m) is the event number, m the magnitude, exp(-βm) the Gutenberg-Richter law and q(m) a detection function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature of bulk FMDs. Recent results however suggest that this gradual curvature is due to Mc heterogeneities, meaning that the functional shape of the elemental FMD has yet to be described. We propose a detection function of the form q(m) = exp(κ(m - Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models are compared in earthquake catalogs from Southern California and Nevada and in synthetic catalogs. We show that the angular FMD model better describes the elemental FMD and that the sum of elemental angular FMDs leads to the gradually curved bulk FMD. We propose an FMD shape ontology consisting of 5 categories depending on the Mc spatial distribution, from Mc constant to Mc highly heterogeneous: (I) Angular FMD, (II) Intermediary FMD, (III) Intermediary FMD with multiple maxima, (IV) Gradually curved FMD and (V) Gradually curved FMD with multiple maxima. We also demonstrate that the gradually curved FMD model overestimates Mc. This study provides new insights into earthquake detectability properties by using seismicity as a proxy and the means to accurately estimate Mc in any given volume.
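The proposed elemental FMD can be evaluated directly from the two formulas in the abstract: N(m) ∝ exp(-βm)q(m) with the angular detection function q(m) = exp(κ(m - Mc)) below Mc and q(m) = 1 above. A small sketch confirming that the angular model peaks at the completeness magnitude; the β, κ, and Mc values are illustrative:

```python
import math

def angular_fmd(m, beta, kappa, mc, n0=1.0):
    """Non-cumulative event rate N(m) = n0 * exp(-beta*m) * q(m), with
    q(m) = exp(kappa*(m - mc)) for m < mc and q(m) = 1 for m >= mc."""
    q = math.exp(kappa * (m - mc)) if m < mc else 1.0
    return n0 * math.exp(-beta * m) * q

beta = math.log(10) * 1.0    # equivalent to a b-value of 1
kappa = math.log(10) * 3.0   # illustrative detection-law slope (kappa > beta)
mc = 2.0

# The rate rises below Mc (detection improves faster than GR decays),
# peaks exactly at Mc, and follows pure Gutenberg-Richter decay above it.
rates = [angular_fmd(m / 10, beta, kappa, mc) for m in range(0, 50)]
peak = max(range(len(rates)), key=rates.__getitem__)
print(peak / 10)   # -> 2.0
```

Summing many such angular FMDs with spatially varying Mc is what reproduces the gradually curved bulk FMD described in the abstract.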

  15. VLF study of low magnitude Earthquakes (4.5

    NASA Astrophysics Data System (ADS)

    Wolbang, Daniel; Biernat, Helfried; Schwingenschuh, Konrad; Eichelberger, Hans; Prattes, Gustav; Besser, Bruno; Boudjada, Mohammed; Rozhnoi, Alexander; Solovieva, Maria; Biagi, Pier Francesco; Friedrich, Martin

    2014-05-01

    In the course of the European VLF/LF radio receiver network (International Network for Frontier Research on Earthquake Precursors, INFREP), radio signals in the frequency range from 10-50 kilohertz are received, continuously recorded (temporal resolution 20 seconds) and analyzed at the Graz/Austria node. The radio signals are generated by dedicated distributed transmitters and detected by INFREP receivers in Europe. In case the signal crosses an earthquake preparation zone, we are in principle able to detect seismic activity if the signal-to-noise ratio is high enough. The requirements to detect a seismic event with the radio link method are given by the magnitude M of the earthquake (EQ), the EQ preparation zone and the Fresnel zone. As pointed out by Rozhnoi et al. (2009), the VLF methods are suitable for earthquakes M>5.0. Furthermore, the VLF/LF radio link only gets disturbed if it crosses the EQ preparation zone, which is described by Molchanov et al. (2008). In the frame of this project I analyze low seismicity EQs (M≤5.6) in south/eastern Europe in the time period 2011-2013. My emphasis is on two seismic events with magnitudes 5.6 and 4.8 which we are not able to adequately characterize using our single-parameter VLF method. I perform a fine structure analysis of the residua of various radio links crossing the area around these two EQs. Depending on the individual paths, not all radio links cross the EQ preparation zone directly, so a comparative study is possible. As a comparison, I analyze with the same method the already well-described EQ of L'Aquila/Italy in 2009, with M=6.3, using radio links that cross the EQ preparation zone directly. In the course of this project we try to understand in more detail why it is so difficult to detect EQs with 4.5

  16. Ground Motion Characteristics Considering Magnitude Dependency and Difference Between Surface and Subsurface Rupture Earthquakes

    NASA Astrophysics Data System (ADS)

    Kagawa, T.; Irikura, K.; Some, P. G.; Miyake, H.; Sato, T.; Dan, K.; Matsu, S.

    2005-12-01

    We have studied differences in ground motion according to fault rupture type and magnitude. We found that three different earthquake categories have distinct ground motion characteristics. Somerville (2003) and Kagawa et al. (2004) found that the ground motion caused by subsurface rupture in the period range around one second is larger than predicted by empirical spectral attenuation relations (Abrahamson and Silva, 1997) for all earthquakes, but ground motion from earthquakes that rupture the surface is smaller in the same period range. We expand their study to smaller earthquakes and add several recent earthquakes. We began by dividing the earthquakes into four categories that are a combination of two classifications, i.e. defined and undefined fault, surface and subsurface rupture earthquakes. Each category is divided into earthquakes larger and smaller than about Mw 6.5. Eventually, we classified the earthquakes into three groups: a) Surface rupture type: ground motion is smaller than average, especially in the period range around 1 second. b) Larger subsurface rupture type: ground motion is larger than average, especially in the period range around 1 second. c) Smaller subsurface rupture type: ground motion is larger than average, especially in the period range around 0.1 second. Subsurface rupture earthquakes with small magnitude occur in the deep portion of the seismogenic zone. Deep and high stress asperities generate large ground motions in the short period range. They do not generate pulse-like ground motions, because the asperity is too small and deep to cause forward directivity effects, and because the radiation and propagation of ground motion at short periods may be too incoherent to allow the formation of a pulse. Larger subsurface rupture earthquakes have larger asperities that may span a large part of the width of the seismogenic zone, producing coherent directivity pulses with periods of 1 second or more. Kagawa et al. (2004) pointed out

  17. Magnitude and location of historical earthquakes in Japan and implications for the 1855 Ansei Edo earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    2005-01-01

    Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42MJMA - 0.00887Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^(1/2), and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting plate intensity attenuation model where intensity is equal to -8.33 + 2.19MJMA - 0.00550Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting plate model. Using the subducting plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
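The shallow crustal attenuation model quoted in the abstract can be evaluated directly. A minimal sketch, assuming the logarithm in the model is base 10 (conventional for intensity attenuation relations):

```python
import math

def ijma_shallow(m_jma, delta_km, h_km):
    """Predicted JMA intensity for shallow crustal Honshu earthquakes,
    using the attenuation model quoted in the abstract.
    delta_km: epicentral distance; h_km: focal depth."""
    dh = math.sqrt(delta_km**2 + h_km**2)   # slant (hypocentral) distance
    return -1.89 + 1.42 * m_jma - 0.00887 * dh - 1.66 * math.log10(dh)

# An MJMA 7.0 crustal shock at 30 km epicentral distance, 10 km depth
print(round(ijma_shallow(7.0, 30.0, 10.0), 2))   # -> 5.28
```

In the inverse problem the abstract describes, many such predicted intensities are compared against historical IJMA assignments to grid-search for the intensity center and Mjma.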

  18. Geochemical challenge to earthquake prediction.

    PubMed Central

    Wakita, H

    1996-01-01

    The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented. PMID:11607665

  19. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    Summary: To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M=6.65 (A=537 km2).
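The bilinear Hanks and Bakun (2002) relation can be sketched as follows. The coefficients below are the original 2002 values (unit log-slope below A = 537 km², slope 4/3 above), so the hinge magnitude they give differs slightly from the 2007 update the Committee references:

```python
import math

def hanks_bakun_2002(area_km2):
    """Bilinear magnitude-area relation of Hanks and Bakun (2002):
    M = log10(A) + 3.98 for A <= 537 km^2, and a steeper 4/3 slope
    M = (4/3)*log10(A) + 3.07 above the hinge area."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

print(round(hanks_bakun_2002(537.0), 2))   # hinge magnitude -> 6.71
```

The two branches are constructed to meet at the hinge area, so the relation is continuous even though its slope changes.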

  20. Magnitude-frequency relations for earthquakes using a statistical mechanical approach

    SciTech Connect

    Rundle, J.B.

    1993-12-10

    At very small magnitudes, observations indicate that the frequency of occurrence of earthquakes is significantly smaller than the frequency predicted by simple Gutenberg-Richter statistics. Previously, it has been suggested that the dearth of small events is related to a rapid rise in scattering and attenuation at high frequencies and the consequent inability to detect these events with standard arrays of seismometers. However, several recent studies have suggested that instrumentation cannot account for the entire effect and that the decline in frequency may be real. Working from this hypothesis, we derive a magnitude-frequency relation for very small earthquakes that is based upon the postulate that the system of moving plates can be treated as a system not too far removed from equilibrium. As a result, it is assumed that in the steady state, the probability P[E] that a segment of fault has a free energy E is proportional to the exponential of the free energy, P[E] ∝ exp[-E/EN]. In equilibrium statistical mechanics this distribution is called the Boltzmann distribution. The probability weight EN is the space-time steady-state average of the free energy of the segment. Earthquakes are then treated as fluctuations in the free energy of the segments. With these assumptions, it is shown that magnitude-frequency relations can be obtained. For example, previous results obtained by the author can be recovered under the same assumptions as before, for intermediate and large events, the distinction being whether the event is of a linear dimension sufficient to extend the entire width of the brittle zone. Additionally, a magnitude-frequency relation is obtained that is in satisfactory agreement with the data at very small magnitudes. At these magnitudes, departures from frequencies predicted by Gutenberg-Richter statistics are found using a model that accounts for the finite thickness of the inelastic part of the fault zone.

  1. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window.

    PubMed

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-01-01

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method previously applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude. PMID:25346344

  3. Brief communication "The magnitude 7.2 Bohol earthquake, Philippines"

    NASA Astrophysics Data System (ADS)

    Lagmay, A. M. F.; Eco, R.

    2014-03-01

    A devastating earthquake struck Bohol, Philippines on 15 October 2013. The earthquake originated at 12 km depth from an unmapped reverse fault, which manifested on the surface for several kilometers and with maximum vertical displacement of 3 m. The earthquake resulted in 222 fatalities with damage to infrastructure estimated at US$52.06 million. Widespread landslides and sinkholes formed in the predominantly limestone region during the earthquake. These remain a significant threat to communities as destabilized hillside slopes, landslide-dammed rivers and incipient sinkholes are still vulnerable to collapse, triggered possibly by aftershocks and heavy rains in the upcoming months of November and December.

  4. Modeling Time Dependent Earthquake Magnitude Distributions Associated with Injection-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Maurer, J.; Segall, P.

    2015-12-01

    Understanding and predicting earthquake magnitudes from injection-induced seismicity is critically important for estimating hazard due to injection operations. A particular problem has been that the largest event often occurs post shut-in. A rigorous analysis would require modeling all stages of earthquake nucleation, propagation, and arrest, and not just initiation. We present a simple conceptual model for predicting the distribution of earthquake magnitudes during and following injection, building on the analysis of Segall & Lu (2015). The analysis requires several assumptions: (1) the distribution of source dimensions follows a Gutenberg-Richter distribution; (2) in environments where the background ratio of shear to effective normal stress is low, the size of induced events is limited by the volume perturbed by injection (e.g., Shapiro et al., 2013; McGarr, 2014), and (3) the perturbed volume can be approximated by diffusion in a homogeneous medium. Evidence for the second assumption comes from numerical studies that indicate the background ratio of shear to normal stress controls how far an earthquake rupture, once initiated, can grow (Dunham et al., 2011; Schmitt et al., submitted). We derive analytical expressions that give the rate of events of a given magnitude as the product of three terms: the time-dependent rate of nucleations, the probability of nucleating on a source of given size (from the Gutenberg-Richter distribution), and a time-dependent geometrical factor. We verify our results using simulations and demonstrate characteristics observed in real induced sequences, such as time-dependent b-values and the occurrence of the largest event post injection. We compare results to Segall & Lu (2015) as well as example datasets. Future work includes using 2D numerical simulations to test our results and assumptions; in particular, investigating how background shear stress and fault roughness control rupture extent.

  5. Bayesian Predictive Distribution for the Magnitude of the Largest Aftershock

    NASA Astrophysics Data System (ADS)

    Shcherbakov, R.

    2014-12-01

    Aftershock sequences, which follow large earthquakes, last hundreds of days and are characterized by well defined frequency-magnitude and spatio-temporal distributions. The largest aftershocks in a sequence constitute significant hazard and can inflict additional damage to infrastructure. Therefore, the estimation of the magnitude of possible largest aftershocks in a sequence is of high importance. In this work, we propose a statistical model based on Bayesian analysis and extreme value statistics to describe the distribution of magnitudes of the largest aftershocks in a sequence. We derive an analytical expression for a Bayesian predictive distribution function for the magnitude of the largest expected aftershock and compute the corresponding confidence intervals. We assume that the occurrence of aftershocks can be modeled, to a good approximation, by a non-homogeneous Poisson process with a temporal event rate given by the modified Omori law. We also assume that the frequency-magnitude statistics of aftershocks can be approximated by Gutenberg-Richter scaling. We apply our analysis to 19 prominent aftershock sequences, which occurred in the last 30 years, in order to compute the Bayesian predictive distributions and the corresponding confidence intervals. In the analysis, we use the information of the early aftershocks in the sequences (in the first 1, 10, and 30 days after the main shock) to estimate retrospectively the confidence intervals for the magnitude of the expected largest aftershocks. We demonstrate by analysing 19 past sequences that in many cases we are able to constrain the magnitudes of the largest aftershocks. For example, this includes the analysis of the Darfield (Christchurch) aftershock sequence. The proposed analysis can be used for the earthquake hazard assessment and forecasting associated with the occurrence of large aftershocks. The improvement in instrumental data associated with early aftershocks can greatly enhance the analysis and
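The ingredients named in the abstract (modified Omori rate, Gutenberg-Richter magnitudes, Poisson occurrence) can be combined in a simplified, non-Bayesian sketch: for an expected count N of aftershocks above a cutoff magnitude, P(largest aftershock < m) = exp(-N·10^(-b(m - m_min))). This is only the skeleton of the full Bayesian treatment, and every parameter value below is a placeholder, not a fit to any real sequence:

```python
import math

def omori_count(k, c, p, t1, t2):
    """Expected number of aftershocks in [t1, t2] days under the
    modified Omori law rate k/(t + c)**p (valid for p != 1)."""
    return k / (1 - p) * ((t2 + c)**(1 - p) - (t1 + c)**(1 - p))

def prob_max_below(m, n_total, b, m_min):
    """P(largest aftershock < m), combining Poisson occurrence with
    Gutenberg-Richter magnitudes: expected count at or above m is
    n_total * 10**(-b*(m - m_min))."""
    return math.exp(-n_total * 10 ** (-b * (m - m_min)))

# Illustrative 30-day sequence; Omori parameters and b-value are placeholders
n = omori_count(k=200.0, c=0.1, p=1.2, t1=0.0, t2=30.0)
for m in (5.0, 6.0, 7.0):
    print(m, round(prob_max_below(m, n, b=1.0, m_min=3.0), 3))
```

The Bayesian analysis in the abstract goes further by treating the Omori and Gutenberg-Richter parameters as uncertain and integrating over their posterior, which widens the resulting confidence intervals.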

  6. Earthquakes clustering based on the magnitude and the depths in Molluca Province

    SciTech Connect

    Wattimanela, H. J.; Pasaribu, U. S.; Indratno, S. W.; Puspito, A. N. T.

    2015-12-22

    In this paper, we present a model to classify the earthquakes occurred in Molluca Province. We use K-Means clustering method to classify the earthquake based on the magnitude and the depth of the earthquake. The result can be used for disaster mitigation and for designing evacuation route in Molluca Province.
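A K-Means classification of events by magnitude and depth, as described above, can be sketched without any external library. The toy catalog and the pre-scaled depth feature below are purely illustrative, not data from the Molluca Province study:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means on (magnitude, depth) pairs. Features should be
    scaled beforehand, since raw depth in km would dwarf magnitude."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each event to its nearest center (squared Euclidean)
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # recompute each center as the mean of its group
        new = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
               for i, g in enumerate(groups)]
        if new == centers:          # converged
            break
        centers = new
    return centers, groups

# Toy catalog: shallow/moderate vs deep/strong events, depth scaled to ~[0, 1]
events = [(4.1, 0.10), (4.3, 0.12), (4.0, 0.08),   # shallow cluster
          (6.2, 0.80), (6.0, 0.85), (6.4, 0.78)]   # deep cluster
centers, groups = kmeans(events, k=2)
print(sorted(len(g) for g in groups))   # -> [3, 3]
```

For mitigation planning, the resulting cluster centers summarize which magnitude-depth combinations dominate each zone of the catalog.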

  7. Rapid earthquake magnitude from real-time GPS precise point positioning for earthquake early warning and emergency response

    NASA Astrophysics Data System (ADS)

    Fang, Rongxin; Shi, Chuang; Song, Weiwei; Wang, Guangxing; Liu, Jingnan

    2014-05-01

    For earthquake early warning (EEW) and emergency response, earthquake magnitude is the crucial parameter to be determined rapidly and correctly. However, a reliable and rapid measurement of the magnitude of an earthquake is a challenging problem, especially for large earthquakes (M>8). Here, the magnitude is determined based on the GPS displacement waveform derived from real-time precise point positioning (PPP). The real-time PPP results are evaluated with an accuracy of 1 cm in the horizontal components and 2-3 cm in the vertical components, indicating that the real-time PPP is capable of detecting seismic waves with amplitudes of 1 cm horizontally and 2-3 cm vertically with a confidence level of 95%. In order to estimate the magnitude, the unique information provided by the GPS displacement waveform is the horizontal peak displacement amplitude. We show that the empirical relation of Gutenberg (1945) between peak displacement and magnitude holds up to nearly magnitude 9.0 when displacements are measured with GPS. We tested the proposed method for three large earthquakes. For the 2010 Mw 7.2 El Mayor-Cucapah earthquake, our method provides a magnitude of M7.18±0.18. For the 2011 Mw 9.0 Tohoku-oki earthquake the estimated magnitude is M8.74±0.06, and for the 2010 Mw 8.8 Maule earthquake the value is M8.7±0.1 after excluding some near-field stations. We therefore conclude that, depending on the availability of high-rate GPS observations, a robust magnitude of up to 9.0 for a point source earthquake can be estimated within tens of seconds or a few minutes after an event using a few GPS stations close to the epicenter. Such a rapid magnitude estimate is feasible as a prerequisite for tsunami early warning, fast source inversion, and emergency response.

  8. Earthquake prediction: Simple methods for complex phenomena

    NASA Astrophysics Data System (ADS)

    Luen, Bradley

    2010-09-01

Earthquake predictions are often either based on stochastic models, or tested using stochastic models. Tests of predictions often tacitly assume predictions do not depend on past seismicity, which is false. We construct a naive predictor that, following each large earthquake, predicts another large earthquake will occur nearby soon. Because this "automatic alarm" strategy exploits clustering, it succeeds beyond "chance" according to a test that holds the predictions fixed. Some researchers try to remove clustering from earthquake catalogs and model the remaining events. There have been claims that the declustered catalogs are Poisson on the basis of statistical tests we show to be weak. Better tests show that declustered catalogs are not Poisson. In fact, there is evidence that events in declustered catalogs do not have exchangeable times given the locations, a necessary condition for the Poisson. If seismicity followed a stochastic process, an optimal predictor would turn on an alarm when the conditional intensity is high. The Epidemic-Type Aftershock (ETAS) model is a popular point process model that includes clustering. It has many parameters, but is still a simplification of seismicity. Estimating the model is difficult, and estimated parameters often give a non-stationary model. Even if the model is ETAS, temporal predictions based on the ETAS conditional intensity are not much better than those of magnitude-dependent automatic (MDA) alarms, a much simpler strategy with only one parameter instead of five. For a catalog of Southern Californian seismicity, ETAS predictions again offer only slight improvement over MDA alarms.
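The magnitude-dependent automatic (MDA) alarm strategy lends itself to a compact sketch. The alarm-duration rule below (alarm length proportional to 10**m, scaled by a single parameter `u`) is an assumed form chosen to illustrate the one-parameter idea; the thesis's exact rule may differ.

```python
def mda_alarms(catalog, u=0.01, m_min=3.0):
    """Magnitude-dependent automatic alarm sketch: after each event of
    magnitude m >= m_min at time t (days), keep an alarm on during
    [t, t + u * 10**m). The single tuning parameter u trades total alarm
    time against hit rate; this exact functional form is an assumption."""
    return [(t, t + u * 10 ** m) for t, m in catalog if m >= m_min]

def alarm_on(intervals, t):
    """True if any alarm interval covers time t."""
    return any(start <= t < end for start, end in intervals)
```

Because larger events get exponentially longer alarms, the strategy automatically exploits aftershock clustering, which is the point the thesis makes against more elaborate ETAS-based predictors.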

  9. Microearthquake networks and earthquake prediction

    USGS Publications Warehouse

    Lee, W.H.K.; Steward, S. W.

    1979-01-01

A microearthquake network is a group of highly sensitive seismographic stations designed primarily to record local earthquakes of magnitude less than 3. Depending on the application, a microearthquake network may consist of a few stations or as many as a few hundred. Networks are usually classified as either permanent or temporary. In a permanent network, the seismic signal from each station is telemetered to a central recording site to cut down on operating costs and to allow more efficient and up-to-date processing of the data. However, telemetering can restrict station siting because of the line-of-sight requirement for radio transmission or the need for telephone lines. Temporary networks are designed to be extremely portable and completely self-contained so that they can be deployed very quickly. They are most valuable for recording aftershocks of a major earthquake or for studies in remote areas.

  10. An Exponential Detection Function to Describe Earthquake Frequency-Magnitude Distributions Below Completeness

    NASA Astrophysics Data System (ADS)

    Mignan, A.

    2011-12-01

The capacity of a seismic network to detect small earthquakes can be evaluated by investigating the shape of the Frequency-Magnitude Distribution (FMD) of the resultant earthquake catalogue. The non-cumulative FMD takes the form N(m) ∝ exp(-βm)q(m) where N(m) is the number of events of magnitude m, exp(-βm) the Gutenberg-Richter law and q(m) a probability function. I propose an exponential detection function of the form q(m) = exp(κ(m-Mc)) for m < Mc, with Mc the magnitude of completeness, the magnitude at which N(m) is maximal. With Mc varying in space due to the heterogeneous distribution of seismic stations in a network, the bulk FMD of an earthquake catalogue corresponds to the sum of local FMDs with respective Mc(x,y), which leads to the gradual curvature of the bulk FMD below max(Mc(x,y)). More complicated FMD shapes are expected if the catalogue is derived from multiple network configurations. The model predictions are verified in the case of Southern California and Nevada. Only slight variations of the detection parameter k = κ/ln(10) are observed within a given region, with k = 3.84 ± 0.66 for Southern California and k = 2.84 ± 0.77 for Nevada, assuming Mc constant in 2° by 2° cells. Synthetic catalogues, which follow the exponential model, can reproduce reasonably well the FMDs observed for Southern California and Nevada by using only c. 15% of the total number of observed events. The proposed model has important implications for Mc mapping procedures and allows use of the full magnitude range for subsequent seismicity analyses.
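The stated FMD model is easy to tabulate. Below is a minimal sketch using the abstract's definitions of β, κ and Mc; the overall scale `n0` is a free normalization I introduce for illustration.

```python
import math

def expected_counts(beta, kappa, mc, mags, n0=1.0):
    """Non-cumulative FMD model N(m) = n0 * exp(-beta*m) * q(m), with the
    exponential detection function q(m) = exp(kappa*(m - mc)) for m < Mc
    and q(m) = 1 at or above Mc. When kappa > beta, N(m) rises below Mc
    and decays above it, so the curve peaks exactly at Mc."""
    counts = []
    for m in mags:
        q = math.exp(kappa * (m - mc)) if m < mc else 1.0
        counts.append(n0 * math.exp(-beta * m) * q)
    return counts
```

Summing such curves over cells with different Mc(x,y) reproduces the gradual roll-off of the bulk FMD below max(Mc(x,y)) described in the abstract.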

  11. Earthquake prediction with electromagnetic phenomena

    NASA Astrophysics Data System (ADS)

    Hayakawa, Masashi

    2016-02-01

Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, which is considered one of the most important and urgent topics for humankind. If such short-term prediction were realized, casualties would be drastically reduced. Unlike conventional seismic measurement, we proposed the use of electromagnetic phenomena as precursors to EQs, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, ionospheric perturbation as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

  12. Dim prospects for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Geller, Robert J.

I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].

  13. Demographic factors predict magnitude of conditioned fear.

    PubMed

    Rosenbaum, Blake L; Bui, Eric; Marin, Marie-France; Holt, Daphne J; Lasko, Natasha B; Pitman, Roger K; Orr, Scott P; Milad, Mohammed R

    2015-10-01

    There is substantial variability across individuals in the magnitudes of their skin conductance (SC) responses during the acquisition and extinction of conditioned fear. To manage this variability, subjects may be matched for demographic variables, such as age, gender and education. However, limited data exist addressing how much variability in conditioned SC responses is actually explained by these variables. The present study assessed the influence of age, gender and education on the SC responses of 222 subjects who underwent the same differential conditioning paradigm. The demographic variables were found to predict a small but significant amount of variability in conditioned responding during fear acquisition, but not fear extinction learning or extinction recall. A larger differential change in SC during acquisition was associated with more education. Older participants and women showed smaller differential SC during acquisition. Our findings support the need to consider age, gender and education when studying fear acquisition but not necessarily when examining fear extinction learning and recall. Variability in demographic factors across studies may partially explain the difficulty in reproducing some SC findings. PMID:26151498

  14. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the other being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity; the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (>6.5R) can be predicted within a very good accuracy window (±1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) for >6.5R events. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake.
We observe no improvement only when a planetary trigger coincided with

  15. Determination of earthquake magnitude using GPS displacement waveforms from real-time precise point positioning

    NASA Astrophysics Data System (ADS)

    Fang, Rongxin; Shi, Chuang; Song, Weiwei; Wang, Guangxing; Liu, Jingnan

    2014-01-01

For earthquake and tsunami early warning and emergency response, earthquake magnitude is the crucial parameter to be determined rapidly and correctly. However, a reliable and rapid measurement of the magnitude of an earthquake is a challenging problem, especially for large earthquakes (M > 8). Here, the magnitude is determined based on the GPS displacement waveform derived from real-time precise point positioning (RTPPP). RTPPP results are evaluated with an accuracy of 1 cm in the horizontal components and 2-3 cm in the vertical components, indicating that RTPPP is capable of detecting seismic waves with amplitudes of 1 cm horizontally and 2-3 cm vertically with a confidence level of 95 per cent. In order to estimate the magnitude, the unique information provided by the GPS displacement waveform is the horizontal peak displacement amplitude. We show that the empirical relation of Gutenberg (1945) between peak displacement and magnitude holds up to nearly magnitude 9.0 when displacements are measured with GPS. We tested the proposed method for three large earthquakes. For the 2010 Mw 7.2 El Mayor-Cucapah earthquake, our method provides a magnitude of M7.18 ± 0.18. For the 2011 Mw 9.0 Tohoku-oki earthquake the estimated magnitude is M8.74 ± 0.06, and for the 2010 Mw 8.8 Maule earthquake the value is M8.7 ± 0.1 after excluding some near-field stations. We therefore conclude that, depending on the availability of high-rate GPS observations, a robust magnitude of up to 9.0 for a point-source earthquake can be estimated within tens of seconds or a few minutes after an event using a few GPS stations close to the epicentre. Such a rapid magnitude estimate makes tsunami early warning, fast source inversion and emergency response feasible.

  16. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short-period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and, more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  17. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez, Capera A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental
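The percentile-style bootstrap summary described above can be sketched as follows. Note the simplification: here I resample a flat list of per-observation magnitude estimates directly, whereas the paper resamples the intensity data and re-runs the three estimation techniques through a weighted decision tree; all names are illustrative.

```python
import random

def bootstrap_magnitude(estimates, n_boot=2000, seed=1):
    """Bootstrap sketch: resample the estimates with replacement, record
    the median of each resample, then report the median of the bootstrap
    medians as the preferred magnitude and the central 68% of their
    distribution as the uncertainty range (the '68% of bootstrap
    magnitudes' rule quoted in the abstract)."""
    rng = random.Random(seed)
    n = len(estimates)
    medians = []
    for _ in range(n_boot):
        sample = sorted(rng.choice(estimates) for _ in range(n))
        medians.append(sample[n // 2])
    medians.sort()
    lower = medians[int(0.16 * n_boot)]
    upper = medians[int(0.84 * n_boot)]
    return medians[n_boot // 2], (lower, upper)
```

The same percentile logic extends to two dimensions for the location: the preferred epicenter is the density maximum of bootstrap locations, and confidence regions are density contours enclosing 68% (and so on) of them.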

  18. Gambling scores for earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang

    2010-04-01

This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
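The fair-payoff bookkeeping described above can be sketched in a few lines. The payoff r·(1-p)/p is the standard fair-odds rule implied by "a fair rule"; whether the paper uses exactly this normalization is an assumption.

```python
def gambling_score(bets, outcomes, ref_probs):
    """Gambling-score sketch: for each prediction the forecaster bets r
    reputation points. If the reference model (the 'house') assigns
    probability p to the predicted event, a fair bet pays r * (1 - p) / p
    points on success and costs r points on failure, so the expected net
    gain under the reference model is exactly zero."""
    total = 0.0
    for r, hit, p in zip(bets, outcomes, ref_probs):
        total += r * (1.0 - p) / p if hit else -r
    return total
```

Betting 1 point on an event the reference model deems a coin flip (p = 0.5) gains 1 point on success and loses 1 on failure; predicting an event the reference model considers unlikely pays off much more, which is how the score rewards risk-taking.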

  19. Efficiency test of earthquake prediction around Thessaloniki from electrotelluric precursors

    NASA Astrophysics Data System (ADS)

    Meyer, K.; Varotsos, P.; Alexopoulos, K.; Nomicos, K.

    1985-11-01

Since the completion of the network in January 1983, the electric field of the earth has been continuously monitored at four sites near Thessaloniki, the capital of northern Greece. From the present study and from previous investigations by similar measurements in Greece, it is evident that transient changes of the electrotelluric field occur prior to earthquakes. The analysis of these electric forerunners leads in many cases to a successful prediction of the epicentral area, the magnitude and the time of the impending event. Predictions prior to regional earthquakes are issued and documented with telegrams. From November 1983 until the end of May 1984 twelve earthquakes (ML > 3.5) occurred in the vicinity of Thessaloniki. Ten of these were predicted and warnings given by telegram, whereas two smaller seismic events were missed. Two additional predictions were unsuccessful. Independent of their magnitudes, predicted events took place within a time window of 6 hrs to 6 days after the observations of the electrotelluric anomalies. The accuracy of the predicted epicenters in eight cases is better than 100 km, which corresponds roughly to the mean distance between the electric stations. Magnitude estimates deviate by less than 0.5 magnitude units from the seismically observed ones. Considering the two largest earthquakes, it is shown that the probability of making each of these predictions by chance is of the order of 10^-2.
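The quoted chance probability can be reproduced with a back-of-the-envelope Poisson calculation. The rates and window fractions below are illustrative placeholders, not the paper's values:

```python
import math

def chance_success_prob(rate_per_day, window_days, frac_area, frac_mag):
    """Probability that a prediction succeeds 'by chance': assume
    qualifying earthquakes are Poisson with the given regional daily rate,
    thinned by the fractions of the region (frac_area) and of the
    magnitude band (frac_mag) that the prediction covers; success means
    at least one such event falls inside the alarm window."""
    effective_rate = rate_per_day * frac_area * frac_mag
    return 1.0 - math.exp(-effective_rate * window_days)
```

For example, with a regional rate of 0.1 events/day, a 6-day alarm window, and prediction windows covering 10% of the area and 20% of the magnitude range, the chance probability comes out near 0.012, i.e. of order 10^-2.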

  20. Source time function properties indicate a strain drop independent of earthquake depth and magnitude

    NASA Astrophysics Data System (ADS)

    Vallee, Martin

    2014-05-01

Movement of the tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, i.e. the ratio of seismic slip over the dimension of the ruptured fault. SCARDEC, a recently developed method, gives access to this information through the systematic determination of earthquake source time functions (STFs). STFs describe the integrated spatio-temporal history of the earthquake process, and their maximum value can be related to the amount of stress or strain released during the earthquake. Here I analyse all earthquakes with magnitudes greater than 6 occurring in the last 20 years, and thus provide a catalogue of 1700 STFs which sample all possible seismic depths. Analysis of this new database reveals that the strain drop remains on average the same for all earthquakes, independent of magnitude and depth. In other words, it is shown that, independent of the earthquake depth, magnitude 6 and larger earthquakes keep on average a similar ratio between seismic slip and dimension of the main slip patch. This invariance implies that deep earthquakes are even more similar than previously thought to their shallow counterparts, a puzzling finding as shallow and deep earthquakes should originate from different physical mechanisms. Concretely, the ratio between slip and patch dimension is of the order of 10^-5-10^-4, with extreme values only 8 times lower or larger at the 95% confidence interval. Besides the implications for mechanisms of deep earthquake generation, this limited variability has practical implications for realistic earthquake scenarios.

  1. The Magnitude 6.7 Northridge, California, Earthquake of January 17, 1994

    NASA Technical Reports Server (NTRS)

    Donnellan, A.

    1994-01-01

The most damaging earthquake in the United States since 1906 struck northern Los Angeles on January 17, 1994. The magnitude 6.7 Northridge earthquake produced a maximum of more than 3 meters of reverse (up-dip) slip on a south-dipping thrust fault rooted under the San Fernando Valley and projecting north under the Santa Susana Mountains.

  2. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

Seismically induced landslides are a major environmental effect of earthquakes, which may significantly contribute to related losses. Moreover, in paleoseismology landslide event sizes are an important proxy for the estimation of the intensity and magnitude of past earthquakes, thus allowing seismic hazard assessment to be improved over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of numbers and size of the affected area, right after an earthquake event has occurred. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970) the combination and relative weight of the factors was calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked.
One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important

  3. Earthquake prediction decision and risk matrix

    NASA Astrophysics Data System (ADS)

    Zou, Qi-Jia

    1993-08-01

The issuance of an earthquake prediction inevitably causes widespread social responses. This paper suggests and discusses treating earthquake prediction as a comprehensive decision problem that takes social and economic costs into account. A method of matrix decision for earthquake prediction (MDEP), based on the risk matrix, is proposed. The goal of the decision is to find the manner of issuing an earthquake prediction that minimizes the total economic losses. The establishment and calculation of the risk matrix are discussed, and decision results that account for economic factors are compared by example with those that do not.
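The risk-matrix decision can be sketched as expected-loss minimization. All numbers and action labels below are invented placeholders; the paper's MDEP method would fill such a matrix from concrete social and economic cost estimates.

```python
def best_action(loss_matrix, outcome_probs):
    """Risk-matrix sketch: rows are candidate announcement actions,
    columns are outcomes (earthquake occurs / does not occur). Return the
    index of the action with minimum expected loss, plus all expected
    losses for inspection."""
    expected = [sum(l * p for l, p in zip(row, outcome_probs))
                for row in loss_matrix]
    best = min(range(len(expected)), key=expected.__getitem__)
    return best, expected

# Hypothetical losses; columns = [quake occurs, no quake]:
LOSSES = [
    [100.0, 0.0],   # issue nothing: heavy losses if a quake strikes unwarned
    [20.0, 5.0],    # internal alert: mitigation helps, modest readiness cost
    [10.0, 30.0],   # public prediction: best if a quake comes, costly if not
]
```

With these placeholder numbers, at a 10% quake probability the internal alert minimizes expected loss, while at 80% the public prediction does; this trade-off between false-alarm cost and unwarned-disaster cost is exactly what the risk matrix formalizes.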

  4. Chile2015: Lévy Flight and Long-Range Correlation Analysis of Earthquake Magnitudes in Chile

    NASA Astrophysics Data System (ADS)

    Beccar-Varela, Maria P.; Gonzalez-Huizar, Hector; Mariani, Maria C.; Serpa, Laura F.; Tweneboah, Osei K.

    2016-06-01

The stochastic Truncated Lévy Flight model and detrended fluctuation analysis (DFA) are used to investigate the temporal distribution of earthquake magnitudes in Chile. We show that the Lévy Flight model is appropriate for modeling the time series of earthquake magnitudes. Furthermore, DFA shows that these events present memory effects, suggesting that the magnitude of impending earthquakes depends on the magnitude of previous earthquakes. Based on this dependency, we use a non-linear regression to estimate the magnitude of the 2015 M8.3 Illapel earthquake from the magnitudes of the previous events.
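DFA itself is compact enough to sketch in plain Python. This is a generic first-order DFA (linear detrending in non-overlapping windows), not the authors' specific implementation:

```python
import math

def dfa(series, scales):
    """First-order detrended fluctuation analysis: integrate the demeaned
    series into a profile, remove a least-squares linear trend inside
    non-overlapping windows of each scale n, and return the log-log slope
    of the RMS fluctuation F(n) versus n. A slope (alpha) near 0.5 means
    no memory; alpha > 0.5 indicates long-range correlation."""
    mean = sum(series) / len(series)
    profile, running = [], 0.0
    for x in series:
        running += x - mean
        profile.append(running)
    log_n, log_f = [], []
    for n in scales:
        t = list(range(n))
        t_mean = sum(t) / n
        denom = sum((ti - t_mean) ** 2 for ti in t)
        sq_sum, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            seg_mean = sum(seg) / n
            slope = sum((ti - t_mean) * (y - seg_mean)
                        for ti, y in zip(t, seg)) / denom
            sq_sum += sum((y - (seg_mean + slope * (ti - t_mean))) ** 2
                          for ti, y in zip(t, seg))
            count += n
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sq_sum / count)))
    x_mean = sum(log_n) / len(log_n)
    y_mean = sum(log_f) / len(log_f)
    return sum((x - x_mean) * (y - y_mean) for x, y in zip(log_n, log_f)) / \
           sum((x - x_mean) ** 2 for x in log_n)
```

Applied to a magnitude time series, an exponent significantly above 0.5 is the "memory effect" the abstract reports; an uncorrelated series gives an exponent near 0.5.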

  6. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims. It suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  7. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  8. Maximum Magnitude and Recurrence Interval for the Large Earthquakes in the Central and Eastern United States

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Hu, C.

    2012-12-01

    Maximum magnitude and recurrence interval of the large earthquakes are key parameters for seismic hazard assessment in the central and eastern United States. Determination of these two parameters is quite difficult in the region, however. For example, the estimated maximum magnitudes of the 1811-12 New Madrid sequence are in the range of M6.6 to M8.2, whereas the estimated recurrence intervals are in the range of about 500 to several thousand years. These large variations of maximum magnitude and recurrence interval for the large earthquakes lead to significant variation of estimated seismic hazards in the central and eastern United States. There are several approaches being used to estimate the magnitudes and recurrence intervals, such as historical intensity analysis, geodetic data analysis, and paleo-seismic investigation. We will discuss the approaches that are currently being used to estimate maximum magnitude and recurrence interval of the large earthquakes in the central United States.

  9. A geometric frequency-magnitude scaling transition: Measuring b = 1.5 for large earthquakes

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Holliday, James R.; Turcotte, Donald L.; Rundle, John B.

    2012-04-01

We identify two distinct scaling regimes in the frequency-magnitude distribution of global earthquakes. Specifically, we measure the scaling exponent b = 1.0 for "small" earthquakes with 5.5 < m < 7.6 and b = 1.5 for "large" earthquakes with 7.6 < m < 9.0. This transition at mt = 7.6 can be explained by geometric constraints on the rupture. In conjunction with supporting literature, this corroborates theories in favor of fully self-similar and magnitude-independent earthquake physics. We also show that the scaling behavior and abrupt transition between the scaling regimes imply that earthquake ruptures have compact shapes and smooth rupture fronts.
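The two-regime b-value measurement can be sketched with the standard Aki/Utsu maximum-likelihood estimator. The split at mt = 7.6 follows the abstract, while the estimator details (the half-bin correction dm/2) are the textbook form rather than anything stated by the authors.

```python
import math

def aki_b_value(mags, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for events with m >= m_min:
    b = log10(e) / (mean(m) - (m_min - dm/2)), where dm is the magnitude
    bin width (dm/2 corrects for binning; use dm=0 for continuous data)."""
    selected = [m for m in mags if m >= m_min]
    mean_m = sum(selected) / len(selected)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

def b_by_regime(mags, mt=7.6, dm=0.1):
    """Estimate b separately below and above the transition magnitude mt,
    mirroring the paper's small-event (b ~ 1.0) vs large-event (b ~ 1.5)
    regimes; the small-event completeness cut is taken as min(small)."""
    small = [m for m in mags if m < mt]
    large = [m for m in mags if m >= mt]
    return (aki_b_value(small, min(small), dm), aki_b_value(large, mt, dm))
```

Fitting both regimes with a single b would bias the result toward whichever regime dominates the catalog, which is why the transition magnitude has to be identified first.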

  10. Maximum earthquake magnitudes along different sections of the North Anatolian fault zone

    NASA Astrophysics Data System (ADS)

    Bohnhoff, Marco; Martínez-Garzón, Patricia; Bulut, Fatih; Stierle, Eva; Ben-Zion, Yehuda

    2016-04-01

    Constraining the maximum likely magnitude of future earthquakes on continental transform faults has fundamental consequences for the expected seismic hazard. Since the recurrence time for those earthquakes is typically longer than a century, such estimates rely primarily on well-documented historical earthquake catalogs, when available. Here we discuss the maximum observed earthquake magnitudes along different sections of the North Anatolian Fault Zone (NAFZ) in relation to the age of the fault activity, cumulative offset, slip rate and maximum length of coherent fault segments. The findings are based on a newly compiled catalog of historical earthquakes in the region, using the extensive literary sources that exist owing to the long civilization record. We find that the largest M7.8-8.0 earthquakes are exclusively observed along the older eastern part of the NAFZ that also has longer coherent fault segments. In contrast, the maximum observed events on the younger western part where the fault branches into two or more strands are smaller. No first-order relations between maximum magnitudes and fault offset or slip rates are found. The results suggest that the maximum expected earthquake magnitude in the densely populated Marmara-Istanbul region would probably not exceed M7.5. The findings are consistent with available knowledge for the San Andreas Fault and Dead Sea Transform, and can help in estimating hazard potential associated with different sections of large transform faults.

  11. Determination of magnitude and epicenter of historical earthquakes on the Trans Mexican Volcanic Belt

    NASA Astrophysics Data System (ADS)

    Suarez, G.; Jiménez, G.

    2013-12-01

    Two large earthquakes occurred in the Trans Mexican Volcanic Belt (TMVB) in the XXth century. A Mw 6.9 earthquake took place near the town of Acambay in 1912 and in 1920 an event near the city of Jalapa had a magnitude of Mw 6.4. Both events took place in the crust and reflect the tectonic deformation of the TMVB. In addition to these two instrumental earthquakes, the historical record in Mexico, which spans approximately the past 450 years, has a large volume of macroseismic information suggesting the presence of crustal earthquakes similar to those that took place in 1912 and 1920. The catalog of macroseismic data in Mexico was carefully reviewed, searching for the presence of crustal events in the TMVB. In total, twelve potential earthquakes were identified. The data were geo-referenced, an intensity was assigned on the Modified Mercalli Scale (MMS), and events were collated based on the dates reported by the references. The method developed by Bakun and Wentworth (1997) was used to estimate the magnitude and epicentral location of these historical earthquakes. Considering that only two instrumental earthquakes of similar magnitudes exist, it was not possible to construct an attenuation calibration curve of magnitude versus distance. Instead, several published attenuation curves were used. The calibration curve determined for California yielded the best results for both magnitude and epicentral location for the XXth century events. Using this calibration curve, the magnitude and location of several historical events were determined. Our results indicate that over the past 450 years, at least six earthquakes larger than magnitude M 6 have occurred on the TMVB. Three of these, the earthquakes of 1568, 1858 and 1875, appear to have a magnitude larger than M 7. Furthermore, the distribution of these historical earthquakes spans the TMVB in its entirety, and is not restricted to specific areas. 
The presence of these relatively large, crustal events that take place near the

  12. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief of the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bound by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  13. The magnitude 6.7 Northridge, California, earthquake of 17 January 1994

    USGS Publications Warehouse

    Jones, L.; Aki, K.; Boore, D.; Celebi, M.; Donnellan, A.; Hall, J.; Harris, R.; Hauksson, E.; Heaton, T.; Hough, S.; Hudnut, K.; Hutton, K.; Johnston, M.; Joyner, W.; Kanamori, H.; Marshall, G.; Michael, A.; Mori, J.; Murray, M.; Ponti, D.; Reasenberg, P.; Schwartz, D.; Seeber, L.; Shakal, A.; Simpson, R.; Thio, H.; Tinsley, J.; Todorovska, M.; Trifunac, M.; Wald, D.; Zoback, M.L.

    1994-01-01

    The most costly American earthquake since 1906 struck Los Angeles on 17 January 1994. The magnitude 6.7 Northridge earthquake resulted from more than 3 meters of reverse slip on a 15-kilometer-long south-dipping thrust fault that raised the Santa Susana mountains by as much as 70 centimeters. The fault appears to be truncated by the fault that broke in the 1971 San Fernando earthquake at a depth of 8 kilometers. Of these two events, the Northridge earthquake caused many times more damage, primarily because its causative fault is directly under the city. Many types of structures were damaged, but the fracture of welds in steel-frame buildings was the greatest surprise. The Northridge earthquake emphasizes the hazard posed to Los Angeles by concealed thrust faults and the potential for strong ground shaking in moderate earthquakes.

  14. Earthquake Prediction: Is It Better Not to Know?

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Discusses economic, social and political consequences of earthquake prediction. Reviews impact of prediction on China's recent (February, 1975) earthquake. Diagrams a chain of likely economic consequences from predicting an earthquake. (CS)

  15. Coseismic and postseismic slip of the 2011 magnitude-9 Tohoku-Oki earthquake.

    PubMed

    Ozawa, Shinzaburo; Nishimura, Takuya; Suito, Hisashi; Kobayashi, Tomokazu; Tobita, Mikio; Imakiire, Tetsuro

    2011-07-21

    Most large earthquakes occur along an oceanic trench, where an oceanic plate subducts beneath a continental plate. Massive earthquakes with a moment magnitude, Mw, of nine have been known to occur in only a few areas, including Chile, Alaska, Kamchatka and Sumatra. No historical records exist of a Mw = 9 earthquake along the Japan trench, where the Pacific plate subducts beneath the Okhotsk plate, with the possible exception of the AD 869 Jogan earthquake, the magnitude of which has not been well constrained. However, the strain accumulation rate estimated there from recent geodetic observations is much higher than the average strain rate released in previous interplate earthquakes. This finding raises the question of how such areas release the accumulated strain. A megathrust earthquake with Mw = 9.0 (hereafter referred to as the Tohoku-Oki earthquake) occurred on 11 March 2011, rupturing the plate boundary off the Pacific coast of northeastern Japan. Here we report the distributions of the coseismic slip and postseismic slip as determined from ground displacement detected using a network based on the Global Positioning System. The coseismic slip area extends approximately 400 km along the Japan trench, matching the area of the pre-seismic locked zone. The afterslip has begun to overlap the coseismic slip area and extends into the surrounding region. In particular, the afterslip area reached a depth of approximately 100 km, with Mw = 8.3, on 25 March 2011. Because the Tohoku-Oki earthquake released the strain accumulated for several hundred years, the paradox of the strain budget imbalance may be partly resolved. This earthquake reminds us of the potential for Mw ≈ 9 earthquakes to occur along other trench systems, even if no past evidence of such events exists. Therefore, it is imperative that strain accumulation be monitored using a space geodetic technique to assess earthquake potential. PMID:21677648

  16. The Magnitude Frequency Distribution of Induced Earthquakes and Its Implications for Crustal Heterogeneity and Hazard

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.

    2015-12-01

    Earthquake activity in the central United States has increased dramatically since 2009, principally driven by injection of wastewater coproduced with oil and gas. The elevation of pore pressure from the collective influence of many disposal wells has created an unintended experiment that probes both the state of stress and architecture of the fluid plumbing and fault systems through the earthquakes it induces. These earthquakes primarily release tectonic stress rather than accommodation stresses from injection. Results to date suggest that the aggregated magnitude-frequency distribution (MFD) of these earthquakes differs from natural tectonic earthquakes in the same region for which the b-value is ~1.0. In Kansas, Oklahoma and Texas alone, more than 1100 earthquakes of Mw ≥ 3 occurred between January 2014 and June 2015 but only 32 were Mw ≥ 4 and none were as large as Mw 5. Why is this so? Either the b-value is high (> 1.5) or the MFD deviates from log-linear form at large magnitude. Where catalogs from local networks are available, such as in southern Kansas, b-values are normal (~1.0) for small magnitude events (M < 3). The deficit in larger-magnitude events could be an artifact of a short observation period, or could reflect a decreased potential for large earthquakes. According to the prevailing paradigm, injection will induce an earthquake when (1) the pressure change encounters a preexisting fault favorably oriented in the tectonic stress field; and (2) the pore-pressure perturbation at the hypocenter is sufficient to overcome the frictional strength of the fault. Most induced earthquakes occur where the injection pressure has attenuated to a small fraction of the seismic stress drop implying that the nucleation point was highly stressed. The population statistics of faults satisfying (1) could be the cause of this MFD if there are many small faults (dimension < 1 km) and few large ones in a critically stressed crust
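
    The arithmetic behind the abstract's "why is this so?" can be made explicit: a log-linear Gutenberg-Richter distribution predicts how many Mw ≥ 4 events should accompany the observed Mw ≥ 3 count. The event counts below come from the abstract; the calculation itself is a standard GR extrapolation, not the paper's analysis:

```python
import math

def expected_count(n_ref, m_ref, m, b):
    """Expected number of events of magnitude >= m, given n_ref events
    >= m_ref, under a log-linear Gutenberg-Richter distribution."""
    return n_ref * 10 ** (-b * (m - m_ref))

n_m3 = 1100          # Mw >= 3 events, Jan 2014 - Jun 2015 (from the abstract)
n_m4_expected = expected_count(n_m3, 3.0, 4.0, b=1.0)   # ~110 if b = 1.0
# b-value implied by the 32 observed Mw >= 4 events:
b_implied = math.log10(n_m3 / 32) / (4.0 - 3.0)
print(round(n_m4_expected), round(b_implied, 2))   # → 110 1.54
```

    The implied aggregate b > 1.5 is exactly the alternative the abstract poses against a downward bend in the MFD at large magnitude.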

  17. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
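
    The interevent-time comparison described above rests on the fact that a stationary Poisson process has exponentially distributed interevent times. A small sketch of that check on simulated data (the rate and sample size are arbitrary, not the class's actual blockquake data):

```python
import numpy as np

# A stationary Poisson process has exponentially distributed interevent times.
rng = np.random.default_rng(1)
rate = 2.0                                      # true event rate (arbitrary units)
dt = rng.exponential(1.0 / rate, size=10_000)   # simulated interevent times

rate_hat = 1.0 / dt.mean()    # maximum-likelihood estimate of the rate
cv = dt.std() / dt.mean()     # coefficient of variation; ~1 for an exponential
```

    Plotting a histogram of `dt` against the fitted exponential density, as the lab does with interevent-time histograms, makes the Poisson character (or departures from it, e.g. clustering with cv > 1) visible.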

  18. The energy-magnitude scaling law for Ms ≤ 5.5 earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2015-04-01

    The scaling law of seismic radiation energy, Es, versus surface-wave magnitude, Ms, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with Ms > 5.5. In this review study, we examine if this law is valid for 0 < Ms ≤ 5.5 from earthquakes occurring in different regions. A comparison of the data points of log(Es) versus Ms with Gutenberg and Richter's law leads to a conclusion that the law is still valid for earthquakes with 0 < Ms ≤ 5.5.
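
    The Gutenberg-Richter (1956) law referred to here is log10(Es) = 1.5 Ms + 11.8, with Es in erg. A minimal evaluation of the relation:

```python
def radiated_energy_erg(ms):
    """Gutenberg-Richter (1956) energy-magnitude relation,
    log10(Es) = 1.5 * Ms + 11.8, with Es in erg."""
    return 10 ** (1.5 * ms + 11.8)

# One magnitude unit corresponds to a factor of 10**1.5 ~ 31.6 in radiated energy
ratio = radiated_energy_erg(5.5) / radiated_energy_erg(4.5)
print(round(ratio, 1))   # → 31.6
```

    The study's test amounts to checking whether data pairs (Ms, log Es) for small events continue to fall on this line below Ms = 5.5.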

  19. How to assess magnitudes of paleo-earthquakes from multiple observations

    NASA Astrophysics Data System (ADS)

    Hintersberger, Esther; Decker, Kurt

    2016-04-01

    An important aspect of fault characterisation regarding seismic hazard assessment are paleo-earthquake magnitudes. Especially in regions with low or moderate seismicity, paleo-magnitudes are normally much larger than those of historical earthquakes and therefore provide essential information about seismic potential and expected maximum magnitudes of a certain region. In general, these paleo-earthquake magnitudes are based either on surface rupture length or on surface displacement observed at trenching sites. Several well-established correlations provide the possibility to link the observed surface displacement to a certain magnitude. However, the combination of more than one observation is still rare and not well established. We present here a method based on a probabilistic approach proposed by Biasi and Weldon (2006) to combine several observations to better constrain the possible magnitude range of a paleo-earthquake. Extrapolating the approach of Biasi and Weldon (2006), the single-observation probability density functions (PDF) are assumed to be independent of each other. Following this line, the common PDF for all observed surface displacements generated by one earthquake is the product of all single-displacement PDFs. In order to test our method, we use surface displacement data for modern earthquakes, where magnitudes have been determined by instrumental records. For randomly selected "observations", we calculated the associated PDFs for each "observation point". We then combined the PDFs into one common PDF for an increasing number of "observations". Plotting the most probable magnitudes against the number of combined "observations", the resultant range of most probable magnitudes is very close to the magnitude derived by instrumental methods. Testing our method with real trenching observations, we used the results of a paleoseismological investigation within the Vienna Pull-Apart Basin (Austria), where three trenches were opened along the normal
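
    The combination rule described above, independent single-observation PDFs multiplied into one common PDF, can be sketched as follows. The grid, the Gaussian form, and the per-observation means and widths are illustrative stand-ins, not the displacement-conditional PDFs of Biasi and Weldon (2006):

```python
import numpy as np

m_grid = np.arange(5.5, 8.0, 0.01)   # magnitude grid
dm = 0.01

def gaussian_pdf(m, mu, sigma):
    return np.exp(-0.5 * ((m - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Three hypothetical per-observation magnitude PDFs (illustrative values)
pdfs = [gaussian_pdf(m_grid, mu, 0.4) for mu in (6.6, 6.8, 6.7)]

# Independent observations combine by multiplication, then renormalization
combined = np.prod(pdfs, axis=0)
combined /= combined.sum() * dm

m_best = m_grid[np.argmax(combined)]
print(round(m_best, 1))   # → 6.7
```

    As in the authors' test with modern earthquakes, the combined PDF narrows as observations are added, so the most probable magnitude converges faster than any single-trench estimate.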

  20. Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.

    2011-12-01

    It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience who may not necessarily be well-versed in the language of earthquake seismology. Given recent technological advances, previous approaches of using "snapshot" static images to represent earthquake data are now becoming obsolete, and the favored medium to explain complex wave propagation inside the solid earth and interactions among earthquakes is now visualizations that include auditory information. Here, we convert seismic data into visualizations that include sounds, the latter being a term known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc's QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc). 
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not

  1. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587-1996)

    NASA Astrophysics Data System (ADS)

    Beauval, Céline; Yepes, Hugo; Bakun, William H.; Egred, José; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-06-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower magnitude but shallower and potentially more destructive earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (~2.5 million inhabitants). A total population of ~6 million inhabitants currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory are not available for periods earlier than 1990 (beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied in order to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587-1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). Intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity >=VI) and 117 (Riobamba, 1797, Intensity >=III). The bootstrap resampling technique is coupled to the B&W method for deriving geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extension of the area delineating the intensity centre location at the 67 per cent confidence level (+/-1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching IX, X and XI degrees. Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding

  2. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional M0 or regional Mw. Conceptually, this method uses an earthquake source model along with an attenuation model and geometrical spreading which accounts for the propagation to utilize regional phase amplitudes of any phase and frequency. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined with sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine in detail several events to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional M0 to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find magnitude estimates to be more robust than typical magnitudes and existing regional methods and might be tuned further to improve upon them. The method yields a more meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
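
    The final step mentioned above, recasting a seismic moment as Mw, uses the standard Hanks & Kanamori (1979) definition; this is the conventional conversion, shown here as a sketch rather than the paper's full amplitude-correction pipeline:

```python
import math

def moment_magnitude(m0):
    """Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A seismic moment of 3.5e16 N*m corresponds to roughly Mw 5
print(round(moment_magnitude(3.5e16), 1))   # → 5.0
```

    Because the relation is logarithmic, averaging several independent moment estimates before converting, as the method does across phases and frequencies, stabilizes the resulting Mw.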

  3. Predicting the endpoints of earthquake ruptures.

    PubMed

    Wesnousky, Steven G

    2006-11-16

    The active fault traces on which earthquakes occur are generally not continuous, and are commonly composed of segments that are separated by discontinuities that appear as steps in map-view. Stress concentrations resulting from slip at such discontinuities may slow or stop rupture propagation and hence play a controlling role in limiting the length of earthquake rupture. Here I examine the mapped surface rupture traces of 22 historical strike-slip earthquakes with rupture lengths ranging between 10 and 420 km. I show that about two-thirds of the endpoints of strike-slip earthquake ruptures are associated with fault steps or the termini of active fault traces, and that there exists a limiting dimension of fault step (3-4 km) above which earthquake ruptures do not propagate and below which rupture propagation ceases only about 40 per cent of the time. The results are of practical importance to seismic hazard analysis where effort is spent attempting to place limits on the probable length of future earthquakes on mapped active faults. Physical insight to the dynamics of the earthquake rupture process is further gained with the observation that the limiting dimension appears to be largely independent of the earthquake rupture length. It follows that the magnitude of stress changes and the volume affected by those stress changes at the driving edge of laterally propagating ruptures are largely similar and invariable during the rupture process regardless of the distance an event has propagated or will propagate. PMID:17108963

  4. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models that were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicates that the authors of SHARE were aware of the potentially higher seismic activities in the zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from tectonic moment rate, but lower than what the historical data show. For the other two
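
    The conversion from strain rates to tectonic moment rates that underlies the comparison above is typically done with a Kostrov-style summation. A minimal sketch with illustrative parameter values (these are generic round numbers, not SHARE's or the authors' actual inputs):

```python
# Kostrov-style conversion of an average strain rate to a tectonic moment rate:
#   M0_rate ~ 2 * mu * H * A * strain_rate
mu = 3.0e10         # shear modulus, Pa (typical crustal value)
H = 15.0e3          # seismogenic thickness, m
A = 1.0e11          # zone area, m^2
eps_dot = 1.0e-15   # average strain rate, 1/s

m0_rate = 2.0 * mu * H * A * eps_dot   # N*m per second
m0_per_year = m0_rate * 3.15576e7      # N*m per year
print(f"{m0_per_year:.1e}")            # → 2.8e+18
```

    Dividing such a moment rate among earthquakes according to an assumed magnitude-frequency distribution then yields the seismicity rates that the study compares against the SHARE and historical rates.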

  5. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666
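
    The Omori aftershock decay law that the model reproduces is usually written in its modified form, n(t) = K / (c + t)^p. A minimal evaluation with illustrative parameter values (K, c, p below are arbitrary, not values from this paper):

```python
def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)**p,
    with t in days after the mainshock (K, c, p illustrative)."""
    return K / (c + t) ** p

# For p = 1, the rate falls roughly tenfold per decade in time
ratio = omori_rate(1.0) / omori_rate(10.0)
print(round(ratio, 1))   # → 9.2
```

    In the Dieterich formulation these parameters acquire physical meaning, e.g. c reflects the stressing history and fault constitutive properties rather than being a purely empirical fit.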

  6. Depth dependence of earthquake frequency-magnitude distributions in California: Implications for rupture initiation

    USGS Publications Warehouse

    Mori, J.; Abercrombie, R.E.

    1997-01-01

    Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more smaller earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
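
    The probability argument in the last sentence follows directly from the Gutenberg-Richter form: the chance that an M2 initiation grows to M ≥ 5.5 scales as 10^(-b(5.5-2)), so the ratio between depth ranges depends only on the b-value difference. The b-values below are illustrative picks, not the paper's fitted values, so the resulting factor (~17) only approximates the reported factor of 18:

```python
def growth_probability(b, m_small=2.0, m_large=5.5):
    """Chance that a small rupture initiation grows to at least m_large,
    if frequency-magnitude statistics apply: ~10**(-b * (m_large - m_small))."""
    return 10 ** (-b * (m_large - m_small))

b_shallow, b_deep = 1.15, 0.8   # illustrative b-values for 0-3 km and 9-12 km
ratio = growth_probability(b_deep) / growth_probability(b_shallow)
print(round(ratio))   # → 17
```

    A b-value contrast of only ~0.35 over 3.5 magnitude units is thus enough to produce an order-of-magnitude difference in growth probability.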

  7. Fault-Zone Maturity Defines Maximum Earthquake Magnitude: The case of the North Anatolian Fault Zone

    NASA Astrophysics Data System (ADS)

    Bohnhoff, Marco; Bulut, Fatih; Stierle, Eva; Martinez-Garzon, Patricia; Ben-Zion, Yehuda

    2015-04-01

    Estimating the maximum likely magnitude of future earthquakes on transform faults near large metropolitan areas has fundamental consequences for the expected hazard. Here we show that the maximum earthquakes on different sections of the North Anatolian Fault Zone (NAFZ) scale with the duration of fault zone activity, cumulative offset and length of individual fault segments. The findings are based on a compiled catalogue of historical earthquakes in the region, using the extensive literary sources that exist due to the long civilization record. We find that the largest earthquakes (M~8) are exclusively observed along the well-developed part of the fault zone in the east. In contrast, the western part is still in a juvenile or transitional stage with historical earthquakes not exceeding M=7.4. This limits the current seismic hazard to NW Turkey and its largest regional population and economic center, Istanbul. Our findings for the NAFZ are consistent with data from the two other major transform faults, the San Andreas fault in California and the Dead Sea Transform in the Middle East. The results indicate that maximum earthquake magnitudes generally scale with fault-zone evolution.

  8. Estimating locations and magnitudes of earthquakes in eastern North America from Modified Mercalli intensities

    USGS Publications Warehouse

    Bakun, W.H.; Johnston, A.C.; Hopper, M.G.

    2003-01-01

We use 28 calibration events (3.7 ≤ M ≤ 7.3) from Texas to the Grand Banks, Newfoundland, to develop a Modified Mercalli intensity (MMI) model and associated site corrections for estimating source parameters of historical earthquakes in eastern North America. The model, MMI = 1.41 + 1.68 × M − 0.00345 × Δ − 2.08 × log(Δ), where Δ is the distance in kilometers from the epicenter and M is moment magnitude, provides unbiased estimates of M and its uncertainty, and, if site corrections are used, of source location. The model can be used for the analysis of historical earthquakes with only a few MMI assignments. We use this model, MMI site corrections, and Bakun and Wentworth's (1997) technique to estimate M and the epicenter for three important historical earthquakes. The intensity magnitude MI is 6.1 for the 18 November 1755 earthquake near Cape Ann, Massachusetts; 6.0 for the 5 January 1843 earthquake near Marked Tree, Arkansas; and 6.0 for the 31 October 1895 earthquake. The 1895 event probably occurred in southern Illinois, about 100 km north of the site of significant ground failure effects near Charleston, Missouri.
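Because the model is linear in M, a single MMI assignment at a known epicentral distance can be inverted for magnitude in closed form. A sketch of the forward model and its inversion, assuming log denotes log10 as is standard for intensity attenuation relations (site corrections omitted):

```python
import math

def predicted_mmi(m, delta_km):
    """Forward model from the abstract:
    MMI = 1.41 + 1.68*M - 0.00345*Delta - 2.08*log10(Delta)."""
    return 1.41 + 1.68 * m - 0.00345 * delta_km - 2.08 * math.log10(delta_km)

def invert_magnitude(mmi, delta_km):
    """Closed-form inversion of the model for M, given one MMI assignment."""
    return (mmi - 1.41 + 0.00345 * delta_km + 2.08 * math.log10(delta_km)) / 1.68
```

In practice the authors combine many MMI assignments (with site corrections) rather than a single one, so this is only the algebraic core of the method.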

9. Earthquake source inversion for tsunami runup prediction

    NASA Astrophysics Data System (ADS)

    Sekar, Anusha

    Our goal is to study two inverse problems: using seismic data to invert for earthquake parameters and using tide gauge data to invert for earthquake parameters. We focus on the feasibility of using a combination of these inverse problems to improve tsunami runup prediction. A considerable part of the thesis is devoted to studying the seismic forward operator and its modeling using immersed interface methods. We develop an immersed interface method for solving the variable coefficient advection equation in one dimension with a propagating singularity and prove a convergence result for this method. We also prove a convergence result for the one-dimensional acoustic system of partial differential equations solved using immersed interface methods with internal boundary conditions. Such systems form the building blocks of the numerical model for the earthquake. For a simple earthquake-tsunami model, we observe a variety of possibilities in the recovery of the earthquake parameters and tsunami runup prediction. In some cases the data are insufficient either to invert for the earthquake parameters or to predict the runup. When more data are added, we are able to resolve the earthquake parameters with enough accuracy to predict the runup. We expect that this variety will be true in a real world three dimensional geometry as well.
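As a point of reference for the numerical machinery discussed above, here is a plain first-order upwind discretization of the variable-coefficient advection equation u_t + a(x)u_x = 0. This baseline is not the thesis's immersed interface method, which adds jump corrections where coefficients or solutions are discontinuous; it only illustrates the equation being solved:

```python
import math

def upwind_advection(a, u0, nx=200, t_end=0.5, cfl=0.9):
    """First-order upwind solve of u_t + a(x)*u_x = 0 on [0, 1] with
    periodic boundaries and a(x) > 0.  A plain baseline scheme, NOT the
    thesis's immersed interface method."""
    dx = 1.0 / nx
    xs = [(i + 0.5) * dx for i in range(nx)]   # cell midpoints
    u = [u0(x) for x in xs]
    dt = cfl * dx / max(a(x) for x in xs)      # CFL-limited step size
    n_steps = math.ceil(t_end / dt)
    step = t_end / n_steps                     # uniform, still CFL-safe
    for _ in range(n_steps):
        # u[i - 1] wraps to u[-1] at i = 0: periodic boundary
        u = [u[i] - a(xs[i]) * step / dx * (u[i] - u[i - 1])
             for i in range(nx)]
    return xs, u

xs, u = upwind_advection(lambda x: 1.0, lambda x: math.sin(2 * math.pi * x))
```

For constant speed and periodic boundaries the scheme conserves the mean and, being monotone under the CFL condition, introduces no new extrema; handling a propagating singularity accurately is precisely what the immersed interface corrections are for.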

10. Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake.

    PubMed

    Hill, D P; Reasenberg, P A; Michael, A; Arabaz, W J; Beroza, G; Brumbaugh, D; Brune, J N; Castro, R; Davis, S; Depolo, D; Ellsworth, W L; Gomberg, J; Harmsen, S; House, L; Jackson, S M; Johnston, M J; Jones, L; Keller, R; Malone, S; Munguia, L; Nava, S; Pechmann, J C; Sanford, A; Simpson, R W; Smith, R B; Stark, M; Stickney, M; Vidal, A; Walter, S; Wong, V; Zollweg, J

    1993-06-11

    The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma). PMID:17810202

11. Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake

    USGS Publications Warehouse

    Hill, D.P.; Reasenberg, P.A.; Michael, A.; Arabaz, W.J.; Beroza, G.; Brumbaugh, D.; Brune, J.N.; Castro, R.; Davis, S.; Depolo, D.; Ellsworth, W.L.; Gomberg, J.; Harmsen, S.; House, L.; Jackson, S.M.; Johnston, M.J.S.; Jones, L.; Keller, Rebecca Hylton; Malone, S.; Munguia, L.; Nava, S.; Pechmann, J.C.; Sanford, A.; Simpson, R.W.; Smith, R.B.; Stark, M.; Stickney, M.; Vidal, A.; Walter, S.; Wong, V.; Zollweg, J.

    1993-01-01

    The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma).

  12. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  13. Earthquake source inversion for moderate magnitude seismic events based on GPS simulated high-rate data

    NASA Astrophysics Data System (ADS)

    Psimoulis, Panos; Dalguer, Luis; Houlie, Nicolas; Zhang, Youbing; Clinton, John; Rothacher, Markus; Giardini, Domenico

    2013-04-01

The development of GNSS technology, with the potential of high-rate (up to 100 Hz) GNSS (GPS, GLONASS, Galileo, Compass) records, allows the monitoring of seismic ground motions. In this study we show the potential of estimating the earthquake magnitude (Mw) and the fault geometry parameters (slip, depth, length, rake, dip, strike) during the propagation of seismic waves, based on high-rate GPS network data and a non-linear inversion algorithm. The examined area is the Valais (south-west Switzerland), where a permanent GPS network of 15 stations (COGEAR and AGNES GPS networks) is operational and where an earthquake of Mw≈6 is expected roughly every 80 years. We test our methodology using synthetic events of magnitude 6.0-6.5 corresponding to normal faulting, consistent with most fault mechanisms of the area, for both surface and buried rupture. The epicentres are located in the Valais close to the epicentres of previous historical earthquakes. For each earthquake, synthetic seismic data (velocity records) were produced for 15 sites corresponding to the current GPS network sites in Valais. The synthetic seismic data were integrated into displacement time-series. By jointly using these time-series with the Bernese GNSS Software 5.1 (modified), 10 Hz sampling rate GPS records were generated, assuming noise of peak-to-peak amplitude ±1 cm for the horizontal and ±3 cm for the vertical components. The GPS records were processed into kinematic time series, from which the seismic displacements were derived and inverted for the magnitude and the fault geometry parameters. The inversion results indicate that it is possible to estimate both the earthquake magnitude and the fault geometry parameters in near real-time (~10 seconds after the fault rupture). The accuracy of the results depends on the geometry of the GPS network and on the position of the earthquake epicentre.

  14. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and was also tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power-law relation, which fits the newly expanded Hanks and Bakun (2007) data best and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
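The two equally weighted relations can be written out explicitly. The coefficients below are the commonly published forms of Ellsworth-B and the Hanks-Bakun bilinear relation; treat them as assumed here, since the abstract does not list them:

```python
import math

def ellsworth_b(area_km2):
    """Ellsworth-B relation, commonly quoted as Mw = 4.2 + log10(A)."""
    return 4.2 + math.log10(area_km2)

def hanks_bakun(area_km2):
    """Hanks & Bakun bilinear relation with the slope change near
    A = 537 km^2 (coefficients as commonly published; assumed here)."""
    if area_km2 <= 537.0:
        return 3.98 + math.log10(area_km2)
    return 3.07 + (4.0 / 3.0) * math.log10(area_km2)

def weighted_mw(length_km, width_km):
    """Equal weighting of the two relations, as the Working Group chose."""
    area = length_km * width_km
    return 0.5 * ellsworth_b(area) + 0.5 * hanks_bakun(area)
```

For a 100 km long, 10 km deep rupture (A = 1000 km2), the two relations give Mw 7.2 and 7.07 respectively, so the weighted estimate is about 7.14, showing why the choice of relation matters at the few-tenths level.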

  15. Stress Conditions at the Subduction Zone Inferred from Differential Earthquake Magnitudes

    NASA Astrophysics Data System (ADS)

    Choy, G. L.; Kirby, S. H.

    2011-12-01

Moment magnitude MW and energy magnitude Me describe physically different aspects of the size of an earthquake. Me, being derived from radiated energy ES, is a measure of the seismic potential for damage. MW, being derived from seismic moment Mo, is a measure of the final static displacement of an earthquake. We examine the systematics of thrust earthquakes across the subduction zone environment by deriving differential magnitudes ΔM, where ΔM = Me − MW, of more than 1700 large shallow earthquakes (depth < 70 km) that occurred from 1987 to 2010. Although Me may vary by as much as 1 magnitude unit for any given MW, the scatter is not random. Most subduction thrust earthquakes located within a narrow zone at the top surface of the Wadati-Benioff zone (which are interpreted as events on the slab interface) have ΔM < −0.30, a value much lower than the global average of −0.17. Of these interface events, the subset of large earthquakes (MW > 7.0) with anomalously low energy radiation (i.e., ΔM < −0.50) has been associated with the class of tsunamigenic events known as slow earthquakes. However, anomalously low radiated energy was also found for more than 308 earthquakes that had smaller magnitudes (MW < 7.0). The locations of these low-energy earthquakes do not correlate with locations of known slow tsunami earthquakes. On the other hand, anomalously high energy radiation (ΔM > 0.0) was found in 163 thrust events (only 12% of all subduction events). These earthquakes typically occur in high-deformation zones that are intraslab, intracrustal or downdip of obliquely convergent plate boundaries. Of these high-energy events, a subset of intraslab events was found that generated local tsunamis. Apparent stress τa can be related to differential magnitude by ΔM = (2/3)[log(τa/μ) + 4.7], with μ being the shear modulus.
As specific tectonic settings seem to have characteristic differential magnitude, the relative stress conditions can
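The closing relation can be inverted to obtain the apparent stress implied by a given differential magnitude. A sketch assuming a typical crustal shear modulus of 3×10^10 Pa, which the abstract does not specify:

```python
import math

MU = 3.0e10  # Pa; a typical crustal shear modulus, assumed for illustration

def apparent_stress(dm, mu=MU):
    """Invert Delta_M = (2/3)*(log10(tau_a / mu) + 4.7) for tau_a in Pa."""
    return mu * 10 ** (1.5 * dm - 4.7)

def differential_magnitude(me, mw):
    """Delta_M = Me - Mw: energy magnitude minus moment magnitude."""
    return me - mw
```

Under this assumption, ΔM = 0 corresponds to an apparent stress of roughly 0.6 MPa, and the interface events with ΔM < −0.30 fall well below that, consistent with the low-stress interpretation of the slab interface.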

  16. Magnitude Problems in Historical Earthquake Catalogs and Their Impact on Seismic Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Mahdyiar, M.; Shen-Tu, B.; Shabestari, K.; Guin, J.

    2010-12-01

A reliable historical earthquake catalog is a critical component of any regional seismic hazard analysis. In Europe, a number of historical earthquake catalogs have been compiled and used in constructing national or regional seismic hazard maps, for instance, the Switzerland ECOS catalog by the Swiss Seismological Service (2002), the Italy CPTI catalog by the CPTI Working Group (2004), the Greece catalog by Papazachos et al. (2007), the CENEC (central, northern and northwestern Europe) catalog by Grünthal et al. (2009), the Turkey catalog by Kalafat et al. (2007), and the GSHAP catalog by the Global Seismic Hazard Assessment Program (1999). These catalogs spatially overlap with each other to a large extent and employ a uniform magnitude scale (Mw). A careful review of these catalogs has revealed significant magnitude problems which can substantially impact regional seismic hazard assessment: 1) Magnitudes for the same earthquakes in different catalogs are discrepant. Such discrepancies are mainly driven by the different regression relationships used to convert other magnitude scales or intensity into Mw. One consequence is that the magnitudes of many events in one catalog are systematically biased higher or lower with respect to those in another catalog. For example, the magnitudes of large historical earthquakes in the Italy CPTI catalog are systematically higher than those in the Switzerland ECOS catalog. 2) An abnormally high frequency of large-magnitude events is observed for some time periods in which intensities are the main available data. This phenomenon is observed in the Italy CPTI catalog for the period 1870 to 1930 and may be due to biased conversion from intensity to magnitude. 3) Systematic bias in magnitude leads to biased estimates of the a- and b-values of the Gutenberg-Richter magnitude-frequency relationships. It also affects the determination of upper-bound magnitudes for various seismic source zones.
All of these issues can lead to skewed seismic hazard results, or inconsistent

  17. Foreshocks Are Not Predictive of Future Earthquake Size

    NASA Astrophysics Data System (ADS)

    Page, M. T.; Felzer, K. R.; Michael, A. J.

    2014-12-01

    The standard model for the origin of foreshocks is that they are earthquakes that trigger aftershocks larger than themselves (Reasenberg and Jones, 1989). This can be formally expressed in terms of a cascade model. In this model, aftershock magnitudes follow the Gutenberg-Richter magnitude-frequency distribution, regardless of the size of the triggering earthquake, and aftershock timing and productivity follow Omori-Utsu scaling. An alternative hypothesis is that foreshocks are triggered incidentally by a nucleation process, such as pre-slip, that scales with mainshock size. If this were the case, foreshocks would potentially have predictive power of the mainshock magnitude. A number of predictions can be made from the cascade model, including the fraction of earthquakes that are foreshocks to larger events, the distribution of differences between foreshock and mainshock magnitudes, and the distribution of time lags between foreshocks and mainshocks. The last should follow the inverse Omori law, which will cause the appearance of an accelerating seismicity rate if multiple foreshock sequences are stacked (Helmstetter and Sornette, 2003). All of these predictions are consistent with observations (Helmstetter and Sornette, 2003; Felzer et al. 2004). If foreshocks were to scale with mainshock size, this would be strong evidence against the cascade model. Recently, Bouchon et al. (2013) claimed that the expected acceleration in stacked foreshock sequences before interplate earthquakes is higher prior to M≥6.5 mainshocks than smaller mainshocks. Our re-analysis fails to support the statistical significance of their results. In particular, we find that their catalogs are not complete to the level assumed, and their ETAS model underestimates inverse Omori behavior. To conclude, seismicity data to date is consistent with the hypothesis that the nucleation process is the same for earthquakes of all sizes.
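Under the cascade model, the chance that a given event is retrospectively labelled a foreshock follows from Gutenberg-Richter statistics alone, independent of any nucleation process. A sketch with an illustrative b-value and completeness magnitude (not fitted to any catalog):

```python
def p_aftershock_exceeds(m_trigger, n_aftershocks, b=1.0, m_min=2.0):
    """Under the cascade model, each triggered magnitude is an independent
    Gutenberg-Richter draw, so the chance that at least one of n triggered
    events exceeds the trigger (making the trigger a 'foreshock') is
    1 - (1 - 10**(-b*(m_trigger - m_min)))**n.  b and m_min are
    illustrative values, not fitted to any catalog."""
    p_single = 10 ** (-b * (m_trigger - m_min))
    return 1.0 - (1.0 - p_single) ** n_aftershocks
```

The probability falls off as 10^(-b·ΔM) with trigger size, which is why foreshock statistics carry no information about the eventual mainshock magnitude in this model.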

  18. Fast determination of earthquake magnitude and fault extent from real-time P-wave recordings

    NASA Astrophysics Data System (ADS)

    Colombelli, Simona; Zollo, Aldo

    2015-08-01

This work is aimed at the automatic and fast characterization of the extended earthquake source, through the progressive measurement of the P-wave displacement amplitude along the recorded seismograms. We propose a straightforward methodology to quickly characterize the earthquake magnitude and the expected length of the rupture, and to provide an approximate estimate of the average stress drop to be used for Earthquake Early Warning and rapid response purposes. We test the methodology over a wide distance and magnitude range using a massive Japanese accelerogram data set. Our estimates of moment magnitude, source duration/length and stress drop are consistent with those obtained by using other techniques and analysing the whole seismic waveform. In particular, the retrieved source parameters follow a self-similar, constant stress-drop scaling (median stress drop = 0.71 MPa). For the M 9.0, 2011 Tohoku-Oki event, both magnitude and length are underestimated, due to the limited available P-wave time window (PTW) and to the low-frequency cut-off of the analysed data. We show that, in a simulated real-time mode, about 1-2 seconds would be required for the source parameter determination of M 4-5 events, 3-10 seconds for M 6-7 and 30-40 s for M 8-8.5. The proposed method can also provide a rapid evaluation of the average slip on the fault plane, which can be used as an additional discriminant for the tsunami potential associated with large-magnitude earthquakes occurring offshore.

  19. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed

    Tullis, T E

    1996-04-30

    The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668
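The premonitory pattern described, log moment rate increasing linearly with minus the log of time to the earthquake, can be stated as a one-line formula. The coefficients below are placeholders, not the model's values:

```python
import math

def log_moment_rate(t_to_eq, c0=10.0, c1=1.0):
    """Premonitory pattern in the Parkfield model: log10(moment rate) on the
    hypocentral cell increases linearly with -log10(time to the earthquake).
    c0 and c1 are placeholder coefficients, not the model's fitted values."""
    return c0 - c1 * math.log10(t_to_eq)
```

Each tenfold drop in the remaining time raises the log moment rate by c1, which is the acceleration signature one would try to recognize in microseismicity.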

  20. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed Central

    Tullis, T E

    1996-01-01

The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668

  1. Which data provide the most useful information about maximum earthquake magnitudes?

    NASA Astrophysics Data System (ADS)

    Zoeller, G.; Holschneider, M.

    2013-12-01

In recent publications, it has been shown that earthquake catalogs are useful for estimating the maximum expected earthquake magnitude in a future time horizon Tf. However, earthquake catalogs alone do not allow one to estimate the maximum possible magnitude M (Tf = ∞) in a study area. Therefore, we focus on the question of which data might help to constrain M. Assuming a doubly-truncated Gutenberg-Richter law and independent events, optimal estimates of M depend solely on the largest observed magnitude μ, regardless of all the other details in the catalog. For other models of the frequency-magnitude relation, this result holds approximately. We show that the maximum observed magnitude μT in a known time interval T in the past provides the most powerful information on M in terms of the smallest confidence intervals. However, if high levels of confidence are required, the upper bound of the confidence interval may diverge. Geological or tectonic data, e.g. strain rates, might be helpful if μT is not available; but these quantities can only serve as proxies for μT and will always lead to a higher degree of uncertainty and, therefore, to larger confidence intervals for M.
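The role of the largest observed magnitude μ, and the possible divergence of the upper confidence bound, can be illustrated with a doubly-truncated Gutenberg-Richter law: for a candidate maximum magnitude, the probability that all n observed events fall at or below μ is the truncated CDF raised to the n-th power. This is an illustrative construction, not the authors' exact estimator; b and the lower cutoff are assumed values:

```python
import math

def cdf_trunc_gr(m, m_max, b=1.0, m_min=4.0):
    """CDF of a doubly-truncated Gutenberg-Richter law on [m_min, m_max]."""
    return (1.0 - 10 ** (-b * (m - m_min))) / (1.0 - 10 ** (-b * (m_max - m_min)))

def upper_bound_mmax(mu, n, alpha=0.05, b=1.0, m_min=4.0):
    """One-sided (1 - alpha) upper confidence bound on the maximum possible
    magnitude, given the largest of n observed magnitudes, mu.  Returns
    math.inf when even an unbounded law is consistent with the data -- the
    divergence the abstract warns about."""
    # Limit of P(max of n draws <= mu) as m_max -> infinity:
    if (1.0 - 10 ** (-b * (mu - m_min))) ** n >= alpha:
        return math.inf
    lo, hi = mu, mu + 30.0
    for _ in range(100):  # bisect for the largest m_max not rejected
        mid = 0.5 * (lo + hi)
        if cdf_trunc_gr(mu, mid, b, m_min) ** n >= alpha:
            lo = mid
        else:
            hi = mid
    return hi
```

With a few hundred events above the cutoff the bound diverges, while several thousand events pin it to within a fraction of a magnitude unit of μ, mirroring the abstract's point about required confidence levels.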

  2. Seismicity dynamics and earthquake predictability

    NASA Astrophysics Data System (ADS)

    Sobolev, G. A.

    2011-02-01

Many factors complicate earthquake sequences, including the heterogeneity and self-similarity of the geological medium, the hierarchical structure of faults and stresses, and small-scale variations in the stresses from different sources. A seismic process is a type of nonlinear dissipative system demonstrating opposing trends towards order and chaos. Transitions from equilibrium to unstable equilibrium and local dynamic instability appear when there is an inflow of energy; reverse transitions appear when energy is dissipating. Several metastable areas of different scales exist in the seismically active region before an earthquake. Some earthquakes are preceded by precursory phenomena of different scales in space and time. These include long-term activation, seismic quiescence, foreshocks in the broad and narrow sense, hidden periodic vibrations, effects of the synchronization of seismic activity, and others. Such phenomena indicate that the dynamic system of the lithosphere is moving to a new state: catastrophe. A number of examples of medium-term and short-term precursors are shown in this paper. However, no precursors identified to date are clear and unambiguous: the percentage of missed targets and false alarms is high. Weak fluctuations from external and internal sources play a large role on the eve of an earthquake, and the occurrence time of the future event depends on the collective behavior of triggers. The main task is to improve the methods of metastable-zone detection and probabilistic forecasting.

  3. Statistical relations among earthquake magnitude, surface rupture length, and surface fault displacement

    USGS Publications Warehouse

    Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.

    1984-01-01

In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating M with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set.
Regression of MS on rupture area did not result in a marked improvement
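The core procedure, an ordinary least squares regression of magnitude on the logarithm of rupture length, can be sketched as follows. The data pairs are hypothetical, not the paper's 58-event compilation:

```python
import math

def ols_fit(xs, ys):
    """Ordinary least squares fit y = a + b*x, the regression form the
    abstract argues is appropriate given stochastic (not measurement)
    variance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical (rupture length km, Ms) pairs -- NOT the paper's data set.
data = [(10.0, 6.0), (30.0, 6.6), (60.0, 7.0), (100.0, 7.3), (200.0, 7.7)]
intercept, slope = ols_fit([math.log10(length) for length, _ in data],
                           [m for _, m in data])
```

The fitted line then yields a magnitude estimate (with its residual standard deviation as the uncertainty) for any prehistoric rupture whose surface length can be measured.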

  4. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body‐wave onset and the arrival time of the peak high‐frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high‐frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high‐frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower‐frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high‐frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
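The proportionality Mw ∝ 2 log Top translates into a one-line estimator once a calibration constant is fixed. The constant below is an assumed placeholder, not the paper's calibrated value; the sketch only shows the scaling, e.g. that doubling Top raises the estimate by 2 log10(2) ≈ 0.6 magnitude units:

```python
import math

def m_from_top(top_seconds, c=5.0):
    """Magnitude from the peak-amplitude arrival time Top via the abstract's
    proportionality M = 2*log10(Top) + c.  The constant c is an assumed
    placeholder, not the paper's calibrated value."""
    return 2.0 * math.log10(top_seconds) + c
```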

  5. Practical approaches to earthquake prediction and warning

    NASA Astrophysics Data System (ADS)

    Kisslinger, Carl

    1984-04-01

    The title chosen for this renewal of the U.S.-Japan prediction seminar series reflects optimism, perhaps more widespread in Japan than in the United States, that research on earthquake prediction has progressed to a stage at which it is appropriate to begin testing operational forecast systems. This is not to suggest that American researchers do not recognize very substantial gains in understanding earthquake processes and earthquake recurrence, but rather that we are at the point of initiating pilot prediction experiments rather than asserting that we are prepared to start making earthquake predictions in a routine mode.For the sixth time since 1964, with support from the National Science Foundation and the Japan Society for the Promotion of Science, as well as substantial support from the U.S. Geological Survey (U.S.G.S.) for participation of a good representation of its own scientists, earthquake specialists from the two countries came together on November 7-11, 1983, to review progress of the recent past and share ideas about promising directions for future efforts. If one counts the 1980 Ewing symposium on prediction, sponsored by Lamont-Doherty Geological Observatory, which, though multinational, served the same purpose, one finds a continuity in these interchanges that has made them especially productive and stimulating for both scientific communities. The conveners this time were Chris Scholz, Lamont-Doherty, for the United States and Tsuneji Rikitake, Nihon University, for Japan.

  6. Analytical Conditions for Compact Earthquake Prediction Approaches

    NASA Astrophysics Data System (ADS)

    Sengor, T.

    2009-04-01

The atmosphere and ionosphere contain non-uniform electric charge and current distributions during earthquake activity. These charges and currents move irregularly when an earthquake is in preparation. The electromagnetic characteristics of the region above the Earth change in domains where irregular transport of non-uniform electric charges is observed; we therefore study electromagnetism in plasma that moves irregularly and contains non-uniform charge distributions. Such charge distributions are called irregular and non-uniform plasmas. A plasma of this kind is called a seismo-plasma if it corresponds to a real earthquake activity that will actually occur. Some signals involving the above-mentioned coupling effects generate analytical conditions giving the predictability of seismic processes [1]-[5]. These conditions are discussed in this paper. References [1] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes," IUGG Perugia 2007. [2] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes for Marmara Sea earthquakes," EGU 2008. [3] T. Sengor, "On the exact interaction mechanism of electromagnetically generated phenomena with significant earthquakes and the observations related the exact predictions before the significant earthquakes at July 1999-May 2000 period," Helsinki Univ. Tech. Electrom. Lab. Rept. 368, May 2001. [4] T. Sengor, "The Observational Findings Before The Great Earthquakes Of December 2004 And The Mechanism Extraction From Associated Electromagnetic Phenomena," Book of XXVIIIth URSI GA 2005, pp. 191, EGH.9 (01443) and Proceedings 2005 CD, New Delhi, India, Oct. 23-29, 2005. [5] T.
Sengor, "The interaction mechanism among electromagnetic phenomena and geophysical-seismic-ionospheric phenomena with extraction for exact earthquake prediction genetics," 10

  7. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost of missing some earthquakes in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal- or reverse-faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  8. HYPOELLIPSE; a computer program for determining local earthquake hypocentral parameters, magnitude, and first-motion pattern

    USGS Publications Warehouse

    Lahr, John C.

    1999-01-01

    This report provides Fortran source code and program manuals for HYPOELLIPSE, a computer program for determining hypocenters and magnitudes of near regional earthquakes and the ellipsoids that enclose the 68-percent confidence volumes of the computed hypocenters. HYPOELLIPSE was developed to meet the needs of U.S. Geological Survey (USGS) scientists studying crustal and sub-crustal earthquakes recorded by a sparse regional seismograph network. The program was extended to locate hypocenters of volcanic earthquakes recorded by seismographs distributed on and around the volcanic edifice, at elevations above and below the hypocenter. HYPOELLIPSE was used to locate events recorded by the USGS southern Alaska seismograph network from October 1971 to the early 1990s. Both UNIX and PC/DOS versions of the source code of the program are provided along with sample runs.

  9. Seismomagnetic observation during the 8 July 1986 magnitude 5.9 North Palm Springs earthquake.

    PubMed

    Johnston, M J; Mueller, R J

    1987-09-01

    A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period. PMID:17801644

  10. Seismomagnetic observation during the 8 July 1986 magnitude 5.9 North Palm Springs earthquake

    USGS Publications Warehouse

    Johnston, M.J.S.; Mueller, R.J.

    1987-01-01

    A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period.

  11. The role of the Federal government in the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Filson, J.R.

    1988-01-01

    Earthquake prediction research in the United States is carried out under the aegis of the National Earthquake Hazards Reduction Act of 1977. One of the objectives of the act is "the implementation in all areas of high or moderate seismic risk, of a system (including personnel and procedures) for predicting damaging earthquakes and for identifying, evaluating, and accurately characterizing seismic hazards." Among the four Federal agencies working under the 1977 act, the U.S. Geological Survey (USGS) is responsible for earthquake prediction research and technological implementation. The USGS has adopted a goal that is stated quite simply: predict the time, place, and magnitude of damaging earthquakes. The Parkfield earthquake prediction experiment represents the most concentrated and visible effort to date to test progress toward this goal.

  12. Estimates of the magnitude of aseismic slip associated with small earthquakes near San Juan Bautista, CA

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Simons, M.

    2013-12-01

    The recurrence intervals of repeating earthquakes raise the possibility that much of the slip associated with small earthquakes is aseismic. To test this hypothesis, we examine the co- and post-seismic strain changes associated with Mc 2 to 4 earthquakes on the San Andreas Fault. We consider several thousand events that occurred near USGS strainmeter SJT, at the northern end of the creeping section. Most of the strain changes associated with these events are below the noise level on a single record, so we bin the earthquakes into 3 to 5 groups according to their magnitude. We then invert for an average time history of strain per seismic moment for each group. The seismic moment M0 is assumed to scale as 10^(βMc), where Mc is the preferred magnitude in the NCSN catalog and β is between 1.1 and 1.6. We try several approaches to account for the spatial pattern of strain, but we focus on the ε_(E-N) strain component (east extension minus north extension) because it is the most robust to model. Each of the estimated strain time series displays a step at the time of the earthquakes. The ratio of the strain step to seismic moment is larger for the bin with smaller events. If we assume that M0 ~ 10^(1.5Mc), the ratio increases by a factor of 3 to 5 per unit decrease in Mc. This increase in strain per moment would imply that most of the slip within an hour of small events is aseismic. For instance, the aseismic moment of a Mc 2 earthquake would be at least 5 to 10 times the seismic moment. However, much of the variation in strain per seismic moment is eliminated for a smaller but still plausible value of β. If M0 ~ 10^(1.2Mc), the strain per moment increases by about a factor of 2 per unit decrease in Mc.
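The moment-magnitude scaling assumed in this abstract can be made concrete with a short numerical sketch; the function name is ours, the β values are those quoted in the abstract, and only ratios of moment matter (the proportionality constant cancels):

```python
# Sketch of the scaling M0 proportional to 10**(beta * Mc).
# Only ratios of M0 are meaningful here, so the constant is dropped.

def moment_ratio_per_unit_mag(beta):
    """Factor by which seismic moment grows per unit increase in Mc."""
    return 10.0 ** beta

# For beta = 1.5 each magnitude unit is a ~31.6-fold increase in moment;
# for the smaller but still plausible beta = 1.2 it is ~15.8-fold.
print(round(moment_ratio_per_unit_mag(1.5), 1))  # 31.6
print(round(moment_ratio_per_unit_mag(1.2), 1))  # 15.8
```

This is why the inferred aseismic fraction is so sensitive to β: a smaller exponent compresses the moment range spanned by the magnitude bins.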

  13. Intermediate- and long-term earthquake prediction.

    PubMed

    Sykes, L R

    1996-04-30

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study. PMID:11607658

  14. Intermediate- and long-term earthquake prediction.

    PubMed Central

    Sykes, L R

    1996-01-01

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study. PMID:11607658

  15. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors. Journal compilation © 2006 RAS.
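The percentile bootstrap behind confidence regions of this kind can be sketched generically; the sample values, the statistic (a mean), and all parameters below are hypothetical stand-ins, not the study's intensity data:

```python
import random
import statistics

# Generic percentile-bootstrap sketch: resample with replacement,
# recompute the statistic, and take empirical percentiles.
random.seed(0)
sample = [5.4, 5.9, 6.1, 5.7, 6.3, 5.8, 6.0, 5.6]  # hypothetical data

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, level=0.95):
    """Percentile-bootstrap confidence interval for stat(data)."""
    boots = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo_idx = int((1 - level) / 2 * n_boot)
    hi_idx = int((1 + level) / 2 * n_boot) - 1
    return boots[lo_idx], boots[hi_idx]

lo, hi = bootstrap_ci(sample)
print(lo <= statistics.mean(sample) <= hi)  # True
```

The study applies the same resampling idea in two dimensions, to epicenter locations rather than a scalar statistic.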

  16. Earthquake frequency-magnitude distribution and fractal dimension in mainland Southeast Asia

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi; Choowong, Montri

    2014-12-01

    The 2004 Sumatra and 2011 Tohoku earthquakes highlighted the need for a more accurate understanding of earthquake characteristics in both regions. In this study, both the a and b values of the frequency-magnitude distribution (FMD) and the fractal dimension (DC) were investigated simultaneously for 13 seismic source zones recognized in mainland Southeast Asia (MLSEA). Using an earthquake dataset filtered for completeness, the calculated values of b and DC were found to imply variations in seismotectonic stress. The DC-b and DC-(a/b) relationships were investigated to categorize the level of earthquake hazard of individual seismic source zones, where the calibration curves illustrate a negative correlation between the DC and b values (DC = 2.80 - 1.22b) and a positive correlation between the DC and a/b ratios (DC = 0.27(a/b) - 0.01), with similar regression coefficients (R² = 0.65 to 0.68) for both regressions. According to the obtained relationships, the Hsenwi-Nanting and Red River fault zones revealed low stress accumulations. Conversely, the Sumatra-Andaman interplate and intraslab zones, the Andaman Basin, and the Sumatra fault zone were defined as high-tectonic-stress regions that may pose risks of generating large earthquakes in the future.
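The two regression relations reported in this abstract can be evaluated directly; the function names are ours, while the coefficients are the paper's fitted values:

```python
# Direct evaluation of the fitted calibration curves from the abstract.

def dc_from_b(b):
    """Fractal dimension from b-value: Dc = 2.80 - 1.22*b."""
    return 2.80 - 1.22 * b

def dc_from_ab_ratio(ab):
    """Fractal dimension from the a/b ratio: Dc = 0.27*(a/b) - 0.01."""
    return 0.27 * ab - 0.01

# A lower b-value (often read as higher stress) maps to a higher Dc:
print(round(dc_from_b(0.7), 3))   # 1.946
print(round(dc_from_b(1.2), 3))   # 1.336
```

Note the two relations are only mutually consistent for source zones near the calibration data; extrapolating far outside the fitted b and a/b ranges is not supported by the reported R² values.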

  17. A moment-tensor catalog for intermediate magnitude earthquakes in Mexico

    NASA Astrophysics Data System (ADS)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Martínez-Peláez, Liliana; Franco, Sara; Iglesias Mendoza, Arturo

    2016-04-01

    Located among five tectonic plates, Mexico is one of the world's most seismically active regions. Earthquake focal mechanisms provide important information on the active tectonics. A widespread technique for estimating the earthquake magnitude and focal mechanism is inversion for the moment tensor, obtained by minimizing a misfit function that measures the difference between synthetic and observed seismograms. An important element in the estimation of the moment tensor is an appropriate velocity model, which allows for the calculation of accurate Green's functions so that the differences between observed and synthetic seismograms are due to the source of the earthquake rather than to the velocity model. However, calculating accurate synthetic seismograms becomes progressively more difficult as the magnitude of the earthquake decreases. Large earthquakes (M>5.0) excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these events, using 1D velocity models to compute Green's functions works well, and they are well characterized by the seismic moment tensors reported in global catalogs (e.g. USGS fast moment tensor solutions and GCMT). The opposite occurs for small and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle. Accurately modeling the Green's functions for the smaller events in a large heterogeneous area requires 3D or regionalized 1D models. To obtain a rapid estimate of earthquake magnitude, the National Seismological Survey in Mexico (Servicio Sismológico Nacional, SSN) automatically calculates seismic moment tensors for events in Mexican territory (Franco et al., 2002; Nolasco-Carteño, 2006). However, for intermediate-magnitude and small earthquakes the signal-to-noise ratio is low at many of the seismic stations, and without careful selection and filtering of the data, obtaining a stable focal mechanism

  18. A fault kinematic based assessment of Maximum Credible Earthquake magnitudes for the slow Vienna Basin Fault

    NASA Astrophysics Data System (ADS)

    Decker, Kurt; Beidinger, Andreas; Hintersberger, Esther

    2010-05-01

    normal faults splaying from the strike-slip system appears to be an important factor controlling fault segmentation. In order to assess MCE magnitudes for this complex tectonic setting against the background of earthquake data spanning only 500 yrs (i.e., shorter than the expected recurrence times of the strongest earthquakes), we chose a deterministic approach using a 3D fault model quantifying the lengths and areas of potential rupture zones. The model accounts for kinematic fault segmentation. Fault surfaces of strike-slip segments vary from 55 km² to more than 400 km², those of the normal splay faults from 100 to 300 km². Empirical relations confirm that these areas are sufficiently large to create earthquakes with M=6.0-6.5. The possibility of even stronger events caused by multi-segment ruptures, however, cannot be excluded at present. The estimated MCE magnitudes are generally in line with newly obtained paleoseismological information from one of the splay faults of the VBTF (Markgrafneusiedl Fault). Preliminary data reveal that single slip events at this fault show surface displacements of up to 20 cm, compatible with earthquake magnitudes M≥6. Archaeoseismological data indicating a M~6.0-6.3 earthquake at the Lassee strike-slip segment further support the validity of our approach.

  19. Frequency-magnitude statistics and spatial correlation dimensions of earthquakes at Long Valley caldera, California

    NASA Astrophysics Data System (ADS)

    Barton, D. J.; Foulger, G. R.; Henderson, J. R.; Julian, B. R.

    1999-08-01

    Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.
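The b-values contrasted in this abstract are commonly estimated with Aki's (1965) maximum-likelihood formula; the sketch below is generic, and the magnitude list is hypothetical rather than the Long Valley catalog:

```python
import math

# Maximum-likelihood b-value estimate (Aki, 1965), ignoring magnitude
# binning corrections for simplicity.

def aki_b_value(mags, m_min):
    """b = log10(e) / (mean(M) - m_min) for events with M >= m_min."""
    m = [x for x in mags if x >= m_min]
    return math.log10(math.e) / (sum(m) / len(m) - m_min)

# Hypothetical catalog: a low b-value signals relatively more large events.
mags = [1.1, 1.3, 1.2, 2.0, 1.5, 1.8, 1.1, 1.4, 2.4, 1.6]
print(round(aki_b_value(mags, 1.0), 2))
```

In practice m_min must be the catalog's magnitude of completeness; choosing it too low biases b downward, which matters when comparing structures as the abstract does.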

  20. Frequency-magnitude statistics and spatial correlation dimensions of earthquakes at Long Valley caldera, California

    USGS Publications Warehouse

    Barton, D.J.; Foulger, G.R.; Henderson, J.R.; Julian, B.R.

    1999-01-01

    Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.

  1. New data about small-magnitude earthquakes of the ultraslow-spreading Gakkel Ridge, Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Morozov, Alexey N.; Vaganova, Natalya V.; Ivanova, Ekaterina V.; Konechnaya, Yana V.; Fedorenko, Irina V.; Mikhaylova, Yana A.

    2016-01-01

    At the present time, detailed bathymetric, gravimetric, magnetometric, petrological, and seismic (mb > 4) data are available for the Gakkel Ridge. However, so far not enough information has been obtained on the distribution of small-magnitude earthquakes (or microearthquakes) within the ridge area, due to the absence of a suitable observation system. With the ZFI seismic station (80.8° N, 47.7° E), operating since 2011 on the Franz Josef Land archipelago, we can now register small-magnitude earthquakes down to ML 1.5 within the Gakkel Ridge area. This article elaborates on the results and analysis of ZFI station seismic monitoring for the period from December 2011 to January 2015. In order to improve the accuracy of earthquake epicenter locations, velocity models and regional seismic-phase travel times for spreading ridges within the Euro-Arctic region have been calculated. The Gakkel Ridge is seismically active despite having the lowest spreading velocity among global mid-ocean ridges. Quiet periods alternate with periods of higher seismic activity. Earthquake epicenters are unevenly spread across the area. Most of the epicenters are assigned to the Sparsely Magmatic Zone (SMZ), more specifically to the area between 1.5° E and 19.0° E. We hypothesize that the assignment of most earthquakes to the SMZ segment can be explained by the amagmatic character of spreading in this segment. The structuring of this part of the ridge is characterized by the prevalence of tectonic processes, not magmatic or metamorphic ones.

  2. Stress drop in the sources of intermediate-magnitude earthquakes in northern Tien Shan

    NASA Astrophysics Data System (ADS)

    Sycheva, N. A.; Bogomolov, L. M.

    2014-05-01

    The paper is devoted to estimating the dynamic parameters of 14 earthquakes of intermediate magnitude (energy class 11 to 14) that occurred in the northern Tien Shan. To obtain estimates of these parameters, including the stress drop, which could then be applied in crustal stress reconstruction by the technique suggested by Yu.L. Rebetsky (Schmidt Institute of Physics of the Earth, Russian Academy of Sciences), we have improved the algorithms and programs for calculating the spectra of the seismograms. The updated products allow for site responses and spectral transformations during the propagation of seismic waves through the medium (the effect of finite Q-factor). By applying the new approach to the analysis of seismograms recorded by the KNET seismic network, we calculated the source radii (Brune radius), scalar seismic moment, and stress drop (release) for the 14 earthquakes studied. The analysis revealed a scatter in source radii and stress drop even among earthquakes of almost identical energy class. The stress drop for different earthquakes ranges from 1 to 75 bar. We have also determined the focal mechanisms and stress regime of the Earth's crust. It is worth noting that during the considered period, strong seismic events of energy class above 14 were absent within the segment covered by the KNET stations.
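The Brune radius and stress drop mentioned above follow standard closed forms; this sketch uses the textbook Brune (1970) and circular-crack relations with illustrative input values, not the authors' processing chain:

```python
import math

# Standard source-parameter relations (a sketch, not the authors' code).
# Inputs: shear velocity beta_s in m/s, corner frequency fc in Hz,
# seismic moment m0 in N*m.

def brune_radius(beta_s, fc):
    """Brune (1970) source radius: r = 2.34 * beta_s / (2*pi*fc)."""
    return 2.34 * beta_s / (2.0 * math.pi * fc)

def stress_drop(m0, r):
    """Static stress drop for a circular crack: 7*M0 / (16*r**3)."""
    return 7.0 * m0 / (16.0 * r ** 3)

r = brune_radius(3500.0, 2.0)        # illustrative corner frequency
ds = stress_drop(3.5e14, r)          # stress drop in Pa
print(round(r), round(ds / 1e5, 1))  # radius ~652 m, stress drop ~5.5 bar
```

The cubic dependence on radius is why modest errors in corner frequency produce the large scatter in stress drop that the paper reports.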

  3. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.

  4. Earthquake Magnitude: A Teaching Module for the Spreadsheets Across the Curriculum Initiative

    NASA Astrophysics Data System (ADS)

    Wetzel, L. R.; Vacher, H. L.

    2006-12-01

    Spreadsheets Across the Curriculum (SSAC) is a library of computer-based activities designed to reinforce or teach quantitative-literacy or mathematics concepts and skills in context. Each activity (called a "module" in the SSAC project) consists of a PowerPoint presentation with embedded Excel spreadsheets. Each module focuses on one or more problems for students to solve. Each student works through a presentation, thinks about the in-context problem, figures out how to solve it mathematically, and builds the spreadsheets to calculate and examine answers. The emphasis is on mathematical problem solving. The intention is for the in- context problems to span the entire range of subjects where quantitative thinking, number sense, and math non-anxiety are relevant. The self-contained modules aim to teach quantitative concepts and skills in a wide variety of disciplines (e.g., health care, finance, biology, and geology). For example, in the Earthquake Magnitude module students create spreadsheets and graphs to explore earthquake magnitude scales, wave amplitude, and energy release. In particular, students realize that earthquake magnitude scales are logarithmic. Because each step in magnitude represents a 10-fold increase in wave amplitude and approximately a 30-fold increase in energy release, large earthquakes are much more powerful than small earthquakes. The module has been used as laboratory and take-home exercises in small structural geology and solid earth geophysics courses with upper level undergraduates. Anonymous pre- and post-tests assessed students' familiarity with Excel as well as other quantitative skills. The SSAC library consists of 27 modules created by a community of educators who met for one-week "module-making workshops" in Olympia, Washington, in July of 2005 and 2006. 
The educators designed the modules at the workshops both to use in their own classrooms and to make available for others to adopt and adapt at other locations and in other classes.
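The module's central arithmetic, that each magnitude step is a 10-fold increase in wave amplitude and roughly a 30-fold (10^1.5) increase in energy release, can be reproduced in a few lines as a spreadsheet-free sketch:

```python
# Logarithmic magnitude scales: per unit of magnitude, wave amplitude
# grows 10-fold and radiated energy roughly 10**1.5-fold (~31.6).

def amplitude_ratio(dm):
    """Amplitude ratio for a magnitude difference dm."""
    return 10.0 ** dm

def energy_ratio(dm):
    """Approximate energy ratio for a magnitude difference dm."""
    return 10.0 ** (1.5 * dm)

# A magnitude 7 event versus a magnitude 5 event:
print(amplitude_ratio(2))      # 100.0
print(round(energy_ratio(2)))  # 1000
```

This is the realization the module aims for: two magnitude units separate a shake from a disaster, by a factor of one thousand in energy.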

  5. Instrumental magnitude constraints for the 1889 Chilik and the 1887 Verny earthquake, Central Asia

    NASA Astrophysics Data System (ADS)

    Krueger, Frank; Kulikova, Galina; Landgraf, Angela

    2016-04-01

    A series of four large earthquakes hit the continental collision region north of Lake Issyk Kul in the years 1885, 1887, 1889 and 1911, with magnitudes above 6.9. The largest event was the Chilik earthquake of July 11, 1889, with M 8.3 based on macroseismic intensities, recently confirmed by Bindi et al. (2013). Despite the existence of several juvenile fault scarps in the epicentral region, no through-going surface rupture of matching scale has been located. A rupture length of ~200 km and slip of ~10 m are expected for M 8.3 (Blaser et al., 2010). The lack of highly concentrated epicentral intensities requires a hypocenter depth of 40 km, located in the lower crust. Late coda envelope amplitude comparison of modern events in Central Asia, recorded at stations in Northern Germany, with the reproduction of a Rebeur-Paschwitz pendulum seismogram recorded at Wilhelmshaven results in a magnitude estimate of Mw 8.0-8.5. Amplitude comparison of long-period surface waves measured on magnetograms at two British geomagnetic observatories favors a magnitude of Mw 8.0. Both can be made consistent if a station site factor of 2-4 for the Wilhelmshaven station is applied (for which indications exist). A truly deep centroid depth (h>40 km) is unlikely (from coda amplitude scaling), and a shallow rupture of appropriate length has not yet been discovered. Both arguments point to a possible lower-crust contribution to the seismic moment. Magnetogram amplitudes for the June 8, 1887, Verny earthquake point to a magnitude of M ~7.5-7.6 (preliminary).

  6. New methods for predicting the magnitude of sunspot maximum

    NASA Technical Reports Server (NTRS)

    Brown, G. M.

    1979-01-01

    Three new and independent methods of predicting the magnitude of a forthcoming sunspot maximum are suggested. The longest lead time is given by the first method, which is based on a terrestrial parameter measured during the declining phase of the preceding cycle. The second method, with only slightly shorter foreknowledge, is based on an interplanetary parameter derived around the commencement of the cycle in question (sunspot minimum). The third method, giving the shortest prediction lead time, is based entirely on solar parameters measured during the initial progress of the cycle in question. All three methods, applied to forecast the magnitude of the next maximum (Cycle 21), agree in predicting that it is likely to be very similar to that of Cycle 18.

  7. Finding the Shadows: Local Variations in the Stress Field due to Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Latimer, C.; Tiampo, K.; Rundle, J.

    2009-05-01

    Stress shadows, regions of static stress decrease associated with large-magnitude earthquakes, have typically been described through several characteristics or parameters such as location, duration, and size. These features can provide information about the physics of the earthquake itself, as static stress changes depend on the regional stress orientations, the coefficient of friction, and the depth of interest (King et al., 1994). Areas of stress decrease, associated with a decrease in the seismicity rate, while potentially stable in nature, have been difficult to identify in regions with high rates of background seismicity (Felzer and Brodsky, 2005; Hardebeck et al., 1998). To obtain information about these stress shadows, we determine their characteristics using the Pattern Informatics (PI) method (Tiampo et al., 2002; Tiampo et al., 2006). The PI method is an objective measure of seismicity rate changes that can be used to locate areas of increase and/or decrease relative to the regional background rate. The latter defines the stress shadows for the earthquake of interest, as seismicity rate changes and stress changes are related (Dieterich et al., 1992; Tiampo et al., 2006). Using the data from the PI method, we invert for the parameters of the modeled half-space using a genetic algorithm inversion technique. Stress changes are calculated using Coulomb stress change theory (King et al., 1994), and the Coulomb 3 program is used as the forward model (Lin and Stein, 2004; Toda et al., 2005). Changes in the regional stress orientation (using PI results from before and after the earthquake) are of the greatest interest, as orientation is the main factor controlling the pattern of the Coulomb stress changes resulting from any given earthquake. Changes in the orientation can lead to conclusions about the local stress field around the earthquake and fault. The depth of interest and the coefficient of friction both
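The Coulomb stress change at the heart of the cited King et al. (1994) framework has a compact form; the sign convention, effective friction value, and illustrative numbers below are assumptions of this sketch, not values from the abstract:

```python
# Coulomb failure stress change on a receiver fault (a sketch):
# dCFS = d_tau + mu_eff * d_sigma_n, with the normal-stress change
# taken positive for unclamping. Negative dCFS marks a stress shadow.

def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress (same units as the inputs)."""
    return d_shear + mu_eff * d_normal

# Illustrative values in MPa: shear stress drops by 0.5 and the
# receiver fault is clamped by 0.2.
print(round(coulomb_stress_change(-0.5, -0.2), 2))  # -0.58, a shadow
```

A genetic-algorithm inversion of the kind described would repeatedly call a forward model like this (in practice Coulomb 3 over a gridded half-space) while perturbing the regional stress orientation and friction.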

  8. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. The high spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of their waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification, and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone, and are less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than by exponential distributions, although they exhibit very high b values (≥ ~5). We examine LFE moment-duration scaling by generating templates using detections in limiting magnitude ranges (MW < 1.5, MW ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by
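The self-similarity benchmark against which LFE durations are judged (moment M0 proportional to duration cubed, i.e. a log-log slope of 1/3) can be illustrated with a synthetic fit; the weak exponent of 0.1 used below is an assumed value chosen only to mimic the reported behaviour, not a result from this study:

```python
# Sketch of a moment-duration scaling test. Self-similar earthquakes obey
# T ~ M0^(1/3); a much smaller fitted exponent in log-log space indicates
# duration nearly independent of moment, as reported for LFEs.
# All data here are synthetic and the "true" exponent 0.1 is assumed.
import numpy as np

rng = np.random.default_rng(0)
log_m0 = rng.uniform(10.0, 14.0, 200)                  # log10 moment (N·m)
assumed_exponent = 0.1                                 # weak dependence
log_t = assumed_exponent * log_m0 + 0.5 + rng.normal(0.0, 0.02, 200)

# Least-squares slope in log-log space estimates the scaling exponent.
slope, intercept = np.polyfit(log_m0, log_t, 1)
print(round(slope, 2))  # ~0.10, well below the self-similar value 1/3
```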

  9. A radon detector for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Dacey, James

    2010-04-01

    Recent events in Haiti and Chile remind us of the devastation that can be wrought by an earthquake, especially when it strikes without warning. For centuries, people living in seismically active regions have reported a number of strange occurrences immediately prior to a quake, including unexpected weather phenomena and even unusual behaviour among animals. In more recent times, some scientists have suggested other precursors, such as sporadic bursts of electromagnetic radiation from the fault zone. Unfortunately, none of these suggestions has led to a robust, scientific method for earthquake prediction. Now, however, a group of physicists led by physics Nobel laureate Georges Charpak has developed a new detector that could measure one of the more testable earthquake precursors: the suggested release of radon gas from fault zones prior to fault slip, writes James Dacey.

  10. Triggered slip on the Calaveras fault during the magnitude 7.1 Loma Prieta, California, earthquake

    SciTech Connect

    McClellan, P.H.; Hay, E.A.

    1990-07-01

    After the magnitude (M) 7.1 Loma Prieta earthquake on the San Andreas fault, the authors inspected selected sites along the Calaveras fault for evidence of recent surface displacement. In two areas along the Calaveras fault they documented recent right-lateral offsets of cultural features by at least 5 mm within zones of recognized historical creep. The areas are in the city of Hollister and at Highway 152 near San Felipe Lake, located approximately 25 km southeast and 18 km northeast, respectively, of the nearest part of the San Andreas rupture zone. On the basis of geologic evidence, the times of the displacement events are constrained to within days or hours of the Loma Prieta mainshock. They conclude that this earthquake on the San Andreas fault triggered surface rupture along at least a 17-km-long segment of the Calaveras fault. These geologic observations extend evidence of triggered slip from instrument stations within this zone of Calaveras fault rupture.

  11. An application of earthquake prediction algorithm M8 in eastern Anatolia at the approach of the 2011 Van earthquake

    NASA Astrophysics Data System (ADS)

    Mojarab, Masoud; Kossobokov, Vladimir; Memarian, Hossein; Zare, Mehdi

    2015-07-01

    On 23 October 2011, an M7.3 earthquake near the Turkish city of Van killed more than 600 people, injured over 4000, and left about 60,000 homeless. It demolished hundreds of buildings and caused severe damage to thousands of others in Van, Ercis, Muradiye, and Çaldıran. The earthquake's epicenter is located about 70 km from that of a preceding M7.3 earthquake, which occurred in November 1976, destroyed several villages near the Turkey-Iran border, and killed thousands of people. This study, by means of retrospective application of the M8 algorithm, checks whether the 2011 Van earthquake could have been predicted. The algorithm is based on pattern recognition of Times of Increased Probability (TIP) of a target earthquake from the transient seismic sequence at lower magnitude ranges in a Circle of Investigation (CI). Specifically, we applied a modified M8 algorithm, adjusted to the rather low level of earthquake detection in the region, following three different approaches to determining seismic transients. In the first approach, CI centers are placed on intersections of morphostructural lineaments recognized as prone to magnitude 7+ earthquakes. In the second approach, CI centers are placed on local extremes of the seismic density distribution, and in the third approach, CI centers are distributed uniformly on the nodes of a 1°×1° grid. According to the results of the M8 algorithm application, the 2011 Van earthquake could have been predicted by any of the three approaches. We note that it is possible to consider the intersection of TIPs instead of their union to improve the certainty of the prediction results. Our study confirms the applicability of a modified version of the M8 algorithm to predicting earthquakes on the Iranian-Turkish plateau, as well as to mitigating damage from seismic events, in which pattern recognition algorithms may play an important role.

  12. Giant seismites and megablock uplift in the East African Rift: Evidence for large magnitude Late Pleistocene earthquakes

    NASA Astrophysics Data System (ADS)

    Hilbert-Wolf, Hannah; Roberts, Eric

    2015-04-01

    -dkm-scale clastic injection dykes. Our documentation provides evidence for M 6-7.5+ Late Pleistocene earthquakes, similar to the M7.4 earthquake at the same location in 1910, extending the record of large-magnitude earthquakes beyond the last century. Our study not only expands the database of seismogenic sedimentary structures, but also attests to repeated, large-magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms the crust is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift System and other tectonically active, developing regions.

  13. 76 FR 19123 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ....S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S... Earthquake Prediction Evaluation Council (NEPEC) will hold a 1-day meeting on April 16, 2011. The meeting... the Director of the U.S. Geological Survey on proposed earthquake predictions, on the completeness...

  14. Earthquakes of moderate magnitude recorded at the Salt Lake paleoseimic site on the Haiyuan Fault, China

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Shao, Yanxiu; Xie, Kejia; Klinger, Yann; Lei, Zhongsheng; Yuan, Daoyang

    2013-04-01

    The active left-lateral Haiyuan fault is one of the major continental strike-slip faults in the Tibetan Plateau. The last large earthquake to occur on the fault was the great 1920 M~8 Haiyuan earthquake, with a 230-km-long surface rupture and maximum surface slip of 11 m (Zhang et al., 1987). Much less is known about its earthquake recurrence behavior. We present preliminary results of a paleoseismic study at the Salt Lake site, in a shortcut pull-apart basin, within the section that broke in 1920. 3D excavation at the site exposed 7 m of fine-grained, layered stratigraphy and ample evidence of 6-7 paleoseismic events. AMS dating of charcoal fragments constrains the events to the past 3600 years. Of these, the youngest 3-4 events are recorded in the top 2.5 m of distinctive thinly layered stratigraphy of alternating reddish well-sorted granule sand and light gray silty fine sand. This section has been deposited since ~1550 A.D., suggesting that 3-4 events occurred during the past 400 years, and an average recurrence interval of less than 150 years, surprisingly short for the Haiyuan fault, with a slip rate of arguably ~10 mm/yr or less. A comparison of the paleoseismic and historical earthquake records is possible for the Haiyuan area, a region with written accounts of earthquake effects dating back to 1000 A.D. Between 1600 A.D. and the present, each of the four paleoseismic events can be correlated with one historically recorded event, within the uncertainties of paleoseismic age ranges. Nonetheless, these events are definitely not 1920-type large earthquakes, because their shaking effects were recorded only locally, rather than regionally. A growing number of studies show that M5 to 6 events are capable of causing ground deformation. Our results indicate that it can be misleading to simply use the time between consecutive events as the recurrence interval at a single paleoseismic site, without information on event size. 
Mixed events of different magnitudes in the

  15. Influence of weak motion data to magnitude dependence of PGA prediction model in Austria

    NASA Astrophysics Data System (ADS)

    Jia, Yan

    2015-04-01

    Data recorded by the STS2 sensors of the Austrian Seismic Network were differentiated and used to derive the PGA prediction model for Austria (Jia and Lenhardt, 2010). Before using it in our hazard assessment and real-time shakemap, it is necessary to validate this model and understand it in depth. In this paper, the influence of weak-motion data on the magnitude dependence of our prediction model is studied. In addition, spatial PGA residuals between the measurements and predictions are investigated. A total of 127 earthquakes with magnitudes between 3 and 5.4 were used to derive the PGA prediction model published in 2011. Unfortunately, 90% of the PGA measurements used came from events with magnitude smaller than 4; only ten quakes have a magnitude larger than 4, which is the magnitude range most important for hazard assessment. In this investigation, the 127 earthquakes were divided into two groups: the first includes only events with magnitude smaller than 4, while the second contains quakes with magnitude larger than 4. Using the same model formulation as in 2011, the coefficients were inverted from the measurements of each group and compared to those based on the complete data set. The group of weak quakes returned results that differ only slightly from those for all 127 events, while the group of strong quakes (ML > 4) gave a stronger magnitude dependence than the model published in 2011. The distance coefficients remained nearly unchanged across all three inversions. As a second step, spatial PGA residuals between the measurements and the predictions of our model were investigated. As explained in Jia and Lenhardt (2013), there are differences in site amplification between western and eastern Austria. For a fair comparison, residuals were normalized for each station before the investigation. Then normalized

  16. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    NASA Astrophysics Data System (ADS)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula, and Shikoku by borehole strainmeters carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, estimated from only a few minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic-wave data. The moment magnitude can be estimated about 6 minutes after the origin time, and 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. By contrast, the magnitude in the prompt report announced by the Japan Meteorological Agency just after the earthquake, based on seismic waves, was 7.9. Coseismic strain steps are generally considered less reliable than seismic-wave and GPS data. However, our results show that coseismic strain steps observed by carefully installed and monitored borehole strainmeters are reliable enough to determine earthquake magnitude precisely and rapidly. Several methods are now being proposed to grasp the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.
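The standard conversion from seismic moment to moment magnitude that underlies such estimates (Hanks and Kanamori, 1979) can be sketched as follows; the moment value used in the example is a round illustrative figure for a Tohoku-scale event, not the value inverted in this study:

```python
# Moment-magnitude relation with the seismic moment M0 in N·m:
#     Mw = (2/3) * (log10(M0) - 9.1)
import math

def moment_magnitude(m0_newton_meters):
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# A Tohoku-scale moment of roughly 4e22 N·m (illustrative):
print(round(moment_magnitude(4.0e22), 1))  # 9.0
```

The logarithm explains why early estimates can differ so much: the JMA prompt value of 7.9 versus the strain-step value of 8.7 corresponds to a factor of ~16 in moment.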

  17. Earthquake prediction in seismogenic areas of the Iberian Peninsula based on computational intelligence

    NASA Astrophysics Data System (ADS)

    Morales-Esteban, A.; Martínez-Álvarez, F.; Reyes, J.

    2013-05-01

    A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only very few and very recent studies have applied them to earthquake prediction. Two kinds of predictions are provided in this study: a) the probability that an earthquake of magnitude equal to or larger than a preset threshold will occur within the next 7 days; b) the probability that an earthquake within a limited magnitude interval will occur within the next 7 days. First, the physical fundamentals of earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the configuration chosen is justified. The ANNs are then trained in both areas, the Alborán Sea and the Western Azores-Gibraltar fault, and tested in both areas for a period of time immediately subsequent to the training period. Statistical tests are provided showing meaningful results. Finally, the ANNs are compared to other well-known classifiers, showing quantitatively and qualitatively better results. The authors expect that the results obtained will encourage researchers to conduct further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; application of ANNs is particularly reliable for earthquake prediction; use of geophysical information modeling soil behavior as the ANN's input data; successful analysis of one region with large seismic activity.

  18. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  19. A local earthquake coda magnitude and its relation to duration, moment M0, and local Richter magnitude ML

    NASA Technical Reports Server (NTRS)

    Suteau, A. M.; Whitcomb, J. H.

    1977-01-01

    A relationship was found between the seismic moment, M0, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming, following Aki, that the end of the coda is composed of backscattered surface waves due to lateral heterogeneity in the shallow crust. Using the linear relationship between the logarithm of M0 and the local Richter magnitude ML, a relationship between ML and t was found. This relationship was used to calculate a coda magnitude MC, which was compared to ML for Southern California earthquakes that occurred during the period from 1972 to 1975.
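A duration (coda) magnitude of the general form described above, Mc = a + b·log10(t), can be sketched as below; the coefficients a and b are illustrative placeholders, not the values derived by Suteau and Whitcomb:

```python
# Hedged sketch of a duration-based coda magnitude. The linear-in-log10(t)
# form is standard for coda magnitudes; the coefficients here are assumed
# for illustration only and must be calibrated against ML for a region.
import math

def coda_magnitude(duration_s, a=-0.9, b=2.0):
    """Mc from total signal duration t (seconds past origin time)."""
    return a + b * math.log10(duration_s)

# Longer codas imply larger magnitudes:
print(coda_magnitude(30.0) < coda_magnitude(300.0))  # True
```

In practice a and b are found by regressing ML against log10(t) for a calibration catalog, exactly the comparison the abstract describes for Southern California.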

  20. Is It Possible to Predict Strong Earthquakes?

    NASA Astrophysics Data System (ADS)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using groundwater sodium-ion concentration data from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of recent scientific reports confirming that rodents sense imminent earthquakes and because of the population-genetic model of Kirschvink (Bull Seismol Soc Am 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over a period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional observations of abnormal animal behavior, enables one to apply the standard "input-sensor-response" approach to determine which input signals trigger specific seismic escape brain activity responses.

  1. The 2009 earthquake, magnitude mb 4.8, in the Pantanal Wetlands, west-central Brazil.

    PubMed

    Dias, Fábio L; Assumpção, Marcelo; Facincani, Edna M; França, George S; Assine, Mario L; Paranhos, Antônio C; Gamarra, Roberto M

    2016-09-01

    The main goal of this paper is to characterize the Coxim earthquake that occurred on June 15, 2009, in the Pantanal Basin and to discuss the relationship between its faulting mechanism and the Transbrasiliano Lineament. The earthquake had a maximum intensity of MM V, causing damage to farmhouses, and was felt in several surrounding cities, including Campo Grande and Goiânia. The event had magnitude mb 4.8 and a depth of 6 km, i.e., it occurred in the upper crust, within the basement and 5 km below the Cenozoic sedimentary cover. The mechanism, a thrust fault with lateral motion, was obtained from P-wave first-motion polarities and confirmed by regional waveform modelling. The two nodal planes have orientations (strike/dip) of 300°/55° and 180°/55°, and the orientation of the P-axis is approximately NE-SW. The results are similar to those for the Pantanal earthquake of 1964, with mb 5.4 and a NE-SW compressional axis. Both events show that the Pantanal Basin is a seismically active area under compressional stress. The focal mechanisms of the 1964 and 2009 events have no nodal plane that could be directly associated with the main SW-NE-trending Transbrasiliano system, indicating that a direct link between the Transbrasiliano and the seismicity in the Pantanal Basin is improbable. PMID:27580359

  2. Collective properties of injection-induced earthquake sequences: 2. Spatiotemporal evolution and magnitude frequency distributions

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Suckale, Jenny; Huang, Yihe

    2016-05-01

    Probabilistic seismic hazard assessment for induced seismicity depends on reliable estimates of the locations, rate, and magnitude frequency properties of earthquake sequences. The purpose of this paper is to investigate how variations in these properties emerge from interactions between an evolving fluid pressure distribution and the mechanics of rupture on heterogeneous faults. We use an earthquake sequence model, developed in the first part of this two-part series, that computes pore pressure evolution, hypocenter locations, and rupture lengths for earthquakes triggered on 1-D faults with spatially correlated shear stress. We first consider characteristic features that emerge from a range of generic injection scenarios and then focus on the 2010-2011 sequence of earthquakes linked to wastewater disposal into two wells near the towns of Guy and Greenbrier, Arkansas. Simulations indicate that one reason for an increase of the Gutenberg-Richter b value for induced earthquakes is the different rates of reduction of static and residual strength as fluid pressure rises. This promotes fault rupture at lower stress than for equivalent tectonic events. Further, the b value is shown to decrease with time (the induced seismicity analog of b value reduction toward the end of the seismic cycle) and to be higher on faults with lower initial shear stress. This suggests that faults in the same stress field that have different orientations, and therefore different levels of resolved shear stress, should exhibit seismicity with different b values. A deficit of large-magnitude events is noted when injection occurs directly onto a fault, and this is shown to depend on the geometry of the pressure plume. Finally, we develop models of the Guy-Greenbrier sequence that approximately capture the onset, rise and fall, and southwest migration of seismicity on the Guy-Greenbrier fault. Constrained by the migration rate, we estimate the permeability of a 10 m thick critically stressed basement

  3. On the earthquake predictability of fault interaction models

    PubMed Central

    Marzocchi, W; Melini, D

    2014-01-01

    Space-time clustering is the most striking departure of the large-earthquake occurrence process from randomness. These clusters are usually described ex post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems that are rooted in the physics of fault interaction through Coulomb stress changes, this kind of modeling often does not significantly increase earthquake predictability. Earthquake predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643

  4. Recurrence quantification analysis for detecting dynamical changes in earthquake magnitude time series

    NASA Astrophysics Data System (ADS)

    Lin, Min; Zhao, Gang; Wang, Gang

    2015-12-01

    In this study, recurrence plot (RP) and recurrence quantification analysis (RQA) techniques are applied to a magnitude time series composed of seismic events that occurred in the California region. Using bootstrapping techniques, we provide a statistical test of the RQA for detecting dynamical transitions. We find different patterns of RPs for the magnitude time series before and after the M6.1 Joshua Tree earthquake. The RQA measures determinism (DET) and laminarity (LAM), which quantify order at given confidence levels, also show peculiar behavior: both significantly increase to large values at the main shock and then gradually recover to small values afterwards. The main shock and its aftershock sequence trigger a temporary growth in the order and complexity of the deterministic structure in the RP of seismic activity, implying that the onset of a strong earthquake is reflected in a sharp, simultaneous change in the RQA measures.
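A minimal sketch of the RP and RQA quantities named above, recurrence rate and determinism, computed on a toy magnitude series; the threshold eps and the data are illustrative assumptions, and a real analysis would add embedding and the bootstrap test described in the abstract:

```python
# Recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps. Recurrence rate (RR)
# is the fraction of recurrent points; determinism (DET) is the fraction of
# off-diagonal recurrent points lying on diagonal lines of length >= 2.
import numpy as np

def rqa_det(x, eps):
    n = len(x)
    r = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    rr = r.sum() / (n * n)
    det_points = 0
    for k in range(1, n):                     # upper diagonals only
        diag = np.diagonal(r, offset=k)
        run = 0
        for v in list(diag) + [0]:            # sentinel flushes the last run
            if v:
                run += 1
            else:
                if run >= 2:
                    det_points += run
                run = 0
    det_points *= 2                           # lower triangle by symmetry
    off_diag = r.sum() - n                    # exclude the trivial main diagonal
    det = det_points / off_diag if off_diag else 0.0
    return rr, det

# Toy magnitude series with two repeating "regimes":
x = np.array([4.1, 4.2, 5.9, 4.15, 4.25, 6.0, 4.1, 4.2])
rr, det = rqa_det(x, eps=0.2)
print(0.0 <= det <= 1.0)  # True
```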

  5. Spatial variations in the frequency-magnitude distribution of earthquakes at Mount Pinatubo volcano

    USGS Publications Warehouse

    Sanchez, J.J.; McNutt, S.R.; Power, J.A.; Wyss, M.

    2004-01-01

    The frequency-magnitude distribution of earthquakes, measured by the b-value, is mapped in two and three dimensions at Mount Pinatubo, Philippines, to a depth of 14 km below the summit. We analyzed 1406 well-located earthquakes with magnitudes MD ≥ 0.73, recorded from late June through August 1991, using the maximum likelihood method. We found that b-values are higher than normal (b = 1.0), ranging between b = 1.0 and b = 1.8. The computed b-values are lower in the areas adjacent to and west-southwest of the vent, whereas two prominent regions of anomalously high b-values (b ≥ 1.7) are resolved: one located 2 km northeast of the vent between 0 and 4 km depth, and a second located 5 km southeast of the vent below 8 km depth. The statistical differences between selected regions of low and high b-values are established at the 99% confidence level. The high b-value anomalies are spatially well correlated with low-velocity anomalies derived from earlier P-wave travel-time tomography studies. Our dataset was not suitable for analyzing changes in b-values as a function of time. We infer that the high b-value anomalies around Mount Pinatubo are regions of increased crack density and/or high pore pressure related to the presence of nearby magma bodies.
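The maximum likelihood b-value estimate referred to above is commonly the Aki-Utsu estimator; a sketch on a synthetic catalog follows, where the completeness magnitude, bin width, and true b = 1.0 are assumed for illustration:

```python
# Aki-Utsu maximum likelihood b-value:
#     b = log10(e) / (mean(M) - (Mc - dM/2))
# where Mc is the completeness magnitude and dM the magnitude bin width.
import math
import random

def b_value_mle(mags, mc, dm=0.1):
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog: magnitudes above Mc are
# exponentially distributed with rate beta = b * ln(10).
random.seed(1)
mc, b_true = 0.7, 1.0
beta = b_true * math.log(10.0)
mags = [mc + random.expovariate(beta) for _ in range(5000)]

b_est = b_value_mle(mags, mc, dm=0.0)  # dm=0 for continuous magnitudes
print(round(b_est, 1))  # close to the assumed b = 1.0
```

The same estimator applied in moving spatial windows, with a significance test between windows, is the essence of b-value mapping as described for Pinatubo.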

  6. 77 FR 53225 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: Department of the... National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1\\1/2\\ day meeting on September 17 and 18, 2012, at the U.S. Geological Survey National Earthquake Information Center (NEIC),...

  7. 78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey, Interior. ACTION: Notice of meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake... proposed earthquake predictions, on the completeness and scientific validity of the available data...

  8. The late Professor Takahiro Hagiwara: His career with earthquake prediction

    NASA Astrophysics Data System (ADS)

    Ohtake, Masakazu

    2004-08-01

    Takahiro Hagiwara, Professor Emeritus of the University of Tokyo, was born in 1908, and passed away in 1999. His name is inseparably tied with earthquake prediction, especially as the founder of the earthquake prediction program of Japan, and as a distinguished leader of earthquake prediction research in the world. This short article describes the career of Prof. Hagiwara focusing on his contribution to earthquake prediction research. I also sketch his activities in the development of instruments, and the multi-disciplinary observation of the Matsushiro earthquake swarm to show the starting point of his scientific strategy: good observation.

  9. A Comprehensive Mathematical Model for the Correlation of Earthquake Magnitude with Geochemical Measurements. A Case Study: the Nisyros Volcano in Greece

    SciTech Connect

    Verros, G. D.; Latsos, T.; Liolios, C.; Anagnostou, K. E.

    2009-08-13

    A comprehensive mathematical model for the correlation of geological phenomena such as earthquake magnitude with geochemical measurements is presented in this work. This model is validated against measurements, well established in the literature, of ²²⁰Rn/²²²Rn in the fumarolic gases of Nisyros Island, Aegean Sea, Greece. It is believed that this model may be further used to develop a generalized methodology for the prediction of geological phenomena such as earthquakes and volcanic eruptions in the vicinity of Nisyros Island.

  10. Maximum Magnitude and Probabilities of Induced Earthquakes in California Geothermal Fields: Applications for a Science-Based Decision Framework

    NASA Astrophysics Data System (ADS)

    Weiser, Deborah Anne

    Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess whether there are induced earthquakes in California geothermal fields; three sites show clear induced seismicity: Brawley, The Geysers, and the Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and to address uncertainties through simulations. I test whether an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during pumping is consistent with the past earthquake record, or whether injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike that from tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.

  11. Location and local magnitude of the Tocopilla earthquake sequence of Northern Chile

    NASA Astrophysics Data System (ADS)

    Fuenzalida, A.; Lancieri, M.; Madariaga, R. I.; Sobiesiak, M.

    2010-12-01

    The Northern Chile gap is generally considered to be the site of the next megathrust event in Chile. The Tocopilla earthquake of 14 November 2007 (Mw 7.8) and its aftershock series broke the southern end of this gap. The Tocopilla event ruptured a narrow strip 120 km long, with a width estimated at 30 km (Peyrat et al.; Delouis et al., 2009). The aftershock sequence comprises five large thrust events with magnitudes greater than 6. The main aftershock, of Mw 6.7, occurred on November 15 at 15:06 (UTC), seawards of the Mejillones Peninsula. One month later, on December 16, 2007, a strong (Mw 6.8) intraplate event with a slab-push mechanism occurred near the bottom of the rupture zone. These events represent a unique opportunity for the study of earthquakes in Northern Chile because of the quantity and quality of available data. In the epicentral area, the IPOC network was deployed by GFZ, CNRS/INSU, and DGF before the main event. This is a digital, continuously recording network equipped with both strong-motion and broad-band instruments. On 29 November 2007 a second network, named “Task Force” (TF), was deployed by GFZ to study the aftershocks. This dense network, installed near the Mejillones Peninsula, is composed of 20 short-period instruments. The slab-push event of 16 December 2007 occurred in the middle of the area covered by the TF network. Aftershocks were detected using an automatic procedure and manually revised in order to pick P and S arrivals. In the 14-28 November period, we detected 635 events recorded by the IPOC network; a further 552 events were detected between 29 November and 16 December, before the slab-push event, using the TF network. The events were located in a vertically layered velocity model (Husen et al., 1999) using the NLLoc software of Lomax et al. From the broadband data we estimated the moment magnitude from the displacement spectra of the events. From the short-period instruments we evaluated local magnitudes using the

  12. Spatial variations in the frequency-magnitude distribution of earthquakes in the southwestern Okinawa Trough

    NASA Astrophysics Data System (ADS)

    Lin, J.-Y.; Sibuet, J.-C.; Lee, C.-S.; Hsu, S.-K.; Klingelhoefer, F.

    2007-04-01

    The relations between the frequency of occurrence and the magnitude of earthquakes are established in the southern Okinawa Trough for 2823 relocated earthquakes recorded during a passive ocean bottom seismometer experiment. Three high b-values areas are identified: (1) for an area offshore of the Ilan Plain, south of the andesitic Kueishantao Island from a depth of 50 km to the surface, thereby confirming the subduction component of the island andesites; (2) for a body lying along the 123.3°E meridian at depths ranging from 0 to 50 km that may reflect the high temperature inflow rising up from a slab tear; (3) for a third cylindrical body about 15 km in diameter beneath the Cross Backarc Volcanic Trail, at depths ranging from 0 to 15 km. This anomaly might be related to the presence of a magma chamber at the base of the crust already evidenced by tomographic and geochemical results. The high b-values are generally linked to magmatic and geothermal activities, although most of the seismicity is linked to normal faulting processes in the southern Okinawa Trough.
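
    The b-value mapping described above rests on fitting the Gutenberg-Richter relation log10 N(≥M) = a − bM to the catalog. As a minimal illustration (not the authors' code), the standard maximum-likelihood b-value estimate of Aki (1965) can be sketched in Python; the synthetic catalog and the completeness magnitude Mc below are assumptions made only for the demonstration:

```python
import math
import random

def b_value_mle(mags, mc):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= Mc:
    b = log10(e) / (mean(M) - Mc)."""
    m = [x for x in mags if x >= mc]
    return math.log10(math.e) / (sum(m) / len(m) - mc)

# Synthetic catalog: magnitudes above Mc = 2.0 drawn from a
# Gutenberg-Richter distribution with b = 1 (exponential in M - Mc)
random.seed(0)
mags = [2.0 + random.expovariate(1.0 * math.log(10)) for _ in range(5000)]

b = b_value_mle(mags, mc=2.0)  # should recover a value close to b = 1
```

    In mapping studies such as this one, the same estimator is simply applied to the subset of relocated events falling in each spatial cell.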

  13. Prediction of Earthquakes by Lunar Cycles

    NASA Astrophysics Data System (ADS)

    Rodriguez, G.

    2007-05-01

    Prediction of Earthquakes by Lunar Cycles. Author: Guillermo Rodriguez Rodriguez; affiliation: geophysicist and astrophysicist, retired. I have presented this idea at many meetings of EGS, IUGS and IUGG (1995) since the early 1980s, and at AGU 2002 (Washington) and 2003 (Nice). I use three levels of approximation in time. (1) Earthquakes happen on the same day of the year every 18 or 19 years (the Saros cycle), sometimes in the same place and sometimes in another far away; at other times of the year the cycle can be 14, 26 or 32 years, or multiples of 18.61 years, especially 55, 93, 150, 224 or 300 years. This gives the day of the year. (2) Within the cycle of one lunation (days after the date of the new moon), the great earthquakes happen at different intervals of days in successive lunations (approximately one month apart), as can be seen in the enclosed graphic. This gives the day of the month. (3) For each day, I have found that approximately every 28 days the same hour and minute, the same longitude and the same latitude repeat for all earthquakes, including the small ones. This is important because the only precaution we would then need to propose is to wait outside, in streets or squares. Sometimes the cycles can be longer or shorter. This is my own way of applying the scientific method. As a consequence of the first and second principles, we can look for correlations between years separated by cycles of the first type, for example 1984 and 2002 or 2003 and consecutive years, including 2007. For 30 years I have examined the dates. I sense the pattern intuitively, but I cannot yet express it in a formal scientific way.

  14. An evaluation of the seismic-window theory for earthquake prediction.

    USGS Publications Warehouse

    McNutt, M.; Heaton, T.H.

    1981-01-01

    Reports studies designed to determine whether earthquakes in the San Francisco Bay area respond to a fortnightly fluctuation in tidal amplitude. It does not appear that the tide is capable of triggering earthquakes; in particular, the seismic-window theory fails as a relevant method of earthquake prediction. -J.Clayton

  15. InSAR constraints on the kinematics and magnitude of the 2001 Bhuj earthquake

    NASA Astrophysics Data System (ADS)

    Schmidt, D.; Bürgmann, R.

    2005-12-01

    The Mw 7.6 Bhuj intraplate event occurred along a blind thrust within the Kutch Rift basin of western India in January of 2001. The lack of any surface rupture and limited geodetic data have made it difficult to place the event on a known fault and constrain its source parameters. Moment tensor solutions and aftershock relocations indicate that the earthquake was a reverse event along an east-west striking, south dipping fault. In an effort to image the surface deformation, we have processed a total of 9 interferograms that span the coseismic event. Interferometry has proven difficult for the region because of technical difficulties experienced by the ERS Satellite around the time of the earthquake and because of low coherence. The stabilization of the orbital control by the European Space Agency beginning in 2002 has allowed us to interfere more recent SAR data with pre-earthquake data. Therefore, all available interferograms of the event include the first year of any postseismic deformation. The source region is characterized by broad floodplains interrupted by isolated highlands. Coherence is limited to the surrounding highlands and no data are available directly over the epicenter. Using the InSAR data along two descending and one ascending tracks, we perform a gridded search for the optimal source parameters of the earthquake. The deformation pattern is modeled assuming uniform slip on an elastic dislocation. Since the highland regions are discontinuous, the coherent InSAR phase is isolated to several individual patches. For each iteration of the gridded search algorithm, we optimize the fit to the data by solving for the number of 2π phase cycles between coherent patches and the orbital gradient across each interferogram. Since the look angle varies across a SAR scene, a variable unit vector is calculated for each track. Inversion results place the center of the fault plane at 70.33° E/23.42° N at a depth of 21 km, and are consistent with the strike and dip

  16. An earthquake-like magnitude-frequency distribution of slow slip in northern Cascadia

    NASA Astrophysics Data System (ADS)

    Wech, Aaron G.; Creager, Kenneth C.; Houston, Heidi; Vidale, John E.

    2010-11-01

    Major episodic tremor and slip (ETS) events with Mw 6.4 to 6.7 repeat every 15 ± 2 months within the Cascadia subduction zone under the Olympic Peninsula. Although these major ETS events are observed to release strain, smaller “tremor swarms” without detectable geodetic deformation are more frequent. An automatic search from 2006-2009 reveals 20,000 five-minute windows containing tremor which cluster in space and time into 96 tremor swarms. The 93 inter-ETS tremor swarms account for 45% of the total duration of tremor detection during the last three ETS cycles. The number of tremor swarms, N, exceeding duration τ follows a power-law distribution N ∝ τ^(-0.66). If duration is proportional to moment release, the slip inferred from these swarms follows a standard Gutenberg-Richter logarithmic frequency-magnitude relation, with the major ETS events and smaller inter-ETS swarms lying on the same trend. This relationship implies that 1) inter-ETS slip is fundamentally similar to the major events, just smaller and more frequent; and 2) despite fundamental differences in moment-duration scaling, the slow slip magnitude-frequency distribution is the same as normal earthquakes with a b-value of 1.
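
    A power-law exceedance exponent such as the 0.66 reported above can be estimated from a list of swarm durations with the Hill (maximum-likelihood) estimator for a Pareto tail. The following is a sketch on synthetic data, not the authors' processing; the cutoff τ_min and the synthetic durations are assumptions made for illustration:

```python
import math
import random

def tail_exponent(durations, tau_min):
    """Hill (MLE) estimate of the exceedance exponent a in
    N(>tau) ∝ tau^-a, for a Pareto tail above tau_min."""
    tail = [t for t in durations if t >= tau_min]
    return len(tail) / sum(math.log(t / tau_min) for t in tail)

# Synthetic swarm durations with a true exceedance exponent a = 0.66,
# generated by inverse-CDF sampling of a Pareto distribution
random.seed(1)
a_true = 0.66
durations = [5.0 * random.random() ** (-1.0 / a_true) for _ in range(4000)]

a_hat = tail_exponent(durations, tau_min=5.0)  # should be near 0.66
```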

  17. Material contrast does not predict earthquake rupture propagation direction

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    2005-01-01

    Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.

  18. Magnitude Uncertainty and Ground Motion Simulations of the 1811-1812 New Madrid Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Graves, R. W.; Olsen, K. B.; Boyd, O. S.; Hartzell, S.; Ni, S.; Somerville, P. G.; Williams, R. A.; Zhong, J.

    2011-12-01

    We present a study of a set of three-dimensional earthquake simulation scenarios in the New Madrid Seismic Zone (NMSZ). This is a collaboration among three simulation groups with different numerical modeling approaches and computational capabilities. The study area covers a portion of the Central United States (~400,000 km²) centered on the New Madrid seismic zone, which includes several metropolitan areas such as Memphis, TN and St. Louis, MO. We computed synthetic seismograms up to a frequency of 1 Hz using a regional 3D velocity model (Ramirez-Guzman et al., 2010), two different kinematic source generation approaches (Graves et al., 2010; Liu et al., 2006) and one methodology where sources were generated using dynamic rupture simulations (Olsen et al., 2009). The set of 21 hypothetical earthquakes included different magnitudes (Mw 7, 7.6 and 7.7) and epicenters for two faults associated with the seismicity trends in the NMSZ: the Axial (Cottonwood Grove) and the Reelfoot faults. Broadband synthetic seismograms were generated by combining high-frequency synthetics computed in a one-dimensional velocity model with the low-frequency motions at a crossover frequency of 1 Hz. Our analysis indicates that about 3 to 6 million people living near the fault ruptures would experience Mercalli intensities from VI to VIII if events similar to those of the early nineteenth century occurred today. In addition, the analysis demonstrates the importance of 3D geologic structures, such as the Reelfoot Rift and the Mississippi Embayment, which can channel and focus the radiated wave energy, and rupture directivity effects, which can strongly amplify motions in the forward direction of the ruptures. Both of these effects have a significant impact on the pattern and level of the simulated intensities, which suggests an increased uncertainty in the magnitude estimates of the 1811-1812 sequence based only on historic intensity reports. We conclude that additional constraints such as

  19. Current affairs in earthquake prediction in Japan

    NASA Astrophysics Data System (ADS)

    Uyeda, Seiya

    2015-12-01

    As of mid-2014, the main organizations of the earthquake (EQ hereafter) prediction program, including the Seismological Society of Japan (SSJ) and the MEXT Headquarters for EQ Research Promotion, hold the official position that they neither can nor want to make any short-term prediction. It is an extraordinary stance for responsible authorities to take when the nation, after the devastating 2011 M9 Tohoku EQ, most urgently needs whatever information may exist on forthcoming EQs. Japan's national project for EQ prediction started in 1965, but it has had no success, mainly because of the failure to capture precursors. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this stance has been further fortified by the 2011 M9 Tohoku mega-quake. This paper tries to explain how this situation came about and suggests that it may in fact be a legitimate one which should have come a long time ago. Actually, substantial positive changes are taking place now. Some promising signs are arising even from cooperation of researchers with private sectors, and there is a move to establish an "EQ Prediction Society of Japan". From now on, maintaining high scientific standards in EQ prediction will be of crucial importance.

  20. A Study of Low-Frequency Earthquake Magnitudes in Northern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Chuang, L. Y.; Bostock, M. G.

    2015-12-01

    Tectonic tremor and low frequency earthquakes (LFE) have been extensively studied in recent years in northern Washington and southern Vancouver Island (VI). However, far less attention has been directed to northern VI where the behavior of tremor and LFEs is less well documented. We investigate LFE properties in this latter region by assembling templates using data from the POLARIS-NVI and Sea-JADE experiments. The POLARIS-NVI experiment comprised 27 broadband seismometers arranged along two mutually perpendicular arms with an aperture of ~60 km centered near station WOS (lat. 50.16, lon. -126.57). It recorded two ETS events in June 2006 and May 2007, each with duration less than a week. For these two episodes, we constructed 68 independent, high signal to noise ratio LFE templates representing spatially distinct asperities on the plate boundary in NVI, along with a catalogue of more than 30 thousand detections. A second data set is being prepared for the complementary 2014 Sea-JADE data set. The precisely located LFE templates represent simple direct P-waves and S-waves at many stations thereby enabling magnitude estimation of individual detections. After correcting for radiation pattern, 1-D geometrical spreading, attenuation and free-surface magnification, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single LFE template. LFE magnitudes range up to 2.54, and like southern VI are characterized by high b-values (b~8). In addition, we will quantify LFE moment-duration scaling and compare with southern Vancouver Island where LFE moments appear to be controlled by slip, largely independent of fault area.

  1. Global correlations between maximum magnitudes of subduction zone interface thrust earthquakes and physical parameters of subduction zones

    NASA Astrophysics Data System (ADS)

    Schellart, W. P.; Rawlinson, N.

    2013-12-01

    The maximum earthquake magnitude recorded for subduction zone plate boundaries varies considerably on Earth, with some subduction zone segments producing giant subduction zone thrust earthquakes (e.g. Chile, Alaska, Sumatra-Andaman, Japan) and others producing relatively small earthquakes (e.g. Mariana, Scotia). Here we show how such variability might depend on various subduction zone parameters. We present 24 physical parameters that characterize these subduction zones in terms of their geometry, kinematics, geology and dynamics. We have investigated correlations between these parameters and the maximum recorded moment magnitude (MW) for subduction zone segments in the period 1900-June 2012. The investigations were done for one dataset using a geological subduction zone segmentation (44 segments) and for two datasets (rupture zone dataset and epicenter dataset) using a 200 km segmentation (241 segments). All linear correlations for the rupture zone dataset and the epicenter dataset (|R| = 0.00-0.30) and for the geological dataset (|R| = 0.02-0.51) are negligible to low, indicating that even for the highest correlation the best-fit regression line can only explain 26% of the variance. A comparative investigation of the observed ranges of the physical parameters for subduction segments with MW > 8.5 and the observed ranges for all subduction segments gives more useful insight into the spatial distribution of giant subduction thrust earthquakes. For segments with MW > 8.5 distinct (narrow) ranges are observed for several parameters, most notably the trench-normal overriding plate deformation rate (vOPD⊥, i.e. the relative velocity between forearc and stable far-field backarc), trench-normal absolute trench rollback velocity (vT⊥), subduction partitioning ratio (vSP⊥/vS⊥, the fraction of the subduction velocity that is accommodated by subducting plate motion), subduction thrust dip angle (δST), subduction thrust curvature (CST), and trench curvature angle (

  2. Exploring Earthquake Databases for the Creation of Magnitude-Homogeneous Catalogues: Tools for Application on a Regional and Global Scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-06-01

    The creation of a magnitude-homogenised catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenising multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins, and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilise this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonise magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonised into moment-magnitude to form a catalogue of more than 562,840 events. This extended catalogue, whilst not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.

  3. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.

  4. Temporal variation of crustal deformation during the days preceding a thrust-type great earthquake — The 1944 Tonankai earthquake of magnitude 8.1, Japan

    NASA Astrophysics Data System (ADS)

    Mogi, Kiyoo

    1984-11-01

    The temporal variation in precursory ground tilt prior to the 1944 Tonankai (Japan) earthquake, which is a great thrust-type earthquake along the Nankai Trough, is discussed using the analysis of data from repeated surveys along short-distance leveling routes. Sato (1970) pointed out that an anomalous tilt occurred one day before the earthquake at Kakegawa near the northern end of the focal region of the earthquake. From the analysis of additional leveling data, Sato's result is re-examined and the temporal change in the ground tilt is deduced for the period of about ten days beginning six days before the earthquake. A remarkable precursory tilt started two or three days before the earthquake. The direction of the precursory tilt was up towards the south (uplift on the southern Nankai Trough side), but the coseismic tilt was up towards the southeast, perpendicular to the strike of the main thrust fault of the Tonankai earthquake. The postseismic tilt was probably opposite of the coseismic tilt. The preseismic tilt is attributed to precursory slip on part of the main fault. If similar precursory deformation occurs before a future earthquake expected to occur in the adjacent Tokai region, the deformation may help predict the time of the Tokai earthquake.

  5. A simple approach to estimate earthquake magnitude from the arrival time of the peak acceleration amplitude

    NASA Astrophysics Data System (ADS)

    Noda, S.; Yamamoto, S.

    2014-12-01

    In order for Earthquake Early Warning (EEW) to be effective, the rapid determination of magnitude (M) is important. At present, although a number of methods have been suggested, there are none that can accurately determine M for extremely large events (ELE) in an EEW context. To address this problem, we use a simple approach based on the observation that the time difference (Top) from the onset of the body wave to the arrival time of its peak acceleration amplitude scales with M. To test this approach, as a first step we use 15,172 accelerograms of regional earthquakes (most of them M4-7 events) from the K-NET. Top is defined by analyzing the S-wave in this step. The S-onsets are calculated by adding the theoretical S-P times to the manually picked P-onsets. As a result, we confirm that logTop correlates highly with Mw, especially for the higher frequency band (> 2 Hz). The RMS of residuals between Mw and the M estimated in this step is less than 0.5. In the case of the 2011 Tohoku earthquake, M is estimated to be 9.01 at 150 seconds after the initiation of the event. To increase the number of ELE data, as a second step we add teleseismic high-frequency P-wave records to the analysis. Based on the results of various back-projection analyses, we consider the teleseismic P-waves to contain information on the entire rupture process. The BHZ channel data of the Global Seismographic Network for 24 events are used in this step. 2-4 Hz data from stations in the epicentral distance range of 30-85 degrees are used following the method of Hara [2007]. All P-onsets are manually picked. Top values obtained from the teleseismic data show good correlation with Mw, complementing those obtained from the regional data. We conclude that the proposed approach is quite useful for estimating reliable M for EEW, even for ELE.
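
    A log-linear scaling of the kind posited above, M = c0 + c1 log10(Top), can be calibrated by ordinary least squares once (Top, Mw) pairs are available. A minimal sketch on synthetic data; the coefficients 4.0 and 2.0 and the noise level are hypothetical choices for the demonstration, not values from the study:

```python
import math
import random

def fit_loglinear(tops, mws):
    """Least-squares fit of Mw = c0 + c1 * log10(Top); returns (c0, c1)."""
    xs = [math.log10(t) for t in tops]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(mws) / n
    c1 = (sum((x - mx) * (y - my) for x, y in zip(xs, mws))
          / sum((x - mx) ** 2 for x in xs))
    return my - c1 * mx, c1

# Synthetic calibration set under an assumed scaling
# Mw = 4.0 + 2.0 * log10(Top) plus Gaussian scatter
random.seed(3)
tops = [10 ** random.uniform(0.0, 1.5) for _ in range(300)]
mws = [4.0 + 2.0 * math.log10(t) + random.gauss(0.0, 0.3) for t in tops]

c0, c1 = fit_loglinear(tops, mws)  # should recover roughly (4.0, 2.0)
```

    With the fit in hand, a magnitude estimate for a new event is just c0 + c1 * log10(Top), available as soon as the peak amplitude has been observed.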

  6. The dependence of peak horizontal acceleration on magnitude, distance, and site effects for small-magnitude earthquakes in California and eastern North America

    USGS Publications Warehouse

    Campbell, K.W.

    1989-01-01

    One-hundred and ninety free-field accelerograms recorded on deep soil (>10 m deep) were used to study the near-source scaling characteristics of peak horizontal acceleration for 91 earthquakes (2.5 ≤ ML ≤ 5.0) located primarily in California. An analysis of residuals based on an additional 171 near-source accelerograms from 75 earthquakes indicated that accelerograms recorded in building basements sited on deep soil have 30 per cent lower accelerations, and that free-field accelerograms recorded on shallow soil (≤10 m deep) have 82 per cent higher accelerations than free-field accelerograms recorded on deep soil. An analysis of residuals based on 27 selected strong-motion recordings from 19 earthquakes in Eastern North America indicated that near-source accelerations associated with frequencies less than about 25 Hz are consistent with predictions based on attenuation relationships derived from California. -from Author

  7. Seismic Versus Aseismic Slip and Maximum Induced Earthquake Magnitude in Models of Faults Stimulated by Fluid Injection

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Cappa, F.; Galis, M.; Mai, P. M.

    2015-12-01

    The assessment of earthquake hazard induced by fluid injection or withdrawal could be advanced by understanding what controls the maximum magnitude of induced seismicity (Mmax) and the conditions leading to aseismic instead of seismic slip. This is particularly critical for the viability of renewable energy extraction through engineered geothermal systems, which aim at enhancing permeability through controlled fault slip. Existing empirical relations and models for Mmax lack a link between rupture size and the characteristics of the triggering stress perturbation based on earthquake physics. We aim at filling this gap by extending results on the nucleation and arrest of dynamic rupture. We previously derived theoretical relations based on fracture mechanics between properties of overstressed nucleation regions (size, shape and overstress level), the ability of dynamic ruptures to either stop spontaneously or run away, and the final size of stopping ruptures. We verified these relations by comparison to 3D dynamic rupture simulations under slip-weakening friction and to laboratory experiments of frictional sliding nucleated by localized stresses. Here, we extend these results to the induced seismicity context by considering the effect of pressure perturbations resulting from fluid injection, evaluated by hydromechanical modeling. We address the following question: given the amplitude and spatial extent of a fluid pressure perturbation, background stress and fracture energy on a fault, does a nucleated rupture stop spontaneously at some distance from the pressure perturbation region or does it grow away until it reaches the limits of the fault? We present fracture mechanics predictions of the rupture arrest length in this context, and compare them to results of 3D dynamic rupture simulations. We also conduct a systematic study of the effect of localized fluid pressure perturbations on faults governed by rate-and-state friction. We investigate whether injection

  8. Reprint of: "Demographic factors predict magnitude of conditioned fear".

    PubMed

    Rosenbaum, Blake L; Bui, Eric; Marin, Marie-France; Holt, Daphne J; Lasko, Natasha B; Pitman, Roger K; Orr, Scott P; Milad, Mohammed R

    2015-12-01

    There is substantial variability across individuals in the magnitudes of their skin conductance (SC) responses during the acquisition and extinction of conditioned fear. To manage this variability, subjects may be matched for demographic variables, such as age, gender and education. However, limited data exist addressing how much variability in conditioned SC responses is actually explained by these variables. The present study assessed the influence of age, gender and education on the SC responses of 222 subjects who underwent the same differential conditioning paradigm. The demographic variables were found to predict a small but significant amount of variability in conditioned responding during fear acquisition, but not fear extinction learning or extinction recall. A larger differential change in SC during acquisition was associated with more education. Older participants and women showed smaller differential SC during acquisition. Our findings support the need to consider age, gender and education when studying fear acquisition but not necessarily when examining fear extinction learning and recall. Variability in demographic factors across studies may partially explain the difficulty in reproducing some SC findings. PMID:26608179

  9. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    SciTech Connect

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-06-20

    A new approach to determining magnitude from the relationship among displacement amplitude (A), epicentral distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or the P wave on teleseismic seismograms in the period range of 10-60 seconds. In this research we have developed a new approach that determines the displacement amplitude and the duration of high-frequency radiation from near-earthquake records. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This choice is due to the very complex rupture process at near-earthquake distances: the P wave mixes with other phases (the S wave) before the duration ends, so it is difficult to separate or determine the end of the P wave. Application to 68 earthquakes recorded by station CISI, Garut, West Java, yields the following relationship: Mw = 0.78 log (A) + 0.83 log Δ + 0.69 log (t) + 6.46, with A (m), Δ (km) and t (s). The moment magnitude from this new approach is quite reliable and faster to compute, which makes it useful for early warning.
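
    The regression quoted in the abstract can be evaluated directly once A, Δ and t are measured. A small sketch; the input values below are hypothetical, chosen only to exercise the formula:

```python
import math

def moment_magnitude(A, delta, t):
    """Mw from displacement amplitude A (m), epicentral distance delta (km)
    and duration of high-frequency radiation t (s), using the regression
    Mw = 0.78 log A + 0.83 log delta + 0.69 log t + 6.46."""
    return (0.78 * math.log10(A) + 0.83 * math.log10(delta)
            + 0.69 * math.log10(t) + 6.46)

# Hypothetical measurement: 1 mm displacement at 100 km with t = 8 s
mw = moment_magnitude(1e-3, 100.0, 8.0)  # -> about Mw 6.4
```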

  10. The 2011 magnitude 9.0 Tohoku-Oki earthquake: mosaicking the megathrust from seconds to centuries.

    PubMed

    Simons, Mark; Minson, Sarah E; Sladen, Anthony; Ortega, Francisco; Jiang, Junle; Owen, Susan E; Meng, Lingsen; Ampuero, Jean-Paul; Wei, Shengji; Chu, Risheng; Helmberger, Donald V; Kanamori, Hiroo; Hetland, Eric; Moore, Angelyn W; Webb, Frank H

    2011-06-17

    Geophysical observations from the 2011 moment magnitude (M(w)) 9.0 Tohoku-Oki, Japan earthquake allow exploration of a rare large event along a subduction megathrust. Models for this event indicate that the distribution of coseismic fault slip exceeded 50 meters in places. Sources of high-frequency seismic waves delineate the edges of the deepest portions of coseismic slip and do not simply correlate with the locations of peak slip. Relative to the M(w) 8.8 2010 Maule, Chile earthquake, the Tohoku-Oki earthquake was deficient in high-frequency seismic radiation--a difference that we attribute to its relatively shallow depth. Estimates of total fault slip and surface secular strain accumulation on millennial time scales suggest the need to consider the potential for a future large earthquake just south of this event. PMID:21596953

  11. The magnitude of events following a strong earthquake: a pattern recognition approach applied to Italian seismicity

    NASA Astrophysics Data System (ADS)

    Gentili, Stefania; Di Giovambattista, Rita

    2016-04-01

    In this study, we propose an analysis of the earthquake clusters that occurred in Italy from 1980 to 2015. In particular, given a strong earthquake, we are interested in identifying statistical clues to forecast whether a subsequent strong earthquake will follow. We apply a pattern recognition approach to verify possible precursors of a following strong earthquake. Part of the analysis is based on observation of the cluster during the first hours/days after the first large event. The features adopted are, among others, the number of earthquakes, the radiated energy and the equivalent source area. The other part of the analysis is based on the characteristics of the first strong earthquake, such as its magnitude, depth, focal mechanism, and the tectonic position of the source zone. The location of the cluster within the Italian territory is of particular interest. In order to characterize the precursors depending on the cluster type, we used decision trees as classifiers on each precursor separately. The performance of the classification is tested by the leave-one-out method. The analysis is done using different time spans after the first strong earthquake, in order to simulate the increase of information available as time passes during the seismic clusters. The performance is assessed in terms of precision, recall and goodness of the single classifiers, and the ROC graph is shown.

  12. Combining earthquakes and GPS data to estimate the probability of future earthquakes with magnitude Mw ≥ 6.0

    NASA Astrophysics Data System (ADS)

    Chen, K.-P.; Tsai, Y.-B.; Chang, W.-Y.

    2013-10-01

    Wyss et al. (2000) indicate that future main earthquakes can be expected along zones characterized by low b values. In this study we combine Benioff strain with global positioning system (GPS) data to estimate the probability of future Mw ≥ 6.0 earthquakes for a grid covering Taiwan. An approach similar to the maximum likelihood method was used to estimate the Gutenberg-Richter parameters a and b. The two parameters were then used to estimate the probability of future earthquakes of Mw ≥ 6.0 for each of the 391 grid cells (grid interval = 0.1°) covering Taiwan. The method shows a high probability of earthquakes in western Taiwan along a zone that extends from Taichung southward to Nantou, Chiayi, Tainan and Kaohsiung. In eastern Taiwan, there also exists a high-probability zone from Ilan southward to Hualian and Taitung. These zones are characterized by high earthquake entropy, high maximum shear strain rates, and paths of low b values. A relation between entropy and maximum shear strain rate is also obtained; it indicates that the maximum shear strain rate is about 4.0 times the entropy. The results of this study should be of interest to city planners, especially those concerned with earthquake preparedness, as well as to earthquake insurers in setting basic premiums.
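A minimal sketch of the Gutenberg-Richter step (not the authors' exact combined Benioff-strain/GPS procedure, which the abstract does not fully specify): estimate b with Aki's maximum-likelihood formula, anchor a with the event count, and convert the extrapolated rate into a Poisson probability of at least one Mw ≥ 6.0 event. The catalog is invented.

```python
# Hedged sketch: ML b-value (Aki, 1965), a-value from the event count, and a
# Poisson probability of at least one target event. All inputs are invented.
import math

def gr_parameters(mags, m_min):
    """Aki's ML estimate of b, and a such that log10 N(>=m) = a - b*m."""
    b = math.log10(math.e) / (sum(mags) / len(mags) - m_min)
    a = math.log10(len(mags)) + b * m_min
    return a, b

def prob_at_least_one(a, b, m_target, years_of_catalog, forecast_years):
    """Poisson probability of >= 1 event with M >= m_target in the window."""
    annual_rate = 10 ** (a - b * m_target) / years_of_catalog
    return 1.0 - math.exp(-annual_rate * forecast_years)

mags = [4.0, 4.1, 4.3, 4.2, 4.6, 5.0, 4.4, 4.8, 5.5, 4.1]  # 20-yr toy catalog
a, b = gr_parameters(mags, m_min=4.0)
p = prob_at_least_one(a, b, m_target=6.0, years_of_catalog=20, forecast_years=10)
```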

  13. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
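The test logic described above can be sketched as follows: score an alarm strategy against the real event times, then against many simulated catalogs, and report the fraction of simulations that do at least as well. All numbers are invented, and a stationary Poisson simulator is used here; per the study's point, adding clustering to the simulator would raise this significance level.

```python
# Illustrative catalog-simulation significance test (invented data).
import random

def successes(event_times, alarms):
    """Count events falling inside any (start, end) alarm window."""
    return sum(any(s <= t < e for s, e in alarms) for t in event_times)

def poisson_significance(event_times, alarms, t_max, n_sims=2000, seed=1):
    """Fraction of Poisson-simulated catalogs scoring >= the real catalog."""
    rng = random.Random(seed)
    observed = successes(event_times, alarms)
    hits = 0
    for _ in range(n_sims):
        sim = [rng.uniform(0, t_max) for _ in event_times]  # stationary Poisson
        if successes(sim, alarms) >= observed:
            hits += 1
    return observed, hits / n_sims

events = [12.0, 13.5, 14.0, 55.0, 80.0]   # clustered near t = 13
alarms = [(10.0, 16.0), (70.0, 75.0)]     # alarms cover 11% of the time axis
obs, sig = poisson_significance(events, alarms, t_max=100.0)
```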

  14. Upper-plate controls on co-seismic slip in the 2011 magnitude 9.0 Tohoku-oki earthquake.

    PubMed

    Bassett, Dan; Sandwell, David T; Fialko, Yuri; Watts, Anthony B

    2016-03-01

    The March 2011 Tohoku-oki earthquake was only the second giant (moment magnitude Mw ≥ 9.0) earthquake to occur in the last 50 years and is the most recent to be recorded using modern geophysical techniques. Available data place high-resolution constraints on the kinematics of earthquake rupture, which have challenged prior knowledge about how much a fault can slip in a single earthquake and the seismic potential of a partially coupled megathrust interface. But it is not clear what physical or structural characteristics controlled either the rupture extent or the amplitude of slip in this earthquake. Here we use residual topography and gravity anomalies to constrain the geological structure of the overthrusting (upper) plate offshore northeast Japan. These data reveal an abrupt southwest-northeast-striking boundary in upper-plate structure, across which gravity modelling indicates a south-to-north increase in the density of rocks overlying the megathrust of 150-200 kilograms per cubic metre. We suggest that this boundary represents the offshore continuation of the Median Tectonic Line, which onshore juxtaposes geological terranes composed of granite batholiths (in the north) and accretionary complexes (in the south). The megathrust north of the Median Tectonic Line is interseismically locked, has a history of large earthquakes (18 with Mw > 7 since 1896) and produced peak slip exceeding 40 metres in the Tohoku-oki earthquake. In contrast, the megathrust south of this boundary has higher rates of interseismic creep, has not generated an earthquake with MJ > 7 (local magnitude estimated by the Japan Meteorological Agency) since 1923, and experienced relatively minor (if any) co-seismic slip in 2011. We propose that the structure and frictional properties of the overthrusting plate control megathrust coupling and seismogenic behaviour in northeast Japan. PMID:26935698

  15. Upper-plate controls on co-seismic slip in the 2011 magnitude 9.0 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Bassett, Dan; Sandwell, David T.; Fialko, Yuri; Watts, Anthony B.

    2016-03-01

    The March 2011 Tohoku-oki earthquake was only the second giant (moment magnitude Mw ≥ 9.0) earthquake to occur in the last 50 years and is the most recent to be recorded using modern geophysical techniques. Available data place high-resolution constraints on the kinematics of earthquake rupture, which have challenged prior knowledge about how much a fault can slip in a single earthquake and the seismic potential of a partially coupled megathrust interface. But it is not clear what physical or structural characteristics controlled either the rupture extent or the amplitude of slip in this earthquake. Here we use residual topography and gravity anomalies to constrain the geological structure of the overthrusting (upper) plate offshore northeast Japan. These data reveal an abrupt southwest-northeast-striking boundary in upper-plate structure, across which gravity modelling indicates a south-to-north increase in the density of rocks overlying the megathrust of 150-200 kilograms per cubic metre. We suggest that this boundary represents the offshore continuation of the Median Tectonic Line, which onshore juxtaposes geological terranes composed of granite batholiths (in the north) and accretionary complexes (in the south). The megathrust north of the Median Tectonic Line is interseismically locked, has a history of large earthquakes (18 with Mw > 7 since 1896) and produced peak slip exceeding 40 metres in the Tohoku-oki earthquake. In contrast, the megathrust south of this boundary has higher rates of interseismic creep, has not generated an earthquake with MJ > 7 (local magnitude estimated by the Japan Meteorological Agency) since 1923, and experienced relatively minor (if any) co-seismic slip in 2011. We propose that the structure and frictional properties of the overthrusting plate control megathrust coupling and seismogenic behaviour in northeast Japan.

  16. Multidimensional earthquake frequency distributions consistent with self-organization of complex systems: The interdependence of magnitude, interevent time and interevent distance

    NASA Astrophysics Data System (ADS)

    Tzanis, A.; Vallianatos, F.

    2012-04-01

    the G-R law predicts, but also to the interevent time and distance by means of well defined power-laws. We also demonstrate that interevent time and distance are not independent of each other, but also interrelated by means of well defined power-laws. We argue that these relationships are universal and valid for both local and regional tectonic grains and seismicity patterns. Eventually, we argue that the four-dimensional hypercube formed by the joint distribution of earthquake frequency, magnitude, interevent time and interevent distance comprises a generalized distribution of the G-R type which epitomizes the temporal and spatial interdependence of earthquake activity, consistent with expectation for a stationary or evolutionary critical system. Finally, we attempt to discuss the emerging generalized frequency distribution in terms of non-extensive statistical physics. Acknowledgments. This work was partly supported by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project "Integrated understanding of Seismicity, using innovative methodologies of Fracture Mechanics along with Earthquake and Non-Extensive Statistical Physics - Application to the geodynamic system of the Hellenic Arc - SEISMO FEAR HELLARC".

  17. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are among the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, from which earthquake parameters such as the peak ground acceleration expected in the area of interest can be determined. In an earthquake-prone area, identification of the seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss accompanying an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and is situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways and railroads cross the area, and many engineering structures are being constructed there. The annual frequencies of earthquakes that occurred within a 100 km radius centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.

  18. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren Eda; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are among the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, from which earthquake parameters such as the peak ground acceleration expected in the area of interest can be determined. In an earthquake-prone area, identification of the seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss accompanying an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and is situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways and railroads cross the area, and many engineering structures are being constructed there. The annual frequencies of earthquakes that occurred within a 100 km radius centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
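A minimal sketch of forecasting annual frequencies with a Poisson hidden Markov model: a two-state forward filter followed by a one-step-ahead expected count. The transition matrix, state rates and counts are invented, not the paper's fit.

```python
# Two-state Poisson HMM forecast (all parameters invented for illustration).
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def forward_filter(counts, trans, rates, init):
    """Return P(state | all observed counts) via the forward algorithm."""
    n = len(rates)
    alpha = [init[i] * poisson_pmf(counts[0], rates[i]) for i in range(n)]
    for k in counts[1:]:
        alpha = [poisson_pmf(k, rates[j]) *
                 sum(alpha[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
    total = sum(alpha)
    return [a / total for a in alpha]

def forecast_mean(counts, trans, rates, init):
    """One-step-ahead expected annual count E[N_{t+1} | N_1..N_t]."""
    n = len(rates)
    f = forward_filter(counts, trans, rates, init)
    pred = [sum(f[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return sum(pred[j] * rates[j] for j in range(n))

trans = [[0.9, 0.1], [0.2, 0.8]]   # quiet <-> active state transitions
rates = [2.0, 8.0]                 # mean events per year in each state
counts = [1, 2, 3, 9, 7]           # invented annual M >= 4 frequencies
m = forecast_mean(counts, trans, rates, [0.5, 0.5])  # leans toward the active rate
```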

  19. User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes

    USGS Publications Warehouse

    Klein, Fred W.

    2002-01-01

    Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version, and it supersedes the earlier documents. It serves as a detailed user's guide to the current version running on Unix and VAX-Alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of

  20. An updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized M w magnitudes

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Chen, Kuei-Pao; Tsai, Yi-Ben

    2016-03-01

    The main goal of this study was to develop an updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized M w magnitudes that are compatible with the Harvard M w . We hope that such a catalog of earthquakes will provide a fundamental database for definitive studies of the distribution of earthquakes in Taiwan as a function of space, time, and magnitude, as well as for realistic assessments of seismic hazards in Taiwan. In this study, for completeness and consistency, we start with a previously published catalog of earthquakes from 1900 to 2006 with homogenized M w magnitudes. We update the earthquake data through 2014 and supplement the database with 188 additional events for the time period of 1900-1935 that were found in the literature. The additional data lowered the minimum magnitude of the catalog from M w 5.5 to M w 5.0. The broadband-based Harvard M w , United States Geological Survey (USGS) M, and Broadband Array in Taiwan for Seismology (BATS) M w are preferred in this study. Accordingly, we use empirical relationships with the Harvard M w to transform our old converted M w values to new converted M w values and to transform the original BATS M w values to converted BATS M w values. For individual events, the adopted M w is chosen in the following order of priority: Harvard M w > USGS M > converted BATS M w > new converted M w . Finally, we find that use of the adopted M w removes a data gap at magnitudes greater than or equal to 5.0 in the original catalog during 1985-1991. The new catalog is now complete for M w ≥ 5.0 and significantly improves the quality of data for definitive study of seismicity patterns, as well as for realistic assessment of seismic hazards in Taiwan.
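The magnitude-adoption rule described above is a simple priority-order selection; the sketch below writes it out with invented field names for illustration.

```python
# Sketch of the adoption rule: take the first available magnitude in the
# stated priority order. The dictionary keys are invented, not the catalog's.

PRIORITY = ["harvard_mw", "usgs_m", "bats_mw_converted", "mw_converted_new"]

def adopt_mw(event):
    """Return (source, value) for the first available magnitude, or None."""
    for key in PRIORITY:
        if event.get(key) is not None:
            return key, event[key]
    return None

# An event lacking a Harvard Mw falls through to the USGS magnitude.
event = {"harvard_mw": None, "usgs_m": 6.1, "bats_mw_converted": 6.0}
adopted = adopt_mw(event)
```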

  1. Gambling score in earthquake prediction analysis

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the parametrization of the GS and use the M8 prediction algorithm to illustrate difficulties of the new approach in the analysis of prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events, because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.

  2. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the points bet by the forecaster if he loses. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
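The gambling-score bookkeeping for a discrete prediction can be sketched as below. The function name and the fair-odds payoff form are my notation following the description above, not the authors' code: the forecaster bets r reputation points against a reference model that assigns probability p0 to the target event, with a payoff whose expectation under the reference model is zero.

```python
# Hedged sketch of fair-odds gambling-score accounting (notation mine).

def gambling_gain(bet, p0, event_occurred):
    """Reputation change for one prediction.

    p0 is the probability the reference model (the house) assigns to the
    target event. The payoff is fair: zero expectation under the reference
    model for any bet.
    """
    if event_occurred:
        return bet * (1.0 - p0) / p0   # a hit on an unlikely event pays more
    return -bet                        # the house keeps the stake on a miss

# A hit on an unlikely event (p0 = 0.1) gains about 9 points per point bet,
# outweighing a miss on a likely one (p0 = 0.5), which loses the stake.
reputation = 10.0
reputation += gambling_gain(1.0, 0.1, True)
reputation += gambling_gain(1.0, 0.5, False)
```

Fairness here means p0 * gain + (1 - p0) * (-bet) = 0, so a forecaster betting at random against the reference model neither gains nor loses reputation on average.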

  3. Time-predictable model applicability for earthquake occurrence in northeast India and vicinity

    NASA Astrophysics Data System (ADS)

    Panthi, A.; Shanker, D.; Singh, H. N.; Kumar, A.; Paudyal, H.

    2011-03-01

    Northeast India and its vicinity is one of the most seismically active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model using earthquake data reported in the National Geophysical Data Centre and National Earthquake Information Centre (United States Geological Survey) catalogues, and in the book prepared by Gupta et al. (1986), for the period 1906-2008. Events having a surface-wave magnitude of Ms ≥ 5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified from the observed clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends on the magnitude of the preceding mainshock (Mp) and not on that of the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form LogT = cMp+a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The values of the parameters "c" and "a" are estimated to be 0.21 and 0.35, respectively, in northeast India and its adjoining regions. The value of c, lower than the average, implies that earthquake occurrence in this region differs from that at plate boundaries. The results can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
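The repeat-time relation above can be evaluated numerically. The sketch below uses the quoted regional values c = 0.21 and a = 0.35; T is in the units of the original regression (presumably years), which the abstract does not restate.

```python
# Sketch of the time-predictable relation log10(T) = c*Mp + a, using the
# regional values quoted above (c = 0.21, a = 0.35).

def repeat_time(mp, c=0.21, a=0.35):
    """Repeat time T implied by a preceding mainshock of magnitude Mp."""
    return 10 ** (c * mp + a)

# A larger preceding mainshock implies a longer wait to the next one.
times = {mp: repeat_time(mp) for mp in (5.5, 6.5, 7.5)}
```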

  4. Spectral P-wave magnitudes, magnitude spectra and other source parameters for the 1990 southern Sudan and the 2005 Lake Tanganyika earthquakes

    NASA Astrophysics Data System (ADS)

    Moussa, Hesham Hussein Mohamed

    2008-10-01

    Teleseismic broadband P-wave seismograms from the May 1990 southern Sudan and December 2005 Lake Tanganyika earthquakes, in the western branch of the East African Rift System, recorded at different azimuths have been investigated on the basis of magnitude spectra. The two earthquakes are the largest shocks in the East African Rift System and its extension in southern Sudan. Focal mechanism solutions, along with geological evidence, suggest that the first event represents a complex style of deformation at the intersection of the northern branch of the western branch of the East African Rift and the Aswa Shear Zone, while the second represents the current tensional stress on the East African Rift. The maximum average spectral magnitude for the first event is determined to be 6.79 at a 4 s period, compared with 6.33 at 4 s for the second event. Other source parameters for the two earthquakes were also estimated. The first event had a seismic moment over four times that of the second. The two events radiated from fault patches having radii of 13.05 and 7.85 km, respectively. The average displacement and stress drop are estimated to be 0.56 m and 1.65 MPa for the first event and 0.43 m and 2.20 MPa for the second. Source parameters that describe the inhomogeneity of the fault are also determined from the magnitude spectra. These additional parameters are the complexity, asperity radius, displacement across the asperity and ambient stress drop. Both events show moderate rupture complexity. Compared with the second event, the first is characterized by relatively higher complexity, a low average stress drop and a high ambient stress. A reasonable explanation for the variations in these parameters is variation in the strength of the seismogenic fault, which provides the relations between the different source parameters. The values of the stress drops and ambient stresses estimated for both events indicate that these earthquakes are of interplate
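The quantities above are linked by the standard circular-crack (Eshelby) relation between stress drop, seismic moment and source radius. The sketch below uses that textbook formula with an assumed order-of-magnitude moment, not the paper's inversion results; only the 13.05 km radius is taken from the abstract.

```python
# Eshelby circular-crack stress drop and Hanks-Kanamori moment magnitude.
# The moment value is my own illustrative input, not from the paper.
import math

def stress_drop_pa(m0_nm, radius_m):
    """Static stress drop of a circular crack: (7/16) * M0 / r^3 (Pa)."""
    return 7.0 / 16.0 * m0_nm / radius_m ** 3

def moment_magnitude(m0_nm):
    """Mw from seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return 2.0 / 3.0 * (math.log10(m0_nm) - 9.1)

m0 = 8.0e18       # N*m, assumed moment of order Mw 6.5
r = 13.05e3       # m, source radius quoted above for the 1990 event
sd_mpa = stress_drop_pa(m0, r) / 1e6   # on the order of a few MPa
```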

  5. Four Examples of Short-Term and Imminent Prediction of Earthquakes

    NASA Astrophysics Data System (ADS)

    Zeng, Zuoxun; Liu, Genshen; Wu, Dabin; Sibgatulin, Victor

    2014-05-01

    We show here four examples of short-term and imminent prediction of earthquakes in China last year: the Nima earthquake (Ms 5.2), the Minxian earthquake (Ms 6.6), the Nantou earthquake (Ms 6.7) and the Dujiangyan earthquake (Ms 4.1). Imminent prediction of the Nima earthquake (Ms 5.2): Based on a comprehensive analysis of the prediction of Victor Sibgatulin using natural electromagnetic pulse anomalies, the prediction of Song Song and Song Kefu using observation of a precursory halo, and his own observation of the locations of degasification of the earth in Naqu, Tibet, the first author predicted an earthquake of around Ms 6 within 10 days in the area of the degasification point (31.5N, 89.0E) at 0:54 on May 8th, 2013. He supplied another degasification point (31N, 86E) for the epicenter prediction at 8:34 of the same day. At 18:54:30 on May 15th, 2013, an earthquake of Ms 5.2 occurred in Nima County, Naqu, China. Imminent prediction of the Minxian earthquake (Ms 6.6): At 7:45 on July 22nd, 2013, an earthquake of magnitude Ms 6.6 occurred at the border between Minxian and Zhangxian of Dingxi City (34.5N, 104.2E), Gansu province. We review the imminent prediction process and basis for this earthquake using the fingerprint method. Nine- or 15-channel anomalous component-time curves can be output from the SW monitor for earthquake precursors. These components include geomagnetism, geoelectricity, crustal stresses, resonance and crustal inclination. When we compress the time axis, the output curves become different geometric images. The precursor images differ for earthquakes in different regions, while alike or similar images correspond to earthquakes in a certain region. According to seven years of observation of the precursor images and their corresponding earthquakes, we usually obtain the fingerprint 6 days before the corresponding earthquake. 
The magnitude prediction requires comparison between the amplitudes of the fingerprints from the same

  6. Giant seismites and megablock uplift in the East African Rift: evidence for Late Pleistocene large magnitude earthquakes.

    PubMed

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M

    2015-01-01

    In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic 'megablock complex' that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m-wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions. PMID:26042601

  7. Giant Seismites and Megablock Uplift in the East African Rift: Evidence for Late Pleistocene Large Magnitude Earthquakes

    PubMed Central

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M.

    2015-01-01

    In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic ‘megablock complex’ that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m-wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions. PMID:26042601

  8. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the times and slips of these events are predicted quite well by the fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
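A toy numerical comparison of the two rules discussed above, under an assumed steady loading rate (all data invented): the fixed-recurrence rule predicts the running mean interval, while the time-predictable rule predicts the preceding event's slip divided by the loading rate. For a regular repeater whose slips are uncorrelated with the intervals, the fixed-recurrence rule wins, echoing the abstract's conclusion.

```python
# Mean absolute prediction error of each rule over an invented repeater.

def prediction_errors(times, slips, loading_rate):
    """Return (fixed-recurrence error, time-predictable error)."""
    fixed_err, tp_err = [], []
    for i in range(2, len(times)):
        intervals = [times[j] - times[j - 1] for j in range(1, i)]
        actual = times[i] - times[i - 1]
        fixed_err.append(abs(actual - sum(intervals) / len(intervals)))
        tp_err.append(abs(actual - slips[i - 1] / loading_rate))
    return sum(fixed_err) / len(fixed_err), sum(tp_err) / len(tp_err)

# Near-constant intervals; slips vary independently of the intervals.
times = [0.0, 2.0, 4.1, 6.0, 8.1, 10.0]   # years
slips = [2.0, 2.6, 1.9, 2.5, 2.1, 2.3]    # cm
fixed, tp = prediction_errors(times, slips, loading_rate=1.0)  # cm/yr
```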

  9. 76 FR 69761 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ....S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey. ACTION: Notice of Meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake... Government. The Council shall advise the Director of the U.S. Geological Survey on proposed...

  10. Estimating the magnitude of prediction uncertainties for the APLE model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  11. Dynamic triggering of low magnitude earthquakes in the Middle American Subduction Zone

    NASA Astrophysics Data System (ADS)

    Escudero, C. R.; Velasco, A. A.

    2010-12-01

    We analyze global and Middle American Subduction Zone (MASZ) seismicity from 1998 to 2008 to quantify transient stress effects at teleseismic distances. We use the Bulletin of the International Seismological Centre Catalog (ISCCD) published by the Incorporated Research Institutions for Seismology (IRIS). To identify MASZ seismicity changes due to distant, large (Mw > 7) earthquakes, we first identify local earthquakes that occurred before and after the mainshocks. We then group the local earthquakes within a cluster radius of between 75 and 200 km. We compute statistics based on characteristics of both the mainshocks and the local earthquake clusters, such as the local cluster-mainshock azimuth, the mainshock focal mechanism, and the position of the local earthquake clusters within the MASZ. Owing to lateral variations of the dip of the subducted oceanic plate, we divide the Mexican subduction zone into four segments. We then apply the Paired Samples Statistical Test (PSST) to the sorted data to identify increases, decreases, or neither in the local seismicity associated with distant large earthquakes. We identify dynamic triggering in all MASZ segments produced by large earthquakes arriving from specific azimuths, as well as a decrease in seismicity in some cases. We find no dependence of seismicity changes on the mainshock focal mechanism.

  12. Statistical Properties of the Immediate Aftershocks of the 15 October 2013 Magnitude 7.1 Earthquake in Bohol, Philippines

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.

    2016-02-01

    The aftershock records of the magnitude 7.1 earthquake that hit the island of Bohol in central Philippines on 15 October 2013 are investigated in the light of previous results for the Philippines using historical earthquakes. Statistics of interevent distances and interevent times between successive aftershocks recorded for the whole month of October 2013 show marked differences from those of historical earthquakes from two Philippine catalogues of varying periods and completeness levels. In particular, the distributions closely follow only the regimes of the historical distributions that were previously attributed to strong spatio-temporal correlations. The results therefore suggest that these correlated regimes, which emerged naturally from the analyses, are strongly dominated by the clustering of aftershock events.

  13. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue an imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake of 1976, which killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save lives in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  14. Geotechnical effects of the 2015 magnitude 7.8 Gorkha, Nepal, earthquake and aftershocks

    USGS Publications Warehouse

    Moss, Robb E S; Thompson, Eric; Kieffer, D Scott; Tiwari, Binod; Hashash, Youssef M A; Acharya, Indra; Adhikari, Basanta; Asimaki, Domniki; Clahan, Kevin B.; Collins, Brian D.; Dahal, Sachindra; Jibson, Randall W.; Khadka, Diwakar; Macdonald, Amy; Madugo, Chris L M; Mason, H Benjamin; Pehlivan, Menzer; Rayamajhi, Deepak; Uprety, Sital

    2015-01-01

    This article summarizes the geotechnical effects of the 25 April 2015 M 7.8 Gorkha, Nepal, earthquake and aftershocks, as documented by a reconnaissance team that undertook a broad engineering and scientific assessment of the damage and collected perishable data for future analysis. Brief descriptions are provided of ground shaking, surface fault rupture, landsliding, soil failure, and infrastructure performance. The goal of this reconnaissance effort, led by Geotechnical Extreme Events Reconnaissance, is to learn from earthquakes and mitigate hazards in future earthquakes.

  15. Advance Prediction of the March 11, 2011 Great East Japan Earthquake: A Missed Opportunity for Disaster Preparedness

    NASA Astrophysics Data System (ADS)

    Davis, C. A.; Keilis-Borok, V. I.; Kossobokov, V. G.; Soloviev, A.

    2012-12-01

    There was a missed opportunity to implement important disaster preparedness measures following an earthquake prediction announced as an alarm in mid-2001. This intermediate-term middle-range prediction initiated a chain of alarms that successfully detected the time, region, and magnitude range of the magnitude 9.0 March 11, 2011 Great East Japan Earthquake. The prediction chains were made using an algorithm called M8 and represent the latest of many predictions tested worldwide for more than 25 years, the results of which show at least a 70% success rate. The detection could have been used to implement measures and improve earthquake preparedness in advance; unfortunately this was not done, in part because of the prediction's limited distribution and the failure to apply existing methods for turning intermediate-term predictions into decisions for action. The resulting earthquake and induced tsunami caused tremendous devastation in northeast Japan. Methods known in advance of the prediction, and further advanced during the prediction timeframe, are presented in a scenario describing how the 2001 prediction might have been used to reduce significant damage, including damage to the Fukushima nuclear power plant, and to show that prudent, cost-effective actions can be taken when the prediction certainty is known but not necessarily high. The purpose of this presentation is to show how prediction information can be used strategically to enhance disaster preparedness and reduce future impacts from the world's largest earthquakes.

  16. Earthquake prediction: the interaction of public policy and science.

    PubMed Central

    Jones, L M

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656

  17. Earthquake prediction: The interaction of public policy and science

    USGS Publications Warehouse

    Jones, L.M.

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.

  18. Earthquake ground-motion prediction equations for eastern North America

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    2006-01-01

    New earthquake ground-motion relations for hard-rock and soil sites in eastern North America (ENA), including estimates of their aleatory uncertainty (variability), have been developed based on a stochastic finite-fault model. The model incorporates new information obtained from ENA seismographic data gathered over the past 10 years, including three-component broadband data that provide new information on ENA source and path effects. Our new prediction equations are similar to the previous ground-motion prediction equations of Atkinson and Boore (1995), which were based on a stochastic point-source model. The main difference is that high-frequency amplitudes (f ≥ 5 Hz) are less than previously predicted (by about a factor of 1.6 within 100 km), because of a slightly lower average stress parameter (140 bars versus 180 bars) and a steeper near-source attenuation. At frequencies less than 5 Hz, the predicted ground motions from the new equations are generally within 25% of those predicted by Atkinson and Boore (1995). The prediction equations agree well with available ENA ground-motion data as evidenced by near-zero average residuals (within a factor of 1.2) for all frequencies, and the lack of any significant residual trends with distance. However, there is a tendency to positive residuals for moderate events at high frequencies in the distance range from 30 to 100 km (by as much as a factor of 2). This indicates epistemic uncertainty in the prediction model. The positive residuals for moderate events at < 100 km could be eliminated by an increased stress parameter, at the cost of producing negative residuals in other magnitude-distance ranges; adjustment factors to the equations are provided that may be used to model this effect.

  19. Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.

    2013-01-01

    Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible origin to generate rare moderate-magnitude VTs at Kīlauea by reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.

  20. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Rizza, M.; Ritz, J.-F.; Braucher, R.; Vassallo, R.; Prentice, C.; Mahan, S.; McGill, S.; Chauvet, A.; Marco, S.; Todbileg, M.; Demberel, S.; Bourles, D.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to East these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans-particularly well preserved in the arid environment of the Gobi region-allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr-1 along the WIB and EIB segments and ~0.5 mm yr-1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with a characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78-7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ~2500-5200 yr for past

  1. Incorporating Love- and Rayleigh-wave magnitudes, unequal earthquake and explosion variance assumptions and interstation complexity for improved event screening

    SciTech Connect

    Anderson, Dale N; Bonner, Jessie L; Stroujkova, Anastasia; Shumway, Robert

    2009-01-01

    Our objective is to improve seismic event screening using the properties of surface waves. We are accomplishing this through (1) the development of a Love-wave magnitude formula that is complementary to the Russell (2006) formula for Rayleigh waves and (2) quantification of differences in complexity and magnitude variance between earthquake- and explosion-generated surface waves. We have applied the M{sub s} (VMAX) analysis (Bonner et al., 2006) using both Love and Rayleigh waves to events in the Middle East and on the Korean Peninsula. For the Middle East dataset, consisting of approximately 100 events, the Love M{sub s} (VMAX) is greater than the Rayleigh M{sub s} (VMAX) estimated at individual stations for the majority of the events and azimuths, with the exception of the measurements for the smaller events from European stations to the northeast. It is unclear whether these smaller events suffer from magnitude bias in the Love waves or whether the paths, which include the Caspian and Mediterranean, have variable attenuation for Love and Rayleigh waves. For the Korean Peninsula, we have estimated Rayleigh- and Love-wave magnitudes for 31 earthquakes and two nuclear explosions, including the 25 May 2009 event. For 25 of the earthquakes, the network-averaged Love-wave magnitude is larger than the Rayleigh-wave estimate. For the 2009 nuclear explosion, the Love-wave M{sub s} (VMAX) was 3.1 while the Rayleigh-wave magnitude was 3.6. We are also exploiting the observation that M{sub s} variances differ significantly between earthquake and explosion populations. We have considered two possible methods for incorporating unequal variances into the discrimination problem and compared the performance of various approaches on a population of 73 western United States earthquakes and 131 Nevada Test Site explosions. The approach proposes replacing the M{sub s} component by M{sub s} + a* {sigma}, where {sigma} denotes the interstation standard deviation obtained from the

  2. The marine-geological fingerprint of the 2011 Magnitude 9 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Strasser, M.; Ikehara, K.; Usami, K.; Kanamatsu, T.; McHugh, C. M.

    2015-12-01

    The 2011 Tohoku-oki earthquake was the first great subduction zone earthquake for which the entire activity was recorded by offshore geophysical, seismological and geodetic instruments and for which direct observation of sediment re-suspension and re-deposition was documented across the entire margin. Furthermore, the resulting tsunami and the subsequent tragic incident at the Fukushima nuclear power station introduced short-lived radionuclides that can be used for tracer experiments in the natural offshore sedimentary systems. Here we present a summary of present knowledge on the 2011 event beds in the offshore environment and integrate data from offshore instruments with sedimentological, geochemical and physical property data from core samples to report various types of event deposits resulting from earthquake-triggered submarine landslides, downslope sediment transport by turbidity currents, surficial sediment remobilization from the agitation and resuspension of unconsolidated surface sediments by the earthquake ground motion, as well as tsunami-induced sediment transport from shallow waters to the deep sea. The rapidly growing dataset from offshore Tohoku further allows discussion of (i) what we can learn from this well-documented event for submarine paleoseismology in general and (ii) the potential of using the geological record of the Japan Trench to reconstruct a long-term history of great subduction zone earthquakes.

  3. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 CAL yr. BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely pre-historic, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near-field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic hazard estimates of the maximum-magnitude background earthquake (earthquake not associated with a known fault) for normal faults in the Intermountain West.

  4. Mwpd: a duration-amplitude procedure for rapid determination of earthquake magnitude and tsunamigenic potential from P waveforms

    NASA Astrophysics Data System (ADS)

    Lomax, Anthony; Michelini, Alberto

    2009-01-01

    We present a duration-amplitude procedure for rapid determination of a moment magnitude, Mwpd, for large earthquakes using P-wave recordings at teleseismic distances. Mwpd can be obtained within 20 min or less after the event origin time as the required data are currently available in near real time. The procedure determines apparent source durations, T0, from high-frequency, P-wave records, and estimates moments through integration of broad-band displacement waveforms over the interval tP to tP + T0, where tP is the P-arrival time. We apply the duration-amplitude methodology to 79 recent, large earthquakes (global centroid-moment-tensor magnitude, MCMTw, 6.6-9.3) with diverse source types. The results show that a scaling of the moment estimates for interplate thrust and possibly tsunami earthquakes is necessary to best match MCMTw. With this scaling, Mwpd matches MCMTw typically within ±0.2 magnitude units, with a standard deviation of σ = 0.11, equaling or outperforming other approaches to rapid magnitude determination. Furthermore, Mwpd does not exhibit saturation; that is, for the largest events, Mwpd does not systematically underestimate MCMTw. The obtained durations and duration-amplitude moments allow rapid estimation of an energy-to-moment parameter Θ* used for identification of tsunami earthquakes. Our results show that Θ* ≤ -5.7 is an appropriate cut-off for this identification, but also show that neither Θ* nor Mw is a good indicator for tsunamigenic events in general. For these events, we find that a reliable indicator is simply that the duration T0 is greater than about 50 s. The explicit use of the source duration for integration of displacement seismograms, the moment scaling and other characteristics of the duration-amplitude methodology make it an extension of the widely used, Mwp, rapid magnitude procedure. The need for a moment scaling for interplate thrust and possibly tsunami earthquakes may have important implications for the source
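    The two screening criteria quoted in this abstract (Θ* ≤ -5.7 and apparent source duration T0 > ~50 s) can be sketched as a simple classifier. This is a minimal illustration, not the authors' implementation: it assumes the standard energy-to-moment definition Θ = log10(E/M0), and the hypothetical function names and example numbers are ours.

```python
import math

# Cut-offs quoted in the abstract (used here for illustration only)
THETA_CUTOFF = -5.7       # energy-to-moment parameter threshold
DURATION_CUTOFF_S = 50.0  # apparent source duration threshold, seconds

def theta_parameter(radiated_energy_j, seismic_moment_nm):
    """Energy-to-moment parameter, assumed to be Theta = log10(E / M0)."""
    return math.log10(radiated_energy_j / seismic_moment_nm)

def screen_event(radiated_energy_j, seismic_moment_nm, duration_s):
    """Return the two indicators discussed in the abstract: a deficient
    Theta (classic 'tsunami earthquake' signature) and a long apparent
    source duration (the indicator the study found more reliable)."""
    theta = theta_parameter(radiated_energy_j, seismic_moment_nm)
    return {
        "theta": theta,
        "slow_theta": theta <= THETA_CUTOFF,
        "long_duration": duration_s > DURATION_CUTOFF_S,
    }

# Hypothetical slow, long-duration event: E = 1e13 J, M0 = 1e19 N*m, T0 = 90 s
flags = screen_event(1.0e13, 1.0e19, 90.0)  # theta = -6.0; both flags True
```

    Note that, per the abstract, the duration flag alone is the better tsunamigenic indicator; Θ* identifies only the "slow" tsunami-earthquake subclass.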

  5. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse

    Working Group on Northern California Earthquake Potential

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflict in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  6. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of model performance, and began the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. In the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models performed well in magnitude forecasting; on the other hand, the observed spatial distribution is poorly matched by most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region, and the testing centre is improving its evaluation system so that 1-day forecasting and testing results can be completed within one day. The special issue of 1st part titled Earthquake Forecast

  7. A Magnitude 7.1 Earthquake in the Tacoma Fault Zone-A Plausible Scenario for the Southern Puget Sound Region, Washington

    USGS Publications Warehouse

    Gomberg, Joan; Sherrod, Brian; Weaver, Craig; Frankel, Art

    2010-01-01

    The U.S. Geological Survey and cooperating scientists have recently assessed the effects of a magnitude 7.1 earthquake on the Tacoma Fault Zone in Pierce County, Washington. A quake of comparable magnitude struck the southern Puget Sound region about 1,100 years ago, and similar earthquakes are almost certain to occur in the future. The region is now home to hundreds of thousands of people, who would be at risk from the shaking, liquefaction, landsliding, and tsunamis caused by such an earthquake. The modeled effects of this scenario earthquake will help emergency planners and residents of the region prepare for future quakes.

  8. New models for frequency content prediction of earthquake records based on Iranian ground-motion data

    NASA Astrophysics Data System (ADS)

    Yaghmaei-Sabegh, Saman

    2015-10-01

    This paper presents the development of new, simple empirical models for predicting the frequency content of ground-motion records, to resolve the limitations on the usable magnitude range of previous studies. Three period values are used in the analysis to describe the frequency content of earthquake ground motions: the average spectral period (T avg), the mean period (T m), and the smoothed spectral predominant period (T 0). The proposed models predict these scalar indicators as functions of magnitude, closest site-to-source distance, and local site condition. Three site classes (rock, stiff soil, and soft soil) have been considered in the analysis. The results of the proposed relationships have been compared with those of other published models. It has been found that the resulting regression equations can be used to predict scalar frequency-content estimators over a wide range of magnitudes, including magnitudes below 5.5.
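    The abstract does not give formulas for its frequency-content indicators, but the mean period T m is commonly computed with the Rathje et al. (1998) definition, T m = Σ(Ci²/fi) / ΣCi², using Fourier amplitudes Ci at frequencies fi between roughly 0.25 and 20 Hz. A minimal sketch under that assumption (the definition and band limits are our assumptions, not taken from this paper):

```python
import numpy as np

def mean_period(accel, dt, f_min=0.25, f_max=20.0):
    """Mean period Tm of an acceleration time series, assuming the common
    Rathje et al. (1998) definition: Tm = sum(Ci^2 / fi) / sum(Ci^2),
    with Ci the Fourier amplitudes at frequencies fi in [f_min, f_max] Hz."""
    n = len(accel)
    spectrum = np.abs(np.fft.rfft(accel))       # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=dt)            # matching frequency bins, Hz
    band = (freqs >= f_min) & (freqs <= f_max)  # restrict to the usable band
    c2 = spectrum[band] ** 2
    return float(np.sum(c2 / freqs[band]) / np.sum(c2))

# Sanity check: a pure 1 Hz sinusoid sampled at 100 Hz gives Tm close to 1 s.
t = np.arange(0, 20, 0.01)
tm = mean_period(np.sin(2 * np.pi * 1.0 * t), dt=0.01)
```

    Prediction equations of the kind described above would then regress such indicators against magnitude, distance, and site class.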

  9. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Prentice, Carol S.; Rizza, M.; Ritz, J.F.; Baucher, R.; Vassallo, R.; Mahan, S.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to East these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans—particularly well preserved in the arid environment of the Gobi region—allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ∼1 mm yr–1 along the WIB and EIB segments and ∼0.5 mm yr–1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with a characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78–7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ∼2500

  10. Strong ground motion prediction for southwestern China from small earthquake records

    NASA Astrophysics Data System (ADS)

    Tao, Z. R.; Tao, X. X.; Cui, A. P.

    2015-09-01

    For regions lacking strong ground motion records, a method is developed to predict strong ground motion from small earthquake records from local broadband digital earthquake networks. The Sichuan and Yunnan regions, located in southwestern China, are selected as targets. Five regional source and crustal medium parameters are inverted by a micro-Genetic Algorithm. These parameters are adopted to predict strong ground motion for moment magnitude (Mw) 5.0, 6.0 and 7.0. Strong ground motion data are compared with the results; most of the predicted curves pass through the cluster of data points, except in the case of Mw 7.0 in the Sichuan region, which shows an obviously slow attenuation. For further application, the result is adopted in probabilistic seismic hazard assessment (PSHA) and in near-field strong ground motion synthesis of the Wenchuan Earthquake.

  11. Fuzzy Discrimination Analysis Method for Earthquake Energy K-Class Estimation with respect to Local Magnitude Scale

    NASA Astrophysics Data System (ADS)

    Mumladze, T.; Gachechiladze, J.

    2014-12-01

    The purpose of the present study is to establish a relation between earthquake energy K-class (the relative energy characteristic), defined as the logarithm of seismic-wave energy E in joules obtained from analog station data, and local (Richter) magnitude ML obtained from digital seismograms. Because these data contain uncertainties, the tools of fuzzy discrimination analysis are suggested for subjective estimates. Application of fuzzy analysis methods is an innovative approach to the complicated problem of constructing a uniform energy scale through the whole earthquake catalogue; it also avoids many of the data collection problems associated with probabilistic approaches, and it can handle incomplete information, partial inconsistency and fuzzy descriptions of data in a natural way. Another important task is to obtain a frequency-magnitude relation based on the K parameter, calculate the Gutenberg-Richter parameters (a, b) and examine seismic activity in Georgia. Earthquake data files are used for the periods 1985-1990 and 2004-2009, for the area φ = 41°-43.5°, λ = 41°-47°.
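    The Gutenberg-Richter parameters (a, b) mentioned above describe the frequency-magnitude relation log10 N(≥M) = a − b·M. A minimal sketch of estimating them from a catalogue, using the standard Aki (1965) maximum-likelihood b-value with Utsu's binning correction (this generic method is an assumption; the paper's fuzzy-analysis approach is not reproduced here):

```python
import math

def gutenberg_richter_b(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's correction for
    magnitudes binned at interval dm:
        b = log10(e) / (mean(M) - (Mc - dm/2)).
    Only events at or above the completeness magnitude Mc are used."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

def gutenberg_richter_a(magnitudes, mc, b):
    """a-value from the total count N at or above Mc: log10 N = a - b*Mc."""
    n = sum(1 for x in magnitudes if x >= mc)
    return math.log10(n) + b * mc

# Toy catalogue (hypothetical magnitudes), completeness Mc = 4.0
mags = [4.0, 4.2, 4.4, 4.6, 4.8]
b = gutenberg_richter_b(mags, mc=4.0)  # mean M = 4.4 -> b ~ 0.965
a = gutenberg_richter_a(mags, mc=4.0, b=b)
```

    Converting K-class to an ML-equivalent first, via the relation the study seeks, would let both analog- and digital-era events enter one such frequency-magnitude fit.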

  12. Source Parameters of Large Magnitude Subduction Zone Earthquakes Along Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Fannon, M. L.; Bilek, S. L.

    2014-12-01

    Subduction zones are host to temporally and spatially varying seismogenic activity including megathrust earthquakes, slow slip events (SSE), nonvolcanic tremor (NVT), and ultra-slow velocity layers (USL). We explore these variations by determining source parameters for large earthquakes (M > 5.5) along the Oaxaca segment of the Mexico subduction zone, an area that encompasses the wide range of activity noted above. We use waveform data for 36 earthquakes that occurred between January 1, 1990 and June 1, 2014, obtained from the IRIS DMC, generate synthetic Green's functions for the available stations, and deconvolve these from the observed records to determine a source time function for each event. From these source time functions, we measure rupture durations and scale them by the cube root of seismic moment to calculate a normalized duration for each event. Within our dataset, four events located updip from the SSE, USL, and NVT areas have longer rupture durations than the other events in this analysis. Two of these four events, along with one other event, are located within the SSE and NVT areas. The results in this study show that large earthquakes just updip from SSE and NVT have slower rupture characteristics than other events along the subduction zone not adjacent to SSE, USL, and NVT zones. Based on our results, we suggest a transitional zone for the seismic behavior rather than a distinct change at a particular depth. This study will aid in understanding seismogenic behavior along subduction zones and the rupture characteristics of earthquakes near areas of slow slip processes.
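    The cube-root scaling used to normalize rupture durations follows from duration scaling roughly with the cube root of seismic moment. A minimal sketch (the moment-magnitude relation is standard Hanks & Kanamori, 1979; the reference magnitude and function names are illustrative assumptions, not the paper's):

    ```python
    def moment_from_mw(mw):
        """Seismic moment M0 in N*m from moment magnitude
        (Hanks & Kanamori, 1979): Mw = (2/3)(log10 M0 - 9.1)."""
        return 10.0 ** (1.5 * mw + 9.1)

    def normalized_duration(duration_s, mw, ref_mw=6.0):
        """Scale a rupture duration by the cube root of seismic moment to a
        common reference magnitude, so durations of different-sized events
        can be compared directly."""
        ratio = moment_from_mw(ref_mw) / moment_from_mw(mw)
        return duration_s * ratio ** (1.0 / 3.0)
    ```

    With this normalization, an "ordinary" event and a slow-rupturing event of the same magnitude separate cleanly in normalized duration.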

  13. A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.

    2015-12-01

    Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is twofold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated for observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
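    A trilinear geometric-spreading function of the kind described is piecewise linear in log-log space, with hinge distances bracketing the Moho-bounce range. The sketch below uses placeholder hinge distances and decay rates purely for illustration; they are not the calibrated Alberta values:

    ```python
    import math

    def trilinear_spreading(r, r1=70.0, r2=140.0, b1=-1.3, b2=0.2, b3=-0.5):
        """Trilinear geometric spreading in log10 amplitude units:
        direct-wave decay out to r1 km, a Moho-bounce recovery between
        r1 and r2, then surface-wave-like decay beyond r2. Hinge distances
        and rates here are illustrative placeholders."""
        if r <= r1:
            return b1 * math.log10(r)
        if r <= r2:
            return b1 * math.log10(r1) + b2 * math.log10(r / r1)
        return (b1 * math.log10(r1) + b2 * math.log10(r2 / r1)
                + b3 * math.log10(r / r2))
    ```

    The function is continuous at both hinges by construction, which is what regression for the anelastic and site terms in the second stage relies on.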

  14. Earthquake prediction rumors can help in building earthquake awareness: the case of May the 11th 2011 in Rome (Italy)

    NASA Astrophysics Data System (ADS)

    Amato, A.; Arcoraci, L.; Casarotti, E.; Cultrera, G.; Di Stefano, R.; Margheriti, L.; Nostro, C.; Selvaggi, G.; May-11 Team

    2012-04-01

    Banner headlines in an Italian newspaper read on May 11, 2011: "Absence boom in offices: the urban legend in Rome becomes psychosis". This was the effect of a large-magnitude earthquake prediction for Rome on May 11, 2011. This prediction was never officially released, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this increased the credibility of the earthquake prediction. Given the echo of this prediction, INGV decided to organize an Open Day at its headquarters in Rome on May 11 (the same day the earthquake was predicted to happen) to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded two days earlier by a press conference attended by about 40 journalists from newspapers, local and national TV, press agencies and web news magazines. Hundreds of articles appeared in the following two days, advertising the May 11 Open Day. On May 11 the INGV headquarters was peacefully invaded by over 3,000 visitors from 9am to 9pm: families, students, civil protection groups and many journalists. The program included conferences on a wide variety of subjects (from the social impact of rumors to seismic risk reduction) and distribution of books and brochures, in addition to several activities: meetings with INGV researchers to discuss scientific issues, visits to the seismic monitoring room (staffed 24/7 all year), and guided tours through interactive exhibitions on earthquakes and the Earth's deep structure. During the same day, thirteen new videos were also posted on our youtube/INGVterremoti channel to explain the earthquake process and hazard, and to provide periodic real-time updates on seismicity in Italy. On May 11 no large earthquake happened in Italy. The initiative, built up in a few weeks, received a very large response.

  15. Spectra and magnitudes of T-waves from the 1993 earthquake swarm on the Juan de Fuca Ridge

    NASA Astrophysics Data System (ADS)

    Schreiner, Anthony E.; Fox, Christopher G.; Dziak, Robert P.

    1995-01-01

    A swarm of earthquakes on the crest of the Juan de Fuca Ridge was detected in June and July 1993 by a network of hydrophones. The activity migrated 60 km along the crest, suggesting a lateral dike injection and the possibility of a volcanic eruption. Subsequent geologic and oceanographic investigations confirmed that an eruption had taken place. Examination of the individual acoustic arrivals shows changes in the character of the signal that are consistent with an injection of magma. First, a reduction in the rise time of the wave packet and a proportional increase in high-frequency energy were observed, interpreted to result from a shoaling of the earthquake source region. Second, the source magnitudes were largest at the onset of the swarm and became smaller over time, also consistent with shoaling of the dike. The appearance of the T-wave arrivals changed significantly 5 days after the beginning of the swarm, potentially indicating the onset of a surface eruption.

  16. Imaging of the Rupture Zone of the Magnitude 6.2 Karonga Earthquake of 2009 using Electrical Resistivity Surveys

    NASA Astrophysics Data System (ADS)

    Clappe, B.; Hull, C. D.; Dawson, S.; Johnson, T.; Laó-Dávila, D. A.; Abdelsalam, M. G.; Chindandali, P. R. N.; Nyalugwe, V.; Atekwana, E. A.; Salima, J.

    2015-12-01

    The 2009 Karonga earthquakes occurred in an area where active faults had not previously been known to exist. Over 5000 buildings were destroyed in the area and at least 4 people lost their lives as a direct result of the December 19th magnitude 6.2 earthquake. The earthquake swarms occurred in the hanging wall of the main Livingstone border fault along segmented, west-dipping faults that are synthetic to the Livingstone fault. The faults have a general trend of 290-350 degrees. Electrical resistivity surveys were conducted to investigate the nature of the known rupture and seismogenic zones that resulted from the 2009 earthquakes in the Karonga, Malawi area. The goal of this study was to produce high-resolution images below the epicenter and nearby areas of liquefaction to determine changes in conductivity/resistivity signatures in the subsurface. An IRIS Syscal Pro was used to make dipole-dipole resistivity measurements beneath farmland at 6 locations. Each transect was 710 meters long and had an electrode spacing of 10 meters. RES2DINV software was used to create 2-D inversion images of the rupture and seismogenic zones. We were able to observe three distinct geoelectrical layers to the north of the rupture zone and two to the south, with the discontinuity between the two marked by the location of the surface rupture. The rupture zone is characterized by an ~80-meter-wide, ~5-m-thick zone of enhanced conductivity underlain by a more resistive, west-dipping layer. We interpret this to be the result of fine-grained sands and silts brought from depth to near the surface by shearing along the fault rupture or by liquefaction. Electrical resistivity surveys are valuable, yet under-utilized, tools for imaging the near-surface effects of earthquakes.

  17. Discrimination of DPRK M5.1 February 12th, 2013 Earthquake as Nuclear Test Using Analysis of Magnitude, Rupture Duration and Ratio of Seismic Energy and Moment

    NASA Astrophysics Data System (ADS)

    Salomo Sianipar, Dimas; Subakti, Hendri; Pribadi, Sugeng

    2015-04-01

    On the morning of February 12th, 2013, at 02:57 UTC, an earthquake occurred with its epicenter in North Korea, near the Sungjibaegam Mountains. Monitoring stations of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) and other seismic networks detected this shallow seismic event. Analysis of the seismograms recorded after this event can discriminate between a natural earthquake and an explosion. Zhao et al. (2014) successfully discriminated the 2013 North Korean nuclear test from ordinary earthquakes based on network P/S spectral ratios, using broadband regional seismic data recorded in China, South Korea and Japan; P/S-type spectral ratios are powerful discriminants for separating explosions from earthquakes (Zhao et al., 2014). Pribadi et al. (2014) characterized 27 earthquake-generated tsunamis (tsunamigenic earthquakes or tsunami earthquakes) from 1991 to 2012 in Indonesia using W-phase inversion analysis, the ratio between seismic energy (E) and seismic moment (Mo), the moment magnitude (Mw), the rupture duration (To), and the distance of the hypocenter to the trench. We apply some of these methods to characterize the nuclear-test event, discriminating the DPRK M5.1 February 12th, 2013 earthquake from natural earthquakes using analysis of the magnitudes mb, Ms and Mw, the ratio of seismic energy to seismic moment, and the rupture duration. We used waveform data for seismicity within a radius of 5 degrees of the DPRK M5.1 February 12th, 2013 epicenter at 41.29, 129.07 (Zhang and Wen, 2013), from 2006 to 2014, with magnitude M ≥ 4.0. We conclude that this earthquake was a shallow seismic event with explosion characteristics and can be discriminated from natural or tectonic earthquakes. Keywords: North Korean nuclear test, magnitudes mb, Ms, Mw, ratio between seismic energy and moment, rupture duration
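    One common form of the energy-to-moment ratio used in this kind of discrimination is the slowness parameter Θ = log10(E/M0) of Newman & Okal (1998). The sketch below shows only this single ratio; the function name is illustrative and the thresholds used by the authors are not reproduced here:

    ```python
    import math

    def energy_moment_ratio(energy_j, moment_nm):
        """Slowness parameter Theta = log10(E/M0), with radiated energy E
        and seismic moment M0 in consistent SI units (Newman & Okal, 1998).
        Ordinary shallow earthquakes cluster near Theta ~ -4.9; markedly
        lower values flag slow, tsunami-earthquake-like sources."""
        return math.log10(energy_j / moment_nm)
    ```

    For example, an event with E = 10^12 J and M0 = 10^17 N·m gives Θ = -5.0, close to the ordinary-earthquake population.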

  18. Risk and return: evaluating Reverse Tracing of Precursors earthquake predictions

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2010-09-01

    In 2003, the Reverse Tracing of Precursors (RTP) algorithm attracted the attention of seismologists and international news agencies when researchers claimed two successful predictions of large earthquakes. These researchers had begun applying RTP to seismicity in Japan, California, the eastern Mediterranean and Italy; they have since applied it to seismicity in the northern Pacific, Oregon and Nevada. RTP is a pattern recognition algorithm that uses earthquake catalogue data to declare alarms, and these alarms indicate that RTP expects a moderate to large earthquake in the following months. The spatial extent of alarms is highly variable and each alarm typically lasts 9 months, although the algorithm may extend alarms in time and space. We examined the record of alarms and outcomes since the prospective application of RTP began, and in this paper we report on the performance of RTP to date. To analyse these predictions, we used a recently developed approach based on a gambling score, and we used a simple reference model to estimate the prior probability of target earthquakes for each alarm. Formally, we believe that RTP investigators did not rigorously specify the first two `successful' predictions in advance of the relevant earthquakes; because this issue is contentious, we consider analyses with and without these alarms. When we included contentious alarms, RTP predictions demonstrate statistically significant skill. Under a stricter interpretation, the predictions are marginally unsuccessful.

  19. Probability of inducing given-magnitude earthquakes by perturbing finite volumes of rocks

    NASA Astrophysics Data System (ADS)

    Shapiro, Serge A.; Krüger, Oliver S.; Dinske, Carsten

    2013-07-01

    Fluid-induced seismicity results from an activation of finite rock volumes. The finiteness of perturbed volumes influences frequency-magnitude statistics. Previously we observed that induced large-magnitude events at geothermal and hydrocarbon reservoirs are frequently underrepresented in comparison with the Gutenberg-Richter law. This is an indication that the events are more probable on rupture surfaces contained within the stimulated volume. Here we theoretically and numerically analyze this effect. We consider different possible scenarios of event triggering: rupture surfaces located completely within or intersecting only the stimulated volume. We approximate the stimulated volume by an ellipsoid or cuboid and derive the statistics of induced events from the statistics of random thin flat discs modeling rupture surfaces. We derive lower and upper bounds of the probability to induce a given-magnitude event. The bounds depend strongly on the minimum principal axis of the stimulated volume. We compare the bounds with data on seismicity induced by fluid injections in boreholes. Fitting the bounds to the frequency-magnitude distribution provides estimates of a largest expected induced magnitude and a characteristic stress drop, in addition to improved estimates of the Gutenberg-Richter a and b parameters. The observed frequency-magnitude curves seem to follow mainly the lower bound. However, in some case studies there are individual large-magnitude events clearly deviating from this statistic. We propose that such events can be interpreted as triggered ones, in contrast to the absolute majority of the induced events following the lower bound.

  20. Evaluation of the statistical evidence for Characteristic Earthquakes in the frequency-magnitude distributions of Sumatra and other subduction zone regions

    NASA Astrophysics Data System (ADS)

    Naylor, M.; Main, I. G.; Greenhough, J.; Bell, A. F.; McCloskey, J.

    2009-04-01

    The Sumatran Boxing Day earthquake and subsequent large events provide an opportunity to re-evaluate the statistical evidence for characteristic earthquake events in frequency-magnitude distributions. Our aims are to (i) improve intuition regarding the properties of samples drawn from power laws, (ii) illustrate using random samples how appropriate Poisson confidence intervals can both aid the eye and provide an appropriate statistical evaluation of data drawn from power-law distributions, and (iii) apply these confidence intervals to test for evidence of characteristic earthquakes in subduction-zone frequency-magnitude distributions. We find no need for a characteristic model to describe frequency magnitude distributions in any of the investigated subduction zones, including Sumatra, due to an emergent skew in residuals of power law count data at high magnitudes combined with a sample bias for examining large earthquakes as candidate characteristic events.
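    Poisson confidence intervals of the kind used to "aid the eye" on cumulative frequency-magnitude plots can be sketched as follows. This uses a variance-stabilizing square-root (Anscombe) approximation rather than an exact interval, and the function names are illustrative, not the authors':

    ```python
    import math

    def poisson_ci(n, z=1.96):
        """Approximate two-sided ~95% Poisson interval for an observed
        count n, via the Anscombe square-root transform. Adequate for
        drawing confidence bands on Gutenberg-Richter count plots;
        prefer an exact interval for very small counts."""
        lo = max(0.0, (math.sqrt(n + 0.375) - z / 2.0) ** 2 - 0.375)
        hi = (math.sqrt(n + 0.375) + z / 2.0) ** 2 - 0.375
        return lo, hi

    def gr_expected_count(a, b, m):
        """Expected cumulative count N(>=m) under log10 N = a - b*m."""
        return 10.0 ** (a - b * m)
    ```

    A candidate "characteristic" event is then only noteworthy if its observed count falls outside the interval around the power-law expectation, which is exactly the skew-at-high-magnitude issue the abstract describes.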

  1. Predicting earthquakes by analyzing accelerating precursory seismic activity

    USGS Publications Warehouse

    Varnes, D.J.

    1989-01-01

    During 11 sequences of earthquakes that in retrospect can be classed as foreshocks, the accelerating rate at which seismic moment is released follows, at least in part, a simple equation. This equation (1) is dΣ/dt = C/(tf − t)^n, where Σ(t) is the cumulative sum until time t of the square roots of seismic moments of individual foreshocks computed from reported magnitudes; C and n are constants; and tf is a limiting time at which the rate of seismic moment accumulation becomes infinite. The possible time of a major foreshock or main shock, tf, is found by the best fit of equation (1), or its integral, to step-like plots of Σ versus time, using successive estimates of tf in linearized regressions until the maximum coefficient of determination, r², is obtained. Analyzed examples include sequences preceding earthquakes at Cremasta, Greece, 2/5/66; Haicheng, China, 2/4/75; Oaxaca, Mexico, 11/29/78; Petatlan, Mexico, 3/14/79; and Central Chile, 3/3/85. In 29 estimates of main-shock time made as the sequences developed, the errors in 20 were less than one half, and in 9 less than one tenth, of the time remaining between the time of the last data used and the main shock. Some precursory sequences, or parts of them, yield no solution. Two sequences appear to include in their first parts the aftershocks of a previous event; plots using the integral of equation (1) show that the sequences are easily separable into aftershock and foreshock segments. Synthetic seismic sequences of shocks at equal time intervals were constructed to follow equation (1), using four values of n. In each series the resulting distribution of magnitudes closely follows the linear Gutenberg-Richter relation log N = a − bM, and the product of n and b for each series is the same constant. In various forms and for decades, equation (1) has been used successfully to predict failure times of stressed metals and ceramics, landslides in soil and rock slopes, and volcanic eruptions.
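    The fitting procedure described (successive estimates of tf in linearized regressions until r² is maximized) can be sketched as a grid search. This is a minimal illustration assuming the rate form dΣ/dt = C/(tf − t)^n; the discretization into interval rates and all names are illustrative choices, not Varnes' exact implementation:

    ```python
    import math

    def _r_squared(xs, ys):
        """Coefficient of determination for a simple linear regression."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return (sxy * sxy) / (sxx * syy) if sxx > 0 and syy > 0 else 0.0

    def fit_tf(times, roots, tf_grid):
        """Grid-search the limiting time tf: for each candidate, regress the
        log of the interval rate of sqrt-moment release on log(tf - t),
        which linearizes dS/dt = C / (tf - t)**n, and keep the candidate
        with the largest r^2. Returns (best_tf, best_r2)."""
        best = (None, -1.0)
        for tf in tf_grid:
            if tf <= times[-1]:
                continue  # tf must lie beyond the last observation
            xs, ys = [], []
            for i in range(1, len(times)):
                dt = times[i] - times[i - 1]
                if dt <= 0 or roots[i] <= 0:
                    continue
                t_mid = 0.5 * (times[i] + times[i - 1])
                xs.append(math.log(tf - t_mid))
                ys.append(math.log(roots[i] / dt))
            r2 = _r_squared(xs, ys)
            if r2 > best[1]:
                best = (tf, r2)
        return best
    ```

    On data generated exactly from the rate equation, the r² surface peaks sharply at the true tf, which is why the linearized regression converges well for clean accelerating sequences.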

  2. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed Central

    Aki, K

    1996-01-01

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface, support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity at the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  3. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed

    Aki, K

    1996-04-30

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface, support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity at the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  4. Kinematic earthquake source inversion and tsunami runup prediction with regional geophysical data

    NASA Astrophysics Data System (ADS)

    Melgar, D.; Bock, Y.

    2015-05-01

    Rapid near-source earthquake source modeling relying only on strong motion data is limited by instrumental offsets and magnitude saturation, adversely affecting subsequent tsunami prediction. Seismogeodetic displacement and velocity waveforms estimated from an optimal combination of high-rate GPS and strong motion data overcome these limitations. Supplementing land-based data with offshore wave measurements by seafloor pressure sensors and GPS-equipped buoys can further improve the image of the earthquake source and prediction of tsunami extent, inundation, and runup. We present a kinematic source model obtained from a retrospective real-time analysis of a heterogeneous data set for the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake. Our model is consistent with conceptual models of subduction zones, exhibiting depth dependent behavior that is quantified through frequency domain analysis of slip rate functions. The stress drop distribution is found to be significantly more correlated with aftershock locations and mechanism types when off-shore data are included. The kinematic model parameters are then used as initial conditions in a fully nonlinear tsunami propagation analysis. Notably, we include the horizontal advection of steeply sloping bathymetric features. Comparison with post-event on-land survey measurements demonstrates that the tsunami's inundation and runup are predicted with considerable accuracy, only limited in scale by the resolution of available topography and bathymetry. We conclude that it is possible to produce credible and rapid, kinematic source models and tsunami predictions within minutes of earthquake onset time for near-source coastal regions most susceptible to loss of life and damage to critical infrastructure, regardless of earthquake magnitude.

  5. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes.

    PubMed

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

    We propose a version of the pure temporal epidemic type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. Instead, the magnitude of the triggered shocks is assumed to be probabilistically dependent on that of their respective mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis made on some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016), 10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani, and used there for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events, we must obtain again the Gutenberg-Richter law. This ensures the validity of this law at any event's generation when ignoring past seismicity. The ETAS model with correlated magnitudes is then theoretically analyzed here. In particular, we use the tool of the probability generating function and the Palm theory in order to derive an approximation of the probability of zero events in a small time interval, and to interpret the results in terms of the interevent time between consecutive shocks, the latter being a very useful random variable in the assessment of seismic hazard. PMID:27176281

  6. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes

    NASA Astrophysics Data System (ADS)

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

    We propose a version of the pure temporal epidemic type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. Instead, the magnitude of the triggered shocks is assumed to be probabilistically dependent on that of their respective mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis made on some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016), 10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani, and used there for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events, we must obtain again the Gutenberg-Richter law. This ensures the validity of this law at any event's generation when ignoring past seismicity. The ETAS model with correlated magnitudes is then theoretically analyzed here. In particular, we use the tool of the probability generating function and the Palm theory in order to derive an approximation of the probability of zero events in a small time interval, and to interpret the results in terms of the interevent time between consecutive shocks, the latter being a very useful random variable in the assessment of seismic hazard.
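    The branching structure underlying ETAS can be made concrete with a minimal temporal simulation. This sketch implements the *standard* model with i.i.d. Gutenberg-Richter magnitudes (the correlated-magnitude variant proposed above would change only how a triggered magnitude is drawn from its mother's); all parameter values are hypothetical and chosen to keep the cascade subcritical:

    ```python
    import math
    import random

    def _poisson(lam, rng):
        """Knuth's Poisson sampler (adequate for the small means used here)."""
        limit, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= limit:
                return k
            k += 1

    def simulate_etas(t_end, mu=0.2, k=0.1, alpha=1.0, c=0.01, p=1.3,
                      beta=math.log(10), m0=3.0, seed=0):
        """Minimal temporal ETAS cascade: a stationary Poisson background of
        rate mu, expected offspring k*exp(alpha*(m - m0)) per parent,
        offspring delays drawn from the normalized Omori law
        (p-1)*c**(p-1)/(c+t)**p, and i.i.d. Gutenberg-Richter magnitudes.
        Returns a time-sorted list of (time, magnitude) pairs."""
        rng = random.Random(seed)
        events, queue, t = [], [], 0.0
        while True:  # background events
            t += rng.expovariate(mu)
            if t > t_end:
                break
            ev = (t, m0 + rng.expovariate(beta))
            events.append(ev)
            queue.append(ev)
        while queue:  # process the triggering cascade
            t_par, m_par = queue.pop()
            for _ in range(_poisson(k * math.exp(alpha * (m_par - m0)), rng)):
                u = rng.random()  # inverse-CDF sample of the Omori density
                t_kid = t_par + c * ((1.0 - u) ** (-1.0 / (p - 1.0)) - 1.0)
                if t_kid <= t_end:
                    ev = (t_kid, m0 + rng.expovariate(beta))
                    events.append(ev)
                    queue.append(ev)
        return sorted(events)
    ```

    With these parameters the mean branching ratio is k·β/(β − α) ≈ 0.18 < 1, so every cascade terminates; interevent times from such a simulation are exactly the random variable the abstract's zero-event probability approximates.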

  7. Sun-earth environment study to understand earthquake prediction

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.

    2007-05-01

    Earthquake prediction may be possible by monitoring active sunspots before they direct energy toward the Earth. Earth is a restless planet, and the restlessness occasionally turns deadly. Of all natural hazards, earthquakes are the most feared. For centuries scientists working in seismically active regions have noted premonitory signals. Changes in the thermosphere, ionosphere, atmosphere and hydrosphere are noted before changes in the geosphere. Historical records tell of changes in the water level in wells, of strange weather, of ground-hugging fog, and of unusual behaviour of animals (due to changes in the magnetic field of the earth) that seem to feel the approach of a major earthquake. With the advent of modern science and technology, the understanding of these pre-earthquake signals has become strong enough to develop a methodology of earthquake prediction. A correlation with earth-directed coronal mass ejections (CMEs) from active sunspots has been developed as an earthquake precursor. Occasionally, changes in the local magnetic field and planetary indices (Kp values) in the lower atmosphere are accompanied by the formation of haze and a reduction of moisture in the air. Large patches, often tens to hundreds of thousands of square kilometres in size, are seen in night-time infrared satellite images where the land-surface temperature seems to fluctuate rapidly. Perturbations in the ionosphere at 90-120 km altitude have been observed before the occurrence of earthquakes. These changes affect the transmission of radio waves, and radio blackouts have been observed due to CMEs. Another heliophysical parameter, the electron flux (Eflux), has been monitored before the occurrence of earthquakes. More than a hundred case studies show that before an earthquake the atmospheric temperature increases and then suddenly drops. These changes are being monitored using the Solar and Heliospheric Observatory (SOHO).

  8. Shaky grounds of earthquake hazard assessment, forecasting, and prediction

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2012-12-01

    The quality of the fit of a trivial or, conversely, delicately designed model to observed natural phenomena is the fundamental pillar of any forecasting, including seismic hazard assessment, earthquake forecasting, and prediction. Using precise mathematical and logical systems outside their range of applicability can lead to scientifically groundless conclusions, whose unwise application can be extremely dangerous in assessing expected risk and losses. Are the relationships commonly used to assess seismic hazard valid enough to qualify as useful laws describing earthquake sequences? Seismic evidence accumulated to date demonstrates clearly that most of the empirical statistical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when tests of statistical significance are applied. The time span of physically reliable seismic history is still a small portion of a rupture recurrence cycle at an earthquake-prone site. Seismic events, including mega-earthquakes, are clustered, displaying behaviors that are far from independent. Their distribution in space is possibly fractal and definitely far from uniform, even in a single fault zone. Evidently, such a situation complicates the design of reliable methodologies for earthquake hazard assessment, as well as the search for and definition of precursory behaviors to be used for forecast/prediction purposes. The situation is not hopeless, thanks to available geological evidence and deterministic pattern-recognition approaches, specifically when intending to predict the predictable, rather than the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized systems of blocks and faults has already led to methodologies of neo-deterministic seismic hazard analysis and to intermediate-term, middle- to narrow-range earthquake prediction algorithms tested in real-time applications over the last decades.

  9. Satellite Thermal Infrared Stress Field and Earthquake Prediction in Short-term and Imminent

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zhao, X.; Xie, H.; Zeng, Z.

    2007-12-01

    It has been recognized that temperature-increase anomalies occur before earthquakes. Meteorological-satellite thermal infrared (IR) radiation used to detect ground-surface temperature, and thus such anomalies, has advantages including data accuracy, large areal coverage, a large amount of information, and the capability of capturing the time-space dynamic variation of the temperature increase before earthquakes (Qiang Zuji et al., 1996). Earthquake precursors should enable a predictor to give the three elements of a future earthquake: location, magnitude, and time (Max Wyss, 1993); the thermal-anomaly detection method can provide all three. Practice is the sole criterion for testing truth. Chinese scientists in the satellite thermal-infrared earthquake-prediction group led by QIANG Zuji and DIAN Changgong have practiced short-term and imminent earthquake prediction since 1990. Over the years we have made steady and exciting progress. During the 11 years from 1990 to 2000, we made 119 predictions, of which 58 were valid and 15 were false alarms. The success rate of these predictions increased from 24% during 1990-1995 to 46% in 1996, 53% in 1997, 76% in 1998, and 80% in 1999 and 2000, as summarized by Wang (2005). The satellite-detected thermal stress field reflects the stress condition of the earth's crust: when rock stress increases, spots along the stress direction produce micro-fissures in the rock. Therefore, hot planes and lines, which have the closest relation to the stress condition and the faulted rock structure, can indicate the compressive stress direction. The thermal stress types recognized over the years include the isolated X shear structure, en echelon, single-arm form, string-of-beads shape, rotation shear ellipse, and advancing rotation shear ellipse (Wu Li-xin, Liu Shan-jun, 2006; Qiang Zuji et al., 1991, 1993, 1995, 1996, 1997, 2001).

  10. Current progress in using multiple electromagnetic indicators to determine location, time, and magnitude of earthquakes in California and Peru (Invited)

    NASA Astrophysics Data System (ADS)

    Bleier, T. E.; Dunson, C.; Roth, S.; Heraud, J.; Freund, F. T.; Dahlgren, R.; Bryant, N.; Bambery, R.; Lira, A.

    2010-12-01

    showed similar increases in 30-minute averaged energy excursions, but the 30-minute averages had the disadvantage of reducing the signal-to-noise ratio relative to the individual pulse counting method. Among other electromagnetic monitoring methods, air conductivity instrumentation showed major changes in positive airborne ions near the Alum Rock and Tacna sites, peaking during the 24 hours prior to the earthquake. GOES (geosynchronous) satellite infrared (IR) data showed that an unusual apparent “night time heating” occurred in an extended area within 40+ km of the Alum Rock site, and this IR signature peaked around the time of the magnetic pulse count peak. The combination of these 3 indicators (magnetic pulse counts, air conductivity, and IR night-time heating) may be a starting point for determining the time (within 1-2 weeks), location (within 20-40 km), and magnitude (within +/- 1 unit of Richter magnitude) of earthquakes greater than M5.4.

  11. Impact of channel-like erosion patterns on the frequency-magnitude distribution of earthquakes

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Aochi, H.

    2015-07-01

    Reactive flow at depth (related either to underground activities, like enhancement of hydrocarbon recovery and CO2 storage, or to natural flow, as in hydrothermal zones) can alter fractures' topography, which might in turn change their seismic responses. Depending on the flow and reaction rates, instability of the dissolution front can lead to a pronounced wormhole-like erosion pattern. Within a fractal structure of the rupture process, we ask how the perturbation associated with well-spaced long channels alters rupture propagation initiated on a weak plane, and eventually the statistics of rupture occurrence in the frequency-magnitude distribution (FMD). Contrary to intuition, a spatially uniform dissolution is not the most remarkable case, since it affects all events in proportion to their sizes, leading to a downward translation of the FMD: the slope of the FMD (b-value) remains unchanged. The parameter-space study shows that the increase of the b-value (by 0.08) is statistically significant for optimum characteristics of the erosion pattern, with a spacing-to-length ratio of the order of ~1/40: large-magnitude events are more significantly affected, leading to an imbalanced distribution across the magnitude bins of the FMD. The larger the spacing, the lower the channels' influence. Besides, a spatial analysis shows that the local seismicity anomaly concentrates in a limited zone around the channels: this opens perspectives for detecting these eroded regions through high-resolution imaging surveys.
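    The b-value changes discussed above come from fitting the Gutenberg-Richter law to simulated event catalogues. As a point of reference, a b-value is commonly estimated with Aki's (1965) maximum-likelihood formula; the sketch below applies it to a synthetic catalogue and is a generic illustration, not the authors' simulation code.

    ```python
    import math
    import random

    def b_value_mle(magnitudes, m_min, dm=0.1):
        """Aki (1965) maximum-likelihood b-value, with Utsu's dm/2
        correction for magnitudes binned at interval dm."""
        mags = [m for m in magnitudes if m >= m_min]
        mean_m = sum(mags) / len(mags)
        return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

    # Synthetic Gutenberg-Richter catalogue with b = 1.0: above the completeness
    # cut, magnitudes are exponentially distributed with rate beta = b * ln(10).
    random.seed(42)
    b_true, m_min, dm = 1.0, 2.0, 0.1
    beta = b_true * math.log(10.0)
    catalogue = [round((m_min - dm / 2.0) + random.expovariate(beta), 1)
                 for _ in range(50000)]

    b_hat = b_value_mle(catalogue, m_min, dm)
    print(f"estimated b-value: {b_hat:.2f}")  # close to 1.0
    ```

    A shift of 0.08 in such an estimate, as reported above, is small compared with typical catalogue b-values near 1, which is why the authors test its statistical significance.
    
    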

  12. Impact of Channel-like Erosion Patterns on the Frequency-Magnitude Distribution of Earthquakes

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Aochi, H.

    2015-12-01

    Reactive flow at depth (related either to underground activities, like enhancement of hydrocarbon recovery and CO2 storage, or to natural flow, as in hydrothermal zones) can alter fractures' topography, which might in turn change their seismic responses. Depending on the flow and reaction rates, instability of the dissolution front can lead to a pronounced wormhole-like erosion pattern (Szymczak & Ladd, JGR, 2009). Within a fractal structure of the rupture process (Ide & Aochi, JGR, 2005), we ask how the perturbation associated with well-spaced long channels alters rupture propagation initiated on a weak plane, and eventually the statistics of rupture occurrence in the frequency-magnitude distribution (FMD) (Rohmer & Aochi, GJI, 2015). Contrary to intuition, a spatially uniform dissolution is not the most remarkable case, since it affects all events in proportion to their sizes, leading to a downward translation of the FMD: the slope of the FMD (b-value) remains unchanged. An in-depth parametric study was carried out considering different pattern characteristics: spacing S varying from 0 to 100 and length L from 50 to 800, with the width fixed at w = 1. The figure shows that there is a region of optimum channel characteristics for which the b-value of the Gutenberg-Richter law is significantly modified, with a p-value of ~10% (the area with red-coloured boundaries), for a spacing-to-length ratio of the order of ~1/40: large-magnitude events are more significantly affected, leading to an imbalanced distribution across the magnitude bins of the FMD. The larger the spacing, the lower the channels' influence. The decrease of the b-value between intact and altered fractures can reach -0.08. Besides, a spatial analysis shows that the local seismicity anomaly concentrates in a limited zone around the channels: this opens perspectives for detecting these eroded regions through high-resolution imaging surveys.

  13. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Small Business Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of small businesses in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each business establishment size category to each Instrumental Intensity level. The analysis concerns the direct effect of the earthquake on small businesses. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by business establishment size.
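    The exposure and sensitivity measures defined in this report reduce to simple tabulations once Instrumental Intensities are assigned to ZIP codes: exposure is an absolute count per intensity level, and sensitivity is that count as a percentage of a category's total. A minimal sketch with invented numbers (the size classes, intensity levels, and counts are illustrative, not CA EDD data):

    ```python
    # Hypothetical exposure table: employee counts per establishment-size class,
    # cross-tabulated by the Instrumental Intensity level assigned to each ZIP code.
    # (All size classes, levels, and counts are invented for illustration.)
    exposure = {
        "1-4 employees": {"VI": 12000, "VII": 8000, "VIII": 2000},
        "5-9 employees": {"VI": 9000, "VII": 6000, "VIII": 1500},
    }

    def sensitivity(exposure_row):
        """Percentage of a size class's exposure falling at each intensity level."""
        total = sum(exposure_row.values())
        return {level: 100.0 * count / total for level, count in exposure_row.items()}

    for size_class, row in exposure.items():
        print(size_class, {k: round(v, 1) for k, v in sensitivity(row).items()})
    ```

    By construction the sensitivity percentages for each size class sum to 100, which is what makes them comparable across categories of very different absolute size.
    
    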

  14. Scientific investigation of macroscopic phenomena before the 2008 Wenchuan earthquake and its implication to prediction and tectonics

    NASA Astrophysics Data System (ADS)

    Huang, F.; Yang, Y.; Pan, B.

    2013-12-01

    tectonic/faults near the epicentral area. According to the statistical relationship, intensity VI-VII in the meizoseismal area is equivalent to magnitude 5. This implies that, generally, macroscopic anomalies readily occur before earthquakes of magnitude greater than 5 in the near-epicentral area. This information can serve as a clue to impending earthquake occurrence in a tectonic area. Based on the above scientific investigation and statistical research, we reviewed other historical earthquakes that occurred from 1937 to 1996 in the Chinese mainland and obtained similar results (Compilation of Macroscopic Anomalies before Earthquakes, Seismological Press, 2009). This can serve as important basic data for earthquake prediction. This work was supported by NSFC project No. 41274061.

  15. Detection of Subtle Hydromechanical Medium Changes Caused By a Small-Magnitude Earthquake Swarm in NE Brazil

    NASA Astrophysics Data System (ADS)

    D'Hour, V.; Schimmel, M.; Do Nascimento, A. F.; Ferreira, J. M.; Lima Neto, H. C.

    2016-04-01

    Ambient noise correlation analyses are widely used in seismology to map heterogeneities and to monitor the temporal evolution of seismic velocity changes, associated mostly with stress field variations and/or fluid movements. Here we analyse a small earthquake swarm related to a main mR 3.7 intraplate earthquake in northeastern Brazil to study the corresponding post-seismic effects on the medium. So far, post-seismic effects have been observed mainly for large-magnitude events. In our study, we show that we were able to detect localized structural changes even for a small earthquake swarm in an intraplate setting. Different correlation strategies are presented and their performances compared. We compare the classical auto-correlation with and without pre-processing (including 1-bit normalization and spectral whitening) and the phase auto-correlation. The worst results were obtained for the pre-processed data, due to the loss of waveform details. The best results were achieved with the phase cross-correlation, which is amplitude unbiased and sensitive to small amplitude changes as long as the waveform coherence exceeds that of unrelated signals and noise. The analysis of 6 months of data using phase auto-correlation and cross-correlation revealed a progressive medium change after the major recorded event, likely related to the swarm activity through the opening of new pathways for pore fluid diffusion. We further observed, for the auto-correlations, a frequency-dependent lag-time change, which likely indicates that the medium change is localized in depth. As expected, the main change is observed along the fault.
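    The pre-processing steps named above (1-bit normalization, spectral whitening) and a classical lag measurement can be sketched in a few lines. This is a generic illustration of those standard operations, not the authors' processing chain, and the synthetic traces are invented:

    ```python
    import numpy as np

    def one_bit(trace):
        """1-bit normalization: keep only the sign of each sample."""
        return np.sign(trace)

    def spectral_whiten(trace, eps=1e-10):
        """Spectral whitening: flatten the amplitude spectrum, keep the phase."""
        spec = np.fft.rfft(trace)
        return np.fft.irfft(spec / (np.abs(spec) + eps), n=len(trace))

    def normalized_xcorr(a, b):
        """Classical normalized cross-correlation at all lags."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.correlate(a, b, mode="full")

    # Two noisy traces sharing one signal, delayed by 50 samples in the second.
    rng = np.random.default_rng(0)
    sig = rng.standard_normal(200)
    tr1 = np.concatenate([sig, np.zeros(50)]) + 0.1 * rng.standard_normal(250)
    tr2 = np.concatenate([np.zeros(50), sig]) + 0.1 * rng.standard_normal(250)

    cc = normalized_xcorr(one_bit(tr1), one_bit(tr2))
    lag = int(cc.argmax()) - (len(tr1) - 1)
    print("1-bit lag:", lag)  # recovers the imposed delay of -50 samples

    cc_w = normalized_xcorr(spectral_whiten(tr1), spectral_whiten(tr2))
    print("whitened lag:", int(cc_w.argmax()) - (len(tr1) - 1))
    ```

    Both pre-processing steps discard amplitude information to stabilize the correlation, which is exactly the "loss of waveform details" the abstract identifies as their drawback relative to the amplitude-unbiased phase correlation.
    
    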

  16. Long-term predictability of regions and dates of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M>5.5 can be determined several months in advance, and the magnitude and region of an approaching earthquake can be specified within a month before the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity on a century time frame; this date analysis can be performed 15-20 years in advance and is verified with a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods with different prediction horizons. Days of potential earthquakes with M5.5+ are determined using astronomical data: earthquakes occur on days of oppositions of Solar System planets (arranged in a single line), and the strongest earthquakes occur when the vector from the Sun to the Solar System barycenter lies in the ecliptic plane. Details of this multivariate astronomical indicator still require further research, but its practical significance is confirmed by practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of the minimum daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (the RAMES method). The difference between predicted and actual dates is no more than one day. This indicator is registered 104 days before the earthquake, so it was named Harmonic 104, or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give it a physical basis. Also, 104 days is a quarter of a Chandler period, which gives insight into the correlation between the anomalies of Earth orientation

  17. Non-extensive statistical physics applied to heat flow and the earthquake frequency-magnitude distribution in Greece

    NASA Astrophysics Data System (ADS)

    Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter

    2016-08-01

    This study investigates seismicity in Greece and its relation to heat flow, based on the science of complex systems. Greece is characterised by a complex tectonic setting, which is represented mainly by active subduction, lithospheric extension and volcanism. The non-extensive statistical physics formalism is a generalisation of Boltzmann-Gibbs statistical physics and has been successfully used for the analysis of a variety of complex systems, where fractality and long-range interactions are important. Consequently, in this study, the frequency-magnitude distribution analysis was performed in a non-extensive statistical physics context, and the non-extensive parameter, qM, which is related to the frequency-magnitude distribution, was used as an index of the physical state of the studied area. Examination of the spatial distribution of qM revealed its relation to the spatial distribution of seismicity during the period 1976-2009. For focal depths ≤40 km, we observe that strong earthquakes coincide with high qM values. In addition, heat flow anomalies in Greece are known to be strongly related to crustal thickness; a thin crust and significant heat flow anomalies characterise the central Aegean region. Moreover, the data studied indicate that high heat flow is consistent with the absence of strong events and consequently with low qM values (high b-values) in the central Aegean region and around the volcanic arc. However, the eastern part of the volcanic arc exhibits strong earthquakes and high qM values whereas low qM values are found along the North Aegean Trough and southwest of Crete, despite the fact that strong events are present during the period 1976-2009 in both areas.

  18. Analysing earthquake slip models with the spatial prediction comparison test

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Mai, P. Martin; Thingbaijam, Kiran K. S.; Razafindrakoto, Hoby N. T.; Genton, Marc G.

    2015-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities among events that are typically grouped into a common class (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (the 'model') and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with a known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking slip models with respect to a reference model.

  19. Predictability of population displacement after the 2010 Haiti earthquake

    PubMed Central

    Lu, Xin; Bengtsson, Linus; Holme, Petter

    2012-01-01

    Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 d before to 341 d after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and size of people’s movement trajectories grew after the earthquake. These findings, in combination with the disorder that was present after the disaster, suggest that people’s movements would have become less predictable. Instead, the predictability of people’s trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake were highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city, as well as the time of their return, both followed a skewed, fat-tailed distribution. The findings suggest that population movements during disasters may be significantly more predictable than previously thought. PMID:22711804

  20. Predictability of population displacement after the 2010 Haiti earthquake.

    PubMed

    Lu, Xin; Bengtsson, Linus; Holme, Petter

    2012-07-17

    Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 d before to 341 d after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and size of people's movement trajectories grew after the earthquake. These findings, in combination with the disorder that was present after the disaster, suggest that people's movements would have become less predictable. Instead, the predictability of people's trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake were highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city, as well as the time of their return, both followed a skewed, fat-tailed distribution. The findings suggest that population movements during disasters may be significantly more predictable than previously thought. PMID:22711804

  1. By How Much Can Physics-Based Earthquake Simulations Reduce the Uncertainties in Ground Motion Predictions?

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Wang, F.

    2014-12-01

    Probabilistic seismic hazard analysis (PSHA) is the scientific basis for many engineering and social applications: performance-based design, seismic retrofitting, resilience engineering, insurance-rate setting, disaster preparation, emergency response, and public education. The uncertainties in PSHA predictions can be expressed as an aleatory variability that describes the randomness of the earthquake system, conditional on a system representation, and an epistemic uncertainty that characterizes errors in the system representation. Standard PSHA models use empirical ground motion prediction equations (GMPEs) that have a high aleatory variability, primarily because they do not account for the effects of crustal heterogeneities, which scatter seismic wavefields and cause local amplifications in strong ground motions that can exceed an order of magnitude. We show how much this variance can be lowered by simulating seismic wave propagation through 3D crustal models derived from waveform tomography. Our basic analysis tool is the new technique of averaging-based factorization (ABF), which uses a well-specified seismological hierarchy to decompose exactly and uniquely the logarithmic excitation functional into a series of uncorrelated terms that include unbiased averages of the site, path, hypocenter, and source-complexity effects (Feng & Jordan, Bull. Seismol. Soc. Am., 2014, doi:10.1785/0120130263). We apply ABF to characterize the differences in ground motion predictions between the standard GMPEs employed by the National Seismic Hazard Maps and the simulation-based CyberShake hazard model of the Southern California Earthquake Center. The ABF analysis indicates that, at low seismic frequencies (< 1 Hz), CyberShake site and path effects unexplained by the GMPEs account for 40-50% of the total residual variance.
Therefore, accurate earthquake simulations have the potential for reducing the aleatory variance of the strong-motion predictions by about a factor of two, which would
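    The factor-of-two claim follows directly from variance arithmetic: if simulations deterministically explain a fraction f of the residual variance, the remaining variance is (1 - f) of the original, and the standard deviation shrinks by sqrt(1 - f). A quick check (the sigma value is illustrative, not from the abstract):

    ```python
    import math

    def reduced_sigma(sigma, explained_fraction):
        """Aleatory standard deviation remaining after a fraction of the
        residual (log ground-motion) variance is explained deterministically."""
        return sigma * math.sqrt(1.0 - explained_fraction)

    sigma = 0.6  # illustrative total ln-units standard deviation of a GMPE
    for f in (0.40, 0.50):
        print(f"{f:.0%} of variance explained: "
              f"sigma {sigma:.2f} -> {reduced_sigma(sigma, f):.3f}, "
              f"variance reduced by {1.0 / (1.0 - f):.2f}x")
    ```

    At f = 0.5 the variance is halved, matching the "factor of two" in the text; the corresponding sigma reduction is only a factor of about 1.4, which is why variance and standard deviation reductions should not be conflated.
    
    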

  2. 75 FR 63854 - National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee AGENCY: U.S... Earthquake Prediction Evaluation Council (NEPEC) will hold a 2-day meeting on November 3 and 4, 2010. The... the Director of the U.S. Geological Survey (USGS) on proposed earthquake predictions, on...

  3. Seismic properties of the Longmen Shan complex: Implications for the moment magnitude of the great 2008 Wenchuan earthquake in China

    NASA Astrophysics Data System (ADS)

    Sun, Shengsi; Ji, Shaocheng; Wang, Qian; Wang, Hongcai; Long, Changxing; Salisbury, Matthew

    2012-09-01

    The 12 May 2008 Wenchuan earthquake is the largest active tectonic event reported to date in Sichuan (China). We have experimentally calibrated, up to 800 MPa, the seismic and elastic properties of 12 representative samples from the Longmen Shan complex, in which this great earthquake took place and its coseismic ruptures nucleated and propagated. Most of the samples show little Vp or Vs anisotropy at pressures above the microcrack-closure pressure (Pc = 200-300 MPa), so the variation of anisotropy with pressure provides important hints about the preferred orientation of microcracks in the nonlinear poroelastic regime below Pc. Geothermal and rheological profiles indicate that the focal depth (~ 19 km) corresponds to the base of the schizosphere, below which the Longmen Shan complex switches from brittle to ductile behavior. The investigation reveals that the crust of the Longmen Shan range consists of 4 layers from the surface to the Moho: Layer 1: Vp < 4.88 km/s (0-3 km thick, sedimentary rocks such as limestone, sandstone, conglomerate, and mudstone); Layer 2: Vp = 5.95-6.25 km/s (25-28 km thick, felsic rocks); Layer 3: Vp = 6.55 km/s (10 km thick, 67.5% felsic and 32.5% mafic rocks); and Layer 4: Vp = 6.90 km/s (8 km thick, 20.0% felsic and 80.0% mafic rocks). The average Vp/Vs ratio of 1.71, or Poisson's ratio of 0.24, calculated for the whole crust is consistent with results measured using teleseismic receiver function techniques. This study also offers information necessary for broadband simulations of strong ground motions in the assessment and forecasting of earthquake hazards in the region. Furthermore, the study yields a moment magnitude of 7.9-8.0, accounting for the variation in the dip of the coseismic ruptures and the uncertainty in the depth to which the coseismic rupture may propagate below the mainshock hypocenter, and thus presents the first accurate quantification of the 2008 Wenchuan earthquake's size.

  4. Strong motion PGA prediction for southwestern China from small earthquake records

    NASA Astrophysics Data System (ADS)

    Tao, Zhengru; Tao, Xiaxin; Cui, Anping

    2016-05-01

    For regions without enough strong ground motion records, a seismology-based method is adopted to predict strong-motion PGA (peak ground acceleration) values on rock sites, with parameters derived from small-earthquake data recorded by regional broadband digital monitoring networks. The Sichuan and Yunnan regions in southwestern China are selected for this case study. Five regional parameters of the source spectrum and attenuation are acquired by a joint inversion using the micro-genetic algorithm. PGAs are predicted for earthquakes with moment magnitude (Mw) 5.0, 6.0, and 7.0, respectively, over a series of distances. The results are compared with the limited regional strong motion data in the corresponding interval Mw ± 0.5. Most of the predicted curves pass well through the data clusters, except the case of Mw 7.0 in the Sichuan region, which shows an obviously slow attenuation due to a lack of observed data from larger earthquakes (Mw ≥ 7.0). For further application, the parameters are adopted in strong motion synthesis at two near-fault stations during the great M8.0 Wenchuan earthquake of 2008.
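    Seismology-based PGA methods of this kind typically build on an omega-squared point-source spectrum combined with regional attenuation. The sketch below shows only that standard spectral shape (Brune source, frequency-dependent Q, geometric spreading, kappa filter); the abstract does not give the five inverted parameter values, so every number here is an illustrative placeholder and the output is relative, not absolute ground motion:

    ```python
    import numpy as np

    # Omega-squared point-source spectral shape (all parameter values are
    # illustrative placeholders, not the inverted regional parameters).
    def accel_spectrum_shape(f, mw, stress_bars=100.0, beta_km_s=3.5,
                             r_km=20.0, q0=200.0, eta=0.5, kappa=0.04):
        """Relative Fourier acceleration spectrum: Brune source x anelastic
        attenuation x geometric spreading x near-surface kappa filter."""
        m0 = 10.0 ** (1.5 * mw + 16.05)      # seismic moment, dyne-cm
        fc = 4.9e6 * beta_km_s * (stress_bars / m0) ** (1.0 / 3.0)  # corner freq, Hz
        source = m0 * (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
        q_f = q0 * f ** eta                  # frequency-dependent quality factor
        path = np.exp(-np.pi * f * r_km / (q_f * beta_km_s)) / r_km
        return source * path * np.exp(-np.pi * kappa * f)

    f = np.array([0.1, 1.0, 10.0])
    for mw in (5.0, 6.0, 7.0):
        print(f"Mw {mw}: relative spectrum at {f} Hz =", accel_spectrum_shape(f, mw))
    ```

    Because the corner frequency falls with increasing moment, the magnitude scaling is much stronger at low frequencies than at high frequencies, which is the behavior such inversions exploit when extrapolating from small-earthquake records.
    
    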

  5. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere

    USGS Publications Warehouse

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-01-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  6. Magnitudes and locations of the 1811-1812 New Madrid, Missouri, and the 1886 Charleston, South Carolina, earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Hopper, M.G.

    2004-01-01

    We estimate locations and moment magnitudes M, with their uncertainties, for the three largest events in the 1811-1812 sequence near New Madrid, Missouri, and for the 1 September 1886 event near Charleston, South Carolina. The intensity magnitude MI, our preferred estimate of M, is 7.6 for the 16 December 1811 event that occurred in the New Madrid seismic zone (NMSZ) on the Bootheel lineament or on the Blytheville seismic zone. MI is 7.5 for the 23 January 1812 event, for a location on the New Madrid north zone of the NMSZ, and 7.8 for the 7 February 1812 event that occurred on the Reelfoot blind thrust of the NMSZ. Our preferred locations for these events lie on the NMSZ segments preferred by Johnston and Schweig (1996). Our estimates of M are 0.1-0.4 M units less than those of Johnston (1996b) and 0.3-0.5 M units greater than those of Hough et al. (2000). MI is 6.9 for the 1 September 1886 event, for a location at the Summerville-Middleton Place cluster of recent small earthquakes about 30 km northwest of Charleston.

  7. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Labor Market Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of economic Super Sectors in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each Super Sector to each Instrumental Intensity level. The analysis concerns the direct effect of the scenario earthquake on economic sectors and provides a baseline for the indirect and interactive analysis of an input-output model of the regional economy. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by the North American Industry Classification System. According to the analysis results, nearly 225,000 business

  8. Implications for prediction and hazard assessment from the 2004 Parkfield earthquake.

    PubMed

    Bakun, W H; Aagaard, B; Dost, B; Ellsworth, W L; Hardebeck, J L; Harris, R A; Ji, C; Johnston, M J S; Langbein, J; Lienkaemper, J J; Michael, A J; Murray, J R; Nadeau, R M; Reasenberg, P A; Reichle, M S; Roeloffs, E A; Shakal, A; Simpson, R W; Waldhauser, F

    2005-10-13

    Obtaining high-quality measurements close to a large earthquake is not easy: one has to be in the right place at the right time with the right instruments. Such a convergence happened, for the first time, when the 28 September 2004 Parkfield, California, earthquake occurred on the San Andreas fault in the middle of a dense network of instruments designed to record it. The resulting data reveal aspects of the earthquake process never before seen. Here we show what these data, when combined with data from earlier Parkfield earthquakes, tell us about earthquake physics and earthquake prediction. The 2004 Parkfield earthquake, with its lack of obvious precursors, demonstrates that reliable short-term earthquake prediction still is not achievable. To reduce the societal impact of earthquakes now, we should focus on developing the next generation of models that can provide better predictions of the strength and location of damaging ground shaking. PMID:16222291

  9. Evidence for the recurrence of large-magnitude earthquakes along the Makran coast of Iran and Pakistan

    USGS Publications Warehouse

    Page, W.D.; Alt, J.N.; Cluff, L.S.; Plafker, G.

    1979-01-01

    The presence of raised beaches and marine terraces along the Makran coast indicates episodic uplift of the continental margin resulting from large-magnitude earthquakes. The uplift occurs as incremental steps similar in height to the 1-3 m of measured uplift resulting from the November 28, 1945 (M 8.3) earthquake at Pasni and Ormara, Pakistan. The data support an E-W-trending, active subduction zone off the Makran coast. The raised beaches and wave-cut terraces along the Makran coast are extensive with some terraces 1-2 km wide, 10-15 m long and up to 500 m in elevation. The terraces are generally capped with shelly sandstones 0.5-5 m thick. Wave-cut cliffs, notches, and associated boulder breccia and swash troughs are locally preserved. Raised Holocene accretion beaches, lagoonal deposits, and tombolos are found up to 10 m in elevation. The number and elevation of raised wave-cut terraces along the Makran coast increase eastward from one at Jask, the entrance to the Persian Gulf, at a few meters elevation, to nine at Konarak, 250 km to the east. Multiple terraces are found on the prominent headlands as far east as Karachi. The wave-cut terraces are locally tilted and cut by faults with a few meters of displacement. Long-term, average rates of uplift were calculated from present elevation, estimated elevation at time of deposition, and 14C and U-Th dates obtained on shells. Uplift rates in centimeters per year at various locations from west to east are as follows: Jask, 0 (post-Sangamon); Konarak, 0.031-0.2 (Holocene), 0.01 (post-Sangamon); Ormara 0.2 (Holocene). © 1979.

  10. Earthquake Scaling and Development of Ground Motion Prediction for Earthquake Hazard Mitigation in Taiwan

    NASA Astrophysics Data System (ADS)

    Ma, K.; Yen, Y.

    2011-12-01

    For earthquake hazard mitigation toward risk management, an integrated study from source-model development to ground motion prediction is crucial. Simulation of the high-frequency component (> 1 Hz) of strong ground motion in the near field is not well resolved due to insufficient resolution of the velocity structure. Using small events as Green's functions (the empirical Green's function (EGF) method) can compensate for the lack of a precise velocity structure in evaluating the path effect. If an EGF is not available, a stochastic Green's function (SGF) method can be employed. By characterizing the slip models derived from waveform inversion, we directly extract the parameters needed for ground motion prediction with the EGF or SGF method. The slip models have been investigated from Taiwan's dense strong-motion and global teleseismic data. In addition, the low-frequency component (< 1 Hz) can be obtained numerically by the frequency-wavenumber (FK) method. Thus, broadband strong ground motion can be calculated by a hybrid method that combines a deterministic FK method for the low-frequency simulation with the EGF or SGF method for the high-frequency simulation. Characterizing the definitive source parameters from the empirical scaling study feeds directly into the ground motion simulation. To give ground motion predictions for a scenario earthquake, we compiled the earthquake scaling relationship from the inverted finite-fault models of moderate to large earthquakes in Taiwan. The studies show the significant influence of the seismogenic depth on the development of rupture width. In addition, several earthquakes on blind faults show distinctly large stress drops, which yield regionally high PGA. Based on the developing scaling relationship and the possible high stress drops for earthquakes on blind faults, we further deploy the hybrid method mentioned above to give the simulation of the strong motion in

  11. Under the hood of the earthquake machine: toward predictive modeling of the seismic cycle.

    PubMed

    Barbot, Sylvain; Lapusta, Nadia; Avouac, Jean-Philippe

    2012-05-11

    Advances in observational, laboratory, and modeling techniques open the way to the development of physical models of the seismic cycle with potentially predictive power. To explore that possibility, we developed an integrative and fully dynamic model of the Parkfield segment of the San Andreas Fault. The model succeeds in reproducing a realistic earthquake sequence of irregular moment magnitude (M(w)) 6.0 main shocks--including events similar to the ones in 1966 and 2004--and provides an excellent match for the detailed interseismic, coseismic, and postseismic observations collected along this fault during the most recent earthquake cycle. Such calibrated physical models provide new ways to assess seismic hazards and forecast seismicity response to perturbations of natural or anthropogenic origins. PMID:22582259

  12. The detection and location of low magnitude earthquakes in northern Norway using multi-channel waveform correlation at regional distances

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Bøttger Sørensen, Mathilde; Harris, David B.; Ringdal, Frode

    2007-03-01

    A fortuitous sequence of closely spaced earthquakes in the Rana region of northern Norway, during 2005, has provided an ideal natural laboratory for investigating event detectability using waveform correlation over networks and arrays at regional distances. A small number of events between magnitude 2.0 and 3.5 were recorded with a high SNR by the Fennoscandian IMS seismic arrays at distances over 600 km and three of these events, including the largest on 24 June, displayed remarkable waveform similarity even at relatively high frequencies. In an effort to detect occurrences of smaller earthquakes in the immediate geographical vicinity of the 24 June event, a multi-channel correlation detector for the NORSAR array was run for the whole calendar year 2005 using the signal from the master event as a template. A total of 32 detections were made and all but 2 of these coincided with independent correlation detections using the other Nordic IMS array stations; very few correspond to signals detectable using traditional energy detectors. Permanent and temporary stations of the Norwegian National Seismic Network (NNSN) at far closer epicentral distances have confirmed that all but one of the correlation detections at NORSAR in fact correspond to real events. The closest stations at distances of approximately 10 km confirm that the smallest of these events have magnitudes down to 0.5, which represents a detection threshold reduction of over 1.5 magnitude units for the large-aperture NORSAR array and over 1.0 for the almost equidistant regional ARCES array. The incompleteness of the local network recordings precludes a comprehensive double-difference location for the full set of events. However, stable double-difference relative locations can be obtained for eight of the events using only the Lg phase recorded at the array stations. All events appear to be separated by less than 0.5 km. 
Clear peaks were observed in the NORSAR correlation coefficient traces during the coda of some of the
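    The core of such a correlation detector is the normalized cross-correlation of a master-event template against a sliding window of continuous data. Below is a minimal single-channel toy analogue of the multi-channel detector described above (the real system stacks correlation traces over many array channels); the function name and threshold are illustrative.

```python
import math

def correlate(template, trace, threshold=0.7):
    """Slide a master-event template along a continuous trace and return
    the sample offsets where the normalized correlation coefficient
    meets or exceeds the threshold."""
    n = len(template)
    t_mean = sum(template) / n
    t_dev = [x - t_mean for x in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    detections = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        w_mean = sum(w) / n
        w_dev = [x - w_mean for x in w]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        if w_norm == 0:  # skip flat (zero-variance) windows
            continue
        cc = sum(a * b for a, b in zip(t_dev, w_dev)) / (t_norm * w_norm)
        if cc >= threshold:
            detections.append(i)
    return detections

# The template buried at offset 2 is recovered exactly (cc = 1 there).
print(correlate([0, 1, 0, -1], [0, 0, 0, 1, 0, -1, 0, 0]))  # → [2]
```

Because the coefficient is normalized, small events with waveforms similar to the master can be detected well below the amplitude threshold of a conventional energy detector, which is the effect exploited in the study.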

  13. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    NASA Astrophysics Data System (ADS)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw = 7-7.5. For larger earthquakes, the analysis is more complex, because of the non-validity of the point-source approximation and of the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. Here we propose an automated deconvolutive approach, which does not impose any simplifying assumptions about the rupture process, thus being well adapted to large earthquakes. We first determine the source duration based on the length of the high-frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real data body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search for the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs gives directly the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but values are found consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
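    The final step, converting an integrated seismic moment into a moment magnitude, uses the standard IASPEI relation Mw = (2/3)(log10 M0 − 9.1) with M0 in N·m. A minimal sketch (the function name and the example moment value are illustrative):

```python
import math

def moment_magnitude(m0_newton_metres):
    """Moment magnitude from seismic moment (standard IASPEI formula);
    the SCARDEC method obtains M0 by integrating the source time function."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# A seismic moment of about 1.12e21 N·m corresponds to Mw ≈ 8.0
print(round(moment_magnitude(1.12e21), 1))  # → 8.0
```

The logarithmic form explains why a 0.2 magnitude difference between catalogs corresponds to roughly a factor of two in seismic moment.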

  14. Systematic correlations of the earthquake frequency-magnitude distribution with the deformation and mechanical regimes in the Taiwan orogen

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Ling; Hung, Shu-Huei; Jiang, Juen-Shi; Chiao, Ling-Yun

    2016-05-01

    We investigate the correlation of the earthquake frequency-magnitude distribution with the style of faulting and stress in Taiwan. The b values estimated for three types of focal mechanisms show significant differences, with the lowest for thrust, intermediate for strike-slip, and highest for normal events, consistent with those found in global and other regional seismicity. The lateral distribution of b values shows a good correlation with the predominant faulting mechanism, crustal deformation, and stress patterns. The two N-S-striking thrust zones in western and eastern Taiwan, under larger E-W shortening and differential stress, yield lower b values than the intervening mountain ranges, which are subject to smaller extensional stress and dominated by strike-slip and normal faults. The termination of the monotonically decreasing b value with depth at ~15-20 km corroborates its inverse relationship with stress and the existence of the brittle-plastic transition in the weak middle crust beneath the Taiwan orogen.
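    Studies like this typically estimate b with the maximum-likelihood formula of Aki (1965), b = log10(e) / (mean(M) − (Mc − ΔM/2)), where Mc is the magnitude of completeness and ΔM/2 corrects for magnitude binning. A minimal sketch (the abstract does not state which estimator was used, so this is the standard method, not necessarily the authors'):

```python
import math

def b_value(magnitudes, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) for events at or above the
    completeness magnitude m_c, with the half-bin correction for
    magnitudes binned at width dm."""
    m = [x for x in magnitudes if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

A lower b value means relatively more large events, which is why low b correlates with the high-differential-stress thrust zones described above.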

  15. Spatial variations in the frequency-magnitude distribution of earthquakes at Soufriere Hills Volcano, Montserrat, West Indies

    USGS Publications Warehouse

    Power, J.A.; Wyss, M.; Latchman, J.L.

    1998-01-01

    The frequency-magnitude distribution of earthquakes measured by the b-value is determined as a function of space beneath Soufriere Hills Volcano, Montserrat, from data recorded between August 1, 1995 and March 31, 1996. A volume of anomalously high b-values (b > 3.0) with a 1.5 km radius is imaged at depths of 0 and 1.5 km beneath English's Crater and Chance's Peak. This high b-value anomaly extends southwest to Gage's Soufriere. At depths greater than 2.5 km, volumes of comparatively low b-values (b ≈ 1) are found beneath St. George's Hill, Windy Hill, and to the south of English's Crater. We speculate that the depth of high b-value anomalies under volcanoes may be a function of silica content, modified by additional factors, with the most siliceous volcanoes having these highly fractured or high-pore-pressure volumes at the shallowest depths. Copyright 1998 by the American Geophysical Union.

  16. The Ordered Network Structure of M≥8 Earthquakes and its Prediction for the Ordered Pair Great Earthquakes in Mainland China

    NASA Astrophysics Data System (ADS)

    Men, Ke-Pei; Zhao, Kai

    2014-04-01

    According to the statistical data, a total of 23 M ≥ 8 earthquakes occurred in Mainland China from 1303 to 2012. The seismic activity of M ≥ 8 earthquakes has shown an obvious self-organized orderliness. It is especially remarkable that three ordered pairs of M ≥ 8 earthquakes occurred in West China during 1902-2001, with a time interval of four years between the two earthquakes of each pair. This is a unique and rare example in the earthquake history of China and the world. Guided by the information forecasting theory of Wen-Bo Weng and based on previous research results, combining ordered analysis with complex network technology, this paper summarizes the ordered network structure of M ≥ 8 earthquakes, supplements new information, and constructs and further optimizes the 2D and 3D ordered network structures of M ≥ 8 earthquakes for prediction research. Finally, a new prediction is presented: the next ordered pair of great earthquakes will probably occur around 2022 and 2026 in Mainland China.

  17. Attentional blink magnitude is predicted by the ability to keep irrelevant material out of working memory.

    PubMed

    Arnell, Karen M; Stubitz, Shawn M

    2010-09-01

    Participants have difficulty reporting the second of two masked targets if it is presented within 500 ms of the first target, a phenomenon known as the attentional blink (AB). Individual participants differ in the magnitude of their AB. The present study employed an individual-differences design and two visual working memory tasks to examine whether visual working memory capacity and/or the ability to exclude irrelevant information from visual working memory (working memory filtering efficiency) could predict individual differences in the AB. Visual working memory capacity was positively related to filtering efficiency, but did not predict AB magnitude. However, the degree to which irrelevant stimuli were admitted into visual working memory (i.e., poor filtering efficiency) was positively correlated with AB magnitude over and above visual working memory capacity. Good filtering efficiency may reduce the AB by not allowing irrelevant RSVP distractors to gain access to working memory. PMID:19937451

  18. Magnitude correlations in global seismicity

    SciTech Connect

    Sarlis, N. V.

    2011-08-15

    By employing natural time analysis, we analyze the worldwide seismicity and study the existence of correlations between earthquake magnitudes. We find that global seismicity exhibits nontrivial magnitude correlations for earthquake magnitudes greater than Mw 6.5.

  19. Possibility of Earthquake-prediction by analyzing VLF signals

    NASA Astrophysics Data System (ADS)

    Ray, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta

    2016-07-01

    Prediction of seismic events is one of the most challenging tasks for the scientific community. The conventional way to predict earthquakes is to monitor crustal movements, though this method has not yet yielded satisfactory results and fails to give any short-term prediction. Recently, it has been noticed that prior to a seismic event a huge amount of energy is released, which may create disturbances in the lower part of the D-layer/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the waveguide formed by the lower ionosphere and the Earth's surface, these signals may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find correlations, if any, between VLF signal anomalies and seismic activity. We have done both case-by-case studies and a statistical analysis using a whole year of data. In both methods we found that the night-time amplitude of VLF signals fluctuated anomalously three days before seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards night time a few days before major seismic events. We calculate the D-layer preparation time and D-layer disappearance time from the VLF signals and observe that both become anomalously high 1-2 days before seismic events. We also found strong evidence that it may be possible to predict the locations of earthquake epicenters in the future by analyzing VLF signals along multiple propagation paths.

  20. A Survey Study of Significant Achievements Accomplished by 'Non-mainstream' Seismologists in Earthquake Monitoring and Prediction Science in China since 1970

    NASA Astrophysics Data System (ADS)

    Chen, I. W.

    Since 1990, the author, a British-Chinese consultant, has studied and followed the significant achievements accomplished by 'non-mainstream' seismologists in earthquake prediction in China since 1970. The scientific systems used include: (1) Astronomy-seismology: the relationship between special positions of certain planets (especially the moon and another planet) relative to seismically active areas on the earth and the occurrence times of major damaging earthquakes in those areas; the relationship between the dates of magnetic storms on the earth and the occurrence dates of major damaging earthquakes; and certain cyclic relationships between the occurrence dates of major historical earthquakes in related areas. (2) Precursor analysis: with self-developed sensors and instruments, numerous precursors were recorded. In most cases, these precursors cannot be detected by conventional seismological sensors/instruments. Through exploratory practice and theoretical studies, various relationships between different characteristics of the precursors and the occurrence time, epicenter location and magnitude of the developing earthquake were identified and can be calculated. Through approaches quite different from conventional methods, successful predictions of quite a large number of earthquakes have been achieved, including earthquakes that occurred in mainland China, Taiwan and Japan. (3) Earthquake imminent affirmative confirmation: with a special instrument, the background imminent state of earthquakes can be identified, and a universal earthquake-imminence signal is further identified. It can be used to confirm whether an earlier predicted earthquake is entering its imminent state, whether it will definitely occur, or whether an earlier prediction can be withdrawn. (4) Comparative terrestrial stress survey measurements at 5 km, 7 km and 10 km depth to identify earthquake focus zones in surveyed areas. Then, with an eight

  1. The rat nucleus accumbens is involved in guiding of instrumental responses by stimuli predicting reward magnitude.

    PubMed

    Giertler, Christian; Bohn, Ines; Hauber, Wolfgang

    2003-10-01

    The present study examined the involvement of N-methyl-d-aspartate (NMDA), alpha-amino-3-hydroxy-5-methyl-4-isoxazolpropionate/kainate (AMPA/KA) and dopamine receptors in the nucleus accumbens (ACB) in influencing reaction times of instrumental responses by the expectancy of reward. A simple reaction time task demanding conditioned lever release was used in which the upcoming reward magnitude was signalled in advance by discriminative cues. After training, in control rats with vehicle infusions (0.5 µL) into the ACB, reaction times of responses were significantly shorter to the discriminative cue predictive of high reward magnitude. Indirect stimulation of dopamine receptors in the ACB by d-amphetamine (20 µg/0.5 µL) decreased reaction times, impaired their guidance by cue-associated reward magnitudes and reduced the accuracy of task performance. Blockade of AMPA/KA receptors in the ACB by 6-cyano-7-nitroquinoxaline-2,3-dione (0.75 and 2.5 µg/0.5 µL) or NMDA receptors by d(-)-2-amino-5-phosphonopentanoic acid (5 µg/0.5 µL) produced a general increase in reaction times, but left guidance of reaction times by cue-associated reward magnitudes unaffected. Thus, stimulation of intra-ACB ionotropic glutamate receptors is critically involved in modulating the speed of instrumental responding to cues predictive of reward magnitude, but is not required for intact performance of previously learned instrumental behaviour. PMID:14622231

  2. Variability of sporadic E-layer semi-transparency (foEs-fbEs) with magnitude and distance from earthquake epicenters to vertical sounding stations

    NASA Astrophysics Data System (ADS)

    Liperovskaya, E. V.; Pokhotelov, O. A.; Hobara, Y.; Parrot, M.

    Variations of the Es-layer semi-transparency coefficient were analyzed for more than 100 earthquakes with magnitudes M > 4 and depths h < 100 km. Data from the mid-latitude vertical sounding stations Kokubunji, Akita, and Yamagawa have been used for several decades before and after earthquake occurrences. The semi-transparency coefficient of the Es-layer, X = (foEs - fbEs)/fbEs, can characterize, for thin layers, the presence of small-scale plasma turbulence. It is shown that the turbulence level decreases by ~10% during the three days before earthquakes, probably due to heating of the atmosphere. On the contrary, the turbulence level increases by the same value from one to three days after the shocks. For earthquakes with magnitudes M > 5 the effect exists at distances up to 300 km from the epicenters. The effect could also exist for weak (M ~ 4) and shallow (depth < 50 km) earthquakes at distances smaller than 200 km from the epicenters.
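    The semi-transparency coefficient defined above is a simple ratio of the two critical frequencies. A one-line sketch, with illustrative frequency values in MHz:

```python
def semi_transparency(foEs, fbEs):
    """Es-layer semi-transparency coefficient X = (foEs - fbEs) / fbEs,
    used in the study as a proxy for small-scale plasma turbulence
    in thin sporadic-E layers. Inputs are critical frequencies in MHz."""
    return (foEs - fbEs) / fbEs

# Illustrative values: foEs = 4.5 MHz, fbEs = 3.0 MHz
print(semi_transparency(4.5, 3.0))  # → 0.5
```

A larger X means foEs exceeds fbEs by a wider margin, i.e., a more turbulent (less uniform) layer; the reported pre-earthquake effect is a ~10% drop in this quantity.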

  3. Ground motion prediction and earthquake scenarios in the volcanic region of Mt. Etna (Southern Italy

    NASA Astrophysics Data System (ADS)

    Langer, Horst; Tusa, Giuseppina; Scarfi, Luciano; Azzaro, Raffaela

    2013-04-01

    One of the principal issues in the assessment of seismic hazard is the prediction of relevant ground motion parameters, e.g., peak ground acceleration, radiated seismic energy, and response spectra, at some distance from the source. Here we first present ground motion prediction equations (GMPE) for horizontal components for the area of Mt. Etna and adjacent zones. Our analysis is based on 4878 three-component seismograms related to 129 seismic events with local magnitudes ranging from 3.0 to 4.8, hypocentral distances up to 200 km, and focal depths shallower than 30 km. Accounting for the specific seismotectonic and geological conditions of the considered area, we have divided our data set into three sub-groups: (i) Shallow Mt. Etna Events (SEE), i.e., typically volcano-tectonic events in the area of Mt. Etna with a focal depth of less than 5 km; (ii) Deep Mt. Etna Events (DEE), i.e., events in the volcanic region but with a depth greater than 5 km; (iii) Extra Mt. Etna Events (EEE), i.e., purely tectonic events falling outside the area of Mt. Etna. The predicted PGAs for the SEE are lower than those predicted for the DEE and the EEE, reflecting their lower high-frequency energy content. We explain this observation as due to lower stress drops. The attenuation relationships are compared to the ones most commonly used, such as those of Sabetta and Pugliese (1987) for Italy or Ambraseys et al. (1996) for Europe. Whereas our GMPEs are based on small earthquakes, the two above-mentioned attenuation relationships cover moderate to large magnitudes (up to 6.8 and 7.9, respectively). We show that extrapolating our GMPEs to magnitudes beyond the range covered by the data is misleading; at the same time, the aforementioned relationships fail to predict ground motion parameters for our data set. Despite these discrepancies, we can exploit our data for setting up scenarios for strong earthquakes for which no instrumental recordings are
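    GMPEs of this kind are usually regressions of the common functional form log10(PGA) = a + b·M + c·log10(R). The sketch below evaluates that form; the coefficients are illustrative placeholders, not the values fitted in the study.

```python
import math

def predict_log_pga(mag, r_hypo_km, a=-1.8, b=0.45, c=-1.6):
    """Evaluate a hypothetical GMPE of the common functional form
    log10(PGA) = a + b*M + c*log10(R), with R the hypocentral distance
    in km. The coefficients a, b, c here are illustrative only."""
    return a + b * mag + c * math.log10(r_hypo_km)
```

With any physically sensible coefficients (b > 0, c < 0), predicted PGA grows with magnitude and decays with distance; extrapolating such a fit far beyond the magnitude range of the data is exactly the pitfall the authors warn against.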

  4. Earthquake!

    ERIC Educational Resources Information Center

    Markle, Sandra

    1987-01-01

    A learning unit about earthquakes includes activities for primary grade students, including making inferences and defining operationally. Task cards are included for independent study on earthquake maps and earthquake measuring. (CB)

  6. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  7. The 26 May 2006 magnitude 6.4 Yogyakarta earthquake south of Mt. Merapi volcano: Did lahar deposits amplify ground shaking and thus lead to the disaster?

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Wang, R.; Luehr, B.-G.; Wassermann, J.; Behr, Y.; Parolai, S.; Anggraini, A.; Günther, E.; Sobiesiak, M.; Grosser, H.; Wetzel, H.-U.; Milkereit, C.; Sri Brotopuspito, P. J. K.; Harjadi, P.; Zschau, J.

    2008-05-01

    Indonesia is repeatedly unsettled by severe volcano- and earthquake-related disasters, which are geologically coupled to the 5-7 cm/a tectonic convergence of the Australian plate beneath the Sunda Plate. On Saturday, 26 May 2006, the southern coast of central Java was struck by an earthquake at 2254 UTC in the Sultanate of Yogyakarta. Although the magnitude reached only Mw = 6.4, it left more than 6,000 fatalities and up to 1,000,000 homeless. The main disaster area was south of Mt. Merapi Volcano, located within a narrow topographic and structural depression along the Opak River. The earthquake disaster area within the depression is underlain by thick volcaniclastic deposits commonly derived in the form of lahars from Mt. Merapi Volcano, which had a major influence leading to the disaster. In order to understand this earthquake and its consequences more precisely, a 3-month aftershock measurement campaign was performed from May to August 2006. Here we present the first location results, which suggest that the Yogyakarta earthquake occurred 10-20 km east of the disaster area, outside of the topographic depression. Using simple model calculations that take material heterogeneity into account, we illustrate how soft volcaniclastic deposits may locally amplify ground shaking at distance. As the high degree of observed damage may have been augmented by the seismic response of the volcaniclastic Mt. Merapi deposits, this work implies that the volcano had an indirect effect on the level of earthquake destruction.

  8. Large magnitude (M > 7.5) offshore earthquakes in 2012: few examples of absent or little tsunamigenesis, with implications for tsunami early warning

    NASA Astrophysics Data System (ADS)

    Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano

    2013-04-01

    We take into account some examples of offshore earthquakes that occurred worldwide in 2012 and were characterised by a "large" magnitude (Mw equal to or larger than 7.5) but produced no or little tsunami effect. Here, "little" means "lower than expected on the basis of the parent earthquake magnitude". The examples we analyse include three earthquakes along the Pacific coast of Central America (20 March, Mw=7.8, Mexico; 5 September, Mw=7.6, Costa Rica; 7 November, Mw=7.5, Mexico), the Mw=7.6 and Mw=7.7 earthquakes that occurred on 31 August and 28 October offshore of the Philippines and Alaska, respectively, and the two Indian Ocean earthquakes registered on a single day (11 April) with Mw=8.6 and Mw=8.2. For each event, we approach the problem of its tsunamigenic potential from two different perspectives. The first is purely scientific and coincides with the question: why was the ensuing tsunami so weak? The answer can be related partly to the particular tectonic setting in the source area, partly to the position of the source with respect to the coastline, and finally to the focal mechanism of the earthquake and the slip distribution on the ruptured fault. The first two pieces of information are available soon after the earthquake occurrence, while the third requires time periods on the order of tens of minutes. The second perspective is more "operational" and coincides with the tsunami early warning perspective, for which the question is: will the earthquake generate a significant tsunami and, if so, where will it strike? The Indian Ocean events of 11 April 2012 are perfect examples of the fact that information on the earthquake magnitude and position alone may not be sufficient to produce reliable tsunami warnings. We emphasise that it is of utmost importance that the focal mechanism determination is obtained in the future much more quickly than it is at present and that this

  9. Maximum magnitude in the Lower Rhine Graben

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry

    2014-05-01

    Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made depending on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained and large earthquakes are rarer, variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone, where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants, which should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with a mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date. In a next step, we will compute hazard maps for different return periods based on the
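    The resampling idea described above can be sketched as follows: draw a synthetic catalog of Gutenberg-Richter magnitudes for a given observation window and record its maximum. This is a toy version under stated assumptions (a stationary Poisson-like rate, an unbounded G-R law); all names and parameter values are illustrative, not the authors' actual procedure.

```python
import math
import random

def synthetic_max_magnitude(annual_rate, b, years, m_min=4.0, seed=0):
    """Draw one synthetic catalog of Gutenberg-Richter magnitudes and
    return its largest magnitude. annual_rate is the yearly number of
    events with M >= m_min; magnitudes follow the G-R exponential law
    with slope b (inverse-transform sampling)."""
    rng = random.Random(seed)
    n = int(annual_rate * years)
    mags = [m_min - math.log10(1.0 - rng.random()) / b for _ in range(n)]
    return max(mags)
```

With a fixed seed, a longer window reuses the shorter window's draws as a prefix, so the sampled maximum can only grow with catalog length, which mirrors the paper's point: short catalogs systematically under-sample the largest events.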

  10. Raising the science awareness of first year undergraduate students via an earthquake prediction seminar

    NASA Astrophysics Data System (ADS)

    Gilstrap, T. D.

    2011-12-01

    The public is fascinated with and fearful of natural hazards such as earthquakes. After every major earthquake there is a surge of interest in earthquake science and earthquake prediction. Yet many people do not understand the challenges of earthquake prediction and the need to fund earthquake research. An earthquake prediction seminar is offered to first-year undergraduate students to improve their understanding of why earthquakes happen, how earthquake research is done, and more specifically why it is so challenging to issue short-term earthquake predictions. Some of these students may become scientists but most will not. For the majority this is an opportunity to learn how science research works and how it is related to policy and society. The seminar is seven weeks long, two hours per week, and has been taught every year for the last four years. The material is presented conceptually; there is very little quantitative work involved. The class starts with a field trip to the Randolph College Seismic Station, where students learn about seismographs and the different types of seismic waves. Students are then provided with basic background on earthquakes. They learn how to pick arrival times using real seismograms, how to use earthquake catalogues, and how to predict the arrival time of seismic waves at any location on Earth. Next they learn about long-term, intermediate-term, short-term and real-time earthquake prediction. Discussions are an essential part of the seminar. Students are challenged to draw their own conclusions on the pros and cons of earthquake prediction. Time is designated to discuss the political and economic impact of earthquake prediction. At the end of the seven weeks students are required to write a paper and discuss the need for earthquake prediction. The class is focused not on the science alone but rather on the links between the science issues and their economic and political impact. Weekly homework assignments are used to aid and assess students' learning. Pre and

  11. Network of seismo-geochemical monitoring observatories for earthquake prediction research in India

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Hirok; Barman, Chiranjib; Iyengar, A.; Ghose, Debasis; Sen, Prasanta; Sinha, Bikash

    2013-08-01

    This paper briefly reviews the research carried out to develop multi-parametric gas-geochemical monitoring facilities dedicated to earthquake prediction research in India, through a network of seismo-geochemical monitoring observatories installed in different regions of the country. In an attempt to detect earthquake precursors, the concentrations of helium, argon, nitrogen, methane, radon-222 (222Rn), polonium-218 (218Po), and polonium-214 (214Po) emanating from hydrothermal systems are monitored continuously, round the clock, at these observatories. In this paper, we present a cross-correlation study of a number of geochemical anomalies recorded at these observatories. Using the data received from each observatory, we perform time series analyses to relate the anomalies to earthquake magnitude and epicentral distance through statistical methods and empirical formulations linking the area of influence to earthquake magnitude. Application of linear and nonlinear statistical techniques to the recorded geochemical data sets reveals a clear signature of long-range correlation in the data.
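    A lagged cross-correlation of the kind used to compare anomalies between records can be sketched as follows. The radon and helium series, the injected anomaly, and the 6-sample lag are synthetic stand-ins, not data from these observatories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two geochemical series from one observatory;
# the helium anomaly repeats the radon anomaly 6 samples later.
n = 500
radon = rng.normal(scale=0.2, size=n)
helium = rng.normal(scale=0.2, size=n)
radon[200:220] += 3.0
helium[206:226] += 3.0

def lagged_xcorr(x, y, max_lag):
    """Pearson cross-correlation of y against x at lags -max_lag..+max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corr = {}
    for k in range(-max_lag, max_lag + 1):
        xs = x[max(0, -k):n - max(0, k)]   # pairs x[t] with y[t+k]
        ys = y[max(0, k):n - max(0, -k)]
        corr[k] = float(np.mean(xs * ys))
    return corr

corr = lagged_xcorr(radon, helium, max_lag=24)
best = max(corr, key=corr.get)
print(f"best lag: {best} samples, r = {corr[best]:.2f}")
```

    The peak of the lagged correlation recovers the delay between the two anomaly signatures, which is the quantity such a cross-correlation study extracts from paired observatory records.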

  12. Diking-induced moderate-magnitude earthquakes on a youthful rift border fault: The 2002 Nyiragongo-Kalehe sequence, D.R. Congo

    NASA Astrophysics Data System (ADS)

    Wauthier, C.; Smets, B.; Keir, D.

    2015-12-01

    On 24 October 2002, an Mw 6.2 earthquake occurred in the central part of the Lake Kivu basin, Western Branch of the East African Rift. This is the largest event recorded in the Lake Kivu area since 1900. An integrated analysis of radar interferometry (InSAR), seismic, and geological data demonstrates that the earthquake occurred due to normal-slip motion on a major preexisting east-dipping rift border fault. A Coulomb stress analysis suggests that diking events, such as the January 2002 dike intrusion, could promote faulting on the western border faults of the rift in the central part of Lake Kivu. We thus infer that dike-induced stress changes can cause moderate- to large-magnitude earthquakes on major border faults during continental rifting. Continental extension processes appear complex in the Lake Kivu basin, requiring a hybrid model of strain accommodation and partitioning in the East African Rift.

  13. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  14. Prediction model of earthquake with the identification of earthquake source polarity mechanism through the focal classification using ANFIS and PCA technique

    NASA Astrophysics Data System (ADS)

    Setyonegoro, W.

    2016-05-01

    Earthquake disasters have caused considerable human and material losses. This research aims to predict the return period of earthquakes, combined with identification of the earthquake source mechanism, for a case study area in Sumatra. Earthquakes are predicted by training on historical earthquake data using the ANFIS technique. In this technique, the historical data set is compiled into intervals of daily average earthquake occurrence in a year. The output is a model of the return period of earthquake events, expressed as a daily average over a year. Once the return period model has been learned by ANFIS, polarity recognition is performed through image recognition techniques on the focal sphere, using the principal component analysis (PCA) method. As a result, the model's prediction of the average monthly return period showed a correlation coefficient of 0.014562.

  15. The 1170 and 1202 CE Dead Sea Rift earthquakes and long-term magnitude distribution of the Dead Sea Fault zone

    USGS Publications Warehouse

    Hough, S.E.; Avni, R.

    2009-01-01

    In combination with the historical record, paleoseismic investigations have provided a record of large earthquakes in the Dead Sea Rift that extends back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of the relationships developed in other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision. The large uncertainties illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historic record and show that it is consistent with a b-value distribution with a b-value of 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results reveal that an earthquake of M7.8 is expected within the zone on average every 1000 years. © 2011 Science From Israel/LPP Ltd.
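    The moment-budget argument can be illustrated with a back-of-the-envelope calculation. The b = 1 slope and the one-M7.8-per-1000-years rate come from the abstract; the 0.1-magnitude binning and the standard moment-magnitude relation log10 M0 = 1.5 Mw + 9.1 (N·m) are assumptions of this sketch, not the paper's procedure.

```python
# Back-of-the-envelope moment budget for a b = 1 Gutenberg-Richter
# population capped by one M 7.8 event per 1000 years.
b, m_max, recurrence_yr, dm = 1.0, 7.8, 1000.0, 0.1

def seismic_moment(mw):
    """Seismic moment in N·m from moment magnitude (standard relation)."""
    return 10 ** (1.5 * mw + 9.1)

def annual_rate_above(m):
    """Annual rate of events with magnitude >= m, anchored at m_max."""
    return 10 ** (b * (m_max - m)) / recurrence_yr

rate_from_max = seismic_moment(m_max) / recurrence_yr
# Sum moment over 0.1-magnitude bins below m_max; events per bin are the
# difference of the cumulative rates at the bin edges.
total = sum(
    (annual_rate_above(m_max - (i + 1) * dm) - annual_rate_above(m_max - i * dm))
    * seismic_moment(m_max - i * dm)
    for i in range(60)
)
print(f"moment rate from M{m_max} alone: {rate_from_max:.2e} N·m/yr")
print(f"whole b=1 population:            {total:.2e} N·m/yr")
```

    With b = 1 the smaller events add only a factor of roughly two to the moment budget, so the largest earthquakes dominate the release rate being compared against the plate tectonic strain rate.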

  16. Earthquake forecasting and warning

    SciTech Connect

    Rikitake, T.

    1983-01-01

    This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings of precursory phenomena are included. A summary of circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all magnitudes ≥ 7.0) in China and the 1978 Izu-Oshima earthquake in Japan is presented. The book fails to comprehensively summarize recent advances in earthquake prediction research.

  17. Current Status of a Near-Real Time High Rate (1Hz) GPS Processing applied to a Network located in Spain and surrounding for Quick Earthquake Magnitude Determination

    NASA Astrophysics Data System (ADS)

    Mendoza, Leonor; Garate, Jorge; Davila, Jose Martin; Becker, Matthias; Drescher, Ralf

    2010-05-01

    An earthquake's true size and tsunami potential can be determined using GPS data within only 15 minutes of earthquake initiation, by tracking the mean displacement of the Earth's surface associated with the arrival of seismic waves (Blewitt, 2006). We are using this approach to obtain quick assessments of earthquake magnitudes. Data files with 1 Hz sampling from continuous GPS (CGPS) networks located in Spain and surrounding areas are analyzed with the Bernese 5.0 software. Relative movements are computed to detect horizontal, as well as vertical, surface deformations due to large-magnitude earthquakes. Accuracy is expected at the millimetre level. Moreover, CGPS 1 Hz data are less sensitive to noise contamination than seismic data (Larson et al., 2003). UNIX scripts written in Perl make Bernese run batch processes every 15 minutes: the CGPS network stations' data files are downloaded and analyzed automatically. The process output is a new set of coordinates for each station, which is compared with earlier solutions, looking for deformations in near real time. The poster shows the implementation and present status of the analysis. We present results for the chosen network, along with some example time series in the three components.
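    A minimal sketch of the coordinate-comparison step: each new epoch's solution is checked against a trailing window of earlier solutions, and a large deviation flags possible coseismic deformation. The 15-minute east-component series, noise level, offset size, and detection threshold are all invented for illustration, not taken from the Spanish network.

```python
import numpy as np

# Invented 15-minute east-component solutions (metres) for one CGPS
# station over one day; a 4 cm coseismic offset is injected at epoch 60.
rng = np.random.default_rng(1)
east = rng.normal(0, 0.003, size=96)   # ~3 mm solution noise
east[60:] += 0.04

def detect_offset(series, window=12, k=6.0):
    """Return the first epoch whose deviation from the trailing-window
    mean exceeds k times the trailing-window standard deviation."""
    for t in range(window, len(series)):
        ref = series[t - window:t]
        if abs(series[t] - ref.mean()) > k * ref.std(ddof=1):
            return t
    return None

print(f"possible deformation at epoch {detect_offset(east)}")
```

    A real pipeline would apply this per component and per station and require agreement across neighbouring stations before declaring a deformation event.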

  18. Improved instrumental magnitude prediction expected from version 2 of the NASA SKY2000 master star catalog

    NASA Technical Reports Server (NTRS)

    Sande, C. B.; Brasoveanu, D.; Miller, A. C.; Home, A. T.; Tracewell, D. A.; Warren, W. H., Jr.

    1998-01-01

    The SKY2000 Master Star Catalog (MC), Version 2, and its predecessors have been designed to provide the basic astronomical input data needed for satellite acquisition and attitude determination on NASA spacecraft. Stellar positions and proper motions are the primary MC data required for operations support, followed closely by the stellar brightness observed in various standard astronomical passbands. The instrumental red-magnitude prediction subsystem (REDMAG) in the MMSCAT software package computes the expected instrumental color index (CI) [sensor color correction] from an observed astronomical stellar magnitude in the MC and the characteristics of the stellar spectrum, astronomical passband, and sensor sensitivity curve. The computation is more error prone the greater the mismatch between the sensor sensitivity curve characteristics and those of the observed astronomical passbands. This paper presents a preliminary performance analysis of a typical red-sensitive CCD ST during acquisition of sensor data from the two Ball CT-601 STs onboard the Rossi X-Ray Timing Explorer (RXTE). A comparison is made of relative star positions measured in the ST FOV coordinate system with the expected results computed from the recently released Tycho Catalogue. The comparison is repeated for a group of observed stars with nearby, bright neighbors in order to determine the tracker behavior in the presence of an interfering near neighbor (NN). The results of this analysis will be used to help define a new photoelectric photometric instrumental sensor magnitude system (S) based on several thousand bright-star magnitudes observed with the RXTE STs. This new system will be implemented in Version 2 of the SKY2000 MC to provide improved predicted magnitudes in the mission run catalogs.

  19. Comparison of strong-motion spectra with teleseismic spectra for three magnitude 8 subduction-zone earthquakes

    NASA Astrophysics Data System (ADS)

    Houston, Heidi; Kanamori, Hiroo

    1990-08-01

    A comparison of strong-motion spectra and teleseismic spectra was made for three Mw 7.8 to 8.0 earthquakes: the 1985 Michoacan (Mexico) earthquake, the 1985 Valparaiso (Chile) earthquake, and the 1983 Akita-Oki (Japan) earthquake. The decay of spectral amplitude with the distance from the station was determined, considering different measures of distance from a finite fault, and it was found to be different for these three events. The results can be used to establish empirical relations between the observed spectra and the half-space responses depending on the distance and the site condition, making it possible to estimate strong motions from source spectra determined from teleseismic records.

  20. Estimation of Maximum Magnitude (c-value) and its Certainty for Modified Gutenberg-Richter Formulas, Based on Historical and Instrumental Japanese Intraplate Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Kumamoto, T.; Hagiwara, Y.

    2002-12-01

    A-, b-, and c-values for the original Gutenberg-Richter formula (GR) and modified GR formulas (Utsu, 1978) were estimated using a dataset of combined historical (1595-1925 A.D.) and instrumental (1926-2000) Japanese earthquake data for 18 intraplate seismo-tectonic provinces depicted on a new tectonic map of Japan (Kakimi et al., 2002). The theoretical relationships between the b-values of the original and modified GR formulas, and the certainty of b- and c-values, were evaluated with respect to the dataset. The GR formula generally used for earthquake magnitude and frequency relationships demonstrates that earthquake frequency in each magnitude class is about ten times that of the next highest class. This is expressed as: log n(M) = a-bM, where n(M) is the number of earthquakes of a given magnitude M, and a- and b-values are constants representing the level of seismicity and the ratio of small to large events, respectively. In this formula, the expected maximum magnitude (c-value) in a given earthquake catalog is calculated using one more assumption: a maximum-magnitude earthquake should occur only once in a given period, because the c-value is not a characteristic parameter of the original GR formula. Utsu (1978) proposed that the GR formula be modified by introducing the c-value, and presented two formulas: a truncated GR formula (TGR), expressed as log n(M) = a - bM (M is equal to or smaller than c); n(M) = 0 (M is greater than c); and a modified GR formula (MGR), expressed as log n(M) = a - bM + log (c-M) (M is smaller than c); n(M) = 0 (M is equal to or greater than c). Calculations for 18 Japanese seismo-tectonic provinces revealed the following relation: b(GR) > b(TGR) > b(MGR). This is a theoretical relationship, which means that b- and c-values are relative parameters within one formula, and that comparison of b- and c-values between different GR formulas is meaningless. Furthermore, the distribution of b- and c-values in 18 intraplate seismo
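    The three frequency-magnitude laws quoted in the abstract translate directly into code. The a-, b-, and c-values below are illustrative, not the fitted values for the 18 Japanese provinces.

```python
import math

# Illustrative parameters for the three frequency-magnitude laws
a, b, c = 4.0, 0.9, 7.5

def log_n_gr(m):
    """Original Gutenberg-Richter: log n(M) = a - bM."""
    return a - b * m

def log_n_tgr(m):
    """Truncated GR (Utsu, 1978): GR up to M = c, no events above c."""
    return a - b * m if m <= c else -math.inf

def log_n_mgr(m):
    """Modified GR (Utsu, 1978): log n(M) = a - bM + log(c - M) for M < c."""
    return a - b * m + math.log10(c - m) if m < c else -math.inf

for m in (6.0, 7.0, 7.4):
    print(m, log_n_gr(m), log_n_tgr(m), log_n_mgr(m))
```

    Near M = c the extra log(c - M) term drives the MGR frequency down smoothly, which is why b-values fitted with MGR come out lower than those fitted with TGR or the original GR, consistent with the b(GR) > b(TGR) > b(MGR) relation reported above.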

  1. Brief Communication: On the source characteristics and impacts of the magnitude 7.2 Bohol earthquake, Philippines

    NASA Astrophysics Data System (ADS)

    Lagmay, A. M. F.; Eco, R.

    2014-10-01

    A devastating earthquake struck Bohol, Philippines, on 15 October 2013. The earthquake originated at 12 km depth from an unmapped reverse fault, which manifested on the surface for several kilometers and with maximum vertical displacement of 3 m. The earthquake resulted in 222 fatalities with damage to infrastructure estimated at USD 52.06 million. Widespread landslides and sinkholes formed in the predominantly limestone region during the earthquake. These remain a significant threat to communities as destabilized hillside slopes, landslide-dammed rivers and incipient sinkholes are still vulnerable to collapse, triggered possibly by aftershocks and heavy rains in the upcoming months of November and December. The most recent fatal temblor originated from a previously unmapped fault, herein referred to as the Inabanga Fault. Like the hidden or previously unmapped faults responsible for the 2012 Negros and 2013 Bohol earthquakes, there may be more unidentified faults that need to be mapped through field and geophysical methods. This is necessary to mitigate the possible damaging effects of future earthquakes in the Philippines.

  2. The spatiotemporal analysis of the minimum magnitude of completeness Mc and the Gutenberg-Richter law b-value parameter using the earthquake catalog of Greece

    NASA Astrophysics Data System (ADS)

    Popandopoulos, G. A.; Baskoutas, I.; Chatziioannou, E.

    2016-03-01

    Spatiotemporal mapping of the minimum magnitude of completeness Mc and the b-value of the Gutenberg-Richter law is conducted for the earthquake catalog data of Greece. The data were recorded by the seismic network of the Institute of Geodynamics of the National Observatory of Athens (GINOA) in 1970-2010 and by the Hellenic Unified Seismic Network (HUSN) in 2011-2014. It is shown that with the start of HUSN measurements, the number of recorded events more than quintupled. The magnitude of completeness Mc of the earthquake catalog for 1970-2010 varies from 2.7 to 3.5, whereas starting from April 2011 it decreases to 1.5-1.8 in the central part of the region and fluctuates around an average of 2.0 over the study region as a whole. The magnitude of completeness Mc and b-value for the catalogs of earthquakes recorded by the old (GINOA) and new (HUSN) seismic networks are compared. It is hypothesized that the magnitude of completeness Mc may affect the b-value estimates. The spatial distribution of the b-value determined from the HUSN catalog data generally agrees with the main geotectonic features of the studied territory. It is shown that the b-value is below 1 in zones of compression and larger than or equal to 1 in zones dominated by extension. The established depth dependence of the b-value is largely consistent with the hypothesis of a brittle-ductile transition zone in the Earth's crust. It is suggested that the source depth of a strong earthquake can be estimated from the depth distribution of the b-value, which can be used for seismic hazard assessment.
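    The coupling between Mc and b-value estimates is usually handled with the Aki-Utsu maximum-likelihood estimator. The sketch below assumes that estimator (not necessarily the authors' exact procedure) and checks it on a synthetic Gutenberg-Richter sample.

```python
import numpy as np

def b_value_mle(magnitudes, mc, dm=0.0):
    """Aki-Utsu maximum-likelihood b-value for events with M >= mc;
    dm is the magnitude binning width (0 for continuous magnitudes)."""
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2))

# Synthetic catalog: true b = 1.0 above a completeness magnitude of 2.0
rng = np.random.default_rng(7)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=20000)

# Choosing Mc at or above the true completeness level recovers b; in a
# real catalog, an Mc set below the completeness level biases b low.
print(f"b estimated with Mc = 2.0: {b_value_mle(mags, 2.0):.3f}")
print(f"b estimated with Mc = 3.0: {b_value_mle(mags, 3.0):.3f}")
```

    Because the estimator depends on the mean magnitude above Mc, any missed small events below the assumed Mc propagate directly into the b-value, which is the sensitivity the abstract hypothesizes.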

  3. Update of the Graizer-Kalkan ground-motion prediction equations for shallow crustal continental earthquakes

    USGS Publications Warehouse

    Graizer, Vladimir; Kalkan, Erol

    2015-01-01

    A ground-motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration and 5-percent damped pseudo spectral acceleration response ordinates of maximum horizontal component of randomly oriented ground motions was developed by Graizer and Kalkan (2007, 2009) to be used for seismic hazard analyses and engineering applications. This GMPE was derived from the greatly expanded Next Generation of Attenuation (NGA)-West1 database. In this study, Graizer and Kalkan’s GMPE is revised to include (1) an anelastic attenuation term as a function of quality factor (Q0) in order to capture regional differences in large-distance attenuation and (2) a new frequency-dependent sedimentary-basin scaling term as a function of depth to the 1.5-km/s shear-wave velocity isosurface to improve ground-motion predictions for sites on deep sedimentary basins. The new model (GK15), developed to be simple, is applicable to the western United States and other regions with shallow continental crust in active tectonic environments and may be used for earthquakes with moment magnitudes 5.0–8.0, distances 0–250 km, average shear-wave velocities 200–1,300 m/s, and spectral periods 0.01–5 s. Directivity effects are not explicitly modeled but are included through the variability of the data. Our aleatory variability model captures inter-event variability, which decreases with magnitude and increases with distance. The mixed-effects residuals analysis shows that the GK15 reveals no trend with respect to the independent parameters. The GK15 is a significant improvement over Graizer and Kalkan (2007, 2009), and provides a demonstrable, reliable description of ground-motion amplitudes recorded from shallow crustal earthquakes in active tectonic regions over a wide range of magnitudes, distances, and site conditions.

  4. Rapid decision tool to predict earthquake destruction in Sumatra by using first motion study

    NASA Astrophysics Data System (ADS)

    Bhakta, Shardul Sanjay

    The main idea of this project is to build an interactive, intelligent Geographic Information System tool that can help predict the intensity of real-time earthquakes on the Sumatra Island of Indonesia. The tool has an underlying intelligence to predict the intensity of an earthquake based on analysis of similar past earthquakes in that specific region. Whenever an earthquake takes place in Sumatra, a First Motion Study is conducted; this determines its type, depth, latitude and longitude. When the user inputs this information, the tool searches for past earthquakes with a similar First Motion Study and depth, surveys them, and predicts whether the real-time earthquake could be disastrous. The tool has been developed in Java. I have used MOJO (Map Objects Java Objects), a set of Java APIs created by ESRI, to show the map of Indonesia and earthquake locations as points. The Indonesia map, the earthquake location points, and their correlation were all designed using MOJO, a powerful library that made the tool straightforward to design. The tool is easy to use, requiring only a few input parameters for the end result. I hope this tool justifies its use in the prediction of earthquakes and helps save lives in Sumatra.

  5. Earthquake prediction research at the Seismological Laboratory, California Institute of Technology

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Nevertheless, basic earthquake-related information has always been of consuming interest to the public and the media in this part of California (fig. 2). So it is not surprising that earthquake prediction continues to be a significant research program at the laboratory. Several of the current spectrum of projects related to prediction are discussed below.

  6. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    SciTech Connect

    Takeda, Fumihide; Takeo, Makoto

    2004-12-09

    We have developed a short-term deterministic earthquake (EQ) forecasting system similar to those used for typhoons and hurricanes, which has been under test operation at the website http://www.tec21.jp/ since June 2003. We use the focus and crust displacement data recently opened to the public by the Japanese seismograph and global positioning system (GPS) networks, respectively. Our system divides the forecasting area into five regional areas of Japan, each about 5 deg. by 5 deg. We have found that it can forecast the focus, date of occurrence and magnitude (M) of an impending EQ (whose M is larger than about 6), all within narrow limits. We describe the system with two examples. One is the 2003/09/26 EQ of M 8 in the Hokkaido area, which is a retrospective (hindsight) case. The other is a successful forecast, issued in advance, of the 2004/05/30 EQ of M 6.7 off the coast of the southern Kanto (Tokyo) area.

  7. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record to understand temporal trends in earthquake activity. It is important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on lessons from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large-volume turbidites. The frequency distribution of turbidite recurrence indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to proximity and availability of sediment. While mean turbidite recurrence is similar to that of the seismogenic El Asnam fault in Algeria, geochemical analysis reveals that not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 ya. Given its similar appearance across a widespread area and correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment

  8. The universe at faint magnitudes. I - Models for the galaxy and the predicted star counts

    NASA Astrophysics Data System (ADS)

    Bahcall, J. N.; Soneira, R. M.

    1980-09-01

    A detailed model is constructed for the disk and spheroid components of the Galaxy from which the distribution of visible stars and mass in the Galaxy is calculated. The application of star counts to the determination of galactic structure parameters is demonstrated. The possibility of detecting a halo component with the aid of star counts is also investigated quantitatively. The stellar luminosity functions and scale heights are determined from observations in the solar neighborhood. The global distribution of matter is assumed, based on studies of other galaxies, to be an exponential disk plus a de Vaucouleurs spheroid. The spheroid luminosity function is found to have the same shape as the disk luminosity function over the range of absolute magnitudes (+4 to +12) that contributes significantly to the star counts for mV ≤ 30. The density of spheroid stars in the solar neighborhood is 1/800 of the value for the disk. The star counts calculated using the density variation of a de Vaucouleurs spheroid are consistent with the available data; the counts predicted with the aid of a Hubble law are inconsistent with observations at more than the two-sigma level of significance. The variations of the calculated star densities with apparent magnitude, latitude, and longitude agree well with the available star count data for the observationally well studied range of 4 ≲ mV ≲ 22. The calculated (B - V) color distributions are also in good agreement with existing data. The color data also indicate that QSOs comprise only a few percent of the total number of stellar objects to mV = 22 (mB = 22.5). The spheroid component is found to be approximately spherical. The scale lengths of the Galaxy model and computed total luminosity and M/L ratios for the disk and spheroid are in agreement with observations of other Sbc galaxies. Illustrative figures and a table of interesting characteristics (such as the mass and luminosity contained within various radii and the escape velocity

  9. Validation of a ground motion synthesis and prediction methodology for the 1988, M=6.0, Saguenay Earthquake

    SciTech Connect

    Hutchings, L.; Jarpe, S.; Kasameyer, P.; Foxall, W.

    1998-01-01

    We model the 1988, M=6.0, Saguenay earthquake. We utilize an approach that has been developed to predict strong ground motion. This approach involves developing a set of rupture scenarios based upon bounds on rupture parameters. Rupture parameters include rupture geometry, hypocenter, rupture roughness, rupture velocity, healing velocity (rise times), slip distribution, asperity size and location, and slip vector. Scenario here refers to specific values of these parameters for a hypothesized earthquake. Synthetic strong ground motions are then generated for each rupture scenario. A sufficient number of scenarios are run to span the variability in strong ground motion due to the source uncertainties. By having a suite of rupture scenarios of hazardous earthquakes for a fixed magnitude, and identifying the hazard to the site from the one-standard-deviation value of engineering parameters, we have introduced a probabilistic component into the deterministic hazard calculation. For this study we developed bounds on rupture scenarios from previous research on this earthquake. The time history closest to the observed ground motion was selected as a model for the Saguenay earthquake.

  10. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula when forecasting aftershocks, which gives the probability p0 that at least one event occurs in a given space-time-magnitude window. The forecaster, like a gambler who starts with a certain number of reputation points, bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes a NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, his reputation is reduced by 1 point; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return that he gains from this bet is 0. The rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No". In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest.
We also calculate the upper bound of the gambling score when
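    The pay-off rule described in this abstract can be sketched numerically. In this hypothetical snippet, `gambling_score` is an illustrative helper (not code from the paper); the return ratio p0/(1-p0) for a winning "No" bet is inferred by symmetry from the stated "Yes" ratio:

```python
def gambling_score(p, event_occurred, p0):
    """Pay-off (in reputation points) for a probabilistic forecast p,
    scored against a reference model assigning probability p0 to
    "at least one event in the space-time-magnitude window".

    The forecaster splits 1 reputation point: p on "Yes", 1 - p on "No".
    A winning "Yes" bet returns (1 - p0)/p0 per point staked; by symmetry
    a winning "No" bet returns p0/(1 - p0); losing stakes are forfeited.
    """
    if event_occurred:
        return p * (1 - p0) / p0 - (1 - p)
    return (1 - p) * p0 / (1 - p0) - p

# If the reference model is correct, the expected pay-off is zero
# for any forecast probability p:
p0, p = 0.1, 0.7
expected = p0 * gambling_score(p, True, p0) + (1 - p0) * gambling_score(p, False, p0)
print(abs(expected) < 1e-12)  # prints True
```

    A deterministic "Yes" forecast is the special case p = 1, which recovers the 1-point bet described in the abstract.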

  11. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that indicate the size of their faults. Rheology and stress heterogeneity within these volumes vary significantly in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions in the long and short term. Preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings.
The approach described is different from the usual

  12. ISC-GEM: Global Instrumental Earthquake Catalogue (1900-2009), III. Re-computed MS and mb, proxy MW, final magnitude composition and completeness assessment

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Bondár, István; Storchak, Dmitry A.; Engdahl, E. Robert; Bormann, Peter; Harris, James

    2015-02-01

    This paper outlines the re-computation and compilation of the magnitudes now contained in the final ISC-GEM Reference Global Instrumental Earthquake Catalogue (1900-2009). The catalogue is available via the ISC website (http://www.isc.ac.uk/iscgem/). The available re-computed MS and mb provided an ideal basis for deriving new conversion relationships to moment magnitude MW. Therefore, rather than using previously published regression models, we derived new empirical relationships using both generalized orthogonal linear and exponential non-linear models to obtain MW proxies from MS and mb. The new models were tested against true values of MW, and the newly derived exponential models were then preferred to the linear ones in computing MW proxies. For the final magnitude composition of the ISC-GEM catalogue, we preferred directly measured MW values as published by the Global CMT project for the period 1976-2009 (plus intermediate-depth earthquakes between 1962 and 1975). In addition, over 1000 publications have been examined to obtain direct seismic moment M0 and, therefore, also MW estimates for 967 large earthquakes during 1900-1978 (Lee and Engdahl, 2015) by various alternative methods to the current GCMT procedure. In all other instances we computed MW proxy values by converting our re-computed MS and mb values into MW, using the newly derived non-linear regression models. The final magnitude composition is an improvement in terms of magnitude homogeneity compared to previous catalogues. The magnitude completeness is not homogeneous over the 110 years covered by the ISC-GEM catalogue. Therefore, seismicity rate estimates may be strongly affected without a careful time window selection. In particular, the ISC-GEM catalogue appears to be complete down to MW 5.6 starting from 1964, whereas for the early instrumental period the completeness varies from ∼7.5 to 6.2. Further time and resources would be necessary to homogenize the magnitude of completeness over the

  13. Historical precipitation predictably alters the shape and magnitude of microbial functional response to soil moisture.

    PubMed

    Averill, Colin; Waring, Bonnie G; Hawkes, Christine V

    2016-05-01

    Soil moisture constrains the activity of decomposer soil microorganisms, and in turn the rate at which soil carbon returns to the atmosphere. While increases in soil moisture are generally associated with increased microbial activity, historical climate may constrain current microbial responses to moisture. However, it is not known if variation in the shape and magnitude of microbial functional responses to soil moisture can be predicted from historical climate at regional scales. To address this problem, we measured soil enzyme activity at 12 sites across a broad climate gradient spanning 442-887 mm mean annual precipitation. Measurements were made eight times over 21 months to maximize sampling during different moisture conditions. We then fit saturating functions of enzyme activity to soil moisture and extracted half saturation and maximum activity parameter values from model fits. We found that 50% of the variation in maximum activity parameters across sites could be predicted by 30-year mean annual precipitation, an indicator of historical climate, and that the effect is independent of variation in temperature, soil texture, or soil carbon concentration. Based on this finding, we suggest that variation in the shape and magnitude of soil microbial response to soil moisture due to historical climate may be remarkably predictable at regional scales, and this approach may extend to other systems. If historical contingencies on microbial activities prove to be persistent in the face of environmental change, this approach also provides a framework for incorporating historical climate effects into biogeochemical models simulating future global change scenarios. PMID:26748720
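    The parameter-extraction step described here (half-saturation and maximum-activity values from a saturating fit) can be illustrated with a minimal sketch. The functional form, variable names, and data below are hypothetical, not taken from the study; the sketch uses the classical Lineweaver-Burk linearization of a Michaelis-Menten-type curve rather than the authors' actual fitting procedure:

```python
def fit_saturating(moisture, activity):
    """Recover (max_activity, half_saturation) for the saturating model
        activity = max_activity * m / (half_saturation + m)
    via ordinary least squares on the reciprocal (Lineweaver-Burk) form:
        1/activity = (half_saturation/max_activity) * (1/m) + 1/max_activity
    """
    x = [1.0 / m for m in moisture]
    y = [1.0 / a for a in activity]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    v_max = 1.0 / intercept
    k_half = slope * v_max
    return v_max, k_half

# Synthetic check: noise-free data generated with
# max activity 12.0 and half-saturation 15.0 (hypothetical units)
moisture = [5.0, 10.0, 20.0, 40.0, 80.0]
activity = [12.0 * m / (15.0 + m) for m in moisture]
v_max, k_half = fit_saturating(moisture, activity)
```

    On noise-free data the linearization recovers the parameters exactly; with real enzyme-activity measurements a direct nonlinear fit is usually preferred, since reciprocals amplify noise at low moisture.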

  14. Fluid-faulting evolution in high definition: Connecting fault structure and frequency-magnitude variations during the 2014 Long Valley Caldera, California, earthquake swarm

    NASA Astrophysics Data System (ADS)

    Shelly, David R.; Ellsworth, William L.; Hill, David P.

    2016-03-01

    An extended earthquake swarm occurred beneath southeastern Long Valley Caldera between May and November 2014, culminating in three magnitude 3.5 earthquakes and 1145 cataloged events on 26 September alone. The swarm produced the most prolific seismicity in the caldera since a major unrest episode in 1997-1998. To gain insight into the physics controlling swarm evolution, we used large-scale cross correlation between waveforms of cataloged earthquakes and continuous data, producing precise locations for 8494 events, more than 2.5 times the routine catalog. We also estimated magnitudes for 18,634 events (~5.5 times the routine catalog), using a principal component fit to measure waveform amplitudes relative to cataloged events. This expanded and relocated catalog reveals multiple episodes of pronounced hypocenter expansion and migration on a collection of neighboring faults. Given the rapid migration and alignment of hypocenters on narrow faults, we infer that activity was initiated and sustained by an evolving fluid pressure transient with a low-viscosity fluid, likely composed primarily of water and CO2 exsolved from underlying magma. Although both updip and downdip migration were observed within the swarm, downdip activity ceased shortly after activation, while updip activity persisted for weeks at moderate levels. Strongly migrating, single-fault episodes within the larger swarm exhibited a higher proportion of larger earthquakes (lower Gutenberg-Richter b value), which may have been facilitated by fluid pressure confined in two dimensions within the fault zone. In contrast, the later swarm activity occurred on an increasingly diffuse collection of smaller faults, with a much higher b value.
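    The b-value comparison invoked here rests on the Gutenberg-Richter frequency-magnitude relation. As a hedged illustration, the snippet below uses Aki's (1965) maximum-likelihood estimator, a standard tool rather than the method of this particular study, applied to a synthetic catalog:

```python
import math
import random

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood Gutenberg-Richter b-value estimate
    for magnitudes at or above the completeness magnitude m_c:
        b = log10(e) / (mean(M) - m_c)
    """
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic catalog: magnitude excesses above m_c are exponentially
# distributed with rate b*ln(10), the G-R prediction, for b_true = 1.0
random.seed(42)
b_true, m_c = 1.0, 1.5
beta = b_true * math.log(10)
mags = [m_c + random.expovariate(beta) for _ in range(20000)]
b_hat = b_value_mle(mags, m_c)  # should be close to b_true
```

    A lower b_hat for a subset of events corresponds to the "higher proportion of larger earthquakes" noted for the strongly migrating, single-fault episodes.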

  15. Results and lessons of the 10-year experiment of large earthquake prediction made in advance with a lead time of months using Reverse Tracing of Precursors (RTP) (Invited)

    NASA Astrophysics Data System (ADS)

    Shebalin, P.

    2013-12-01

    The experiment in prospective, documented earthquake prediction using the algorithm Reverse Tracing of Precursors (RTP) was started in June 2003. The algorithm is based on the analysis of a set of intermediate-term precursors in an area of short-term, long-range activation of seismicity detected by earthquake chains. Earthquake chains are clusters of moderate-size earthquakes which extend over large distances and are formed by statistically rare pairs of events that are close in space and time. We put predictions on record at http://www.rtptest.org (with restricted access to current predictions). Predictions are not deterministic: they are expected to be true with some probability exceeding 50%. During the period June 2003 to August 2013, 30 predictions were put on record; five of them were extended in a modified area to intervals longer than the standard 9 months. Those prolongations are not considered separate predictions because they largely intersect in time and space with the corresponding initial ones. Six earthquakes out of ten have been predicted (the Hokkaido earthquake, September 25, 2003, Mw=8.3; the San Simeon earthquake in California, December 25, 2003, M=6.5; the earthquake in the sea near Sendai, Japan, August 16, 2005, Mw=7.2; the Simushir earthquake, Kuril Islands, November 15, 2006, Mw=8.3; the Andreanof Islands earthquake, December 19, 2007, M=7.2; the Tohoku earthquake, March 11, 2011, M=9.1; the Kuril Islands earthquake, April 19, 2013, M=7.2; and the Okhotsk Sea earthquake, May 24, 2013, M=8.3). Four earthquakes, from 2009 to 2012, were missed. One of them is the L'Aquila earthquake in Central Italy, April 6, 2009, M=6.3. We suppose that this miss was due to an evident increase of the magnitude cut-off in the PDE catalogue we use for RTP in Italy. We have tried to complement the catalogue retrospectively with data from CSEM; however, the alarm prior to the L'Aquila earthquake was still not diagnosed. For the three other earthquakes the situation is different. We

  16. Repeated large-magnitude earthquakes in a tectonically active, low-strain continental interior: The northern Tien Shan, Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Landgraf, A.; Dzhumabaeva, A.; Abdrakhmatov, K. E.; Strecker, M. R.; Macaulay, E. A.; Arrowsmith, Jr.; Sudhaus, H.; Preusser, F.; Rugel, G.; Merchel, S.

    2016-05-01

    The northern Tien Shan of Kyrgyzstan and Kazakhstan was affected by a series of major earthquakes in the late 19th and early 20th centuries. To assess the significance of such a pulse of strain release in a continental interior, it is important to analyze and quantify strain release over multiple time scales. We have undertaken paleoseismological investigations at two geomorphically distinct sites (Panfilovkoe and Rot Front) near the Kyrgyz capital Bishkek. Although located near the historic epicenters, neither site was affected by these earthquakes. Trenching was accompanied by dating of stratigraphy and offset surfaces using luminescence, radiocarbon, and 10Be terrestrial cosmogenic nuclide methods. At Rot Front, trenching of a small scarp did not reveal evidence for surface rupture during the last 5000 years; the scarp instead resembles an extensive debris-flow lobe. At Panfilovkoe, we estimate a Late Pleistocene minimum slip rate of 0.2 ± 0.1 mm/a, averaged over at least two, probably three, earthquake cycles. Dip-slip reverse motion along segmented, moderately steep faults produced hanging-wall collapse scarps during different events. The most recent earthquake occurred around 3.6 ± 1.3 kyr ago (1σ), with dip-slip offsets between 1.2 and 1.4 m. We calculate a probabilistic paleomagnitude between 6.7 and 7.2, in agreement with regional data from the Kyrgyz range. The morphotectonic signals in the northern Tien Shan are a prime example of deformation in a tectonically active intracontinental mountain belt and as such can help us understand the longer-term coevolution of topography and seismogenic processes in similar structural settings worldwide.

  17. Determination of focal mechanisms of intermediate-magnitude earthquakes in Mexico, based on Green's functions calculated for a 3D Earth model

    NASA Astrophysics Data System (ADS)

    Rodrigo Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala

    2015-04-01

    One important ingredient in the study of the complex active tectonics of Mexico is the analysis of earthquake focal mechanisms, or the seismic moment tensor. These can be determined through the calculation of Green's functions and subsequent inversion for moment-tensor parameters. However, this calculation becomes progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes excite waves of longer periods that interact weakly with lateral heterogeneities in the crust; for these earthquakes, 1D velocity models work well for computing the Green's functions. The opposite occurs for smaller and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle, requiring more specific regional 3D models. In this study, we calculate Green's functions for earthquakes in Mexico using a laterally heterogeneous seismic wave-speed model, comprised of the mantle model S362ANI (Kustowski et al., 2008) and the crustal model CRUST 2.0 (Bassin et al., 1990). Subsequently, we invert the observed seismograms for the seismic moment tensor using a method developed by Liu et al. (2004) and implemented by Óscar de La Vega (2014) for earthquakes in Mexico. Following a brute-force approach, in which we include all observed Rayleigh and Love waves of the Mexican National Seismic Network (Servicio Sismológico Nacional, SSN), we obtain reliable focal mechanisms for events that excite a considerable amount of low-frequency waves (Mw > 4.8). However, we are not able to consistently estimate focal mechanisms for smaller events using this method, due to high noise levels in many of the records. Excluding the noisy records, or noisy parts of the records, requires interactive editing of the data with an efficient tool. Therefore, we developed a graphical user interface (GUI), based on Python and the Python library ObsPy, that allows the editing of observed and

  18. InSAR analysis of the 2008 M 4.7 Reno-Mogul, Nevada earthquake: Evidence for co-seismic and post-seismic ground deformation associated with smaller magnitude earthquakes in the Basin and Range

    NASA Astrophysics Data System (ADS)

    Bell, J. W.; Amelung, F.

    2009-12-01

    On April 25, 2008, an M 4.7 earthquake occurred at Mogul, 10 km west of Reno, Nevada, following a two month long swarm of hundreds of small (M 1-4) events. Despite the lack of visible ground rupture, InSAR analysis of pre- and post-earthquake data reveals evidence for co-seismic and post-seismic ground deformation in the epicentral area and provides insight into contemporary tectonic processes in the western Basin and Range. Descending and ascending Envisat data acquired 1 month after the earthquake show 4-6 cm of LOS change within a 5-10 km asymmetric radius of the epicenter, delineating a maximum ground deformation pattern aligning with the seismically well-defined N35W rupture plane. The lobate deformation pattern of the LOS changes together with inverse modeling of the unwrapped interferograms (University of Miami Geodmod code) indicates that the earthquake was a right-lateral strike-slip event, consistent with the instrumental focal mechanism and a robust aftershock pattern. Further InSAR analysis of data acquired in July and August, 2008 indicate that post-seismic deformation continued for several months after the main event with as much as 2 cm of additional LOS change occurring, consistent with the continuation of intense swarm activity through August, 2008 (UNR Seismological Laboratory) and with post-seismic motion measured by GPS (Blewitt et al., 2008). Comparison of post-earthquake InSAR data indicates that no additional post-seismic deformation has occurred since August 2008. Pre-seismic GPS movement reported by Blewitt et al. (2008) was not found in the InSAR analysis, likely owing to the small-scale pre-seismic displacements. These results provide new insights into tectonic processes associated with smaller magnitude earthquakes that otherwise have no visible co-seismic deformation. Our previous InSAR studies indicate that InSAR-detectable earthquakes typically have magnitudes of M >5.0 using conventional C-band SAR data. 
The Reno-Mogul earthquake is

  19. The Virtual Quake Earthquake Simulator: Earthquake Probability Statistics for the El Mayor-Cucapah Region and Evidence of Predictability in Simulated Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.

    2015-12-01

    We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better-known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. To quantify this, we develop an alert-based forecasting metric similar to those presented in Keilis-Borok (2002) and Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation- versus quiescence-type earthquake triggering. We show that VQ exhibits both behaviors separately for independent fault sections: some fault sections exhibit activation-type triggering, while others are better characterized by quiescence-type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

  20. 2011 Van earthquake (Mw=7.2) aftershocks and their source spectra: an approach to real-time estimation of moment magnitude

    NASA Astrophysics Data System (ADS)

    Meral Ozel, N.; Kusmezer, A.

    2012-04-01

    The Converging Grid Search (CGS) algorithm was tested on broadband waveform data from large aftershocks of the October 23 Van earthquake, with hypocentral distances within 0-300 km, over a magnitude range of 4.0≤M≤5.6. The observed displacement spectra were well fit by the Brune source model over the whole frequency range for many waveforms. The estimated Mw solutions were compared to Global CMT catalogue solutions and were found to be in good agreement. To estimate Mw from a shear-wave displacement spectrum, the automatic routine named CGS was applied, with the aim of testing and developing a method for stable moment magnitude estimation suitable for real-time operation. The spectra were corrected for average anelastic attenuation and geometrical spreading factors and then scaled to compute the seismic moment at the long-period asymptote, where the spectral plateau approaching 0 Hz is flat. To this end, an automatic procedure was utilized: 1) calculating the displacement spectra for the vertical components at a given station; 2) estimating the corner frequency and seismic moment using CGS, which is based on minimizing the differences between observed and synthetic source spectra; 3) calculating the moment magnitude from the seismic moment for each station separately, with station values then averaged to give the mean value for each event. The best-fitting iteration of these parameters was obtained after a few seconds. The noise spectrum was also computed to allow a comparison of the signal-to-noise ratio before performing the inversion. Weak events with low SNR were excluded from the computations. The method, examined on the Van earthquake aftershock dataset, proved applicable for stable and reliable magnitude estimates in routine processing within a few seconds of the initial P-wave detection, although a location estimate is necessary. This allows a fast determination of the Mw magnitude and assists in measuring physical quantities of the source available for the real time
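    The spectral workflow in steps 1-3 can be sketched in outline. The brute-force grid search below is only a toy stand-in for the CGS routine (the actual converging algorithm, attenuation corrections, and station averaging are not reproduced); the Mw formula is the standard IASPEI relation for seismic moment in N·m:

```python
import math

def brune(f, omega0, fc):
    """Brune omega-squared displacement source spectrum:
    flat plateau omega0 below the corner frequency fc, f^-2 falloff above."""
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_brune(freqs, spectrum, omega0_grid, fc_grid):
    """Grid search minimizing the L2 misfit between observed and
    synthetic log spectra; returns (plateau, corner frequency)."""
    best = None
    for om in omega0_grid:
        for fc in fc_grid:
            misfit = sum((math.log10(s) - math.log10(brune(f, om, fc))) ** 2
                         for f, s in zip(freqs, spectrum))
            if best is None or misfit < best[0]:
                best = (misfit, om, fc)
    return best[1], best[2]

def mw_from_moment(m0):
    """IASPEI standard moment magnitude; m0 in newton-metres."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Synthetic noise-free spectrum with plateau 2e-3 and corner 1.2 Hz
freqs = [0.1 * k for k in range(1, 100)]
spectrum = [brune(f, 2e-3, 1.2) for f in freqs]
plateau, corner = fit_brune(freqs, spectrum,
                            [1e-3 * k for k in range(1, 11)],
                            [0.2 * k for k in range(1, 26)])
```

    In a real application the recovered plateau must still be converted to seismic moment using density, shear-wave speed, distance, and radiation-pattern factors, after the attenuation and spreading corrections the abstract describes.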

  1. Predicting the Timing and Magnitude of Tropical Mosquito Population Peaks for Maximizing Control Efficiency

    PubMed Central

    Yang, Guo-Jing; Brook, Barry W.; Bradshaw, Corey J. A.

    2009-01-01

    The transmission of mosquito-borne diseases is strongly linked to the abundance of the host vector. Identifying the environmental and biological precursors which herald the onset of peaks in mosquito abundance would give health and land-use managers the capacity to predict the timing and distribution of the most efficient and cost-effective mosquito control. We analysed a 15-year time series of monthly abundance of Aedes vigilax, a tropical mosquito species from northern Australia, to determine periodicity and drivers of population peaks (high-density outbreaks). Two sets of density-dependent models were used to examine the correlation between mosquito abundance peaks and the environmental drivers of peaks or troughs (low-density periods). The seasonal peaks of reproduction (r) and abundance () occur at the beginning of September and early November, respectively. The combination of low mosquito abundance and a low frequency of a high tide exceeding 7 m in the previous low-abundance (trough) period were the most parsimonious predictors of a peak's magnitude, with this model explaining over 50% of the deviance in . Model weights, estimated using AICc, were also relatively high for those including monthly maximum tide height, monthly accumulated tide height or total rainfall per month in the trough, with high values in the trough correlating negatively with the onset of a high-abundance peak. These findings illustrate that basic environmental monitoring data can be coupled with relatively simple density feedback models to predict the timing and magnitude of mosquito abundance peaks. Decision-makers can use these methods to determine optimal levels of control (i.e., least-cost measures yielding the largest decline in mosquito abundance) and so reduce the risk of disease outbreaks in human populations. PMID:19238191

  2. Response facilitation: implications for perceptual theory, psychotherapy, neurophysiology, and earthquake prediction.

    PubMed

    Medici, R G; Frey, A H; Frey, D

    1985-04-01

    There have been numerous naturalistic observations and anecdotal reports of abnormal animal behavior prior to earthquakes. Basic physiological and behavioral data have been brought together with geophysical data to develop a specific explanation to account for how animals could perceive and respond to precursors of impending earthquakes. The behavior predicted provides a reasonable approximation to the reported abnormal behaviors; that is, the behavior appears to be partly reflexive and partly operant. It can best be described as agitated stereotypic behavior. The explanation formulated has substantial implications for perceptual theory, psychotherapy, and neurophysiology, as well as for earthquake prediction. Testable predictions for biology, psychology, and geophysics can be derived from the explanation. PMID:3997385

  3. Earthquake mechanism and predictability shown by a laboratory fault

    USGS Publications Warehouse

    King, C.-Y.

    1994-01-01

    Slip events generated in a laboratory fault model, consisting of a circulinear chain of eight spring-connected blocks of approximately equal weight elastically driven to slide on a frictional surface, are studied. It is found that most of the input strain energy is released by a relatively few large events, which are approximately time-predictable. A large event tends to roughen the stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.

  4. A Survey Study of Significant Achievements Accomplished by "Non-mainstream" Seismologists in Earthquake Monitoring and Prediction Science in China Since 1970

    NASA Astrophysics Data System (ADS)

    Chen, I. W.

    Since 1990, the author, a British Chinese consultant, has studied and followed the significant achievements accomplished by "non-mainstream" seismologists in earthquake prediction in China since 1970. The scientific systems used include: (1) Astronomy-seismology: the relationship between special positions of certain planets (especially the moon and another planet) relative to seismically active areas on the Earth and the occurrence times of major damaging earthquakes in those areas; the relationship between the dates of magnetic storms on the Earth and the occurrence dates of major damaging earthquakes; and certain cyclic relationships between the occurrence dates of major historical earthquakes in related areas. (2) Precursor analysis: with self-developed sensors and instruments, numerous precursors were recorded. In most cases, these precursors cannot be detected by conventional seismological sensors/instruments. Through exploratory practice and theoretical studies, various relationships between different characteristics of the precursors and the occurrence time, epicenter location, and magnitude of the developing earthquake were identified and can be calculated. Through approaches quite different from conventional methods, successful predictions of quite a large number of earthquakes have been achieved, including earthquakes that occurred in mainland China, Taiwan, and Japan. (3) Imminent earthquake confirmation: with a special instrument, the background imminent state of earthquakes can be identified, and a universal earthquake-imminence signal is further identified. It can be used to confirm whether an earlier predicted earthquake is entering its imminent state and will definitely occur, or whether an earlier prediction can be released. (4) Comparative terrestrial stress survey measurements at 5 km, 7 km, and 10 km depth to identify earthquake focus zones in surveyed areas. Then, with an eight

  5. A Test of a Strong Ground Motion Prediction Methodology for the 7 September 1999, Mw=6.0 Athens Earthquake

    SciTech Connect

    Hutchings, L; Ioannidou, E; Voulgaris, N; Kalogeras, I; Savy, J; Foxall, W; Stavrakakis, G

    2004-08-06

    We test a methodology to predict the range of ground-motion hazard for a fixed-magnitude earthquake along a specific fault or within a specific source volume, and we demonstrate how to incorporate this into probabilistic seismic hazard analyses (PSHA). We modeled ground motion with empirical Green's functions. To test our methodology with the 7 September 1999, Mw=6.0 Athens earthquake, we: (1) developed constraints on rupture parameters based on prior knowledge of earthquake rupture processes and sources in the region; (2) generated impulsive point shear-source empirical Green's functions by deconvolving out the source contribution of M < 4.0 aftershocks; (3) used aftershocks that occurred throughout the area and not necessarily along the fault to be modeled; (4) ran a sufficient number of scenario earthquakes to span the full variability of possible ground motion; (5) found that our distribution of synthesized ground motions spans what actually occurred and that the distribution is realistically narrow; (6) determined that one of our source models generates records that match observed time histories well; (7) found that certain combinations of rupture parameters produced "extreme" ground motions at some stations; (8) identified that the "best-fitting" rupture models occurred in the vicinity of 38.05° N, 23.60° W, with the center of rupture near 12 km depth and near-unilateral rupture towards the areas of high damage, consistent with independent investigations; and (9) synthesized strong-motion records in high-damage areas for which records from the earthquake were not available. We then developed a demonstration PSHA for a source region near Athens utilizing synthesized ground motion rather than traditional attenuation relations. We synthesized 500 earthquakes distributed throughout the source zone likely to have Mw=6.0 earthquakes near Athens. We assumed an average return period of 1000 years for this magnitude of earthquake in the particular source zone

  6. Turning the rumor of May 11, 2011 earthquake prediction In Rome, Italy, into an information day on earthquake hazard

    NASA Astrophysics Data System (ADS)

    Amato, A.; Cultrera, G.; Margheriti, L.; Nostro, C.; Selvaggi, G.; INGVterremoti Team

    2011-12-01

    A devastating earthquake had been predicted for May 11, 2011 in Rome. This prediction was never released officially by anyone, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions. Indeed, around May 11, 2011, a planetary alignment was in fact expected, and this lent credibility to the earthquake prediction among the public. During the previous months, INGV was overwhelmed with requests for information about this supposed prediction from Roman inhabitants and tourists. Given the considerable media impact of this expected earthquake, INGV decided to organize an Open Day at its headquarters in Rome for people who wanted to learn more about Italian seismicity and earthquakes as natural phenomena. The Open Day was preceded by a press conference two days earlier, in which we discussed the prediction, presented the Open Day, and held a scientific discussion with journalists about earthquake prediction and, more generally, the real problem of seismic risk in Italy. About 40 journalists from newspapers, local and national TV stations, press agencies, and web news outlets attended the press conference, and hundreds of articles appeared in the following days advertising the May 11 Open Day. INGV opened to the public all day long (9am - 9pm) with the following program: i) meetings with INGV researchers to discuss scientific issues; ii) visits to the seismic monitoring room, open 24/7 all year; iii) guided tours through interactive exhibitions on earthquakes and the Earth's deep structure; iv) lectures on general topics, from the social impact of rumors to seismic risk reduction; v) 13 new videos on the channel YouTube.com/INGVterremoti to explain the earthquake process and give updates on various aspects of seismic monitoring in Italy; vi) distribution of books and brochures. Surprisingly, more than 3000 visitors came to visit INGV

  7. Focal mechanism determination using high-frequency waveform matching and its application to small magnitude induced earthquakes

    NASA Astrophysics Data System (ADS)

    Li, Junlun; Zhang, Haijiang; Sadi Kuleli, H.; Nafi Toksoz, M.

    2011-03-01

    We present a new method using high-frequency full waveform information to determine the focal mechanisms of small, local earthquakes monitored by a sparse surface network. During the waveform inversion, we maximize both the phase and amplitude matching between the observed and modelled waveforms. In addition, we use the polarities of the first P-wave arrivals and the average S/P amplitude ratios to better constrain the matching. An objective function is constructed to include all four criteria. An optimized grid search method is used to search over all possible ranges of source parameters (strike, dip and rake). To speed up the algorithm, a library of Green's functions is pre-calculated for each of the moment tensor components and possible earthquake locations. Optimizations in filtering and cross correlation are performed to further speed up the grid search algorithm. The new method is tested on a five-station surface network used for monitoring induced seismicity at a petroleum field. The synthetic test showed that our method is robust and efficient in determining the focal mechanism when using only the vertical component of seismograms in the frequency range of 3-9 Hz. The application to dozens of induced seismic events showed satisfactory waveform matching between modelled and observed seismograms. The majority of the events have a strike direction parallel with the major NE-SW faults in the region. The normal faulting mechanism is dominant, which suggests that the vertical stress is larger than the horizontal stress.
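
    The grid search over strike, dip, and rake described above can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: the Green's-function "library" here is random synthetic data, and the objective keeps only the zero-lag normalized cross-correlation term (omitting the polarity and S/P amplitude-ratio criteria).

```python
import numpy as np

def dc_moment_tensor(strike, dip, rake):
    """Double-couple moment tensor components (Aki & Richards convention),
    angles in degrees; returned order: Mxx, Myy, Mzz, Mxy, Mxz, Myz."""
    p, d, r = np.radians([strike, dip, rake])
    mxx = -(np.sin(d) * np.cos(r) * np.sin(2 * p) + np.sin(2 * d) * np.sin(r) * np.sin(p) ** 2)
    mxy = (np.sin(d) * np.cos(r) * np.cos(2 * p) + 0.5 * np.sin(2 * d) * np.sin(r) * np.sin(2 * p))
    mxz = -(np.cos(d) * np.cos(r) * np.cos(p) + np.cos(2 * d) * np.sin(r) * np.sin(p))
    myy = (np.sin(d) * np.cos(r) * np.sin(2 * p) - np.sin(2 * d) * np.sin(r) * np.cos(p) ** 2)
    myz = -(np.cos(d) * np.cos(r) * np.sin(p) - np.cos(2 * d) * np.sin(r) * np.cos(p))
    mzz = np.sin(2 * d) * np.sin(r)
    return np.array([mxx, myy, mzz, mxy, mxz, myz])

def synth(mt, greens):
    """Modelled waveform: moment tensor weights applied to a pre-computed
    Green's-function library (one trace per moment tensor component)."""
    return mt @ greens

def norm_corr(a, b):
    """Zero-lag normalized cross-correlation between two traces."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def grid_search(obs, greens, step=10):
    """Exhaustive search over (strike, dip, rake) maximizing waveform fit."""
    best = (-2.0, None)
    for strike in range(0, 360, step):
        for dip in range(step, 91, step):
            for rake in range(-180, 180, step):
                fit = norm_corr(obs, synth(dc_moment_tensor(strike, dip, rake), greens))
                if fit > best[0]:
                    best = (fit, (strike, dip, rake))
    return best

rng = np.random.default_rng(0)
greens = rng.standard_normal((6, 200))                # toy Green's-function library
obs = synth(dc_moment_tensor(40, 60, -90), greens)    # "observed" trace from a known mechanism
fit, sol = grid_search(obs, greens)
print(f"best fit {fit:.4f} at strike/dip/rake {sol}")
```

    Note that a double couple has two equivalent nodal planes, so the search may return either the true plane or its auxiliary plane with the same fit.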

  8. On the Possibility of Using Deep-Well Geo-Observatories for Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Esipko, O. A.; Rosaev, A. E.

    The problem of earthquake prediction is of significant interest, and both internal and external factors must be taken into account. Several publications attempt to correlate the timing of seismic events with tides and argue that earthquake prediction based on geophysical field observations is possible. According to our study of the earthquake catalogue, the significant earthquake closest in time before the Spitak event (07.12.1988) occurred in the Caucasus on 23.09.1988, accompanied by an Afghanistan earthquake on 25.09.1988, and an earthquake followed in Tajikistan on 22.01.1989, after Spitak. All these events took place at approximately the same phase of the monthly tide. On the other hand, measurements in geo-observatories based on deep wells show strong correlations between variations in some geophysical fields and cosmic factors. We studied thermal field variations in the Tyrnyauz deep well (North Caucasus) before and after the Spitak earthquake and detected changes in the thermal field that may be related to the catastrophic event. Comparison of the corresponding isotherms shows that the mean thermal gradient decreased noticeably just before the earthquake. Developing the monitoring of geothermal field variations, understanding their nature, and devising methods to account for seasonal gravitational and electromagnetic variations when detecting seismic variations would bring us closer to solving the forecast problem. The main conclusions are: 1) tidal forces were an important factor in the generation of the catastrophic Spitak earthquake; 2) monitoring geophysical field variations in well-based geo-observatories in seismically active regions may allow us to understand how physical parameters change before an earthquake, providing a basis for developing a method of earthquake prediction.

  9. Predicted Liquefaction in the Greater Oakland and Northern Santa Clara Valley Areas for a Repeat of the 1868 Hayward Earthquake

    NASA Astrophysics Data System (ADS)

    Holzer, T. L.; Noce, T. E.; Bennett, M. J.

    2008-12-01

    Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by latest Holocene alluvial fan levee deposits where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906. The liquefaction scenario maps were created with ArcGIS ModelBuilder. Peak ground accelerations first were computed with the new Boore and Atkinson NGA attenuation relation (2008, Earthquake Spectra, 24:1, p. 99-138), using VS30 to account for local site response. 
Spatial liquefaction probabilities were then estimated using the predicted ground motions.
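
    The mapping from predicted ground motion to liquefaction probability described above can be sketched with a simple logistic curve in peak ground acceleration (PGA). This is a hedged toy model: the functional form and the coefficients below are hypothetical placeholders, not the calibrated relations used in the study.

```python
import math

def liquefaction_probability(pga_g, a=1.43, b=2.0):
    """Toy logistic model for the probability of surface manifestations of
    liquefaction as a function of PGA (in g). Coefficients a and b are
    hypothetical; real curves are calibrated per geologic unit
    (e.g., sandy artificial fill vs. Holocene alluvial fan deposits)."""
    x = a + b * math.log(pga_g)
    return 1.0 / (1.0 + math.exp(-x))

# Probability rises monotonically with shaking level
for pga in (0.1, 0.2, 0.4):
    print(f"PGA {pga:.1f} g -> P(liquefaction) = {liquefaction_probability(pga):.2f}")
```

    In a scenario map workflow, a curve like this would be evaluated cell by cell on the grid of predicted PGA values, with different coefficients for each surficial geologic unit.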

  10. Exaggerated Claims About Success Rate of Earthquake Predictions: "Amazing Success" or "Remarkably Unremarkable"?

    NASA Astrophysics Data System (ADS)

    Kafka, A. L.; Ebel, J. E.

    2005-12-01

    On October 1, 2004, NASA announced on its web site, "Earthquake Forecast Program Has Amazing Success Rate." This announcement claimed that the Rundle-Tiampo earthquake forecast method has accurately predicted the locations of 15 of California's 16 largest earthquakes this decade. Since words like "amazing" carry a lot of meaning to consumers of scientific information, claims of "amazing success" should be reserved for cases where the success is truly amazing. We evaluated the statistical likelihood of the reported success rate of the Rundle-Tiampo prediction method by applying a cellular seismology approach to investigate whether proximity to past earthquakes is a sufficient hypothesis to yield the same level of success as the Rundle-Tiampo method. To delineate where to expect future earthquakes, we used the epicenters of the ANSS earthquake catalog for California from 1932 through 1999 with magnitude ≥ 4.0 ("before" earthquakes). We then tested how many of the 15 events that are shown on the NASA web page ("after" earthquakes) occurred near the "before" earthquake epicenters. We found that with only a 4 km radius around each "before" earthquake epicenter, we successfully forecast the locations of 13/15 (87%) of the "after" earthquakes, and with a 7 km radius we successfully forecast 14/15 (93%) of the earthquakes. The zones created by filling in a 7 km radius around the "before" epicenters cover 18% of the study area. The scorecard maps on the JPL "QuakeSim" web site show an 11 km margin of error for the epicenters of the forecast earthquakes. With an 11 km radius around the past epicenters (covering 31% of the map area), we catch 14/15 of the "after" earthquakes. We conclude that the success rate referred to in the NASA announcement is perhaps better characterized as "remarkably unremarkable", rather than "amazing." The 14/15 success rate for the earthquakes listed on the NASA scorecard is not a rigorous test of the Rundle-Tiampo method, since it appears that
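
    The cellular seismology test described above, counting how many later epicenters fall within a fixed radius of any earlier epicenter, can be sketched as follows. The coordinates are made-up illustrative points, not the ANSS catalog used in the study.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (inputs in degrees)."""
    r = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def hit_rate(before, after, radius_km):
    """Fraction of 'after' epicenters within radius_km of any 'before' epicenter."""
    hits = sum(
        haversine_km(lat, lon, before[:, 0], before[:, 1]).min() <= radius_km
        for lat, lon in after
    )
    return hits / len(after)

# Hypothetical epicenters (lat, lon), purely for illustration
before = np.array([[34.00, -118.00], [36.00, -120.00]])
after = [(34.01, -118.00), (35.00, -119.00)]
rate = hit_rate(before, after, radius_km=7.0)
print(f"forecast hit rate at 7 km: {rate:.0%}")
```

    The companion statistic, the fraction of the study area covered by the union of the radius-r disks, is what turns the hit rate into a meaningful skill measure.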

  11. Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Pisarenko, V. F.; Sornette, A.; Sornette, D.; Rodkin, M. V.

    2014-08-01

    The present work is a continuation and improvement of the method suggested in Pisarenko et al. (Pure Appl Geophys 165:1-42, 2008) for the statistical estimation of the tail of the distribution of earthquake sizes. The chief innovation is to combine the two main limit theorems of Extreme Value Theory (EVT), which allows us to derive the distribution of T-maxima (the maximum magnitude occurring in sequential time intervals of duration T) for arbitrary T. This distribution enables one to derive any desired statistical characteristic of the future T-maximum. We propose a method for the estimation of the unknown parameters involved in the two limit theorems corresponding to the Generalized Extreme Value distribution (GEV) and to the Generalized Pareto Distribution (GPD). We establish the direct relations between the parameters of these distributions, which permit evaluation of the distribution of the T-maxima for arbitrary T. The duality between the GEV and GPD provides a new way to check the consistency of the estimation of the tail characteristics of the distribution of earthquake magnitudes for earthquakes occurring over an arbitrary time interval. We develop several procedures and checkpoints to decrease the scatter of the estimates and to verify their consistency. We test our full procedure on the global Harvard catalog (1977-2006) and on the Fennoscandia catalog (1900-2005). For the global catalog, we obtain the following estimates: = 9.53 ± 0.52 and = 9.21 ± 0.20. For Fennoscandia, we obtain = 5.76 ± 0.165 and = 5.44 ± 0.073. The estimates of all related parameters for the GEV and GPD, including the most important form parameter, are also provided. We demonstrate again the absence of robustness of the generally accepted parameter characterizing the tail of the magnitude-frequency law, the maximum possible magnitude Mmax, and study the more stable parameter QT(q), defined as the q-quantile of the distribution of T-maxima on a future interval of duration T.
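
    As a hedged illustration of the GPD side of such an analysis, one can fit a Generalized Pareto Distribution to magnitude exceedances above a threshold with scipy. The data below are simulated, and the threshold and parameter values are arbitrary assumptions, not estimates from the Harvard or Fennoscandia catalogs.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)

# Simulate magnitude exceedances above a threshold magnitude m_u.
# A negative shape parameter xi < 0 implies a finite upper bound on
# magnitudes (at m_u + scale/|xi|), i.e., a bounded frequency-magnitude tail.
m_u = 5.0
xi_true, scale_true = -0.2, 0.5
exceedances = genpareto.rvs(xi_true, loc=0.0, scale=scale_true,
                            size=20000, random_state=rng)

# Maximum-likelihood GPD fit with the location pinned at the threshold
xi_hat, _, scale_hat = genpareto.fit(exceedances, floc=0.0)

# Implied upper bound of the magnitude distribution (finite only for xi < 0)
m_max_hat = m_u + scale_hat / abs(xi_hat)
print(f"xi = {xi_hat:.3f}, scale = {scale_hat:.3f}, implied M_max = {m_max_hat:.2f}")
```

    The abstract's warning applies here too: the implied Mmax is sensitive to the fitted shape parameter, which is why quantiles of the T-maximum distribution are the more stable quantity to report.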

  12. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  13. Aftershock activity of a M2 earthquake in a deep South African gold mine - spatial distribution and magnitude-frequency relation

    NASA Astrophysics Data System (ADS)

    Naoi, M. M.; Nakatani, M.; Kwiatek, G.; Plenkers, K.; Yabe, Y.

    2009-12-01

    An earthquake of M 2.1 occurred on December 27, 2007 in a deep South African gold mine (Yabe et al., 2008). It occurred within a sensitive high-frequency seismic network consisting of eight high-frequency AE sensors (up to 200 kHz) and a tri-axial accelerometer (up to 25 kHz). Within 150 hours following the earthquake, our AE network detected more than 20,000 events within 250 m of the center of the network. We located the aftershocks assuming a homogeneous medium (Fig. a), based on manually picked P- and S-wave arrival times. The aftershock seismicity can be clearly separated into five clusters. Each sequence obeyed Omori's law with similar p-values (p ~ 1.3). Cluster A in Fig. a is very planar: more than 90% of its aftershocks lie within a 3 m thickness, while the cluster has a lateral dimension of ~100 m x 100 m. The density of aftershocks normal to the planar cluster follows an exponential distribution with a characteristic length of about 0.6 m. The distribution of cluster A coincides with one of the nodal planes of the main shock estimated by waveform inversion; hence, cluster A is thought to delineate the main rupture. Clusters B to E coincide with the edges of the mining cavity or with background seismicity recognized before the mainshock, and remarkable off-fault aftershock activity occurred only in these four areas. We determined the moment magnitude (Mw) of 17,350 earthquakes using AE waveforms (Mw > -5.4). As the AE sensors have complex frequency characteristics, we used the amplitude in a narrow frequency band (2-4 kHz). The directivity of the AE sensors (~20 dB) was corrected by comparison with the accelerometer record. Absolute magnitudes were obtained from an empirical relationship between AE amplitude and Mw determined from the spectral level of the accelerometer record; Mw determination from the accelerometer record was done for ~0.5% of the aftershocks detected by the AE sensors. Moment magnitudes of these selected earthquakes resulted in values
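
    The modified Omori decay that each cluster follows can be written down directly. A minimal sketch, with p = 1.3 as reported above but K and c chosen arbitrarily for illustration:

```python
import numpy as np

def omori_rate(t, K=100.0, c=0.1, p=1.3):
    """Modified Omori law: aftershock rate n(t) = K / (t + c)^p,
    with t in days after the mainshock (K, c are illustrative values)."""
    return K / (t + c) ** p

def omori_count(t, K=100.0, c=0.1, p=1.3):
    """Expected number of aftershocks in (0, t], for p != 1
    (closed-form integral of omori_rate)."""
    return K * (c ** (1 - p) - (t + c) ** (1 - p)) / (p - 1)

# With p ~ 1.3 the rate decays faster than 1/t, so the cumulative count
# converges as t grows (toward K * c^(1-p) / (p - 1)).
print(f"expected events in the first 150 h (~6.25 d): {omori_count(6.25):.0f}")
```

    Fitting K, c, and p to each cluster's event times (e.g., by maximum likelihood) is how per-cluster p-values such as p ~ 1.3 are obtained.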

  14. Seismicity as a guide to global tectonics and earthquake prediction.

    NASA Technical Reports Server (NTRS)

    Sykes, L. R.

    1972-01-01

    From seismicity studies, evidence is presented for several aspects of plate-tectonic theory, including ideas of sea-floor spreading, transform faulting and underthrusting of the lithosphere in island arcs. Recent advances in seismic instrumentation, the use of computers in earthquake location, and the installation of local networks of instruments are shown to have vastly increased the data available for seismicity studies. It is pointed out that most of the world's earthquakes are located in very narrow zones along active plate margins and are intimately related to global processes in an extremely coherent manner. Important areas of uncertainty calling for further studies are also pointed out.

  15. Positive feedback, memory, and the predictability of earthquakes

    PubMed Central

    Sammis, C. G.; Sornette, D.

    2002-01-01

    We review the “critical point” concept for large earthquakes and enlarge it in the framework of so-called “finite-time singularities.” The singular behavior associated with accelerated seismic release is shown to result from a positive feedback of the seismic activity on its release rate. The most important mechanisms for such positive feedback are presented. We solve analytically a simple model of geometrical positive feedback in which the stress shadow cast by the last large earthquake is progressively fragmented by the increasing tectonic stress. PMID:11875202

  16. Rapid calculation of a Centroid Moment Tensor and waveheight predictions around the north Pacific for the 2011 off the Pacific coast of Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Polet, Jascha; Thio, Hong Kie

    2011-07-01

    We present the results of a near real-time determination of a Centroid Moment Tensor for the 2011 Tohoku quake and the subsequent rapid prediction of Pacific coast tsunami waveheights based on these CMT parameters. Initial manual CMT results for this event were obtained within 23 minutes of origin time and fully automatic results were distributed by E-mail within 33 minutes. The mechanism, depth and moment magnitude were all well constrained, as was indicated by a bootstrapping analysis. Using an existing library of tsunami Green's functions, we computed predicted waveheights in the north Pacific for several scenarios of the Tohoku earthquake that are consistent with the CMT solution. Overall, these predicted waveheights correspond well with preliminary observations around the Pacific Rim. The predictions for North America were sent out three and a half hours after the origin time of the earthquake, but this system has the potential to provide these predictions within minutes after receiving the CMT solution.

  17. The Parkfield earthquake prediction of October 1992; the emergency services response

    USGS Publications Warehouse

    Andrews, R.

    1992-01-01

    The science of earthquake prediction is interesting and worthy of support. In many respects the ultimate payoff of earthquake prediction or earthquake forecasting is how the information can be used to enhance public safety and public preparedness. This is a particularly important issue here in California where we have such a high level of seismic risk historically, and currently, as a consequence of activity in 1989 in the San Francisco Bay Area, in Humboldt County in April of this year (1992), and in southern California in the Landers-Big Bear area in late June of this year (1992). We are currently very concerned about the possibility of a major earthquake, one or more, happening close to one of our metropolitan areas. Within that context, the Parkfield experiment becomes very important. 

  18. Unusual Animal Behavior Preceding the 2011 Earthquake off the Pacific Coast of Tohoku, Japan: A Way to Predict the Approach of Large Earthquakes

    PubMed Central

    Yamauchi, Hiroyuki; Uchiyama, Hidehiko; Ohtani, Nobuyo; Ohta, Mitsuaki

    2014-01-01

    Simple Summary Large earthquakes (EQs) cause severe damage to property and people. They occur abruptly, and it is difficult to predict their time, location, and magnitude. However, there are reports of abnormal changes occurring in various natural systems prior to EQs, among which unusual animal behaviors (UABs) are important phenomena. These UABs could be useful for predicting EQs, although their reliability remains uncertain. We report on changes in particular animal species preceding a large EQ to improve research on predicting EQs. Abstract Unusual animal behaviors (UABs) have been observed before large earthquakes (EQs); however, their mechanisms are unclear. While information on UABs has been gathered after many EQs, few studies have focused on the proportion of UABs that emerged or on specific behaviors prior to EQs. On 11 March 2011, an EQ (Mw 9.0) occurred in Japan, leaving about twenty thousand people dead or missing. We surveyed UABs of pets preceding this EQ using a questionnaire. Additionally, we explored whether dairy cow milk yields varied before this EQ in particular locations. In the results, 236 of 1,259 dog owners and 115 of 703 cat owners observed UABs in their pets, with restless behavior being the most prominent change in both species. Most UABs occurred within one day of the EQ, and the UABs showed a precursory relationship with epicentral distance. Interestingly, cow milk yields in a milking facility within 340 km of the epicenter decreased significantly about one week before the EQ, whereas cows in facilities farther away showed no significant decreases. Since both the pets' behavior and the dairy cows' milk yields were affected prior to the EQ, with careful observation they could contribute to EQ predictions. PMID:26480033

  19. Real time test of the long-range aftershock algorithm as a tool for mid-term earthquake prediction in Southern California

    NASA Astrophysics Data System (ADS)

    Prozorov, A. G.; Schreider, S. Yu.

    1990-04-01

    The performance of an earthquake prediction algorithm published in 1982 is examined in this paper. The algorithm is based on the hypothesis of long-range interaction between strong and moderate earthquakes in a region. It was applied to the prediction of earthquakes with M ≥ 6.4 in Southern California for the time interval 1932-1979. The retrospective results were as follows: 9 out of 10 strong earthquakes were predicted, with an average spatial accuracy of 58 km and an average delay time (the interval between a strong earthquake and its best precursor) of 9.4 years, varying from 0.8 to 27.9 years. During the time interval following the period studied in that publication, namely 1980-1988, four earthquakes occurred in the region with a magnitude of M ≥ 6.4 in at least one of the catalogs, Caltech or NOAA. Three earthquakes, Coalinga of May 1983, Chalfant Valley of July 1986, and Superstition Hills of November 1987, were successfully predicted by the published algorithm. The missed event is a pair of Mammoth Lakes earthquakes of May 1980, which we consider as one event because of their closeness in time and space. This event occurred near the northern boundary of the region; it, too, would have been predicted had we moved the northern boundary from 38°N to 39°N, and the precision of the prediction in that case would have been 30 km. The average area declared by the algorithm as an area of increased probability of a strong earthquake, i.e., the area within 111 km of all long-range aftershocks currently present on the map of the region during 1980-1988, equals 47% of the total area of the region if the latter is measured in accordance with the density distribution of earthquakes in California, approximated by the catalog of earthquakes with M ≥ 5; in geometrical terms it is approximately 17% of the total area. Thus the result of the real-time test shows a 1.6-fold increase in the occurrence of C-events in the alarmed area relative to the

  20. From Earthquake Prediction Research to Time-Variable Seismic Hazard Assessment Applications

    NASA Astrophysics Data System (ADS)

    Bormann, Peter

    2011-01-01

    The first part of the paper defines the terms and classifications common in earthquake prediction research and applications. This is followed by short reviews of major earthquake prediction programs initiated since World War II in several countries, for example the former USSR, China, Japan, the United States, and several European countries. It outlines the underlying expectations, concepts, and hypotheses, introduces the technologies and methodologies applied and some of the results obtained, which include both partial successes and failures. Emphasis is laid on discussing the scientific reasons why earthquake prediction research is so difficult and demanding and why the prospects are still so vague, at least as far as short-term and imminent predictions are concerned. However, classical probabilistic seismic hazard assessments, widely applied during the last few decades, have also clearly revealed their limitations. In their simple form, they are time-independent earthquake rupture forecasts based on the assumption of stable long-term recurrence of earthquakes in the seismotectonic areas under consideration. Therefore, during the last decade, earthquake prediction research and pilot applications have focused mainly on the development and rigorous testing of long and medium-term rupture forecast models in which event probabilities are conditioned by the occurrence of previous earthquakes, and on their integration into neo-deterministic approaches for improved time-variable seismic hazard assessment. The latter uses stress-renewal models that are calibrated for variations in the earthquake cycle as assessed on the basis of historical, paleoseismic, and other data, often complemented by multi-scale seismicity models, the use of pattern-recognition algorithms, and site-dependent strong-motion scenario modeling. International partnerships and a global infrastructure for comparative testing have recently been developed, for example the Collaboratory for the Study of

  1. Tsunami hazard warning and risk prediction based on inaccurate earthquake source parameters

    NASA Astrophysics Data System (ADS)

    Goda, Katsuichiro; Abilova, Kamilla

    2016-02-01

    This study investigates the issues related to underestimation of the earthquake source parameters in the context of tsunami early warning and tsunami risk assessment. The magnitude of a very large event may be underestimated significantly during the early stage of the disaster, resulting in the issuance of incorrect tsunami warnings. Tsunamigenic events in the Tohoku region of Japan, where the 2011 tsunami occurred, are focused on as a case study to illustrate the significance of the problems. The effects of biases in the estimated earthquake magnitude on tsunami loss are investigated using a rigorous probabilistic tsunami loss calculation tool that can be applied to a range of earthquake magnitudes by accounting for uncertainties of earthquake source parameters (e.g., geometry, mean slip, and spatial slip distribution). The quantitative tsunami loss results provide valuable insights regarding the importance of deriving accurate seismic information as well as the potential biases of the anticipated tsunami consequences. Finally, the usefulness of rigorous tsunami risk assessment is discussed in defining critical hazard scenarios based on the potential consequences due to tsunami disasters.

  2. Simulation of broadband ground motion including nonlinear soil effects for a magnitude 6.5 earthquake on the Seattle fault, Seattle, Washington

    USGS Publications Warehouse

    Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.

    2002-01-01

    The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid, low-frequency, high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. High frequencies (> 1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω-2 displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles together with other soil parameters are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the overconsolidation ratio.

  3. Is Earthquake Prediction Possible from Short-Term Foreshocks?

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Gerassimos; Avlonitis, Markos; Di Fiore, Boris; Minadakis, George

    2015-04-01

    Foreshocks preceding mainshocks in the short term, from minutes to a few months before the mainshock, have been known for several decades. Understanding the generation mechanisms of foreshocks has been supported by seismicity observations and statistics, laboratory experiments, theoretical considerations, and simulation results. However, important issues remain open. For example: (1) How are foreshocks defined? (2) Why are some mainshocks preceded by foreshocks while others are not? (3) Does the mainshock size depend on some attributes of the foreshock sequence? (4) Is it possible to discriminate foreshocks from other seismicity styles (e.g., swarms, aftershocks)? To address these issues, we reviewed about 400 papers, reports, books, and other documents referring to foreshocks as well as to relevant laboratory experiments. We found that different foreshock definitions are used by different authors, that the proportion of mainshocks preceded by foreshocks increases as monitoring capabilities improve, and that foreshock activity depends on source mechanical properties and is favoured by material heterogeneity. Also, the mainshock size does not depend on the largest foreshock size but rather on the foreshock area. Seismicity statistics may allow an effective discrimination of foreshocks from other seismicity styles, since during foreshock activity the seismicity rate increases with the inverse of time and, at the same time, the b-value of the G-R relationship as a rule drops significantly. Our literature survey showed that only in recent years have the seismicity catalogs of some well-monitored areas become adequately complete for searching for foreshock activity. Therefore, we investigated a set of "good foreshock examples" covering a wide range of mainshock magnitudes from 4.5 to 9 in Japan (Tohoku 2011), S. California, Italy (including L'Aquila 2009) and Greece. 
The good examples used indicate that foreshocks

  4. Geometrical Scaling of the Magnitude Frequency Statistics of Fluid Injection Induced Earthquakes and Implications for Assessment and Mitigation of Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Dinske, C.; Shapiro, S. A.

    2015-12-01

    To study the influence of size and geometry of hydraulically perturbed rock volumes on the magnitude statistics of induced events, we compare b value and seismogenic index estimates derived from different algorithms. First, we use standard Gutenberg-Richter approaches like the least square fit and the maximum likelihood technique. Second, we apply the lower bound probability fit (Shapiro et al., 2013, JGR, doi:10.1002/jgrb.50264), which takes the finiteness of the perturbed volume into account. The different estimates systematically deviate from each other, and the deviations are larger for smaller perturbed rock volumes. This means that the frequency-magnitude distribution is most affected for small injection volumes and short injection times, resulting in a high apparent b value. In contrast, the specific magnitude value, the quotient of seismogenic index and b value (Shapiro et al., 2013, JGR, doi:10.1002/jgrb.50264), appears to be a unique seismotectonic parameter of a reservoir location. Our results confirm that it is independent of the size of the perturbed rock volume. The specific magnitude is hence an indicator of the magnitudes that one can expect for a given injection. Several performance tests to forecast the magnitude frequencies of induced events show that the seismogenic index model provides reliable predictions, which confirms its applicability as a forecast tool, particularly if applied in real-time monitoring. The specific magnitude model can be used to predict an asymptotic upper limit of probable frequency-magnitude distributions of induced events. 
We also conclude from our analysis that event triggering by pore pressure diffusion, together with the scaling of the frequency-magnitude distribution with the size of the perturbed rock volume, is consistent with the reported relation between the upper bound of maximum seismic moment and injected fluid volume (McGarr, 2014, JGR, doi:10.1002/2013JB010597), particularly if nonlinear effects in the diffusion process
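
    The maximum likelihood b-value estimation mentioned above is commonly done with the Aki/Utsu formula. A minimal sketch on a simulated Gutenberg-Richter catalog; the catalog, completeness magnitude, and true b below are assumptions for illustration:

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_c,
    where dm is the magnitude binning width of the catalog:
    b = log10(e) / (mean(M) - (m_c - dm/2))."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Simulated catalog with true b = 1.0, complete above m_c = 1.5
rng = np.random.default_rng(7)
b_true, m_c, dm = 1.0, 1.5, 0.1
cont = (m_c - dm / 2) + rng.exponential(scale=np.log10(np.e) / b_true, size=50000)
mags = np.round(cont / dm) * dm      # bin to the catalog's 0.1 resolution
print(f"estimated b = {b_value_mle(mags, m_c, dm):.3f}")
```

    The abstract's point can be seen by shrinking the simulated sample and truncating its largest magnitudes, which mimics a small perturbed volume: the estimated b then drifts upward from the true value.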

  5. An Update on the Activities of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Liukis, M.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Werner, M. J.; Jordan, T. H.

    2013-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecast experiments. There are now CSEP testing centers in California, New Zealand, Japan, and Europe, and 364 models are under evaluation. In this presentation, we describe how the testing center hosted by the Southern California Earthquake Center (SCEC) has evolved to meet CSEP objectives and share our experiences in operating the center. The SCEC testing center has been operational since September 1, 2007, and currently hosts 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the western Pacific, and a global testing region. We are currently working to reduce testing latency and to develop procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a Department of Homeland Security project to register and test external forecast procedures from experts outside seismology. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss how we apply CSEP infrastructure to geodetic transient detection and the evaluation of the ShakeAlert system for earthquake early warning (EEW), and how CSEP procedures are being adopted for intensity prediction and ground motion prediction experiments. cseptesting.org

  6. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  7. Resting EEG in Alpha and Beta Bands Predicts Individual Differences in Attentional Blink Magnitude

    ERIC Educational Resources Information Center

    MacLean, Mary H.; Arnell, Karen M.; Cote, Kimberly A.

    2012-01-01

    Accuracy for a second target (T2) is reduced when it is presented within 500 ms of a first target (T1) in a rapid serial visual presentation (RSVP)--an attentional blink (AB). There are reliable individual differences in the magnitude of the AB. Recent evidence has shown that the attentional approach that an individual typically adopts during a…

  8. Success! Detailed Pre-event Analysis Identified the Slip Area and Magnitude of the Sept. 2012 MW 7.6 Nicoya Earthquake

    NASA Astrophysics Data System (ADS)

    Newman, A. V.; Protti, M.; Gonzalez, V. M.; Dixon, T. H.; Schwartz, S. Y.; Feng, L.; Peng, Z.; Marshall, J.; Malservisi, R.; Owen, S. E.

    2013-05-01

    On September 5th, 2012, a moment magnitude (MW) 7.6 earthquake struck the seismogenic megathrust of Nicoya, Costa Rica. Though we did not know precisely when it would strike, this event was not unexpected, and it occurred after substantial pre-event scientific discovery and earthquake infrastructure development. Beginning in the late 1990s, Nicoya, Costa Rica was recognized by the U.S. National Science Foundation MARGINS program as a focus area for seismogenic zone studies, in part because of the unique proximity of land to the active subduction megathrust. The region also has very fast convergence (~9 cm/a) and has suffered regular M7+ earthquakes in 1853, 1900 and 1950. Another similar event was expected by many. Pre-event analysis identified the structure of the subduction interface [Newman et al., GRL, 2002; DeShon et al., GJI, 2006], the location and rate changes of ongoing microseismicity [Newman et al., GRL, 2002; Ghosh et al., GRL, 2008], the location and degree of locking that developed during the late interseismic period [Norabuena et al., JGR, 2006; Feng et al., JGR, 2012], and its relation to ongoing low-frequency earthquakes, subduction tremor, and episodic slip events [Walter et al., GRL, 2011; Outerbridge et al., JGR, 2010; Jiang et al., G3, 2012]. Feng et al. [2012], using campaign and continuous GPS data through 2012, identified a complex locked 50x50 km patch along the central coast of Nicoya, the locale that failed in September 2012, and concluded that the region had the potential to fail in an MW 7.8 event should the most recent locking be representative of behavior since the last major event in 1950. In operation at the time of the event was a substantial NSF-funded continuous GPS (17 station) and seismic (18 station) network maintained by USF, UCSC, and GIT, in cooperation with OVSICORI. The seismic network captured the initial motions of the mainshock before clipping, as well as pre-shock and aftershock activity [Walter et al. (this meeting), 2013].

  9. Can an earthquake prediction and warning system be developed?

    USGS Publications Warehouse

    Ambraseys, N.N.

    1990-01-01

    Over the last 20 years, natural disasters have killed nearly 3 million people and disrupted the lives of over 800 million others. In 2 years there were more than 50 serious natural disasters, including landslides in Italy, France, and Colombia; a typhoon in Korea; wildfires in China and the United States; a windstorm in England; grasshopper plagues in the Horn of Africa and the Sahel; tornadoes in Canada; devastating earthquakes in Soviet Armenia and Tadzhikistan; infestations in Africa; landslides in Brazil; and tornadoes in the United States.

  10. Estimating Earthquake Magnitude from the Kentucky Bend Scarp in the New Madrid Seismic Zone Using Field Geomorphic Mapping and High-Resolution LiDAR Topography

    NASA Astrophysics Data System (ADS)

    Kelson, K. I.; Kirkendall, W. G.

    2014-12-01

    Recent suggestions that the 1811-1812 earthquakes in the New Madrid Seismic Zone (NMSZ) ranged from M6.8-7.0 versus M8.0 have implications for seismic hazard estimation in the central US. We more accurately identify the location of the NW-striking, NE-facing Kentucky Bend scarp along the northern Reelfoot fault, which is spatially associated with the Lake County uplift, contemporary seismicity, and changes in the Mississippi River from the February 1812 earthquake. We use 1m-resolution LiDAR hillshades and slope surfaces, aerial photography, soil surveys, and field geomorphic mapping to estimate the location, pattern, and amount of late Holocene coseismic surface deformation. We define eight late Holocene to historic fluvial deposits, and delineate younger alluvia that are progressively inset into older deposits on the upthrown, western side of the fault. Some younger, clayey deposits indicate past ponding against the scarp, perhaps following surface deformational events. The Reelfoot fault is represented by sinuous breaks-in-slope cutting across these fluvial deposits, locally coinciding with shallow faults identified via seismic reflection data (Woolery et al., 1999). The deformation pattern is consistent with NE-directed reverse faulting along single or multiple SW-dipping fault planes, and the complex pattern of fluvial deposition appears partially controlled by intermittent uplift. Six localities contain scarps across correlative deposits and allow evaluation of cumulative surface deformation from LiDAR-derived topographic profiles. Displacements range from 3.4±0.2 m, to 2.2±0.2 m, 1.4±0.3 m, and 0.6±0.1 m across four progressively younger surfaces. The spatial distribution of the profiles argues against the differences being a result of along-strike uplift variability. We attribute the lesser displacements of progressively younger deposits to recurrent surface deformation, but do not yet interpret these initial data with respect to possible earthquake

  11. Temperatures of Thermal and Slightly Thermal Springs on Mount Hood, Oregon, Apparently Unperturbed by the Magnitude-4.5 Earthquake on June 29, 2002

    NASA Astrophysics Data System (ADS)

    Nathenson, M.; Mariner, R. H.

    2002-12-01

    On the basis of water chemistry, three distinct hydrothermal systems have been identified on Mount Hood. Swim Warm Springs has a series of vents with temperatures ranging from 9° to 25°C with temperatures determined by mixing of thermal and nonthermal water. The hottest feature was 25.6° to 26.2°C in 1976-78, 25°C in 1997, and 24.7°C in 2001. The hot-water component is interpreted to have a source water that boiled from 187°C, re-equilibrated at 96°C, and then mixed with nonthermal water to produce the range of compositions found in various springs. The Meadows Spring is a slightly thermal spring with measured temperatures of 4.8°, 6.1°, 6.6°C in 1997, 1999, and 2001 related to mixing of thermal and nonthermal water. The hot-water component is interpreted to have a source water that boiled from 223°C, re-equilibrated at 94°C, and then mixed with nonthermal water to produce the range of compositions found in the spring over several years. Both systems contain water from precipitation at high elevation. The summit fumaroles have gas-geothermometer temperatures generally over 300°C, indicating that they are not the steam discharge from the Swim and Meadows hydrothermal systems. Field measurements in July-August, 2002, after the magnitude-4.5 earthquake of June 29, 2002, showed that the highest-temperature vent at Swim Warm Springs was 25.7°C, similar to values found in other years. Measurements on a hot afternoon and a cool morning yielded temperatures of 25.7° and 25.2°C, indicating that this low-flow feature is subject to some solar heating. The Meadows Spring was 6.3°C, consistent with its previous behavior of mixing. The lower temperature indicates that there is a variability associated with unknown hydrologic factors rather than confirming an apparent trend of continuously increasing temperatures for the 1997-2001 period. The Crater Rock fumarole was 89°C, similar to previous measurements. Post-earthquake measurements of spring temperatures

  12. Recent Developments within the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2014-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecast experiments. There are now CSEP testing centers in California, New Zealand, Japan, and Europe, with 430 models under evaluation. In this presentation, we describe how the Southern California Earthquake Center (SCEC) testing center has evolved to meet CSEP objectives and we share our experiences in operating the center. The SCEC testing center has been operational since September 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the western Pacific, and a global testing region. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a Department of Homeland Security project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010 Darfield earthquake sequence formed an important addition to CSEP activities, in which the predictive skills of physics-based and statistical forecasting models were compared. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and the evaluation of the ShakeAlert system for earthquake early warning (EEW), and how CSEP procedures are being adopted for intensity prediction and ground motion prediction experiments.

  13. Magnitudes of selected stellar occultation candidates for Pluto and other planets, with new predictions for Mars and Jupiter

    NASA Technical Reports Server (NTRS)

    Sybert, C. B.; Bosh, A. S.; Sauter, L. M.; Elliot, J. L.; Wasserman, L. H.

    1992-01-01

    Occultation predictions for the planets Mars and Jupiter are presented along with BVRI magnitudes of 45 occultation candidates for Mars, Jupiter, Saturn, Uranus, and Pluto. Observers can use these magnitudes to plan observations of occultation events. The optical depth of the Jovian ring can be probed by a nearly central occultation on 1992 July 8. Mars occults an unusually red star in early 1993, and the occultations for Pluto involving the brightest candidates would possibly occur in the spring of 1992 and the fall of 1993.

  14. Fluids/faults relationships and the earthquake prediction related to the April 6th 2009, L'Aquila Earthquake.

    NASA Astrophysics Data System (ADS)

    Italiano, Francesco; Martinelli, Giovanni; Bonfanti, Pietro; Lemmi, Margherita; Bovini, Sergio

    2010-05-01

    The seismic crisis that hit Central Italy one year ago killed 300 people among the ruins, and the polemics caused by an unheeded alarm based on radon data focused attention on the relationship between the scientific and the social/political worlds. It is commonly accepted that it is impossible to forecast an earthquake, so the word "prediction" is generally rejected by both scientists and politicians; nevertheless, we can provide tools to better understand how seismogenesis acts on the fluids circulating in any seismic area of the world, and the scientific community has to find a way to improve this knowledge through closer cooperative work. Earthquake prediction still represents one of the biggest unsolved problems for humankind. The seismic crisis that struck the Central Apennines (Abruzzo Region, Italy) has also clearly shown that the attempt to provide an earthquake prediction caused alarm due to incorrect use of scientific information (based, moreover, on only one parameter: radon), and the consequence was a loss of credibility for the geochemical (and not only geochemical) scientific community. A geochemical survey was carried out just after the destructive M6.3 earthquake, following the methodological approach developed during ten years of geochemical monitoring of the nearby seismic-prone area of the Umbria-Marche region, severely damaged by the 1997-98 seismic crisis. The results allowed us to understand some aspects of the relationships between the circulating fluids and the deformed crust affected by the faulting activity. Gas samples from soils and wells (locally known as "blowing" wells) were collected, alongside soil degassing measurements carried out over the fractured area. The collected data were compared to former information on the geochemical features of the fluids circulating over the Central Apennines, already hit by recent strong seismic shocks (e.g. the Umbria-Marche region). The aim was to evaluate the

  15. The Earthquake Prediction Experiment on the Basis of the Jet Stream's Precursor

    NASA Astrophysics Data System (ADS)

    Wu, H. C.; Tikhonov, I. N.

    2014-12-01

    Simultaneous analysis of jet stream maps and EQ data for M > 6.0 has been made. 58 EQ cases that occurred in 2006-2010 were studied. It has been found that an interruption, or a crossing of velocity flow lines, above the epicenter of an EQ takes place 1-70 days prior to the event, with a duration of 6-12 hours. The assumption is that the jet stream goes up or down near an epicenter. In 45 cases the distance between the epicenter and the jet stream precursor does not exceed 90 km. The forecast rate during the 30 days before the EQ was 66.1% (Wu and Tikhonov, 2014). This technique has been used to predict strong EQs, with predictions pre-registered on a website (for example, the 23 October 2011 M 7.2 EQ (Turkey); the 20 May 2012 M 6.1 EQ (Italy); the 16 April 2013 M 7.8 EQ (Iran); the 12 November 2013 M 6.6 EQ (Russia); the 03 March 2014 M 6.7 Ryukyu EQ (Japan); the 21 July 2014 M 6.2 Kuril EQ). We obtain satisfactory accuracy of the epicenter location, and we define a short alarm period; these are the positive aspects of the forecast. However, estimates of magnitude contain a large uncertainty. Reference: Wu, H.C., Tikhonov, I.N., 2014. Jet stream anomalies as possible short-term precursors of earthquakes with M > 6.0. Research in Geophysics, Special Issue on Earthquake Precursors. Vol. 4, No. 1. doi:10.4081/rg.2014.4939. The precursor of the M9.0 Japan EQ of 2011/03/11 (fig. 1). A. M6.1 Italy EQ (2012/05/20, 44.80 N, 11.19 E, H = 5.1 km). Prediction: 2012/03/20~2012/04/20 (45.6 N, 10.5 E), M > 5.5 (fig. 2) http://ireport.cnn.com/docs/DOC-764800 B. M7.8 Iran EQ (2013/04/16, 28.11 N, 62.05 E, H = 82.0 km). Prediction: 2013/01/14~2013/02/04 (28.0 N, 61.3 E), M > 6.0 (fig. 3) http://ireport.cnn.com/docs/DOC-910919 C. M6.6 Russia EQ (2013/11/12, 54.68 N, 162.29 E, H = 47.2 km). Prediction: 2013/10/27~2013/11/13 (56.0 N, 162.9 E), M > 5.5 http://ireport.cnn.com/docs/DOC-1053599 D. M6.7 Japan EQ (2014/03/03, 27.41 N, 127.34 E, H = 111.2 km). Prediction: 2013/12/02~2014/01/15 (26.7 N, 128.1 E), M > 6.5 (fig. 4) http

  16. Bayesian prediction of earthquake network based on space-time influence domain

    NASA Astrophysics Data System (ADS)

    Zhang, Ya; Zhao, Hai; He, Xuan; Pei, Fan-Dong; Li, Guang-Guang

    2016-03-01

    Bayesian networks (BNs) are used to analyze the conditional dependencies among different events, expressed as conditional probabilities. Scientists have already investigated seismic activity using BNs. Recently, the earthquake network has been used as a novel methodology to analyze the relationships among earthquake events. In this paper, we propose a way to predict earthquakes from a new perspective. The BN is constructed, after processing, from the earthquake network based on the space-time influence domain. The BN parameters are then learnt using cases designed from the seismic data in the period between 00:00:00 on January 1, 1992 and 00:00:00 on January 1, 2012. Finally, predictions are made for the data in the period between 00:00:00 on January 1, 2012 and 00:00:00 on January 1, 2015 by combining the BN with the learnt parameters. The results show that the success rate of the prediction, including delayed prediction, is about 65%. It is also discovered that the predictions for some of the investigated nodes have a high rate of accuracy.
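
The conditional-probability machinery behind such a BN prediction can be sketched minimally as follows. The two-parent node, its conditional probability table, and the 0.5 decision threshold are illustrative assumptions for this sketch, not values from the paper.

```python
# Minimal sketch of Bayesian-network-style prediction over an earthquake
# network: each node is binary ("active" = a recent event in that region),
# and a conditional probability table (CPT) gives P(child active | parents).
# All node structure and probabilities below are illustrative assumptions.

# CPT keyed by the (True/False) states of the two parent nodes.
cpt = {
    (True, True): 0.65,
    (True, False): 0.40,
    (False, True): 0.35,
    (False, False): 0.10,
}

def predict(parent_states, threshold=0.5):
    """Return (probability, predicted-active?) for the child node."""
    p = cpt[parent_states]
    return p, p >= threshold

p, active = predict((True, False))
print(p, active)  # 0.4 False
```

In a learnt BN the CPT entries would come from counting cases in the 1992-2012 training catalog rather than being fixed by hand.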

  17. How to predict Italy L'Aquila M6.3 earthquake

    NASA Astrophysics Data System (ADS)

    Guo, Guangmeng

    2016-04-01

    Based on a satellite cloud anomaly that appeared over eastern Italy on 21-23 April 2012, we successfully predicted the M6.0 quake that occurred in northern Italy. Here we checked the satellite images over Italy for 2011-2013, and 21 cloud anomalies were found. Their possible correlation with earthquakes larger than M4.7 located in Italy's main fault systems was statistically examined by assuming various lead times. The result shows that when the lead-time interval is set to 23≤ΔT≤45 days, 8 of the 10 quakes were preceded by cloud anomalies. A Poisson random test shows that the AAR (anomaly appearance rate) and EOR (EQ occurrence rate) are much higher than the values expected by chance. This study supports a relation between cloud anomalies and earthquakes in Italy. With this method, we find that the L'Aquila earthquake could also have been predicted from a cloud anomaly.
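
The kind of chance test invoked above can be illustrated with a simple binomial tail computation: if anomaly lead windows covered a fraction p0 of the study period purely by chance, how likely is it that at least 8 of 10 quakes fall inside one? Both the binomial formulation (in place of the paper's Poisson test) and the value p0 = 0.2 are illustrative assumptions.

```python
import math

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed from the exact pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance probability that >= 8 of 10 quakes are preceded by an anomaly,
# assuming each quake independently has a 20% chance of coincidence:
print(f"{binom_tail(8, 10, 0.2):.2e}")
```

A result this far below common significance thresholds is what such tests mean by "much higher than the values expected by chance"; the real analysis would estimate p0 from the observed anomaly rate rather than assume it.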

  18. Predicted liquefaction of East Bay fills during a repeat of the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Blair, J.L.; Noce, T.E.; Bennett, M.J.

    2006-01-01

    Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings using median (50th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M ≥ 7.8 earthquake on the northern San Andreas Fault. © 2006, Earthquake Engineering Research Institute.
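
The liquefaction potential index (LPI) referenced above is conventionally computed by depth-weighting factor-of-safety deficits over the top 20 m of a sounding (Iwasaki-style weighting). A minimal sketch follows, with illustrative factor-of-safety values rather than the study's CPT-derived ones.

```python
# Sketch of the liquefaction potential index (LPI):
#   LPI = sum over 0-20 m of F(z) * w(z) * dz,
# where F = 1 - FS when the factor of safety FS < 1 (else 0)
# and the depth weight is w(z) = 10 - 0.5 * z.
# The depth/FS profile below is illustrative, not from the paper.

def lpi(depths_m, fs_values, dz=1.0):
    """LPI from per-slice depths (m) and factors of safety."""
    total = 0.0
    for z, fs in zip(depths_m, fs_values):
        if z > 20.0:
            continue                 # only the top 20 m contribute
        f = max(0.0, 1.0 - fs)       # liquefaction severity of the slice
        w = 10.0 - 0.5 * z           # shallow layers weigh more
        total += f * w * dz
    return total

# A liquefiable layer from 2-4 m depth with FS = 0.8 (three 1 m slices):
print(round(lpi([2.0, 3.0, 4.0], [0.8, 0.8, 0.8]), 2))  # 5.1
```

In Iwasaki's scheme LPI > 5 is commonly read as moderate liquefaction potential, which is why index values of this size matter for the East Bay fills.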

  19. The Color-Magnitude Relation of Cluster Galaxies: Observations and Model Predictions

    NASA Astrophysics Data System (ADS)

    Jiménez, N.; Smith Castelli, A. V.; Cora, S. A.; Bassino, L. P.

    We investigate the origin of the color-magnitude relation (CMR) observed in cluster galaxies by using a combination of cosmological N-body/SPH simulations of galaxy clusters, and a semi-analytic model of galaxy formation (Lagos, Cora & Padilla 2008). Simulated results are compared with the photometric properties of early-type galaxies in the Antlia cluster (Smith Castelli et al. 2008). The good agreement obtained between observations and simulations allows us to use the information provided by the model for unveiling the physical processes that yield the tight observed CMR.

  20. Earthquake.

    PubMed

    Cowen, A R; Denney, J P

    1994-04-01

    On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439

  1. Ground Motion Prediction of Subduction Earthquakes using the Onshore-Offshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2014-12-01

    Seismic waves produced by earthquakes have already caused plenty of damage around the world and are still a real threat to human beings. To reduce the seismic risk associated with future earthquakes, accurate ground motion predictions are required, especially for cities located atop sedimentary basins that can trap and amplify seismic waves. We focus this study on long-period ground motions produced by subduction earthquakes in Japan, which have the potential to damage large-scale structures such as high-rise buildings, bridges, and oil storage tanks. We extracted the impulse response functions from the ambient seismic field recorded by two stations, using one as a virtual source, without any preprocessing. This method allows recovery of reliable phases and relative, rather than absolute, amplitudes. To retrieve the corresponding Green's functions, the impulse response amplitudes need to be calibrated using observational records of an earthquake that happened close to the virtual source. We show that Green's functions can be extracted between offshore submarine cable-based sea-bottom seismographic observation systems deployed by JMA atop subduction zones and on-land NIED/Hi-net stations. In contrast with physics-based simulations, this approach has the great advantage of predicting long-period ground motions of moderate earthquakes (Mw ~5) in highly populated sedimentary basins without the need for any external information about the velocity structure.
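
The core idea, that cross-correlating diffuse noise recorded at two stations recovers the inter-station impulse response, can be sketched on synthetic data. The sampling rate, the 5-sample propagation delay, and the use of one common noise field are illustrative assumptions of this sketch, not the JMA/Hi-net processing.

```python
import numpy as np

# Sketch of ambient-noise interferometry: station B records the same
# diffuse field as station A, delayed by the inter-station travel time.
# The peak lag of their cross-correlation recovers that travel time.
rng = np.random.default_rng(0)
fs = 100.0                        # sampling rate (Hz), illustrative
noise = rng.standard_normal(20_000)

delay = 5                         # travel time in samples, illustrative
sta_a = noise                     # virtual source
sta_b = np.roll(noise, delay)     # receiver: same field, delayed

# Full linear cross-correlation; the argmax gives the lag of best alignment.
xcorr = np.correlate(sta_b, sta_a, mode="full")
lag = np.argmax(xcorr) - (len(sta_a) - 1)
print(lag / fs)  # 0.05  (seconds of inter-station travel time)
```

Real processing adds band-passing, spectral whitening, and stacking over many days so that the weak coherent arrival emerges from incoherent noise.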

  2. Spectral models for ground motion prediction in the L'Aquila region (central Italy): evidence for stress-drop dependence on magnitude and depth

    NASA Astrophysics Data System (ADS)

    Pacor, F.; Spallarossa, D.; Oth, A.; Luzi, L.; Puglia, R.; Cantore, L.; Mercuri, A.; D'Amico, M.; Bindi, D.

    2016-02-01

    between seismic moment and local magnitude that improves the existing ones and extends the validity range to 3.0-5.8. We find a significant stress drop increase with seismic moment for events with Mw larger than 3.75, with so-called scaling parameter ε close to 1.5. We also observe that the overall offset of the stress-drop scaling is controlled by earthquake depth. We evaluate the performance of the proposed parametric models through the residual analysis of the Fourier spectra in the frequency range 0.5-25 Hz. The results show that the considered stress-drop scaling with magnitude and depth reduces, on average, the standard deviation by 18 per cent with respect to a constant stress-drop model. The overall quality of fit (standard deviation between 0.20 and 0.27, in the frequency range 1-20 Hz) indicates that the spectral model calibrated in this study can be used to predict ground motion in the L'Aquila region.
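
Stress-drop estimates of the kind discussed in this spectral study typically rest on a Brune-type source model relating corner frequency, seismic moment, and source radius. The following is a hedged sketch with illustrative numbers, not values fitted in the paper.

```python
# Brune-model stress drop: the corner frequency fc implies a source radius
#   r = k * beta / fc   (k ~ 0.37 for the classic Brune model),
# and the stress drop follows from the moment M0 as
#   delta_sigma = 7 * M0 / (16 * r**3).
# The shear-wave speed and event parameters below are illustrative.

def brune_stress_drop(m0_newton_m, fc_hz, beta_m_s=3500.0, k=0.37):
    """Stress drop in Pa from seismic moment (N*m) and corner frequency (Hz)."""
    r = k * beta_m_s / fc_hz                 # source radius (m)
    return 7.0 * m0_newton_m / (16.0 * r**3)

# Roughly an Mw 4 event (M0 ~ 1.12e15 N*m) with a 2 Hz corner frequency:
print(round(brune_stress_drop(1.12e15, 2.0) / 1e6, 1), "MPa")  # 1.8 MPa
```

A magnitude- or depth-dependent stress drop, as found in the study, would appear here as a systematic trend in these estimates across the catalog rather than a single constant value.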

  3. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated using field methods and a database of coseismic effects created as a GIS MapInfo application, with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effects from the epicenter and from the causative fault (Lunina et al., 2014). The estimated limiting distance to the fault for the largest event (Ms = 8.1) is 130 km, 3.5 times shorter than the corresponding distance to the epicenter, which is 450 km. Moreover, the farther from the causative fault, the fewer liquefaction cases occur: 93% of them lie within 40 km of the fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows the distances to be within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for the locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested that relate the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height, and intensity index of clastic dikes) to Ms and local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The obtained results form a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author would like to express their gratitude to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing the laboratory to carry out this research, and to the Russian Science Foundation for financial support (Grant 14-17-00007).

  4. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events and the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for the mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).
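
The frequency-magnitude relationship mentioned above is the Gutenberg-Richter law, log10 N(>=M) = a - b*M, and the b-value is commonly estimated with Aki's maximum-likelihood formula. The sketch below uses a small synthetic catalog; the magnitudes are illustrative, not from any model in the paper.

```python
import math

# Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b-value:
#   b = log10(e) / (mean(M) - (Mc - dM/2)),
# where Mc is the catalog completeness magnitude and dM the magnitude
# binning. The catalog below is synthetic and purely illustrative.

def aki_b_value(mags, m_min, dm=0.1):
    """b-value from magnitudes at or above completeness m_min."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

mags = [3.0, 3.1, 3.3, 3.2, 3.8, 4.1, 3.5, 3.0, 3.6, 4.9]
print(round(aki_b_value(mags, m_min=3.0), 2))  # 0.72
```

Real catalogs give b close to 1; departures from the straight Gutenberg-Richter line at large magnitudes are one signature of the "extreme events" the abstract discusses.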

  5. Change in failure stress on the southern San Andreas fault system caused by the 1992 magnitude = 7.4 Landers earthquake.

    PubMed

    Stein, R S; King, G C; Lin, J

    1992-11-20

    The 28 June Landers earthquake brought the San Andreas fault significantly closer to failure near San Bernardino, a site that has not sustained a large shock since 1812. Stress also increased on the San Jacinto fault near San Bernardino and on the San Andreas fault southeast of Palm Springs. Unless creep or moderate earthquakes relieve these stress changes, the next great earthquake on the southern San Andreas fault is likely to be advanced by one to two decades. In contrast, stress on the San Andreas north of Los Angeles dropped, potentially delaying the next great earthquake there by 2 to 10 years. PMID:17778356
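
The failure-stress bookkeeping behind results like this is the Coulomb stress change resolved on a receiver fault. A minimal sketch follows; the input stress changes and the effective friction coefficient of 0.4 are illustrative assumptions, not the Landers values.

```python
# Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n,
# where d_tau is the shear stress change in the fault's slip direction and
# d_sigma_n the normal stress change (positive = unclamping). Positive dCFS
# brings the fault closer to failure. Inputs below are illustrative.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Coulomb stress change (MPa); positive values advance failure."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# 0.5 MPa shear increase plus 0.25 MPa of unclamping:
print(coulomb_stress_change(0.5, 0.25))  # 0.6
```

Stress increases of even a few tenths of a MPa, as computed here, are the scale of change the abstract translates into decades of advance or delay of the next great earthquake.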

  6. Change in failure stress on the southern San Andreas fault system caused by the 1992 magnitude = 7.4 Landers earthquake

    USGS Publications Warehouse

    Stein, R.S.; King, G.C.P.; Lin, J.

    1992-01-01

    The 28 June Landers earthquake brought the San Andreas fault significantly closer to failure near San Bernardino, a site that has not sustained a large shock since 1812. Stress also increased on the San Jacinto fault near San Bernardino and on the San Andreas fault southeast of Palm Springs. Unless creep or moderate earthquakes relieve these stress changes, the next great earthquake on the southern San Andreas fault is likely to be advanced by one to two decades. In contrast, stress on the San Andreas north of Los Angeles dropped, potentially delaying the next great earthquake there by 2 to 10 years.

  7. Individual preparedness and mitigation actions for a predicted earthquake in Istanbul.

    PubMed

    Tekeli-Yeşil, Sıdıka; Dedeoğlu, Necati; Tanner, Marcel; Braun-Fahrlaender, Charlotte; Obrist, Birgit

    2010-10-01

    This study investigated the process of taking action to mitigate damage and prepare for an earthquake at the individual level. Its specific aim was to identify the factors that promote or inhibit individuals in this process. The study was conducted in Istanbul, Turkey--where an earthquake is expected soon--in May and June 2006 using qualitative methods. Within our conceptual framework, three different patterns emerged among the study subjects. Outcome expectancy, helplessness, a low socioeconomic level, a culture of negligence, a lack of trust, onset time/poor predictability, and normalisation bias inhibit individuals in this process, while location, direct personal experience, a higher education level, and social interaction promote them. Drawing on these findings, the paper details key points for better disaster communication, including whom to mobilise to reach target populations, such as individuals with direct earthquake experience and women. PMID:20561339

  8. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  9. Toward a Global Model for Predicting Earthquake-Induced Landslides in Near-Real Time

    NASA Astrophysics Data System (ADS)

    Nowicki, M. A.; Wald, D. J.; Hamburger, M. W.; Hearne, M.; Thompson, E.

    2013-12-01

    We present a newly developed statistical model for estimating the distribution of earthquake-triggered landslides in near-real time, which is designed for use in the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) and ShakeCast systems. We use standardized estimates of ground shaking from the USGS ShakeMap Atlas 2.0 to develop an empirical landslide probability model by combining shaking estimates with broadly available landslide susceptibility proxies, including topographic slope, surface geology, and climatic parameters. While the initial model was based on four earthquakes for which digitally mapped landslide inventories and well-constrained ShakeMaps are available--the Guatemala (1976), Northridge, California (1994), Chi-Chi, Taiwan (1999), and Wenchuan, China (2008) earthquakes--our improved model includes observations from approximately ten other events from a variety of tectonic and geomorphic settings for which we have obtained landslide inventories. Using logistic regression, this database is used to build a predictive model of the probability of landslide occurrence. We assess the performance of the regression model using statistical goodness-of-fit metrics to determine which combination of the tested landslide proxies provides the optimum prediction of observed landslides while minimizing 'false alarms' in non-landslide zones. Our initial results indicate strong correlations with peak ground acceleration and maximum slope, and weaker correlations with surface geological and soil wetness proxies. In terms of the original four events included, the global model predicts landslides most accurately when applied to the Wenchuan and Chi-Chi events, and less accurately when applied to the Northridge and Guatemala datasets. 
Combined with near-real time ShakeMaps, the model can be used to make generalized predictions of whether or not landslides are likely to occur (and if so, where) for future earthquakes around the globe, and these estimates
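
    The logistic-regression model described above maps shaking and susceptibility proxies to a landslide probability. A minimal sketch, assuming peak ground acceleration and slope as predictors; the coefficients are invented placeholders, not the fitted values from the regression:

```python
import math

# Hypothetical coefficients for illustration only (intercept, ln-PGA
# term, slope term); not the published fitted values.
B0, B_PGA, B_SLOPE = -6.0, 1.5, 0.08

def landslide_probability(pga_g, slope_deg):
    """P(landslide) from peak ground acceleration (g) and slope (deg)."""
    z = B0 + B_PGA * math.log(pga_g) + B_SLOPE * slope_deg
    return 1.0 / (1.0 + math.exp(-z))
```

Probability increases monotonically with both shaking and slope, which is the qualitative behavior the goodness-of-fit tests above evaluate.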

  10. Real-time prediction of earthquake ground motion: time evolutional prediction using data assimilation and real-time correction of site amplification factors

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.

    2012-12-01

    In this presentation, I propose a new approach for real-time prediction of seismic ground motion which is applicable to Earthquake Early Warning (EEW). Many methods of EEW are based on a network method in which the hypocenter and magnitude (source parameters) are quickly determined (that is, an interpretation of the current wavefield), ground motions are then predicted, and warnings are issued depending on the strength of the predicted ground motion. In this method, though we can predict ground motions at any point using a few parameters (location of hypocenter, magnitude, site factors), the hypocenter and magnitude must be determined first, and errors in the source parameters propagate directly into the prediction. It is not easy to take the effects of rupture directivity and source extent into account, and it is impossible to fully reproduce the current wavefield from the interpreted source parameters. In general, wave motion is predictable when boundary and initial conditions are given. Time-evolutional prediction is a method based on this approach using the current wavefield as an initial condition, that is, u(x, t+Δt)=H(u(x, t)), where u is the wave motion at location x at lapse time t, and H is the prediction operator. Future wave motion, u(x, t+Δt), is predicted from the distribution of the current wave motion u(x, t) using H. For H, a finite difference technique or a boundary integral equation method, such as the Kirchhoff integral, is used. In time-evolutional prediction, determining the detailed distribution of the current wave motion is key, so a dense seismic observation network is required. Data assimilation is a technique to produce an artificially denser network, and is widely used in numerical weather prediction and oceanography. The distribution of the current wave motion is estimated from not only the current real observation of u(x, t), but also the prediction of one step before, H(u(x, t-Δt)). Combination of them produces denser
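
    The scheme u(x, t+Δt) = H(u(x, t)) with assimilation of sparse station observations can be sketched in one dimension, with simple advection standing in for the full wave-propagation operator H. Grid size, wave speed, station positions, and the nudging gain below are arbitrary illustrative choices:

```python
import numpy as np

N = 100      # grid points
GAIN = 0.5   # assimilation weight toward observations

def predict(u):
    """One step of the prediction operator H (here: pure advection)."""
    return np.roll(u, 1)

def assimilate(forecast, obs_idx, obs_vals):
    """Nudge the forecast toward sparse station observations."""
    u = forecast.copy()
    u[obs_idx] += GAIN * (obs_vals - u[obs_idx])
    return u

# Truth: a Gaussian pulse advecting right, observed at 5 stations.
x = np.arange(N)
truth = np.exp(-0.05 * (x - 20.0) ** 2)
estimate = np.zeros(N)          # we start knowing nothing
stations = np.array([10, 30, 50, 70, 90])
for _ in range(30):
    truth = predict(truth)
    estimate = assimilate(predict(estimate), stations, truth[stations])
```

Because the dynamics are exact in this toy, each pass of the wavefront over a station permanently reduces the estimation error along that characteristic, mimicking how assimilation densifies the effective network.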

  11. Predicting Earthquake Occurrence at Subduction-Zone Plate Boundaries Through Advanced Computer Simulation

    NASA Astrophysics Data System (ADS)

    Matsu'Ura, M.; Hashimoto, C.; Fukuyama, E.

    2004-12-01

    In general, predicting the occurrence of earthquakes is very difficult because of the complexity of actual faults and the nonlinear interaction between them. From the standpoint of earthquake prediction, however, our target is limited to the large events that completely break down a seismogenic zone. To such large events we may apply the concept of the earthquake cycle. The entire process of earthquake generation cycles generally consists of tectonic loading due to relative plate motion, quasi-static rupture nucleation, dynamic rupture propagation and arrest, and restoration of fault strength. This process can be completely described by a coupled nonlinear system, which consists of an elastic/viscoelastic slip-response function that relates fault slip to shear stress change and a fault constitutive law that prescribes the change in shear strength with fault slip and contact time. The shear stress and the shear strength are related to each other through boundary conditions on the fault. The driving force of this system is observed relative plate motion. The system describing the earthquake generation cycle is conceptually quite simple; the complexity in practical modeling mainly comes from the complexity in structure of the real earth. Recently, we have developed a physics-based, predictive simulation system for earthquake generation at plate boundaries in and around Japan, where the Pacific, North American, Philippine Sea and Eurasian plates are interacting with each other. The simulation system consists of a crust-mantle structure model, a quasi-static tectonic loading model, and a dynamic rupture propagation model. First, we constructed a realistic 3D model of plate interfaces in and around Japan by applying an inversion technique to ISC hypocenter data, and computed viscoelastic slip-response functions for this structure model. Second, we introduced the slip- and time-dependent fault constitutive law with an inherent strength-restoration mechanism as a basic
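
    The loading/failure/restrengthening cycle described above can be reduced to a single-patch stick-slip sketch: constant tectonic loading, failure at a static strength, and a drop to a residual stress level. All parameter values here are illustrative, not from the simulation system.

```python
LOAD_RATE = 1.0    # stress increase per year (arbitrary units)
STRENGTH = 100.0   # static failure strength
DYNAMIC = 40.0     # residual stress after rupture

def run_cycles(years):
    """Return (failure year, stress drop) for each simulated event."""
    stress, events = 0.0, []
    for year in range(years):
        stress += LOAD_RATE            # tectonic loading
        if stress >= STRENGTH:         # rupture when strength is reached
            events.append((year, stress - DYNAMIC))
            stress = DYNAMIC           # strength drop / stress release
    return events
```

With these parameters the model produces perfectly periodic characteristic events; the complexity discussed in the abstract comes from replacing this single patch with a realistic 3D structure.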

  12. The January 26, 2001 Mw7.6 Bhuj, India, Earthquake: Observed and Predicted Ground Motions

    NASA Astrophysics Data System (ADS)

    Hough, S. E.; Martin, S.; Bilham, R.; Atkinson, G. M.

    2001-12-01

    It is unclear whether the 26 January 2001 Bhuj earthquake occurred in an intraplate or an interplate setting. However, to understand the damage caused by this earthquake, and the hazard posed by future similar earthquakes, one must consider not only the source setting but propagation issues as well. Although local and regional instrumental recordings of the devastating January 26, 2001, Bhuj earthquake are sparse, the distribution of macroseismic effects can provide constraints on the ground motions. We compiled news accounts describing damage and other effects and interpreted them to obtain modified Mercalli intensities at over 300 locations throughout the Indian subcontinent. These values are used to map the intensity distribution using a simple mathematical interpolation method. These maps reveal several interesting features. Significant sediment-induced amplification is suggested at a number of locations around the Gulf of Kachchh and in other areas along rivers, within deltas, or on coastal alluvium. The overall distribution of intensities also reveals extremely efficient wave propagation throughout the subcontinent: the earthquake was felt at distances as large as 2400 km and caused light damage at distances upwards of 700 km. This is consistent with earlier theoretical and observational results suggesting that higher-mode surface waves (Lg waves) will propagate efficiently in intraplate crust, which forms a relatively uniform, high-Q waveguide. We use fault rupture parameters inferred from teleseismic data to predict ground motions at distances of 0-1000 km. We convert the predicted peak ground acceleration (PGA) values to MMI using a relationship between MMI and PGA that assigns MMI based on the average effects in a region. The predicted MMIs are typically lower by 1-2 units than the estimated values. 
We discuss two factors that probably account for this discrepancy: 1) a tendency for media accounts to focus on the most dramatic damage, rather than
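
    The PGA-to-MMI conversion step in the abstract can be sketched with a published instrumental-intensity relation (the Wald et al., 1999, California relation, MMI = 3.66 log10(PGA) - 1.66 with PGA in cm/s²); whether the Bhuj study used this exact relation is an assumption here, not stated above.

```python
import math

def pga_to_mmi(pga_cm_s2):
    """Modified Mercalli intensity from PGA (cm/s^2), Wald et al. (1999)
    California coefficients, clipped to a plausible MMI range."""
    mmi = 3.66 * math.log10(pga_cm_s2) - 1.66
    return max(1.0, min(10.0, mmi))
```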

  13. Test-sites for earthquake prediction experiments within the Colli Albani region

    NASA Astrophysics Data System (ADS)

    Quattrocchi, F.; Calcara, M.

    In this paper we discuss geochemical data gathered by discrete and continuous monitoring during the 1995-1996 period, carried out for earthquake prediction test experiments throughout the Colli Albani quiescent volcano, a seat of seismicity, selecting some gas discharge sites with peri-volcanic composition. In particular we stress the results obtained at the continuous geochemical monitoring station (GMS I, BAR site), designed by ING for geochemical surveillance of seismic events. The 12/6/1995 (M=3.6-3.8) Roma earthquake and the 3/11/1995 (M=3.1) Tivoli earthquake were the most energetic events within the Colli Albani - Roma area since the beginning of continuous monitoring in 1991: a strict correlation between these seismic events and fluid geochemical anomalies in groundwater has been discovered (temperature, Eh, 222Rn, CO 2, NH 3). Separation at depth of a vapour phase rich in reducing-acidic gases (CO 2, H 2S, etc.) from a hyper-saline brine within the deep geothermal reservoir is hypothesised to explain the geochemical anomalies: the transtensional episodes accompanying the seismic sequences probably caused an increase and/or triggering of the phase-separation process and fluid migration, on the regional scale of the western sector of the Colli Albani, from below the seismogenic depth (2-4 km) up to the surface. We outline the state of the art of the GMS II monitoring prototype and the selection criteria of test sites for earthquake prediction experiments in the Colli Albani region.

  14. Predicted Attenuation Relation and Observed Ground Motion of Gorkha Nepal Earthquake of 25 April 2015

    NASA Astrophysics Data System (ADS)

    Singh, R. P.; Ahmad, R.

    2015-12-01

    A comparison of observed ground motion parameters of the recent Gorkha, Nepal, earthquake of 25 April 2015 (Mw 7.8) with those predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8,000 lives, destroyed thousands of poor-quality buildings, and was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground parameters is very important in developing seismic codes for earthquake-prone regions like the Himalaya and for better building design. The ground parameters recorded in the mainshock and aftershocks are compared with attenuation relations for the Himalayan region; the predicted ground motion parameters show good correlation with the observations. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions, and for the evaluation of seismic hazards. The results clearly show that only attenuation relations developed for the Himalayan region should be used; relations based on other regions fail to provide good estimates of the observed ground motion parameters.
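
    Attenuation relations of the kind compared above typically take a form such as ln(PGA) = c1 + c2·M - c3·ln(R + c4) - c5·R, increasing with magnitude and decaying with distance. The coefficients below are invented for illustration and are not a published Himalayan relation:

```python
import math

# Hypothetical coefficients for a generic attenuation relation; chosen
# only to produce plausible-looking values, not calibrated to data.
C1, C2, C3, C4, C5 = -4.5, 1.0, 1.0, 10.0, 0.002

def predicted_pga_g(magnitude, distance_km):
    """Median predicted PGA (g) from magnitude and distance."""
    ln_pga = (C1 + C2 * magnitude
              - C3 * math.log(distance_km + C4)
              - C5 * distance_km)
    return math.exp(ln_pga)
```

Comparing curves like this against recorded motions is how region-specific relations are validated (or rejected) for building-code use.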

  15. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2015-12-01

    Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group. The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being
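
    Prospective testing of the kind CSEP runs can be illustrated with a number test (N-test): compare the observed earthquake count against a forecast's expected count under a Poisson assumption. This is a generic sketch of the idea, not the CSEP codebase:

```python
import math

def poisson_cdf(n_obs, rate):
    """P(N <= n_obs) for a Poisson variable with the given mean rate."""
    return sum(math.exp(-rate) * rate**k / math.factorial(k)
               for k in range(n_obs + 1))

def n_test(n_obs, forecast_rate, alpha=0.05):
    """Two-sided N-test: the forecast passes if the observed count is
    not in either tail of the forecast's Poisson distribution."""
    lo = poisson_cdf(n_obs, forecast_rate)                 # P(N <= obs)
    hi = 1.0 - poisson_cdf(n_obs - 1, forecast_rate) if n_obs > 0 else 1.0
    return lo > alpha / 2 and hi > alpha / 2
```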

  16. Holocene behavior of the Brigham City segment: implications for forecasting the next large-magnitude earthquake on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    Personius, Stephen F.; DuRoss, Christopher B.; Crone, Anthony J.

    2012-01-01

    The Brigham City segment (BCS), the northernmost Holocene‐active segment of the Wasatch fault zone (WFZ), is considered a likely location for the next big earthquake in northern Utah. We refine the timing of the last four surface‐rupturing (~Mw 7) earthquakes at several sites near Brigham City (BE1, 2430±250; BE2, 3490±180; BE3, 4510±530; and BE4, 5610±650 cal yr B.P.) and calculate mean recurrence intervals (1060–1500  yr) that are greatly exceeded by the elapsed time (~2500  yr) since the most recent surface‐rupturing earthquake (MRE). An additional rupture observed at the Pearsons Canyon site (PC1, 1240±50 cal yr B.P.) near the southern segment boundary is probably spillover rupture from a large earthquake on the adjacent Weber segment. Our seismic moment calculations show that the PC1 rupture reduced accumulated moment on the BCS about 22%, a value that may have been enough to postpone the next large earthquake. However, our calculations suggest that the segment currently has accumulated more than twice the moment accumulated in the three previous earthquake cycles, so we suspect that additional interactions with the adjacent Weber segment contributed to the long elapsed time since the MRE on the BCS. Our moment calculations indicate that the next earthquake is not only overdue, but could be larger than the previous four earthquakes. Displacement data show higher rates of latest Quaternary slip (~1.3  mm/yr) along the southern two‐thirds of the segment. The northern third likely has experienced fewer or smaller ruptures, which suggests to us that most earthquakes initiate at the southern segment boundary.
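
    The moment bookkeeping described above rests on standard formulas: accumulated seismic moment M0 = μ·A·(slip deficit) and the Hanks-Kanamori moment magnitude. A sketch follows; the fault dimensions used in the example are hypothetical round numbers, not the published BCS geometry.

```python
import math

MU = 3.0e10  # assumed crustal shear modulus, Pa (typical value)

def accumulated_moment(length_m, width_m, slip_rate_m_yr, elapsed_yr):
    """Moment deficit M0 = mu * area * accumulated slip, in N*m."""
    return MU * length_m * width_m * slip_rate_m_yr * elapsed_yr

def moment_magnitude(m0_nm):
    """Hanks-Kanamori: Mw = (log10(M0) - 9.1) / 1.5, M0 in N*m."""
    return (math.log10(m0_nm) - 9.1) / 1.5
```

For example, a hypothetical 36 km x 15 km locked patch slipping at 1.3 mm/yr for 2500 yr accumulates a moment equivalent to roughly Mw 7, consistent with the ~Mw 7 events described above.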

  17. Radon measurements for earthquake prediction along the North Anatolian Fault Zone: a progress report

    USGS Publications Warehouse

    Friedmann, H.; Aric, K.; Gutdeutsch, R.; King, C.-Y.; Altay, C.; Sav, H.

    1988-01-01

    Radon (222Rn) concentration has been continuously measured since 1983 in groundwater at a spring and in subsurface soil gas at five sites along a 200 km segment of the North Anatolian Fault Zone near Bolu, Turkey. The groundwater radon concentration showed a significant increase before the Biga earthquake of magnitude 5.7 on 5 July 1983 at an epicentral distance of 350 km, and a long-term increase between March 1983 and April 1985. The soil-gas radon concentration showed large changes in 1985, apparently not meteorologically induced. The soil-gas and groundwater data at Bolu did not show any obvious correlation. © 1988.

  18. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
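
    The single-scenario building block that this paper generalizes to multiple causal earthquakes and GMPMs is the standard conditional spectrum formula: the conditional mean of ln Sa at period T given ln Sa at the conditioning period T*, using the cross-period correlation of epsilon. A sketch of that building block (the inputs would come from a GMPM and deaggregation; the values in the test are placeholders):

```python
def conditional_mean_ln_sa(mu_ln_sa, sigma_ln_sa, rho, epsilon_star):
    """E[ln Sa(T) | ln Sa(T*)] = mu(T) + rho(T, T*) * eps(T*) * sigma(T)."""
    return mu_ln_sa + rho * epsilon_star * sigma_ln_sa

def conditional_sigma_ln_sa(sigma_ln_sa, rho):
    """Conditional standard deviation: sigma(T) * sqrt(1 - rho^2)."""
    return sigma_ln_sa * (1.0 - rho**2) ** 0.5
```

The exact multi-scenario CS in the paper is, in effect, a hazard-consistent mixture of such single-scenario conditional distributions, weighted by each scenario's and GMPM's deaggregation contribution.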

  19. Physically-based modelling of the competition between surface uplift and erosion caused by earthquakes and earthquake sequences.

    NASA Astrophysics Data System (ADS)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2016-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that above a critical magnitude, earthquakes would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, had not yet been considered. A new seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. We have compared these eroded-volume predictions with co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the most important parameters, compared to the fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. Moreover, for landscapes that are insufficiently steep or earthquake sources that are sufficiently deep, earthquakes are predicted to be always constructive, whatever their magnitude. We have explored the long-term topographic contribution of earthquake sequences, with a Gutenberg-Richter distribution or with a repeating, characteristic earthquake magnitude. In these models, the seismogenic layer thickness, which sets the depth range over which the series of earthquakes is distributed, replaces the individual earthquake source depth. We found that in the case of Gutenberg-Richter behavior, relevant for the Himalayan collision for example, the mass balance could remain negative up to Mw~8 for earthquakes with a sub-optimal uplift contribution (e.g., transpressive or gently dipping earthquakes). Our results indicate that earthquakes probably have a more ambivalent role in topographic building than previously anticipated, and suggest that some fault systems may not induce average topographic growth over their locked zone during a
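
    The Gutenberg-Richter distribution used for the sequence models above obeys log10 N(>=M) = a - b*M, so each unit decrease in magnitude multiplies the expected event count tenfold (for b = 1). A minimal sketch with illustrative a and b values:

```python
# Illustrative Gutenberg-Richter parameters (not fitted to any catalog).
A_VALUE, B_VALUE = 4.0, 1.0

def expected_count_at_or_above(magnitude):
    """Expected number of events with magnitude >= M per unit time,
    from log10 N(>=M) = a - b*M."""
    return 10.0 ** (A_VALUE - B_VALUE * magnitude)
```

In the mass-balance models above, this distribution controls how much of the total moment release comes from the abundant moderate events (which may be net erosive) versus the rare largest ones (which are constructive).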

  20. The Virtual Quake earthquake simulator: a simulation-based forecast of the El Mayor-Cucapah region and evidence of predictability in simulated earthquake sequences

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea

    2015-12-01

    In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric, and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

  1. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327

  3. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    SciTech Connect

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-08

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a considerable impact on the structural response, especially in the nonlinear range, which is hard to predict and to cast in a design format, owing to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  4. Predicting variations of the least principal stress magnitudes in shale gas reservoirs utilizing variations of viscoplastic properties

    NASA Astrophysics Data System (ADS)

    Sone, H.; Zoback, M. D.

    2013-12-01

    Predicting variations of the magnitude of the least principal stress within unconventional reservoirs has significant practical value, as these reservoirs require stimulation by hydraulic fracturing. It is common to approach this problem by calculating the horizontal stresses caused by uniaxial gravitational loading using log-derived linear elastic properties of the formation and adding an arbitrary tectonic strain (or stress). We propose a new method for estimating stress magnitudes in shale gas reservoirs based on the principles of viscous relaxation and steady-state tectonic loading. Laboratory experiments show that shale gas reservoir rocks exhibit a wide range of viscoplastic behavior, dominantly controlled by their composition, with stress relaxation described by a simple power-law (in time) rheology. We demonstrate that a reasonable profile of the principal stress magnitudes can be obtained from geophysical logs by utilizing (1) the laboratory power-law constitutive law, (2) a reasonable estimate of the tectonic loading history, and (3) the assumption that the stress ratio ([S2-S3]/[S1-S3]) remains constant during stress relaxation between all principal stresses. Profiles of horizontal stress differences (SHmax-Shmin) generated with our method for a vertical well in the Barnett shale (Ft. Worth basin, Texas) generally agree with the occurrence of drilling-induced tensile fractures in the same well. Also, the decrease in the least principal stress (frac gradient) upon entering the limestone formation underlying the Barnett shale appears to explain the downward propagation of the hydraulic fractures observed in the region. Our approach better acknowledges the time-dependent geomechanical effects that could occur over the course of the geological history. The proposed method may prove to be particularly useful for understanding hydraulic fracture containment within targeted reservoirs.
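
    The role of power-law (in time) relaxation can be illustrated with a toy decay law: under a fixed loading history, a more viscous (higher-exponent) layer sheds differential stress faster than a stiffer one. The (1 + t)^(-n) form and the exponents below are illustrative stand-ins for the laboratory-derived constitutive law, not the fitted rheology:

```python
def relaxed_stress(initial_stress, t_yr, n):
    """Remaining differential stress after t years under a toy
    power-law relaxation with exponent n (illustrative form)."""
    return initial_stress * (1.0 + t_yr) ** (-n)
```

Evaluating this layer by layer along a log-derived composition profile is the spirit of the proposed stress-profiling method: composition sets n, and n sets how much horizontal stress difference each layer retains.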

  5. Risk Communication on Earthquake Prediction Studies -"No L'Aquila quake risk" experts probed in Italy in June 2010

    NASA Astrophysics Data System (ADS)

    Oki, S.; Koketsu, K.; Kuwabara, E.; Tomari, J.

    2010-12-01

    For six months before the L'Aquila earthquake of 6 April 2009, seismicity in the region had been active. After the activity intensified and reached a magnitude 4 earthquake on 30 March, the government convened the Major Risks Committee, a part of the Civil Protection Department tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. At the press conference immediately after the committee meeting, officials reported that "The scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favorable." Six days later, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3 June of the following year, prosecutors opened an investigation after victims complained that far more people would have fled their homes that night had the Major Risks Committee not issued its reassurances the previous week. The issue became widely known to the seismological community, especially after an email titled "Letter of Support for Italian Earthquake Scientists" from seismologists at the National Geophysics and Volcanology Institute (INGV) was sent worldwide. It states that the L'Aquila prosecutor's office indicted the members of the Major Risks Committee for manslaughter, and that the charges are for failing to provide a short-term alarm to the population before the earthquake struck. It is true that there is no generalized method to predict earthquakes, but failure to issue a short-term alarm was not the reason for the investigation of the scientists. The chief prosecutor stated that "the committee could have provided the people with better advice", and "it wasn't the case that they did not receive any warnings, because there had been tremors". 
    The email also requested sign-on support for an open letter to the president of Italy from Earth-science colleagues around the world, and collected more than 5000 signatures

  6. Physics-Based Predictive Simulation Models for Earthquake Generation at Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Matsu'Ura, M.

    2002-12-01

    In the last decade there has been great progress in the physics of earthquake generation; that is, the introduction of laboratory-based fault constitutive laws as a basic equation governing earthquake rupture, and the quantitative description of tectonic loading driven by plate motion. Incorporating a fault constitutive law into continuum mechanics, we can develop a physics-based simulation model for the entire earthquake generation process. For realistic simulation of earthquake generation, however, we need a very large, high-speed computer system. In Japan, fortunately, the Earth Simulator, a high-performance, massively parallel-processing computer system with 10 TB of memory and a peak speed of 40 TFLOPS, has been completed. The completion of the Earth Simulator and advances in numerical simulation methodology are bringing our vision within reach. In general, the earthquake generation cycle consists of tectonic loading due to relative plate motion, quasi-static rupture nucleation, dynamic rupture propagation and arrest, and restoration of fault strength. The basic equations governing the entire earthquake generation cycle consist of an elastic/viscoelastic slip-response function that relates fault slip to shear stress change, and a fault constitutive law that prescribes the change in shear strength with fault slip and contact time. The shear stress and the shear strength are related to each other through the boundary conditions on the fault. The driving force of this system is the observed relative plate motion. The system describing the earthquake generation cycle is conceptually quite simple; the complexity in practical modelling comes mainly from the structural complexity of the real Earth. Since 1998 our group has conducted the Crustal Activity Modelling Program (CAMP), one of the three main programs composing the Solid Earth Simulator project. 
The aim of CAMP is to develop a physics-based predictive simulation model for the entire earthquake generation

  7. Southern San Andreas Fault seismicity is consistent with the Gutenberg-Richter magnitude-frequency distribution

    USGS Publications Warehouse

    Page, Morgan T.; Felzer, Karen

    2015-01-01

    The magnitudes of any collection of earthquakes nucleating in a region are generally observed to follow the Gutenberg-Richter (G-R) distribution. On some major faults, however, paleoseismic rates are higher than a G-R extrapolation from the modern rate of small earthquakes would predict. This, along with other observations, led to the formulation of the characteristic earthquake hypothesis, which holds that the rate of small to moderate earthquakes is permanently low on large faults relative to the large-earthquake rate (Wesnousky et al., 1983; Schwartz and Coppersmith, 1984). We examine the rate difference between recent small to moderate earthquakes on the southern San Andreas fault (SSAF) and the paleoseismic record, hypothesizing that the discrepancy can be explained as a rate change in time rather than a deviation from G-R statistics. We find that with reasonable assumptions, the rate changes necessary to bring the small and large earthquake rates into alignment agree with the size of rate changes seen in epidemic-type aftershock sequence (ETAS) modeling, where aftershock triggering of large earthquakes drives strong fluctuations in the seismicity rates for earthquakes of all magnitudes. The necessary rate changes are also comparable to rate changes observed for other faults worldwide. These results are consistent with paleoseismic observations of temporally clustered bursts of large earthquakes on the SSAF and the absence of M ≥ 7 earthquakes on the SSAF since 1857.
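The G-R distribution referenced throughout these records takes the form log10 N(≥M) = a − b·M. As a concrete illustration, the b-value can be estimated from a catalog with the standard Aki (1965) maximum-likelihood formula, including Utsu's correction for binned magnitudes; the catalog below is entirely hypothetical:

```python
import math

def estimate_b_value(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value, with Utsu's correction
    for magnitudes binned at width dm. m_c is the completeness magnitude."""
    above = [m for m in mags if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Hypothetical catalog, for illustration only.
catalog = [2.1, 2.3, 2.2, 2.7, 3.0, 2.4, 2.6, 2.2, 2.9, 3.4, 2.5, 2.8]
b = estimate_b_value(catalog, m_c=2.0)
```

Deviations of observed rates from this fitted line, as in the characteristic-earthquake debate above, are what the authors attribute to temporal rate changes rather than a non-G-R distribution.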

  8. Accounts of damage from historical earthquakes in the northeastern Caribbean to aid in the determination of their location and intensity magnitudes

    USGS Publications Warehouse

    Flores, Claudia H.; ten Brink, Uri S.; Bakun, William H.

    2012-01-01

    Documentation of an event in the past depended on the population and political trends of the island, and the availability of historical documents is limited by the physical resource digitization schedule and by the copyright laws of each archive. Examples of documents accessed are governors' letters, newspapers, and other circulars published within the Caribbean, North America, and Western Europe. Key words were used to search for publications that contain eyewitness accounts of various large earthquakes. Finally, this catalog provides descriptions of damage to buildings used in previous studies for the estimation of moment intensity (MI) and location of significantly damaging or felt earthquakes in Hispaniola and in the northeastern Caribbean, all of which have been described in other studies.

  9. Statistical Evaluation of Efficiency and Possibility of Earthquake Predictions with Gravity Field Variation and its Analytic Signal in Western China

    NASA Astrophysics Data System (ADS)

    Chen, Shi; Jiang, Changsheng; Zhuang, Jiancang

    2016-01-01

    This paper assesses gravity variations as precursors for earthquake prediction in the Tibet (Xizang)-Qinghai-Xinjiang-Sichuan region of western China. We take a statistical approach to evaluating the efficiency and feasibility of earthquake prediction, using the most recent spatiotemporal gravity-field variation datasets (2002-2008) for the region, provided by the Crustal Movement Observation Network of China (CMONC). The datasets are sparse in space and discrete in time. In 2007-2010, 13 earthquakes (Ms > 6.0) occurred in the region. Molchan error-diagram tests show that the observed gravity variations correlate statistically with the occurrence of these earthquakes, though the resulting alarms cover a good fraction of space-time. The results show that the prediction efficiency of the amplitude of the analytic signal of the gravity variations is better than that of the seismicity-rate model, the THD, and the absolute value of the gravity variation, implying that pre-earthquake gravity variations may carry precursory information about future large earthquakes.
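A Molchan error diagram of the kind used here plots the fraction of target earthquakes missed against the fraction of space-time occupied by alarms as the alarm threshold on a precursor field is lowered. A minimal sketch of the construction, with a made-up grid of precursor scores and event counts (not the CMONC data):

```python
def molchan_curve(scores, events):
    """Molchan error-diagram trajectory for a gridded precursor.

    scores : alarm-function value per space-time cell (higher = more alarming)
    events : number of target earthquakes in each cell
    Returns (tau, nu) pairs: fraction of cells alarmed vs fraction of
    events missed, as the alarm threshold is lowered cell by cell.
    """
    n_cells = len(scores)
    n_events = sum(events)
    order = sorted(range(n_cells), key=lambda i: scores[i], reverse=True)
    curve, hits = [(0.0, 1.0)], 0
    for k, i in enumerate(order, start=1):
        hits += events[i]
        curve.append((k / n_cells, 1.0 - hits / n_events))
    return curve

# Toy example (values hypothetical): 5 cells, events concentrated where
# the precursor score is highest, so the curve drops quickly.
pts = molchan_curve([0.9, 0.1, 0.6, 0.3, 0.2], [2, 0, 1, 0, 0])
```

A skillful precursor yields a trajectory well below the diagonal from (0, 1) to (1, 0), which is the expectation for random alarms.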

  10. Generalized Free Surface Effect and Random Vibration Theory: A New Tool for Computing Moment Magnitudes of Small Earthquakes using Borehole Data

    NASA Astrophysics Data System (ADS)

    Malagnini, Luca; Dreger, Douglas S.

    2016-03-01

    Although optimal, computing the moment tensor solution is not always a viable option for the calculation of the size of an earthquake, especially for small events (say, below MW 2.0). Here we show an alternative approach to the calculation of the moment-rate spectra of small earthquakes, and thus of their scalar moments, that uses a network-based calibration of crustal wave propagation. The method works best when applied to a relatively small crustal volume containing both the seismic sources and the recording sites. In this study we present the calibration of the crustal volume monitored by the High Resolution Seismic Network (HRSN), along the San Andreas Fault (SAF) at Parkfield. After the quantification of the attenuation parameters within the crustal volume under investigation, we proceed to the spectral correction of the observed Fourier amplitude spectra for the 100 largest events in our data set. Multiple estimates of seismic moment for all events (1811 in total) are obtained by calculating the ratio of rms-averaged spectral quantities based on the peak values of the ground velocity in the time domain, as observed in narrowband-filtered time series. The mathematical operations allowing the described spectral ratios are obtained from Random Vibration Theory (RVT). Due to the optimal conditions of the HRSN, in terms of signal-to-noise ratios, our network-based calibration allows the accurate calculation of seismic moments down to MW < 0. However, because the HRSN is equipped only with borehole instruments, we define a frequency-dependent Generalized Free-Surface Effect (GFSE), to be used instead of the usual free-surface constant F = 2. Our spectral corrections at Parkfield need a different GFSE for each side of the SAF, which can be quantified by means of the analysis of synthetic seismograms. 
The importance of the GFSE of borehole instruments increases with decreasing earthquake size, because for smaller earthquakes the bandwidth available

  11. Thermal Infrared Anomalies of Several Strong Earthquakes

    PubMed Central

    Wei, Congxin; Guo, Xiao; Qin, Manzhong

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that significant thermal infrared anomalies appear before and after strong earthquakes; these have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitude up to Ms7.0, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall behavior of the anomalies comprises two main stages: expanding first and narrowing later. We readily extracted and identified such seismic anomalies with the “time-frequency relative power spectrum” method. (2) Each case exhibits evident, distinct characteristic periods and magnitudes of abnormal thermal radiation. (3) Thermal radiation anomalies are closely related to geological structure. (4) Thermal radiation has distinctive characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting. PMID:24222728

  12. Thermal infrared anomalies of several strong earthquakes.

    PubMed

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that significant thermal infrared anomalies appear before and after strong earthquakes; these have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitude up to Ms7.0, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall behavior of the anomalies comprises two main stages: expanding first and narrowing later. We readily extracted and identified such seismic anomalies with the "time-frequency relative power spectrum" method. (2) Each case exhibits evident, distinct characteristic periods and magnitudes of abnormal thermal radiation. (3) Thermal radiation anomalies are closely related to geological structure. (4) Thermal radiation has distinctive characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting. PMID:24222728

  13. The blink reflex magnitude is continuously adjusted according to both current and predicted stimulus position with respect to the face.

    PubMed

    Wallwork, Sarah B; Talbot, Kerwin; Camfferman, Danny; Moseley, G L; Iannetti, G D

    2016-08-01

    The magnitude of the hand-blink reflex (HBR), a subcortical defensive reflex elicited by the electrical stimulation of the median nerve, is increased when the stimulated hand is close to the face ('far-near effect'). This enhancement occurs through a cortico-bulbar facilitation of the polysynaptic medullary pathways subserving the reflex. Here, in two experiments, we investigated the temporal characteristics of this facilitation, and its adjustment during voluntary movement of the stimulated hand. Given that individuals navigate in a fast changing environment, one would expect the cortico-bulbar modulation of this response to adjust rapidly, and as a function of the predicted spatial position of external threats. We observed two main results. First, the HBR modulation occurs without a temporal delay between when the hand has reached the stimulation position and when the stimulus happens (Experiments 1 and 2). Second, the voluntary movement of the hand interacts with the 'far-near effect': stimuli delivered when the hand is far from the face elicit an enhanced HBR if the hand is being moved towards the face, whereas stimuli delivered when the hand is near the face elicit an enhanced HBR regardless of the direction of the hand movement (Experiment 2). These results indicate that the top-down modulation of this subcortical defensive reflex occurs continuously, and takes into account both the current and the predicted position of potential threats with respect to the body. The continuous control of the excitability of subcortical reflex circuits ensures appropriate adjustment of defensive responses in a rapidly-changing sensory environment. PMID:27236372

  14. Real-time prediction of earthquake ground motion using real-time monitoring, and improvement strategy of JMA EEW based on the lessons from M9 Tohoku Earthquake (Invited)

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.

    2013-12-01

    In this presentation, a new approach to real-time prediction of seismic ground motion for Earthquake Early Warning (EEW) is explained, in which real-time monitoring is used but hypocentral location and magnitude are not required. The improvement strategy of the Japan Meteorological Agency (JMA) is also explained, based on the lessons learned from the 2011 Tohoku Earthquake (Mw9.0). During the Tohoku Earthquake, the EEW system of JMA issued warnings before the S-wave arrival and more than 15 s before the strong ground motion in the Tohoku district, so it performed as rapidly as designed. However, it under-predicted the seismic intensity for the Kanto district because of the very large extent of the fault rupture, and it issued some false alarms due to multiple simultaneous aftershocks. To address these problems, a new method of time-evolutional prediction is proposed that uses real-time monitoring of seismic wave propagation. This method makes it possible to predict ground motion without a hypocenter and magnitude; the effects of rupture directivity, source extent, and simultaneous multiple events are naturally included. In the time-evolutional prediction, the future wavefield is predicted from the wavefield at a certain time, that is, u(x, t+Δt) = P(u(x, t)), where u is the wave motion at location x at lapse time t, and P is the prediction operator. Determining the detailed distribution of the current wavefield is key, so a dense seismic observation network is required. Here, the current wavefield, u(x, t), observed by real-time monitoring is used as the initial condition, and wave propagation is then predicted with the time-evolutional approach. The method is based on the following three techniques. To enhance the estimation of the current wavefield, data assimilation is applied. Data assimilation is a technique that effectively produces a denser network; it is widely used in numerical weather forecasting and oceanography. Propagation is
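The time-evolutional relation u(x, t+Δt) = P(u(x, t)) can be illustrated with the simplest possible prediction operator: one finite-difference step of the 1-D scalar wave equation. This is only a schematic of the idea (the actual JMA method works on observed wavefields with data assimilation); the grid and wave speed below are chosen arbitrarily:

```python
def predict_step(u_prev, u_curr, c, dx, dt):
    """One time-evolution step u(x, t+dt) = P(u(x, t)) for the 1-D
    scalar wave equation via a second-order leapfrog finite difference.
    Fixed (zero) boundaries; a schematic of the prediction operator P."""
    r2 = (c * dt / dx) ** 2  # stability requires c*dt/dx <= 1
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
    return u_next

# Propagate a small pulse forward in time from an "observed" snapshot.
n = 101
u0 = [0.0] * n
u0[50] = 1.0          # initial disturbance at the grid center
u1 = list(u0)         # zero initial velocity
for _ in range(20):
    u0, u1 = u1, predict_step(u0, u1, c=1.0, dx=1.0, dt=0.5)
```

In the EEW setting, the "observed snapshot" fed into P comes from dense network monitoring plus data assimilation rather than from an analytic initial condition.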

  15. Paleoseismic investigations in the Santa Cruz mountains, California: Implications for recurrence of large-magnitude earthquakes on the San Andreas fault

    USGS Publications Warehouse

    Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.

    1998-01-01

    Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that

  16. Generalized Free-Surface Effect and Random Vibration Theory: a new tool for computing moment magnitudes of small earthquakes using borehole data

    NASA Astrophysics Data System (ADS)

    Malagnini, Luca; Dreger, Douglas S.

    2016-07-01

    Although optimal, computing the moment tensor solution is not always a viable option for the calculation of the size of an earthquake, especially for small events (say, below Mw 2.0). Here we show an alternative approach to the calculation of the moment-rate spectra of small earthquakes, and thus of their scalar moments, that uses a network-based calibration of crustal wave propagation. The method works best when applied to a relatively small crustal volume containing both the seismic sources and the recording sites. In this study we present the calibration of the crustal volume monitored by the High-Resolution Seismic Network (HRSN), along the San Andreas Fault (SAF) at Parkfield. After the quantification of the attenuation parameters within the crustal volume under investigation, we proceed to the spectral correction of the observed Fourier amplitude spectra for the 100 largest events in our data set. Multiple estimates of seismic moment for all events (1811 in total) are obtained by calculating the ratio of rms-averaged spectral quantities based on the peak values of the ground velocity in the time domain, as observed in narrowband-filtered time-series. The mathematical operations allowing the described spectral ratios are obtained from Random Vibration Theory (RVT). Due to the optimal conditions of the HRSN, in terms of signal-to-noise ratios, our network-based calibration allows the accurate calculation of seismic moments down to Mw < 0. However, because the HRSN is equipped only with borehole instruments, we define a frequency-dependent Generalized Free-Surface Effect (GFSE), to be used instead of the usual free-surface constant F = 2. Our spectral corrections at Parkfield need a different GFSE for each side of the SAF, which can be quantified by means of the analysis of synthetic seismograms. 
The importance of the GFSE of borehole instruments increases with decreasing earthquake size because for smaller earthquakes the bandwidth available

  17. [Comment on "Exaggerated claims about earthquake predictions: Analysis of NASA's method"] Pattern informatics and cellular seismology: A comparison of methods

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Tiampo, Kristy F.; Klein, William

    2007-06-01

    The recent article in Eos by Kafka and Ebel [2007] is a criticism of a NASA press release issued on 4 October 2004 describing an earthquake forecast (http://quakesim.jpl.nasa.gov/scorecard.html) based on a pattern informatics (PI) method [Rundle et al., 2002]. This 2002 forecast was a map indicating the probable locations of earthquakes having magnitude m>5.0 that would occur over the period of 1 January 2000 to 31 December 2009. Kafka and Ebel [2007] compare the Rundle et al. [2002] forecast to a retrospective analysis using a cellular seismology (CS) method. Here we analyze the performance of the Rundle et al. [2002] forecast using the first 15 of the m>5.0 earthquakes that occurred in the area covered by the forecasts.

  18. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes

    PubMed Central

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities’ preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model’s algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a decrease in the expected number of casualties. PMID:26959647
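The described integration of effect measures via logistic regression can be sketched as adjusting a baseline casualty probability on the log-odds scale. The baseline probability and odds ratios below are hypothetical, not values from the study:

```python
import math

def adjusted_probability(p_base, odds_ratios):
    """Fold human-vulnerability effect measures into a baseline casualty
    probability on the logistic (log-odds) scale, in the spirit of the
    model described above. Inputs are hypothetical, not the study's values."""
    logit = math.log(p_base / (1.0 - p_base))     # baseline log-odds
    logit += sum(math.log(orr) for orr in odds_ratios)  # add log odds ratios
    return 1.0 / (1.0 + math.exp(-logit))         # back to probability

# Baseline HAZUS-style severe-injury probability of 2%, adjusted for, e.g.,
# advanced age (OR 1.8) and low socioeconomic status (OR 1.4) -- both made up.
p_adj = adjusted_probability(0.02, [1.8, 1.4])
```

Multiplying odds ratios on the log-odds scale, rather than multiplying probabilities directly, keeps the adjusted value a valid probability even when several risk factors stack.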

  19. High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China

    PubMed Central

    Lui, Su; Huang, Xiaoqi; Chen, Long; Tang, Hehan; Zhang, Tijiang; Li, Xiuli; Li, Dongming; Kuang, Weihong; Chan, Raymond C.; Mechelli, Andrea; Sweeney, John A.; Gong, Qiyong

    2009-01-01

    Besides the enormous medical and economic consequences, natural disasters, such as the Wenchuan 8.0 earthquake, also pose a risk to the mental health of survivors. In this context, a better understanding is needed of how functional brain systems adapt to severe emotional stress. Previous animal studies have demonstrated the importance of limbic, paralimbic, striatal, and prefrontal structures in stress and fear responses. Human studies, which have focused primarily on patients with clinically established posttraumatic stress disorders, have reported abnormalities in similar brain structures. At present, little is known about potential alterations of brain function in trauma survivors shortly after traumatic events. Here, we show alteration of brain function in a cohort of healthy survivors within 25 days after the Wenchuan earthquake by a recently discovered method known as “resting-state” functional MRI. The current investigation demonstrates that regional activity in frontolimbic and striatal areas increased significantly and connectivity among limbic and striatal networks was attenuated in our participants who had recently experienced severe emotional trauma. Trauma victims also had a reduced temporal synchronization within the “default mode” of resting-state brain function, which has been characterized in humans and other species. Taken together, our findings provide evidence that significant alterations in brain function, similar in many ways to those observed in posttraumatic stress disorders, can be seen shortly after major traumatic experiences, highlighting the need for early evaluation and intervention for trauma survivors. PMID:19720989

  20. Increased correlation range of seismicity before large events manifested by earthquake chains

    NASA Astrophysics Data System (ADS)

    Shebalin, P.

    2006-10-01

    "Earthquake chains" are clusters of moderate-size earthquakes which extend over large distances and are formed by statistically rare pairs of events that are close in space and time ("neighbors"). Earthquake chains are supposed to be precursors of large earthquakes with lead times of a few months. Here we substantiate this hypothesis by mass testing it using a random earthquake catalog. Also, we study stability under variation of parameters and some properties of the chains. We found two invariant parameters: they characterize the spatial and energy scales of earthquake correlation. Both parameters of the chains show good correlation with the magnitudes of the earthquakes they precede. Earthquake chains are known as the first stage of the earthquake prediction algorithm reverse tracing of precursors (RTP) now tested in forward prediction. A discussion of the complete RTP algorithm is outside the scope of this paper, but the results presented here are important to substantiate the RTP approach.

  1. Monitoring of fluid-rock interaction in active fault zones: a new method of earthquake prediction/forecasting?

    NASA Astrophysics Data System (ADS)

    Claesson, L.; Skelton, A.; Graham, C.; Dietl, C.; Morth, M.; Torssander, P.

    2003-12-01

    We propose a new method for earthquake forecasting based on the "prediction in hindsight" of a Mw 5.8 earthquake on Iceland on September 16, 2002. The "prediction in hindsight" is based on geochemical monitoring of geothermal water at site HU-01, located within the Tjörnes Fracture Zone, northern Iceland, before and after the earthquake. During the 4 weeks before the earthquake, exponential (<800%) increases in the concentrations of Cu, Zn and Fe in the fluid were measured, together with a linear increase in Na/Ca and a slight increase in δ18O. We relate the hydrogeochemical changes before the earthquake to an influx of fluid that interacted with the host rock at higher temperatures, and suggest that fluid flow was facilitated by stress-induced modification of rock permeability, which enabled more rapid fluid-rock interaction. Stepwise increases (13-35%) in the concentrations of Ba, Ca, K, Li, Na, Rb, S, Si, Sr, Cl, Br and SO4, and negative shifts in δ18O and δD, were detected in the fluid immediately after the earthquake, which we relate to seismically induced source switching and the consequent influx of older (or purer) ice-age meteoric waters. The newly tapped source reservoir has a chemically and isotopically distinct ice-age meteoric water signature, the result of a longer residence time in the crust. The immediacy of these changes is consistent with experimentally derived timescales of fault sealing in response to coupled deformation and fluid flow, interpreted as source switching. These precursory changes may be used to "predict" the earthquake up to 2 weeks before it occurs.

  2. Short-term foreshock activity and its value for the earthquake prediction

    NASA Astrophysics Data System (ADS)

    Orfanogiannaki, Katerina; Daskalaki, Elena; Minadakis, George; Papadopoulos, Gerasimos

    2014-05-01

    Seismicity often occurs in space-time clusters: swarms, short-term foreshocks, and aftershocks. Swarms are space-time clusters that do not conclude with a mainshock. Earthquake statistics show that in areas with good seismicity monitoring, foreshocks precede sizeable (M5.5 or larger) mainshocks at a rate of about half percent. Therefore, discrimination between foreshocks and swarms is of crucial importance if foreshocks are to be used as a diagnostic of a forthcoming strong mainshock in real-time conditions. We analyzed seismic sequences in Greece and Italy with our algorithm FORMA (Foreshocks-Mainshock-Aftershocks) and discriminated between foreshocks and swarms based on significant seismicity changes in the space-time-magnitude domains. We argue that different statistical properties are diagnostic of foreshocks (e.g., a b-value drop) as opposed to swarms (a b-value increase). A complementary approach is based on Poisson Hidden Markov Models (PHMMs), which are introduced to model significant temporal seismicity changes. In a PHMM, the unobserved sequence of states is a finite-state Markov chain, and the distribution of the observation at any time is Poissonian with a rate depending only on the current state of the chain. Thus, a PHMM allows a region to have a varying seismicity rate. The PHMM is a promising diagnostic since the transition from one state to another depends not only on the total number of events involved but also on the current state of the system. A third methodological experiment was based on complex network theory. We found that the earthquake networks examined form a scale-free degree distribution. By computing their basic statistical measures, such as the Average Clustering Coefficient, Mean Path Length and Entropy, we found that they underline the strong space-time clustering of swarms, foreshocks, and aftershocks, but also their important differences. 
Therefore, network theory is an additional, promising tool to
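The Poisson HMM described above can be made concrete with the scaled forward algorithm, which evaluates the likelihood of an observed event-count sequence under hidden seismicity states. All rates, transition probabilities, and counts below are illustrative, not fitted values:

```python
import math

def poisson_hmm_loglik(counts, rates, trans, init):
    """Log-likelihood of event counts under a Poisson HMM: hidden
    seismicity states, each with its own Poisson rate. Scaled forward
    algorithm; all parameters here are illustrative."""
    def pois(k, lam):
        # Poisson probability mass function P(K = k | lambda = lam)
        return math.exp(-lam) * lam ** k / math.factorial(k)

    n = len(rates)
    # Initialize with the first observation.
    alpha = [init[s] * pois(counts[0], rates[s]) for s in range(n)]
    loglik = 0.0
    for t in range(1, len(counts) + 1):
        scale = sum(alpha)           # accumulate log-likelihood via scaling
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        if t == len(counts):
            break
        # Markov transition, then emission of the next count.
        alpha = [sum(alpha[r] * trans[r][s] for r in range(n))
                 * pois(counts[t], rates[s]) for s in range(n)]
    return loglik

# Two hypothetical states: background (2 events/month) and a
# swarm/foreshock state (10 events/month).
ll = poisson_hmm_loglik(
    counts=[1, 2, 3, 12, 9, 2],
    rates=[2.0, 10.0],
    trans=[[0.9, 0.1], [0.2, 0.8]],
    init=[0.5, 0.5])
```

Comparing such likelihoods (or the decoded state sequence) across candidate models is one way to flag a transition into an elevated-rate state of the kind the abstract describes.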

  3. Comment on "The directionality of acoustic T-phase signals from small magnitude submarine earthquakes" [J. Acoust. Soc. Am. 119, 3669-3675 (2006)].

    PubMed

    Bohnenstiehl, Delwayne R

    2007-03-01

    In a recent paper, Chapman and Marrett [J. Acoust. Soc. Am. 119, 3669-3675 (2006)] examined the tertiary (T-) waves associated with three subduction-related earthquakes within the South Fiji Basin. In that paper it is argued that acoustic energy is radiated into the sound channel by downslope propagation along abyssal seamounts and ridges that lie distant to the epicenter. A reexamination of the travel-time constraints indicates that this interpretation is not well supported. Rather, the propagation model that is described would require the high-amplitude T-wave components to be sourced well to the east of the region identified, along a relatively flat-lying seafloor. PMID:17407863

  4. Heart Rate and Heart Rate Variability Assessment Identifies Individual Differences in Fear Response Magnitudes to Earthquake, Free Fall, and Air Puff in Mice

    PubMed Central

    Kuang, Hui; Tsien, Joe Z.; Zhao, Fang

    2014-01-01

    Fear behaviors and fear memories in rodents have traditionally been assessed by the amount of freezing upon the presentation of conditioned cues or unconditioned stimuli. However, many experiences, such as encountering earthquakes or accidental falls from tree branches, may produce long-lasting fear memories but are behaviorally difficult to measure using freezing parameters. Here, we have examined changes in heartbeat interval dynamics as a physiological readout for assessing fearful reactions as mice were subjected to sudden air puff, free-fall drop inside a small elevator, and a laboratory-version earthquake. We showed that these fearful events rapidly increased heart rate (HR) with simultaneous reduction of heart rate variability (HRV). Cardiac changes can be further analyzed in detail by measuring three distinct phases: namely, the rapid rising phase in HR, the maximum plateau phase during which HRV is greatly decreased, and the recovery phase during which HR gradually recovers to baseline values. We showed that durations of the maximum plateau phase and HR recovery speed were quite sensitive to habituation over repeated trials. Moreover, we have developed the fear resistance index based on specific cardiac response features. We demonstrated that the fear resistance index remained largely consistent across distinct fearful events in a given animal, thereby enabling us to compare and rank individual mice’s fear responsiveness within the group. Therefore, the fear resistance index described here can represent a useful parameter for measuring personality traits or individual differences in stress-susceptibility in both wild-type mice and post-traumatic stress disorder (PTSD) models. PMID:24667366
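The HR and HRV readouts used in this study can be computed from a series of R-R (heartbeat) intervals. A minimal sketch with two standard time-domain measures, SDNN and RMSSD; the interval values are hypothetical, chosen only to mimic a baseline-versus-startle contrast:

```python
import math

def hr_and_hrv(rr_ms):
    """Mean heart rate (bpm) and two standard time-domain HRV measures
    from R-R intervals in milliseconds: SDNN (overall variability) and
    RMSSD (beat-to-beat variability)."""
    mean_rr = sum(rr_ms) / len(rr_ms)
    hr = 60000.0 / mean_rr                      # bpm from mean interval
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / len(rr_ms))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return hr, sdnn, rmssd

# Hypothetical mouse R-R intervals (ms): after a startling stimulus,
# HR rises and variability falls, as the study reports.
baseline = [105, 110, 98, 112, 101, 108]
startled = [80, 81, 79, 80, 82, 80]
hr_b, sdnn_b, rmssd_b = hr_and_hrv(baseline)
hr_s, sdnn_s, rmssd_s = hr_and_hrv(startled)
```

The paper's fear resistance index is built from response-phase features (rise, plateau, recovery) on top of readouts like these.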

  5. Forecasting magnitude, time, and location of aftershocks for aftershock hazard

    NASA Astrophysics Data System (ADS)

    Chen, K.; Tsai, Y.; Huang, M.; Chang, W.

    2011-12-01

In this study we investigate the spatial and temporal seismicity parameters of the aftershock sequence accompanying the magnitude 7.45 Chi-Chi, Taiwan, earthquake of 17:47 20 September 1999 (UTC). Dividing the epicentral zone into sections north of, at, and south of the epicenter, we find that immediately after the earthquake the area close to the epicenter had a lower b value than both the northern and southern sections. This pattern suggests that at the time of the Chi-Chi earthquake the area close to the epicenter remained prone to large-magnitude aftershocks and strong shaking. With time, however, the b value increased; an increasing b value indicates a reduced likelihood of large-magnitude aftershocks. The study also shows that the b value is higher in the southern section of the epicentral zone, indicating a faster rate of decay there. The primary purpose of this paper is to design a predictive model for forecasting the magnitude, time, and location of aftershocks of large earthquakes. The developed model is presented and applied to the magnitude 7.45 Chi-Chi, Taiwan, earthquake sequence, the magnitude 6.19 Nantou sequence of 09:32 5 November 2009 (UTC), and the magnitude 6.49 Jiashian sequence of 00:18 4 March 2010 (UTC). In addition, peak ground acceleration trends for the Nantou and Jiashian aftershock sequences are predicted and compared to actual trends; the estimated peak ground accelerations are remarkably similar to those calculated from recorded magnitudes in both trend and level. To improve the predictive skill of the model for occurrence time, we use an empirical relation to forecast the time of aftershocks; this relation improves time prediction over that of random processes. The results will be of interest to seismic mitigation specialists and rescue crews. We also apply the parameters and empirical relation from the Chi-Chi aftershocks of Taiwan to forecast aftershocks with magnitude M > 6.0 of the 05:46 11 March 2011 (UTC) Tohoku 9
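The seismicity parameter whose rise and fall this abstract tracks is the Gutenberg-Richter b value. A minimal sketch of its standard maximum-likelihood estimate (Aki's 1965 formula), using synthetic magnitudes rather than the study's catalog, might look like:

```python
# Sketch of Aki's (1965) maximum-likelihood estimate of the Gutenberg-Richter
# b value: b = log10(e) / (mean(M) - Mc) for events above the completeness
# magnitude Mc. Magnitudes below are synthetic, for illustration only.
import math

def b_value(magnitudes, completeness_mag):
    """Aki's MLE of the b value for events with M >= Mc."""
    above = [m for m in magnitudes if m >= completeness_mag]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - completeness_mag)

# A synthetic aftershock sample: many small events, a few large ones.
mags = [3.0, 3.1, 3.2, 3.0, 3.4, 3.6, 3.1, 4.0, 3.3, 5.1]
b = b_value(mags, completeness_mag=3.0)
# A lower b value implies a relatively higher proportion of large events,
# which is why a low b near the epicenter signals large-aftershock potential.
print(round(b, 2))
```

In practice the estimate is applied in moving space-time windows, which is how the abstract's north/epicenter/south and time-dependent comparisons are made.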

  6. Predicting earthquakes along the major plate tectonic boundaries in the Pacific

    USGS Publications Warehouse

    Spall, H.

    1978-01-01

In an article in the last issue of the Earthquake Information Bulletin ("Earthquakes and Plate Tectonics," by Henry Spall), we saw how 90 percent of the world's earthquakes occur at the margins of the Earth's major crustal plates. However, when we look at the distribution of earthquakes in detail, we see that a number of nearly aseismic regions, or seismic gaps, can be found along the present-day plate boundaries. Why is this? And can we regard these areas as being more likely to be the sites of future larger earthquakes than those segments of the plate boundaries that have ruptured recently?

  7. Real-time prediction of earthquake ground motion: application of data assimilation and real-time correction of site amplification factors

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.

    2013-12-01

In this presentation, I explain the data assimilation technique and real-time correction of frequency-dependent site amplification factors for time-evolutional prediction of seismic ground motion (waveforms), which is applicable to Earthquake Early Warning (EEW). At present, many methods of EEW first rapidly determine the hypocenter and magnitude (source parameters), and then predict ground motion from them; warnings are issued depending on the strength of the predicted ground motion. In this approach, however, it is not easy to take source extent and the effects of rupture directivity into account. Because S-wave arrival does not necessarily mean the start of strong motion for large earthquakes, as experienced during the 2011 Tohoku earthquake (Mw 9.0), it is hard to predict the time evolution of ground motion strength. In general, wave motion is predictable when the boundary condition and initial condition are given. Time-evolutional prediction is a method based on this approach using the current wavefield as an initial condition, that is, u(x, t+Δt) = P(u(x, t)), where u is the wave motion at location x at lapse time t, and P is the prediction operator. Future wave motion, u(x, t+Δt), is predicted using P from the distribution of the current wavefield, u(x, t). For P, a finite difference technique or a boundary integral equation method, such as the Kirchhoff integral, is used. Time-evolutional prediction enables us to predict the time evolution of ground motion strength. In time-evolutional prediction, determination of the detailed distribution of current wave motion is key, so a dense seismic observation network is required. Data assimilation is a technique for producing an artificially denser network, widely used in numerical weather forecasting and oceanography. The distribution of current wave motion is estimated from not only the current actual observation of u(x, t), but also the prediction of one step before, P(u(x, t
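The assimilation step the abstract describes, blending the one-step-ahead prediction with sparse observations, can be sketched as a simple nudging update. This is not the author's actual scheme; the station values and the gain are illustrative assumptions.

```python
# Minimal sketch of the assimilation idea in the abstract: the current
# wavefield estimate combines the prediction from one step before with the
# sparse actual observations. A simple nudging / optimal-interpolation style
# update is shown; gain and values are illustrative, not the author's scheme.

def assimilate(predicted, observed, gain=0.6):
    """Blend prediction with observations where they exist (None = no station)."""
    analysis = []
    for p, o in zip(predicted, observed):
        if o is None:                 # no station at this grid point:
            analysis.append(p)        #   keep the model prediction
        else:                         # station present:
            analysis.append(p + gain * (o - p))   # pull toward the observation
    return analysis

predicted = [0.0, 0.5, 1.0, 0.5, 0.0]      # u(x, t) predicted one step before
observed = [None, 0.8, None, 0.6, None]    # sparse actual observations
analysis = assimilate(predicted, observed)
print(analysis)
```

The analysis field then serves as the initial condition for the next application of the prediction operator P, closing the predict-assimilate loop.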

  8. On the statistical analysis of maximal magnitude

    NASA Astrophysics Data System (ADS)

    Holschneider, M.; Zöller, G.; Hainzl, S.

    2012-04-01

We show how the maximum expected magnitude within a time horizon [0, T] may be estimated from earthquake catalog data within the context of truncated Gutenberg-Richter statistics. We present the results in a frequentist and in a Bayesian setting. Instead of deriving point estimates of this parameter and reporting their performance in terms of expectation value and variance, we focus on the calculation of confidence intervals based on an imposed level of confidence α. We present an estimate of the maximum magnitude within an observational time interval T in the future, given a complete earthquake catalog for a time period Tc in the past and optionally some paleoseismic events. We argue that from a statistical point of view the maximum magnitude in a time window is a reasonable parameter for probabilistic seismic hazard assessment, while the commonly used maximum possible magnitude for all times almost certainly does not allow the calculation of useful (i.e., non-trivial) confidence intervals. In the context of an unbounded GR law we show that Jeffreys' invariant prior distribution yields normalizable posteriors. The predictive distribution based on this prior is explicitly computed.
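The frequentist idea can be sketched directly: for n independent events drawn from a doubly truncated Gutenberg-Richter law, the largest magnitude has CDF F(m)^n, from which probability statements about the maximum in a window follow. The parameter values below (b value, truncation bounds, n) are illustrative assumptions, not the paper's.

```python
# Sketch: for n independent events under a doubly truncated Gutenberg-Richter
# law, the largest magnitude M_max has CDF F(m)**n. Parameters (b, bounds, n)
# below are illustrative, not taken from the paper.
import math

def gr_cdf(m, b, m_min, m_max):
    """Doubly truncated Gutenberg-Richter CDF on [m_min, m_max]."""
    num = 1.0 - 10.0 ** (-b * (m - m_min))
    den = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return num / den

def max_mag_cdf(m, n, b, m_min, m_max):
    """P(largest of n independent events <= m)."""
    return gr_cdf(m, b, m_min, m_max) ** n

# With 100 events expected in the window, how likely is the maximum to stay
# below magnitude 6.5, assuming b = 1 and catalog bounds [4.0, 8.0]?
p = max_mag_cdf(6.5, n=100, b=1.0, m_min=4.0, m_max=8.0)
print(round(p, 3))
```

Inverting this CDF at a chosen level α gives the kind of confidence bound on the future maximum magnitude that the abstract advocates over a point estimate.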

  9. High-Magnitude (>Mw8.0) Megathrust Earthquakes and the Subduction of Thick Sediment, Tectonic Debris, and Smooth Sea Floor

    NASA Astrophysics Data System (ADS)

    Scholl, D. W.; Kirby, S. H.; von Huene, R.; Ryan, H. F.; Wells, R. E.

    2014-12-01

INTRODUCTION: Ruff (1989, Pure and Applied Geophysics, v. 129) proposed that thick or excess sediment entering the subduction zone (SZ) smooths and strengthens the trench-parallel distribution of interplate coupling strength. This circumstance was conjectured to favor rupture continuation and the generation of interplate thrusts (IPTs) of magnitude >Mw8.2. But, statistically, the correlation of excess sediment and high-magnitude IPTs was deemed "less than compelling". NEW OBSERVATIONS: Using a larger and better-vetted catalog of instrumental-era (1899 through Jan. 2013) IPTs of magnitude Mw7.5 to 9.5 (n=176), and a far more accurate compilation of trench sediment thickness, we tested whether, in fact, a compelling correlation exists between the occurrence of great IPTs and where thick (>1.0-1.5 km) vs thin (<1.0-0.5 km) sedimentary sections enter the SZ. Based on the new compilations, a statistically supported statement can be made that great megathrusts are most prone to nucleate at well-sedimented SZs. Despite the shorter (by 7500 km) global length of thick-sediment (vs thin-sediment) trenches, ~53% of all instrumental events of magnitude >Mw8.0, ~75% of events >Mw8.5, and 100% of IPTs >Mw9.0 occurred at thick-sediment trenches. No event >Mw9.0 ruptured at a thin-sediment trench; the three super-giant IPTs (1960 Chile Mw9.5, 1964 Alaska Mw9.2, and 2004 Sumatra Mw9.2) all occurred at thick-sediment trenches. Significantly, however, large Mw8.0-9.0 events also commonly (n=23) nucleated at thin-sediment trenches. These IPTs are associated with the subduction of low-relief oceanic crust and with locations where the debris of subduction erosion thickens the subduction channel separating the two plates. INFERENCES: Our new, larger, and corrected data compilations support the conjecture by Ruff (1989) that subduction of a thick section of sediment favors rupture continuation and nucleation of high-magnitude Mw8.0 to 9.5 IPTs. This observation can be linked to a causative mechanism of sediment

  10. Predictability of catastrophic events: Material rupture, earthquakes, turbulence, financial crashes, and human birth

    PubMed Central

    Sornette, Didier

    2002-01-01

    We propose that catastrophic events are “outliers” with statistically different properties than the rest of the population and result from mechanisms involving amplifying critical cascades. We describe a unifying approach for modeling and predicting these catastrophic events or “ruptures,” that is, sudden transitions from a quiescent state to a crisis. Such ruptures involve interactions between structures at many different scales. Applications and the potential for prediction are discussed in relation to the rupture of composite materials, great earthquakes, turbulence, and abrupt changes of weather regimes, financial crashes, and human parturition (birth). Future improvements will involve combining ideas and tools from statistical physics and artificial/computational intelligence, to identify and classify possible universal structures that occur at different scales, and to develop application-specific methodologies to use these structures for prediction of the “crises” known to arise in each application of interest. We live on a planet and in a society with intermittent dynamics rather than a state of equilibrium, and so there is a growing and urgent need to sensitize students and citizens to the importance and impacts of ruptures in their multiple forms. PMID:11875205

  11. Late Holocene slip rate of the San Andreas fault and its accommodation by creep and moderate-magnitude earthquakes at Parkfield, California

    USGS Publications Warehouse

    Toke, N.A.; Arrowsmith, J.R.; Rymer, M.J.; Landgraf, A.; Haddad, D.E.; Busch, M.; Coyan, J.; Hannah, A.

    2011-01-01

Investigation of a right-laterally offset channel at the Miller's Field paleoseismic site yields a late Holocene slip rate of 26.2 +6.4/-4.3 mm/yr (1σ) for the main trace of the San Andreas fault at Parkfield, California. This is the first well-documented geologic slip rate between the Carrizo and creeping sections of the San Andreas fault. This rate is lower than Holocene measurements along the Carrizo Plain and rates implied by far-field geodetic measurements (~35 mm/yr). However, the rate is consistent with historical slip rates, measured to the northwest, along the creeping section of the San Andreas fault (<30 mm/yr). The paleoseismic exposures at the Miller's Field site reveal a pervasive fabric of clay shear bands, oriented clockwise oblique to the San Andreas fault strike and extending into the uppermost stratigraphy. This fabric is consistent with dextral aseismic creep and observations of surface slip from the 28 September 2004 M6 Parkfield earthquake. Together, this slip rate and deformation fabric suggest that the historically observed San Andreas fault slip behavior along the Parkfield section has persisted for at least a millennium, and that significant slip is accommodated by structures in a zone beyond the main San Andreas fault trace. © 2011 Geological Society of America.

  12. Earthquake Hazards.

    ERIC Educational Resources Information Center

    Donovan, Neville

    1979-01-01

    Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)

  13. Anomalous phenomena in Schumann resonance band observed in China before the 2011 magnitude 9.0 Tohoku-Oki earthquake in Japan

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjuan; Zhou, Zhiquan; Qiao, Xiaolin; Yu, Haiyan

    2013-12-01

Anomalous phenomena in the Schumann resonance (SR) band, possibly associated with the Tohoku-Oki earthquake (EQ), are studied based on the ELF observations at two stations in China. The anomaly appeared on 8 March, 3 days prior to the main shock, and was characterized by an increase in the intensity at frequencies from the first mode to the fourth mode in both magnetic field components, different from the observations in Japan before large EQs in Taiwan. The abnormal behaviors of the north-south and east-west magnetic field components primarily appeared at 0000-0900 UT and 0200-0900 UT on 8 March, respectively. The finite difference time domain numerical method is applied to model the impact of the seismic process on ELF radio propagation. A partially uniform knee model of the vertical conductivity profile suggested by V. C. Mushtak is used to model the day-night asymmetric Earth-ionosphere cavity, and a locally EQ-induced disturbance model of the atmospheric conductivity is introduced. The atmospheric conductivity is assumed to increase around the epicenter according to the localized enhancement of total electron content in the ionosphere. It is concluded that the SR anomalous phenomena before the Tohoku-Oki EQ have much to do with the excited sources located in South America and Asia and also with the localized distribution of the disturbed conductivity. This work is a further confirmation of the relationship of SR anomalies with large EQs and further concludes, by numerical method, that the distortions in the SR band before large EQs may be caused by the irregularities located over the shock epicenter in the Earth-ionosphere cavity.

  14. In-situ fluid-pressure measurements for earthquake prediction: An example from a deep well at Hi Vista, California

    USGS Publications Warehouse

    Healy, J.H.; Urban, T.C.

    1985-01-01

Short-term earthquake prediction requires sensitive instruments for measuring the small anomalous changes in stress and strain that precede earthquakes. Instruments installed at or near the surface have proven too noisy for measuring anomalies of the size expected to occur, and it is now recognized that even to have the possibility of a reliable earthquake-prediction system will require instruments installed in drill holes at depths sufficient to reduce the background noise to a level below that of the expected premonitory signals. We are conducting experiments to determine the maximum signal-to-noise improvement that can be obtained in drill holes. In a 592 m well in the Mojave Desert near Hi Vista, California, we measured water-level changes with amplitudes greater than 10 cm, induced by earth tides. By removing the effects of barometric pressure and the stress related to earth tides, we have achieved a sensitivity to volumetric strain rates of 10⁻⁹ to 10⁻¹⁰ per day. Further improvement may be possible, and it appears that a successful earthquake-prediction capability may be achieved with an array of instruments installed in drill holes at depths of about 1 km, assuming that the premonitory strain signals are, in fact, present. © 1985 Birkhäuser Verlag.

  15. In-situ fluid-pressure measurements for earthquake prediction: An example from a deep well at Hi Vista, California

    NASA Astrophysics Data System (ADS)

    Healy, John H.; Urban, T. C.

    1984-03-01

Short-term earthquake prediction requires sensitive instruments for measuring the small anomalous changes in stress and strain that precede earthquakes. Instruments installed at or near the surface have proven too noisy for measuring anomalies of the size expected to occur, and it is now recognized that even to have the possibility of a reliable earthquake-prediction system will require instruments installed in drill holes at depths sufficient to reduce the background noise to a level below that of the expected premonitory signals. We are conducting experiments to determine the maximum signal-to-noise improvement that can be obtained in drill holes. In a 592 m well in the Mojave Desert near Hi Vista, California, we measured water-level changes with amplitudes greater than 10 cm, induced by earth tides. By removing the effects of barometric pressure and the stress related to earth tides, we have achieved a sensitivity to volumetric strain rates of 10⁻⁹ to 10⁻¹⁰ per day. Further improvement may be possible, and it appears that a successful earthquake-prediction capability may be achieved with an array of instruments installed in drill holes at depths of about 1 km, assuming that the premonitory strain signals are, in fact, present.

  16. Ground Motions Due to Earthquakes on Creeping Faults

    NASA Astrophysics Data System (ADS)

    Harris, R.; Abrahamson, N. A.

    2014-12-01

    We investigate the peak ground motions from the largest well-recorded earthquakes on creeping strike-slip faults in active-tectonic continental regions. Our goal is to evaluate if the strong ground motions from earthquakes on creeping faults are smaller than the strong ground motions from earthquakes on locked faults. Smaller ground motions might be expected from earthquakes on creeping faults if the fault sections that strongly radiate energy are surrounded by patches of fault that predominantly absorb energy. For our study we used the ground motion data available in the PEER NGA-West2 database, and the ground motion prediction equations that were developed from the PEER NGA-West2 dataset. We analyzed data for the eleven largest well-recorded creeping-fault earthquakes, that ranged in magnitude from M5.0-6.5. Our findings are that these earthquakes produced peak ground motions that are statistically indistinguishable from the peak ground motions produced by similar-magnitude earthquakes on locked faults. These findings may be implemented in earthquake hazard estimates for moderate-size earthquakes in creeping-fault regions. Further investigation is necessary to determine if this result will also apply to larger earthquakes on creeping faults. Please also see: Harris, R.A., and N.A. Abrahamson (2014), Strong ground motions generated by earthquakes on creeping faults, Geophysical Research Letters, vol. 41, doi:10.1002/2014GL060228.

  17. Remote triggering of deep earthquakes in the 2002 Tonga sequences.

    PubMed

    Tibi, Rigobert; Wiens, Douglas A; Inoue, Hiroshi

    2003-08-21

    It is well established that an earthquake in the Earth's crust can trigger subsequent earthquakes, but such triggering has not been documented for deeper earthquakes. Models for shallow fault interactions suggest that static (permanent) stress changes can trigger nearby earthquakes, within a few fault lengths from the causative earthquake, whereas dynamic (transient) stresses carried by seismic waves may trigger earthquakes both nearby and at remote distances. Here we present a detailed analysis of the 19 August 2002 Tonga deep earthquake sequences and show evidence for both static and dynamic triggering. Seven minutes after a magnitude 7.6 earthquake occurred at a depth of 598 km, a magnitude 7.7 earthquake (664 km depth) occurred 300 km away, in a previously aseismic region. We found that nearby aftershocks of the first mainshock are preferentially located in regions where static stresses are predicted to have been enhanced by the mainshock. But the second mainshock and other triggered events are located at larger distances where static stress increases should be negligible, thus suggesting dynamic triggering. The origin times of the triggered events do not correspond to arrival times of the main seismic waves from the mainshocks and the dynamically triggered earthquakes frequently occur in aseismic regions below or adjacent to the seismic zone. We propose that these events are triggered by transient effects in regions near criticality, but where earthquakes have difficulty nucleating without external influences. PMID:12931183

  18. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
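The Poisson-gamma mixture representation of the negative binomial discussed above can be sketched directly: draw a rate from a gamma law, then a Poisson count at that rate. Parameters are illustrative; the point is the overdispersion (variance exceeding the mean) that makes the NBD suitable for clustered earthquake counts.

```python
# Sketch of the negative binomial (NBD) as a Poisson-gamma mixture, one of
# the representations discussed in the abstract: a gamma-distributed rate
# followed by a Poisson draw. Parameters are illustrative.
import math
import random

random.seed(1)

def nbd_sample(shape, scale):
    """One NBD draw via the Poisson-gamma mixture."""
    rate = random.gammavariate(shape, scale)     # gamma-distributed rate
    # Poisson draw by inversion (adequate for small rates).
    l, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

counts = [nbd_sample(shape=2.0, scale=3.0) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Unlike the Poisson (variance = mean), the NBD is overdispersed:
assert var > mean
print(round(mean, 1), round(var, 1))
```

For this parameterization the theoretical mean is shape × scale = 6 and the variance shape × scale × (1 + scale) = 24; the extra spread is what the abstract's second NBD parameter captures as clustering.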

  19. Detection of hydrothermal precursors to large northern california earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-01

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes. PMID:17738277

  20. Finite difference simulations of seismic wave propagation for understanding earthquake physics and predicting ground motions: Advances and challenges

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Ducellier, Ariane; Dupros, Fabrice; Michea, David

    2013-08-01

    Seismic waves radiated from an earthquake propagate in the Earth and the ground shaking is felt and recorded at (or near) the ground surface. Understanding the wave propagation with respect to the Earth's structure and the earthquake mechanisms is one of the main objectives of seismology, and predicting the strong ground shaking for moderate and large earthquakes is essential for quantitative seismic hazard assessment. The finite difference scheme for solving the wave propagation problem in elastic (sometimes anelastic) media has been more widely used since the 1970s than any other numerical methods, because of its simple formulation and implementation, and its easy scalability to large computations. This paper briefly overviews the advances in finite difference simulations, focusing particularly on earthquake mechanics and the resultant wave radiation in the near field. As the finite difference formulation is simple (interpolation is smooth), an easy coupling with other approaches is one of its advantages. A coupling with a boundary integral equation method (BIEM) allows us to simulate complex earthquake source processes.
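The finite difference scheme's simplicity, as noted above, is easy to demonstrate. A minimal 1D sketch of the second-order leapfrog update for the wave equation u_tt = c²·u_xx follows; the grid and parameter choices are illustrative, not from any of the simulations in the paper.

```python
# Minimal 1D finite-difference sketch of elastic wave propagation:
# second-order leapfrog update of u_tt = c^2 * u_xx on a string with fixed
# ends, starting from a displacement pulse. Parameters are illustrative.

c, dx, dt = 1.0, 0.1, 0.05          # wave speed, grid spacing, time step
r2 = (c * dt / dx) ** 2             # Courant number squared (<= 1 for stability)
n = 101

u_prev = [0.0] * n                  # u(x, t - dt)
u = [0.0] * n                       # u(x, t)
u[50] = 1.0                         # initial pulse at the center
u_prev[50] = 1.0                    # zero initial velocity

for _ in range(40):                 # advance 40 time steps
    u_next = [0.0] * n              # endpoints stay zero (fixed boundaries)
    for i in range(1, n - 1):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next

# The pulse has split and propagated away from the center.
print(max(abs(v) for v in u))
```

Production codes replace this scalar update with the 3D (visco)elastic velocity-stress formulation, but the stencil structure, and hence the easy coupling and scalability the paper highlights, is the same.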

  1. Procedure of evaluating parameters of inland earthquakes caused by long strike-slip faults for ground motion prediction

    NASA Astrophysics Data System (ADS)

    Ju, Dianshu; Dan, Kazuo; Fujiwara, Hiroyuki; Morikawa, Nobuyuki

    2016-04-01

    We proposed a procedure of evaluating fault parameters of asperity models for predicting strong ground motions from inland earthquakes caused by long strike-slip faults. In order to obtain averaged dynamic stress drops, we adopted the formula obtained by dynamic fault rupturing simulations for surface faults of the length from 15 to 100 km, because the formula of the averaged static stress drops for circular cracks, commonly adopted in existing procedures, cannot be applied to surface faults or long faults. The averaged dynamic stress drops were estimated to be 3.4 MPa over the entire fault and 12.2 MPa on the asperities, from the data of 10 earthquakes in Japan and 13 earthquakes in other countries. The procedure has a significant feature that the average slip on the seismic faults longer than about 80 km is constant, about 300 cm. In order to validate our proposed procedure, we made a model for a 141 km long strike-slip fault by our proposed procedure for strike-slip faults, predicted ground motions, and showed that the resultant motions agreed well with the records of the 1999 Kocaeli, Turkey, earthquake (Mw 7.6) and with the peak ground accelerations and peak ground velocities by the GMPE of Si and Midorikawa (1999).
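The fault parameters above connect to magnitude through the standard definitions M0 = μ·A·D and Mw = (2/3)·(log10 M0 − 9.1), with M0 in N·m. A hedged sketch for a fault of the paper's 141 km length and ~300 cm average slip follows; the rigidity and the 15 km seismogenic width are assumed illustrative values, not taken from the paper.

```python
# Sketch relating fault parameters to moment magnitude via the standard
# definitions M0 = mu * A * D (N m) and Mw = (2/3) * (log10(M0) - 9.1).
# The rigidity and fault width below are assumed values, not from the paper.
import math

def moment_magnitude(length_km, width_km, slip_m, rigidity_pa=3.0e10):
    area = (length_km * 1e3) * (width_km * 1e3)   # rupture area, m^2
    m0 = rigidity_pa * area * slip_m              # seismic moment, N m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A 141 km long strike-slip fault with ~300 cm average slip (as in the
# proposed procedure) and an assumed 15 km seismogenic width:
mw = moment_magnitude(141.0, 15.0, 3.0)
print(round(mw, 1))
```

Under these assumptions the result lands near the Mw 7.6 of the 1999 Kocaeli earthquake used for validation, illustrating why a roughly constant slip on long strike-slip faults still lets magnitude grow with rupture length.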

  2. De-confounding of Relations Between Land-Level and Sea-Level Change, Humboldt Bay, Northern California: Uncertain Predictions of Magnitude and Timing of Tectonic and Eustatic Processes

    NASA Astrophysics Data System (ADS)

    Gilkerson, W.; Leroy, T. H.; Patton, J. R.; Williams, T. B.

    2010-12-01

    Humboldt Bay in Northern California provides a unique opportunity to investigate the effects of relative sea level change on both native flora and maritime aquiculture as influenced by both tectonic and eustatic sea-level changes. This combination of superposed influences makes quantitatively predicting relative sea-level more uncertain and consumption of the results for public planning purposes exceedingly difficult. Public digestion for practical purposes is confounded by the fact that the uncertainty for eustatic sea-level changes is a magnitude issue while the uncertainty associated with the tectonic land level changes is both a magnitude and timing problem. Secondly, the public is less well informed regarding how crustal deformation contributes to relative sea-level change. We model the superposed effects of eustatic sea-level rise and tectonically driven land-level changes on the spatial distribution of habitats suitable to native eelgrass (Zostera marina) and oyster mariculture operations in Humboldt Bay. While these intertidal organisms were chosen primarily because they have vertically restricted spatial distributions that can be successfully modeled, the public awareness of their ecologic and economic importance is also well developed. We employ easy to understand graphics depicting conceptual ideas along with maps generated from the modeling results to develop locally relevant estimates of future sea level rise over the next 100 years, a time frame consistent with local planning. We bracket these estimates based on the range of possible vertical deformation changes. These graphic displays can be used as a starting point to propose local outcomes from global and regional relative sea-level changes with respect to changes in the distribution of suitable habitat for ecologically and economically valuable species. 
Currently the largest sources of uncertainty for changes in relative sea-level in the Humboldt Bay area are 1) the rate and magnitude of tectonic

  3. Magnitude of daily energy deficit predicts frequency but not severity of menstrual disturbances associated with exercise and caloric restriction

    PubMed Central

    Leidy, Heather J.; Hill, Brenna R.; Lieberman, Jay L.; Legro, Richard S.; Souza, Mary Jane De

    2014-01-01

    We assessed the impact of energy deficiency on menstrual function using controlled feeding and supervised exercise over four menstrual cycles (1 baseline and 3 intervention cycles) in untrained, eumenorrheic women aged 18–30 yr. Subjects were randomized to either an exercising control (EXCON) or one of three exercising energy deficit (ED) groups, i.e., mild (ED1; −8 ± 2%), moderate (ED2; −22 ± 3%), or severe (ED3; −42 ± 3%). Menstrual cycle length and changes in urinary concentrations of estrone-1-glucuronide, pregnanediol glucuronide, and midcycle luteinizing hormone were assessed. Thirty-four subjects completed the study. Weight loss occurred in ED1 (−3.8 ± 0.2 kg), ED2 (−2.8 ± 0.6 kg)