Science.gov

Sample records for earthquake magnitude prediction

  1. Are Earthquake Magnitudes Clustered?

    SciTech Connect

    Davidsen, Joern; Green, Adam

    2011-03-11

    The question of earthquake predictability is a long-standing and important challenge. Recent results [Phys. Rev. Lett. 98, 098501 (2007); ibid. 100, 038501 (2008)] have suggested that earthquake magnitudes are clustered, thus indicating that they are not independent, in contrast to what is typically assumed. Here, we present evidence that the observed magnitude correlations are to a large extent, if not entirely, an artifact due to the incompleteness of earthquake catalogs and the well-known modified Omori law. The latter leads to variations in the frequency-magnitude distribution if the distribution is constrained to those earthquakes that are close in space and time to the directly following event.

  2. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries

    PubMed Central

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006–2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year. PMID:26812351

  3. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries.

    PubMed

    Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
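
    A minimal sketch of the feature and label construction described above, with hypothetical names: a moving average of yearly event counts stands in for the paper's seismic indicators, and the binary label records whether the next year's maximum magnitude exceeds the region's median yearly maximum. This is an illustration of the setup, not the paper's exact feature set.

```python
import statistics

def yearly_features(max_mags, counts, window=3):
    """Build one (feature, label) row per usable year of a region's history.
    max_mags[i] and counts[i] are the maximum magnitude and event count in
    year i. The feature is a trailing moving average of counts; the label is
    whether next year's maximum exceeds the region's median yearly maximum."""
    median_max = statistics.median(max_mags)
    rows = []
    for i in range(window, len(max_mags) - 1):
        avg_count = sum(counts[i - window:i]) / window  # trailing moving average
        label = int(max_mags[i + 1] > median_max)       # next-year exceedance target
        rows.append((avg_count, label))
    return rows
```

A trained classifier would then consume these rows, with the last five years of each region held out for testing as in the paper.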

  4. Neural network models for earthquake magnitude prediction using multiple seismicity indicators.

    PubMed

    Panakkat, Ashif; Adeli, Hojjat

    2007-02-01

    Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall, the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While at present earthquake prediction cannot be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.
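
    The four verification measures named in this abstract come from a standard 2x2 contingency table of predicted versus observed events. A small sketch, assuming the "R score" is the Hanssen-Kuipers true skill score (probability of detection minus probability of false detection), which is its usual definition:

```python
def forecast_skill(hits, false_alarms, misses, correct_negatives):
    """Compute POD, FAR, frequency bias, and true skill score from a
    2x2 contingency table of binary forecasts vs outcomes."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    pofd = false_alarms / (false_alarms + correct_negatives)
    tss = pod - pofd                                # true skill (R) score
    return pod, far, bias, tss
```

For example, 8 hits, 2 false alarms, 2 misses, and 8 correct negatives give POD 0.8, FAR 0.2, bias 1.0, and a true skill score of 0.6.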

  5. Earthquake prediction

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1991-01-01

    The state of the art in earthquake prediction is discussed. Short-term prediction based on seismic precursors, changes in the ratio of compressional velocity to shear velocity, tilt and strain precursors, electromagnetic precursors, hydrologic phenomena, chemical monitors, and animal behavior is examined. Seismic hazard assessment is addressed, and the applications of dynamical systems to earthquake prediction are discussed.

  6. Influence of Time and Space Correlations on Earthquake Magnitude

    SciTech Connect

    Lippiello, E.; Arcangelis, L. de; Godano, C.

    2008-01-25

    A crucial point in the debate on the feasibility of earthquake prediction is the dependence of an earthquake's magnitude on past seismicity. Indeed, while clustering in time and space is widely accepted, much more questionable is the existence of magnitude correlations. The standard approach generally assumes that magnitudes are independent and therefore in principle unpredictable. Here we show the existence of clustering in magnitude: earthquakes occur with higher probability close in time, space, and magnitude to previous events. More precisely, the next earthquake tends to have a magnitude similar to, but smaller than, the previous one. A dynamical scaling relation between magnitude, time, and space distances reproduces the complex pattern of magnitude, spatial, and temporal correlations observed in experimental seismic catalogs.

  7. BNL PREDICTION OF NUPEC'S FIELD MODEL TESTS OF NPP STRUCTURES SUBJECT TO SMALL TO MODERATE MAGNITUDE EARTHQUAKES.

    SciTech Connect

    XU,J.; COSTANTINO,C.; HOFMAYER,C.; MURPHY,A.; KITADA,Y.

    2003-08-17

    As part of a verification test program for seismic analysis codes for NPP structures, the Nuclear Power Engineering Corporation (NUPEC) of Japan has conducted a series of field model test programs to ensure the adequacy of methodologies employed for seismic analyses of NPP structures. A collaborative program between the United States and Japan was developed to study seismic issues related to NPP applications. The US Nuclear Regulatory Commission (NRC) and its contractor, Brookhaven National Laboratory (BNL), are participating in this program to apply common analysis procedures to predict both free field and soil-structure interaction (SSI) responses to recorded earthquake events, including embedment and dynamic cross interaction (DCI) effects. This paper describes the BNL effort to predict seismic responses of the large-scale realistic model structures for reactor and turbine buildings at the NUPEC test facility in northern Japan. The NUPEC test program has collected a large amount of recorded earthquake response data (both free-field and in-structure) from these test model structures. The BNL free-field analyses were performed with the CARES program, while the SSI analyses were performed using the SASSI2000 computer code. The BNL analysis includes both embedded and excavated conditions, as well as the DCI effect. The BNL analysis results and their comparisons to the NUPEC recorded responses are presented in the paper.

  8. Maximum magnitude earthquakes induced by fluid injection

    NASA Astrophysics Data System (ADS)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
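
    The bound described above is simple to evaluate. A sketch, using the standard moment-to-magnitude conversion and a nominal rigidity of 30 GPa (both conventional assumptions, not values taken from the paper):

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3e10):
    """Upper bound on induced-earthquake size from the relation described
    above: maximum seismic moment M0 (N*m) = G * dV, converted to moment
    magnitude via Mw = (2/3) * (log10(M0) - 9.1)."""
    m0_max = shear_modulus_pa * injected_volume_m3
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)
```

For example, a project injecting 10^6 m^3 of fluid would, under these assumptions, be bounded near Mw 4.9, consistent with the magnitude-5 range the abstract cites for wastewater disposal.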

  9. The magnitude distribution of dynamically triggered earthquakes

    NASA Astrophysics Data System (ADS)

    Hernandez, Stephen

    Large dynamic strains carried by seismic waves are known to trigger seismicity far from their source region. It is unknown, however, whether surface waves trigger only small earthquakes, or whether they can also trigger large, societally significant earthquakes. To address this question, we use a mixing model approach in which total seismicity is decomposed into two broad subclasses: "triggered" events initiated or advanced by far-field dynamic strains, and "untriggered" spontaneous events consisting of everything else. The b-value of a mixed data set, b_MIX, is decomposed into a weighted sum of the b-values of its constituent components, b_T and b_U. For populations of earthquakes subjected to dynamic strain, the fraction of earthquakes that are likely triggered, f_T, is estimated via inter-event time ratios and used to invert for b_T. The confidence bounds on b_T are estimated by multiple inversions of bootstrap resamplings of b_MIX and f_T. For Californian seismicity, data are consistent with a single-parameter Gutenberg-Richter hypothesis governing the magnitudes of both triggered and untriggered earthquakes. Triggered earthquakes therefore seem just as likely to be societally significant as any other population of earthquakes.
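
    One way to sketch the mixture inversion: under the Aki maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc), the mean magnitude of a mixture is the weighted sum of the component means, so it is 1/b (not b itself) that mixes linearly across subpopulations. The inversion for the triggered b-value then follows algebraically. This is a simplification of the paper's bootstrap procedure, shown only to make the decomposition concrete:

```python
def triggered_b_value(b_mix, b_untriggered, f_triggered):
    """Invert a two-component Gutenberg-Richter mixture for the b-value of
    the triggered subpopulation, using the linear mixing of 1/b implied by
    the Aki estimator:
        1 / b_mix = f_T / b_T + (1 - f_T) / b_U
    """
    inv_bt = (1.0 / b_mix - (1.0 - f_triggered) / b_untriggered) / f_triggered
    return 1.0 / inv_bt
```

When the triggered and untriggered b-values are equal, the inversion returns that common value for any triggered fraction, as it should.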

  10. Precise Relative Earthquake Magnitudes from Cross Correlation

    DOE PAGES Beta

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
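
    A toy version of the joint inversion idea: treat each correlation-derived log10 amplitude ratio as an observation of the difference m_i - m_j, and solve the over-determined system for all events at once in a least-squares sense, pinning the mean magnitude to zero since only differences are constrained. A sketch of the intercorrelation approach, not the authors' implementation:

```python
def relative_magnitudes(pairs, n_events):
    """Least-squares inversion of pairwise log10 amplitude ratios for
    relative event magnitudes: each observation (i, j, dij) constrains
    m_i - m_j ~= dij. The constant-shift null space is removed by
    (softly) enforcing sum(m) = 0."""
    # Normal equations A^T A m = A^T d for design rows (e_i - e_j).
    ata = [[0.0] * n_events for _ in range(n_events)]
    atd = [0.0] * n_events
    for i, j, d in pairs:
        ata[i][i] += 1.0
        ata[j][j] += 1.0
        ata[i][j] -= 1.0
        ata[j][i] -= 1.0
        atd[i] += d
        atd[j] -= d
    # Add a rank-one term (all-ones matrix) that penalizes a nonzero mean.
    for r in range(n_events):
        for c in range(n_events):
            ata[r][c] += 1.0
    # Gauss-Jordan elimination; adequate for the tiny systems sketched here.
    for col in range(n_events):
        piv = ata[col][col]
        ata[col] = [v / piv for v in ata[col]]
        atd[col] /= piv
        for row in range(n_events):
            if row != col:
                f = ata[row][col]
                ata[row] = [rv - f * cv for rv, cv in zip(ata[row], ata[col])]
                atd[row] -= f * atd[col]
    return atd
```

With consistent observations the relative magnitudes are recovered exactly; with noisy, redundant pairings the intercorrelation of all events averages down the error, which is the advantage over pairing against a single reference event.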

  11. Earthquake rate and magnitude distributions of great earthquakes for use in global forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2016-07-01

    We have obtained new results in the statistical analysis of global earthquake catalogues with special attention to the largest earthquakes, and we examined the statistical behaviour of earthquake rate variations. These results can serve as an input for updating our recent earthquake forecast, known as the `Global Earthquake Activity Rate 1' model (GEAR1), which is based on past earthquakes and geodetic strain rates. The GEAR1 forecast is expressed as the rate density of all earthquakes above magnitude 5.8 within 70 km of sea level everywhere on earth at 0.1 × 0.1 degree resolution, and it is currently being tested by the Collaboratory for Study of Earthquake Predictability. The seismic component of the present model is based on a smoothed version of the Global Centroid Moment Tensor (GCMT) catalogue from 1977 through 2013. The tectonic component is based on the Global Strain Rate Map, a `General Earthquake Model' (GEM) product. The forecast was optimized to fit the GCMT data from 2005 through 2012, but it also fit well the earthquake locations from 1918 to 1976 reported in the International Seismological Centre-Global Earthquake Model (ISC-GEM) global catalogue of instrumental and pre-instrumental magnitude determinations. We have improved the recent forecast by optimizing the treatment of larger magnitudes and including a longer duration (1918-2011) ISC-GEM catalogue of large earthquakes to estimate smoothed seismicity. We revised our estimates of upper magnitude limits, described as corner magnitudes, based on the massive earthquakes since 2004 and the seismic moment conservation principle. The new corner magnitude estimates are somewhat larger than but consistent with our previous estimates. For major subduction zones we find the best estimates of corner magnitude to be in the range 8.9 to 9.6 and consistent with a uniform average of 9.35. Statistical estimates tend to grow with time as larger earthquakes occur. However, by using the moment conservation

  12. A new macroseismic intensity prediction equation and magnitude estimates of the 1811-1812 New Madrid and 1886 Charleston, South Carolina, earthquakes

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2013-12-01

    We develop an intensity prediction equation (IPE) for the Central and Eastern United States, explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June, 2000 and August, 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2 M + c3 exp(λ) + c4 λ, where M is moment magnitude and λ is the mean log hypocentral distance. Previous IPEs use a limited dataset of MMI, do not differentiate between MMI and CII data in the CEUS, and do not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities, where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that reports of a given intensity have hypocentral distances that are log-normally distributed, the distribution of which is modulated by population and the propensity for individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid search method to solve for the fraction of the population that reports the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial
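
    The IPE form stated in the abstract is straightforward to evaluate once the coefficients are fitted. A sketch with placeholder coefficients chosen only so that predicted intensity rises with magnitude and falls with distance; the fitted values are not given here:

```python
import math

def predicted_mmi(moment_magnitude, hypocentral_distance_km,
                  c=(1.0, 1.0, -0.5, -1.0)):
    """Evaluate the IPE form MMI = c1 + c2*M + c3*exp(lam) + c4*lam,
    with lam taken as the log10 hypocentral distance. The coefficients
    c are illustrative placeholders, not the paper's fitted values."""
    c1, c2, c3, c4 = c
    lam = math.log10(hypocentral_distance_km)
    return c1 + c2 * moment_magnitude + c3 * math.exp(lam) + c4 * lam
```

With negative c3 and c4, intensity decays with distance, which is the qualitative behavior the fitted equation must reproduce.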

  13. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2016-06-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
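
    The recommended measurement procedure can be sketched as follows. The horizontal-to-vertical factor of 1.8 is the value from the abstract; the distance correction used here is the classic Hutton-Boore southern-California form, standing in only as a placeholder for the Turkey-specific calibration:

```python
import math

def local_magnitude_from_vertical(amp_vertical_mm, distance_km,
                                  station_correction=0.0, h_over_v=1.8):
    """Sketch of an Ml measurement per the abstract's recommendation:
    measure the Wood-Anderson amplitude (mm) on the vertical channel,
    add log10 of the average horizontal-to-vertical factor (1.8), then
    apply a distance correction (placeholder Hutton-Boore form) and any
    station correction term."""
    a = math.log10(amp_vertical_mm) + math.log10(h_over_v)
    return (a + 1.11 * math.log10(distance_km / 100.0)
              + 0.00189 * (distance_km - 100.0) + 3.0 + station_correction)
```

At the 100 km reference distance the distance terms vanish, so the magnitude reduces to log10(amplitude) + log10(1.8) + 3.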

  14. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, S.E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are likewise fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: A moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" in not only polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  15. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
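
    Test (1) above rests on a standard sampling result for an unbounded Gutenberg-Richter distribution: the modal largest magnitude in a sample of n events above the completeness magnitude Mc grows as log10(n)/b. A one-line sketch of that scaling:

```python
import math

def expected_largest_magnitude(n_events, completeness_mag, b_value=1.0):
    """Modal largest magnitude among n events drawn from an unbounded
    Gutenberg-Richter distribution with the given b-value and
    completeness magnitude: Mmax ~= Mc + log10(n) / b."""
    return completeness_mag + math.log10(n_events) / b_value
```

For example, 1000 induced events above a completeness magnitude of 1.0 with b = 1 give a modal largest event near M 4; injecting enough to produce ten times as many events raises the expectation by one magnitude unit, which is the log-scaling the abstract describes.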

  16. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.

  17. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  18. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of the error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
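
    The homoscedastic case described above has the classical closed-form (Deming) solution for errors-in-variables regression with a known error-variance ratio. A sketch of that estimator, not the paper's specific derivation:

```python
import math

def deming_slope(xs, ys, variance_ratio=1.0):
    """Errors-in-variables regression when both variables carry error and
    delta = var(err_y) / var(err_x) is known (homoscedastic case).
    Returns the closed-form (slope, intercept) of the best-fit line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    d = variance_ratio
    slope = (syy - d * sxx
             + math.sqrt((syy - d * sxx) ** 2 + 4 * d * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx
```

On noise-free collinear data the estimator recovers the exact line; with delta = 1 it reduces to orthogonal regression.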

  19. Threshold magnitude for Ionospheric TEC response to earthquakes

    NASA Astrophysics Data System (ADS)

    Perevalova, N. P.; Sankov, V. A.; Astafyeva, E. I.; Zhupityaeva, A. S.

    2014-02-01

    We have analyzed ionospheric response to earthquakes with magnitudes of 4.1-8.8 which occurred under quiet geomagnetic conditions in different regions of the world (the Baikal region, Kuril Islands, Japan, Greece, Indonesia, China, New Zealand, Salvador, and Chile). This investigation relied on measurements of total electron content (TEC) variations made by ground-based dual-frequency GPS receivers. To perform the analysis, we selected earthquakes with permanent GPS stations installed close by. Data processing has revealed that after earthquakes of magnitude 4.1-6.3, wave disturbances in TEC variations are undetectable. We have thoroughly analyzed publications over the period 1965-2013 which reported the registration of wave TIDs after earthquakes. This analysis demonstrated that the magnitude of the earthquakes having a wave response in the ionosphere was no less than 6.5. Based on our results and on the data from other researchers, we conclude that there is a threshold magnitude (near 6.5) below which there are no pronounced earthquake-induced wave TEC disturbances. The probability of detecting post-earthquake TIDs from earthquakes with magnitudes close to the threshold depends strongly on geophysical conditions. In addition, reliable identification of the source of such TIDs generally requires many GPS stations in the earthquake zone. At low magnitudes, seismic energy is likely insufficient to generate waves in the neutral atmosphere that are able to induce TEC disturbances observable above the level of background fluctuations.

  20. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high magnitude seismic events. In fact, since 1950, all large magnitude earthquakes have been followed by volcanic eruptions in the following year: 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7, and 2011 Japan M9.0. At a global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005, following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, is limited: fewer than 50% of all volcanoes are monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has therefore provided researchers with an opportunity to quantify the timing, magnitude and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable

  1. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    NASA Astrophysics Data System (ADS)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. Forty of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress drop, and the Wood-Anderson filter. For instance, it is shown that the stress drop controls the saturation of the ML scale, with low stress drops (e.g. 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.

  2. Scoring annual earthquake predictions in China

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Jiang, Changsheng

    2012-02-01

    The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions that are of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these earthquake predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using the Poisson model, which is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes as the reference model, we evaluate the CEA predictions based on (1) a partial score that evaluates whether the choice of alarmed regions is based on information that differs from the reference model (knowledge of the average seismicity level) and (2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but the overall performance is close to that of the reference model.
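
    The gambling-score idea can be sketched as follows. Each alarm is treated as a bet against the reference model: a successful alarm with reference probability p0 gains (1 - p0)/p0 units, a failed alarm loses 1. The rate and magnitude values below are illustrative assumptions, not the CEA figures.

```python
# Minimal sketch of a gambling score against a Poisson + Gutenberg-Richter
# reference model.  p0 is the reference probability of at least one event of
# magnitude >= m in the alarmed region-year; all numbers are illustrative.
import math

def reference_prob(rate_m0, b_value, m0, m, years=1.0):
    """P(at least one event with magnitude >= m) under Poisson + G-R."""
    rate_m = rate_m0 * 10 ** (-b_value * (m - m0))  # G-R rate above magnitude m
    return 1.0 - math.exp(-rate_m * years)

def gambling_score(alarms):
    """alarms: list of (p0, occurred) pairs, one per alarmed region-year."""
    score = 0.0
    for p0, occurred in alarms:
        score += (1.0 - p0) / p0 if occurred else -1.0
    return score

p0 = reference_prob(rate_m0=2.0, b_value=1.0, m0=4.0, m=6.0)
print(f"reference probability: {p0:.4f}")
print("score:", gambling_score([(p0, True), (p0, False), (p0, False)]))
```

    A predictor that merely mirrors the reference model gains nothing on average; only genuine precursory information yields a positive expected score.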

  3. Source time function properties indicate a strain drop independent of earthquake depth and magnitude.

    PubMed

    Vallée, Martin

    2013-01-01

    The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes. PMID:24126256

  4. Spatial Seismicity Rates and Maximum Magnitudes for Background Earthquakes

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Frankel, Arthur D.; Zeng, Yuehua

    2008-01-01

    The background seismicity model is included to account for M 5.0-6.5 earthquakes on faults and for random M 5.0-7.0 earthquakes that do not occur on faults included in the model (as in the earlier models of Frankel et al., 1996, 2002 and Petersen et al., 1996). We include four different classes of earthquake sources in the California background seismicity model: (1) gridded (smoothed) seismicity, (2) regional background zones, (3) special fault zone models, and (4) shear zones (also referred to as C zones). The gridded (smoothed) seismicity model, the regional background zone model, and the special fault zones use a declustered earthquake catalog for calculation of earthquake rates. Earthquake rates in shear zones are estimated from the geodetically determined rate of deformation across an area of high strain rate. We use a truncated exponential (Gutenberg-Richter, 1944) magnitude-frequency distribution to account for earthquakes in the background models.
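
    A truncated exponential (Gutenberg-Richter) magnitude-frequency distribution of the kind used for these background sources can be sketched as below. The a- and b-values are illustrative assumptions, not the California model parameters.

```python
# Sketch of a truncated exponential (Gutenberg-Richter) magnitude-frequency
# distribution: incremental annual rates in magnitude bins between a minimum
# and a maximum magnitude.  The a/b values here are illustrative only.
import numpy as np

def truncated_gr_rates(a, b, m_min, m_max, dm=0.1):
    """Annual rates of events in magnitude bins [m, m+dm), truncated at m_max."""
    m = np.arange(m_min, m_max, dm)
    cum = lambda mag: 10 ** (a - b * mag)        # cumulative G-R rate N(>= mag)
    rates = cum(m) - cum(m + dm)                 # incremental rate per bin
    return m, rates

m, rates = truncated_gr_rates(a=4.0, b=1.0, m_min=5.0, m_max=7.0)
print("total annual rate of M5-7 events:", rates.sum())
```

    Because the incremental rates telescope, the total rate equals N(>=m_min) - N(>=m_max), which is how the truncation removes events above the assumed maximum magnitude.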

  5. On earthquake prediction in Japan.

    PubMed

    Uyeda, Seiya

    2013-01-01

    Japan's National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena, or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author's view, are mainly interested in securing funds for seismology, on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision has been further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with the private sector. PMID:24213204

  7. Correlating precursory declines in groundwater radon with earthquake magnitude.

    PubMed

    Kuo, T

    2014-01-01

    Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to five earthquakes of magnitude (Mw) 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the Longitudinal Valley Fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. This correlation has proven useful for early warning of large local earthquakes. In southern Taiwan, anomalous radon declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, however, a correlation between the observed radon minima and earthquake magnitude has not yet been established.
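
    The reported inverse relation between radon minima and magnitude amounts to a simple regression. The sketch below fits one on placeholder values; the (Mw, radon-minimum) pairs are invented for illustration and are not the Antung measurements.

```python
# Sketch of regressing observed radon minima against earthquake magnitude.
# The data pairs below are illustrative placeholders, not real observations.
import numpy as np

mw = np.array([6.8, 6.1, 5.9, 5.4, 5.0])
radon_min = np.array([40.0, 75.0, 85.0, 115.0, 140.0])  # arbitrary units

slope, intercept = np.polyfit(mw, radon_min, 1)
print(f"radon_min ~ {slope:.1f} * Mw + {intercept:.1f}")
```

    A negative slope expresses the reported pattern: the larger the impending earthquake, the deeper the precursory radon decline.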

  8. The magnitude distribution of earthquakes near Southern California faults

    USGS Publications Warehouse

    Page, M.T.; Alderson, D.; Doyle, J.

    2011-01-01

    We investigate seismicity near faults in the Southern California Earthquake Center Community Fault Model. We search for anomalously large events that might be signs of a characteristic earthquake distribution. We find that seismicity near major fault zones in Southern California is well modeled by a Gutenberg-Richter distribution, with no evidence of characteristic earthquakes within the resolution limits of the modern instrumental catalog. However, the b value of the locally observed magnitude distribution is found to depend on distance to the nearest mapped fault segment, which suggests that earthquakes nucleating near major faults are likely to have larger magnitudes relative to earthquakes nucleating far from major faults. Copyright 2011 by the American Geophysical Union.
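
    The b-value dependence discussed above rests on estimating b from local magnitudes. A standard way to do this is the Aki (1965) maximum-likelihood estimator, sketched here on a synthetic Gutenberg-Richter catalogue; the completeness magnitude and b-value are assumptions of the example.

```python
# Sketch of the Aki (1965) maximum-likelihood b-value estimate for continuous
# magnitudes above a completeness magnitude Mc.  (For binned catalogues, Utsu's
# correction replaces Mc with Mc - dm/2.)  The catalogue here is synthetic.
import numpy as np

def b_value_mle(mags, mc):
    """b = log10(e) / (mean(M) - Mc) over magnitudes M >= Mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

rng = np.random.default_rng(1)
# Synthetic G-R catalogue with b = 1: exponential magnitude excess above Mc = 2
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=5000)
print("estimated b-value:", b_value_mle(mags, mc=2.0))
```

    Applying such an estimator in distance bins from the nearest mapped fault is one way to reproduce the kind of spatial b-value variation the study reports.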

  9. Predicting Strong Ground-Motion Seismograms for Magnitude 9 Cascadia Earthquakes Using 3D Simulations with High Stress Drop Sub-Events

    NASA Astrophysics Data System (ADS)

    Frankel, A. D.; Wirth, E. A.; Stephenson, W. J.; Moschetti, M. P.; Ramirez-Guzman, L.

    2015-12-01

    We have produced broadband (0-10 Hz) synthetic seismograms for magnitude 9.0 earthquakes on the Cascadia subduction zone by combining synthetics from simulations with a 3D velocity model at low frequencies (≤ 1 Hz) with stochastic synthetics at high frequencies (≥ 1 Hz). We use a compound rupture model consisting of a set of M8 high stress drop sub-events superimposed on a background slip distribution of up to 20 m that builds relatively slowly. The 3D simulations were conducted using a finite difference program and the finite element program Hercules. The high-frequency (≥ 1 Hz) energy in this rupture model is primarily generated in the portion of the rupture with the M8 sub-events. In our initial runs, we included four M7.9-8.2 sub-events similar to those that we used to successfully model the strong ground motions recorded from the 2010 M8.8 Maule, Chile earthquake. At periods of 2-10 s, the 3D synthetics exhibit substantial amplification (about a factor of 2) for sites in the Puget Lowland and even more amplification (up to a factor of 5) for sites in the Seattle and Tacoma sedimentary basins, compared to rock sites outside of the Puget Lowland. This regional and more localized basin amplification found from the simulations is supported by observations from local earthquakes. There are substantial variations in the simulated M9 time histories and response spectra caused by differences in the hypocenter location, slip distribution, down-dip extent of rupture, coherence of the rupture front, and location of sub-events. We examined the sensitivity of the 3D synthetics to the velocity model of the Seattle basin. We found significant differences in S-wave focusing and surface wave conversions between a 3D model of the basin from a spatially-smoothed tomographic inversion of Rayleigh-wave phase velocities and a model that has an abrupt southern edge of the Seattle basin, as observed in seismic reflection profiles.
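
    The hybrid broadband approach, combining deterministic synthetics below a crossover frequency with stochastic synthetics above it, can be sketched with a pair of complementary filters. The traces below are random stand-ins, and the filter order and crossover are assumptions of the example, not the authors' exact matched-filter scheme.

```python
# Sketch of hybrid broadband combination: low-pass the deterministic (3D)
# synthetic below ~1 Hz, high-pass the stochastic synthetic above it, and sum.
# Both input traces here are noise stand-ins for real synthetics.
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_cross = 100.0, 1.0                   # sample rate and crossover (Hz)
rng = np.random.default_rng(3)
det = rng.normal(size=4096)                # stand-in for the 3D simulation
sto = rng.normal(size=4096)                # stand-in for the stochastic trace

b_lo, a_lo = butter(4, f_cross / (fs / 2), btype="low")
b_hi, a_hi = butter(4, f_cross / (fs / 2), btype="high")
broadband = filtfilt(b_lo, a_lo, det) + filtfilt(b_hi, a_hi, sto)
print("broadband trace samples:", broadband.shape)
```

    Zero-phase filtering (filtfilt) is used so that the two bands stay aligned in time when summed.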

  10. Earthquake Prediction is Coming

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Describes (1) several methods used in earthquake research, including P:S ratio velocity studies, dilatancy models; and (2) techniques for gathering base-line data for prediction using seismographs, tiltmeters, laser beams, magnetic field changes, folklore, animal behavior. The mysterious Palmdale (California) bulge is discussed. (CS)

  11. Magnitude-frequency distribution of volcanic explosion earthquakes

    NASA Astrophysics Data System (ADS)

    Nishimura, Takeshi; Iguchi, Masato; Hendrasto, Mohammad; Aoyama, Hiroshi; Yamada, Taishi; Ripepe, Maurizio; Genco, Riccardo

    2016-07-01

    Magnitude-frequency distributions of volcanic explosion earthquakes that are associated with occurrences of vulcanian and strombolian eruptions, or gas burst activity, are examined at six active volcanoes. The magnitude-frequency distribution at Suwanosejima volcano, Japan, shows a power-law distribution, which implies self-similarity in the system, as is often observed in statistical characteristics of tectonic and volcanic earthquakes. On the other hand, the magnitude-frequency distributions at five other volcanoes, Sakurajima and Tokachi-dake in Japan, Semeru and Lokon in Indonesia, and Stromboli in Italy, are well explained by exponential distributions. The statistical features are considered to reflect source size, as characterized by a volcanic conduit or chamber. Earthquake generation processes associated with vulcanian, strombolian and gas burst events are different from those of eruptions ejecting large amounts of pyroclasts, since the magnitude-frequency distribution of the volcanic explosivity index is generally explained by the power law.
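
    Distinguishing a power-law from an exponential magnitude-frequency distribution, as done above, is commonly framed as a likelihood comparison. A minimal sketch with AIC on synthetic event sizes follows; the data are drawn from an exponential by construction, and the estimators are the standard maximum-likelihood forms above a lower cutoff.

```python
# Sketch of comparing exponential vs power-law models for event sizes via AIC.
# Synthetic sizes are exponential by construction, so AIC should prefer that.
import numpy as np

rng = np.random.default_rng(2)
x_min = 1.0
x = rng.exponential(scale=2.0, size=2000) + x_min    # sizes above x_min

# Maximum-likelihood fits above x_min
lam = 1.0 / (x - x_min).mean()                       # exponential rate
alpha = 1.0 + x.size / np.log(x / x_min).sum()       # power-law exponent

ll_exp = np.sum(np.log(lam) - lam * (x - x_min))
ll_pow = np.sum(np.log((alpha - 1) / x_min) - alpha * np.log(x / x_min))
aic_exp, aic_pow = 2 - 2 * ll_exp, 2 - 2 * ll_pow    # k = 1 parameter each
print("AIC exponential:", aic_exp, " AIC power law:", aic_pow)
```

    The lower AIC wins; on real explosion-earthquake catalogues the same comparison separates the self-similar (power-law) case from the characteristic-scale (exponential) case.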

  12. Physics-based estimates of maximum magnitude of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin

    2016-04-01

    In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based link between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations representative of the induced seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters, and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).

  13. Magnitude 8.1 Earthquake off the Solomon Islands

    NASA Technical Reports Server (NTRS)

    2007-01-01

    On April 1, 2007, a magnitude 8.1 earthquake rattled the Solomon Islands, 2,145 kilometers (1,330 miles) northeast of Brisbane, Australia. Centered less than ten kilometers beneath the Earth's surface, the earthquake displaced enough water in the ocean above to trigger a small tsunami. Though officials were still assessing damage to remote island communities on April 3, Reuters reported that the earthquake and the tsunami killed an estimated 22 people and left as many as 5,409 homeless. The most serious damage occurred on the island of Gizo, northwest of the earthquake epicenter, where the tsunami damaged the hospital, schools, and hundreds of houses, said Reuters. This image, captured by the Landsat-7 satellite, shows the location of the earthquake epicenter in relation to the nearest islands in the Solomon Island group. Gizo is beyond the left edge of the image, but its triangular fringing coral reefs are shown in the upper left corner. Though dense rain forest hides volcanic features from view, the very shape of the islands testifies to the geologic activity of the region. The circular Kolombangara Island is the tip of a dormant volcano, and other circular volcanic peaks are visible in the image. The image also shows that the Solomon Islands run on a northwest-southeast axis parallel to the edge of the Pacific plate, the section of the Earth's crust that carries the Pacific Ocean and its islands. The earthquake occurred along the plate boundary, where the Australia/Woodlark/Solomon Sea plates slide beneath the denser Pacific plate. Friction between the sinking (subducting) plates and the overriding Pacific plate led to the large earthquake on April 1, said the United States Geological Survey (USGS) summary of the earthquake. Large earthquakes are common in the region, though the section of the plate that produced the April 1 earthquake had not caused any quakes of magnitude 7 or larger since the early 20th century, said the USGS.

  14. Newmark design spectra considering earthquake magnitudes and site categories

    NASA Astrophysics Data System (ADS)

    Li, Bo; Xie, Wei-Chau; Pandey, M. D.

    2016-09-01

    Newmark design spectra have been implemented in many building codes, especially in building codes for critical structures. Previous studies show that Newmark design spectra exhibit lower amplitudes at high frequencies and larger amplitudes at low frequencies in comparison with spectra developed by statistical methods. To resolve this problem, this study considers three suites of ground motions recorded at three types of sites. Using these ground motions, the influences of shear-wave velocity, earthquake magnitude, and source-to-site distance on the ratios of ground-motion parameters are studied, and spectrum amplification factors are statistically calculated. Spectral bounds for combinations of three site categories and two cases of earthquake magnitudes are estimated. Site design spectrum coefficients for the three site categories considering earthquake magnitudes are established. The problems of Newmark design spectra could be resolved by using the site design spectrum coefficients to modify the spectral values of Newmark design spectra in the acceleration sensitive, velocity sensitive, and displacement sensitive regions.

  15. Earthquake prediction; new studies yield promising results

    USGS Publications Warehouse

    Robinson, R.

    1974-01-01

    On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.

  16. Analysis of earthquake body wave spectra for potency and magnitude values: implications for magnitude scaling relations

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda; White, Malcolm C.; Vernon, Frank L.

    2016-11-01

    We develop a simple methodology for reliable automated estimation of the low-frequency asymptote in seismic body wave spectra of small to moderate local earthquakes. The procedure corrects individual P- and S-wave spectra for propagation and site effects and estimates the seismic potency from a stacked spectrum. The method is applied to >11 000 earthquakes with local magnitudes 0 < ML < 4 that occurred in the Southern California plate-boundary region around the San Jacinto fault zone during 2013. Moment magnitude Mw values, derived from the spectra and the scaling relation of Hanks & Kanamori, follow a Gutenberg-Richter distribution with a larger b-value (1.22) than that associated with the ML values (0.93) for the same earthquakes. The completeness magnitude for the Mw values is 1.6 while for ML it is 1.0. The quantity (Mw - ML) increases linearly as ML decreases over the analysed magnitude range. An average earthquake with ML = 0 in the study area has an Mw of about 0.9. The developed methodology and results have important implications for earthquake source studies and statistical seismology.
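
    The final step above, converting seismic moment to Mw, uses the standard Hanks & Kanamori (1979) relation, which can be sketched directly (the example moment value is arbitrary):

```python
# The Hanks & Kanamori (1979) moment-magnitude relation, with moment M0 in
# newton-metres, plus its inverse.  The example moment is arbitrary.
import math

def moment_to_mw(m0_nm):
    """Mw = (2/3) * (log10 M0 - 9.05), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.05)

def mw_to_moment(mw):
    """Inverse relation: M0 = 10 ** (1.5 * Mw + 9.05), in N*m."""
    return 10 ** (1.5 * mw + 9.05)

print(moment_to_mw(1.26e16))   # a moderate event, roughly Mw 4.7
```

    Because Mw grows with the logarithm of moment, each unit of magnitude corresponds to a factor of about 31.6 in seismic moment.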

  17. Prediction of earthquake response spectra

    USGS Publications Warehouse

    Joyner, W.B.; Boore, David M.

    1982-01-01

    We have developed empirical equations for predicting earthquake response spectra in terms of magnitude, distance, and site conditions, using a two-stage regression method similar to the one we used previously for peak horizontal acceleration and velocity. We analyzed horizontal pseudo-velocity response at 5 percent damping for 64 records of 12 shallow earthquakes in Western North America, including the recent Coyote Lake and Imperial Valley, California, earthquakes. We developed predictive equations for 12 different periods between 0.1 and 4.0 s, both for the larger of two horizontal components and for the random horizontal component. The resulting spectra show amplification at soil sites compared to rock sites for periods greater than or equal to 0.3 s, with maximum amplification exceeding a factor of 2 at 2.0 s. For periods less than 0.3 s there is slight deamplification at the soil sites. These results are generally consistent with those of several earlier studies. A particularly significant aspect of the predicted spectra is the change of shape with magnitude (confirming earlier results by McGuire and by Trifunac and Anderson). This result indicates that the conventional practice of scaling a constant spectral shape by peak acceleration will not give accurate answers. The Newmark and Hall method of spectral scaling, using both peak acceleration and peak velocity, largely avoids this error. Comparison of our spectra with the Nuclear Regulatory Commission's Regulatory Guide 1.60 spectrum anchored at the same value at 0.1 s shows that the Regulatory Guide 1.60 spectrum is exceeded at soil sites for a magnitude of 7.5 at all distances for periods greater than about 0.5 s. Comparison of our spectra for soil sites with the corresponding ATC-3 curve of lateral design force coefficient for the highest seismic zone indicates that the ATC-3 curve is exceeded within about 7 km of a magnitude 6.5 earthquake and within about 15 km of a magnitude 7.5 event. The amount by

  18. Fault-Zone Maturity Defines Maximum Earthquake Magnitude

    NASA Astrophysics Data System (ADS)

    Bohnhoff, M.; Bulut, F.; Stierle, E.; Ben-Zion, Y.

    2014-12-01

    Estimating the maximum likely magnitude of future earthquakes on transform faults near large metropolitan areas has fundamental consequences for the expected hazard. Here we show that the maximum earthquakes on different sections of the North Anatolian Fault Zone (NAFZ) scale with the duration of fault zone activity, cumulative offset and length of individual fault segments. The findings are based on a compiled catalogue of historical earthquakes in the region, using the extensive literary sources that exist due to the long civilization record. We find that the largest earthquakes (M~8) are exclusively observed along the well-developed part of the fault zone in the east. In contrast, the western part is still in a juvenile or transitional stage with historical earthquakes not exceeding M=7.4. This limits the current seismic hazard to NW Turkey and its largest regional population and economical center Istanbul. Our findings for the NAFZ are consistent with data from the two other major transform faults, the San Andreas fault in California and the Dead Sea Transform in the Middle East. The results indicate that maximum earthquake magnitudes generally scale with fault-zone evolution.

  19. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  20. The effects of earthquake measurement concepts and magnitude anchoring on individuals' perceptions of earthquake risk

    USGS Publications Warehouse

    Celsi, R.; Wolfinbarger, M.; Wald, D.

    2005-01-01

    The purpose of this research is to explore earthquake risk perceptions in California. Specifically, we examine the risk beliefs, feelings, and experiences of lay, professional, and expert individuals to explore how risk is perceived and how risk perceptions are formed relative to earthquakes. Our results indicate that individuals tend to perceptually underestimate the degree that earthquake (EQ) events may affect them. This occurs in large part because individuals' personal felt experience of EQ events is generally overestimated relative to experienced magnitudes. An important finding is that individuals engage in a process of "cognitive anchoring" of their felt EQ experience towards the reported earthquake magnitude size. The anchoring effect is moderated by the degree that individuals comprehend EQ magnitude measurement and EQ attenuation. Overall, the results of this research provide us with a deeper understanding of EQ risk perceptions, especially as they relate to individuals' understanding of EQ measurement and attenuation concepts. © 2005, Earthquake Engineering Research Institute.

  1. Estimating earthquake location and magnitude from seismic intensity data

    USGS Publications Warehouse

    Bakun, W.H.; Wentworth, C.M.

    1997-01-01

    Analysis of Modified Mercalli intensity (MMI) observations for a training set of 22 California earthquakes suggests a strategy for bounding the epicentral region and moment magnitude M from MMI observations only. We define an intensity magnitude MI that is calibrated to be equal in the mean to M: MI = mean(Mi), where Mi = (MMIi + 3.29 + 0.0206 * Δi)/1.68 and Δi is the epicentral distance (km) of observation MMIi. The epicentral region is bounded by contours of rms[MI] = rms(MI - Mi) - rms0(MI - Mi), where rms is the root mean square, rms0(MI - Mi) is the minimum rms over a grid of assumed epicenters, and empirical site corrections and a distance weighting function are used. Empirical contour values for bounding the epicenter location and empirical bounds for M estimated from MI appropriate for different levels of confidence and different quantities of intensity observations are tabulated. The epicentral region bounds and MI obtained for an independent test set of western California earthquakes are consistent with the instrumental epicenters and moment magnitudes of these earthquakes. The analysis strategy is particularly appropriate for the evaluation of pre-1900 earthquakes for which the only available data are a sparse set of intensity observations.
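
    The MI definition above translates directly into code. The sketch below applies the stated calibration to a handful of invented MMI observations (the values are placeholders, not data from a real earthquake), without the site corrections or distance weighting of the full method.

```python
# Sketch of the intensity magnitude defined above: each MMI observation at
# epicentral distance d_i (km) yields Mi, and MI is their mean.  The input
# observations are illustrative placeholders.
import numpy as np

def intensity_magnitude(mmi, dist_km):
    """MI = mean(Mi), with Mi = (MMI_i + 3.29 + 0.0206 * d_i) / 1.68."""
    mi = (np.asarray(mmi, float) + 3.29 + 0.0206 * np.asarray(dist_km)) / 1.68
    return mi.mean()

mmi = [7, 6, 5, 4]
dist = [10.0, 30.0, 60.0, 120.0]
print("MI =", intensity_magnitude(mmi, dist))
```

    In the full method, the same per-observation Mi values feed the rms grid search that bounds the epicentral region.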

  2. The Road to Convergence in Earthquake Frequency-Magnitude Statistics

    NASA Astrophysics Data System (ADS)

    Naylor, M.; Bell, A. F.; Main, I. G.

    2013-12-01

    The Gutenberg-Richter frequency-magnitude relation is a fundamental empirical law of seismology, but its form remains uncertain for rare extreme events. Convergence trends can be diagnostic of the nature of an underlying distribution and its sampling even before convergence has occurred. We examine the evolution of an information criteria metric applied to earthquake magnitude time series, in order to test whether the Gutenberg-Richter law can be rejected in various earthquake catalogues. This would imply that the catalogue is starting to sample roll-off in the tail, though it cannot yet identify the form of the roll-off. We compare bootstrapped synthetic Gutenberg-Richter and synthetic modified Gutenberg-Richter catalogues with the convergence trends observed in real earthquake data, e.g. the global CMT catalogue, Southern California, and mining/geothermal data. Whilst convergence in the tail remains some way off, we show that the temporal evolution of model likelihoods and parameters for the frequency-magnitude distribution of the global Harvard Centroid Moment Tensor catalogue is inconsistent with an unbounded GR relation, despite it being the preferred model at the current time. Bell, A. F., M. Naylor, and I. G. Main (2013), Convergence of the frequency-size distribution of global earthquakes, Geophys. Res. Lett., 40, 2585-2589, doi:10.1002/grl.50416.

  3. Radiocarbon test of earthquake magnitude at the Cascadia subduction zone

    USGS Publications Warehouse

    Atwater, B.F.; Stuiver, M.; Yamaguchi, D.K.

    1991-01-01

    The Cascadia subduction zone, which extends along the northern Pacific coast of North America, might produce earthquakes of magnitude 8 or 9 ('great' earthquakes) even though it has not done so during the past 200 years of European observation. Much of the evidence for past Cascadia earthquakes comes from former meadows and forests that became tidal mudflats owing to abrupt tectonic subsidence in the past 5,000 years. If due to a great earthquake, such subsidence should have extended along more than 100 km of the coast. Here we investigate the extent of coastal subsidence that might have been caused by a single earthquake, through high-precision radiocarbon dating of coastal trees that abruptly subsided into the intertidal zone. The ages leave the great-earthquake hypothesis intact by limiting to a few decades the discordance, if any, in the most recent subsidence of two areas 55 km apart along the Washington coast. This subsidence probably occurred about 300 years ago.

  4. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion. PMID:24458636

  6. Multifractal detrended fluctuation analysis of Pannonian earthquake magnitude series

    NASA Astrophysics Data System (ADS)

    Telesca, Luciano; Toth, Laszlo

    2016-04-01

    The multifractality of the magnitude series of earthquakes that occurred in the Pannonian region from 2002 to 2012 has been investigated. The shallow (depth less than 40 km) and deep (depth larger than 70 km) seismic catalogues were analysed using the multifractal detrended fluctuation analysis. The shallow and deep catalogues are characterized by different multifractal properties: (i) the magnitudes of the shallow events are weakly persistent, while those of the deep ones are almost uncorrelated; (ii) the deep catalogue is more multifractal than the shallow one; (iii) the magnitudes of the deep catalogue are characterized by a right-skewed multifractal spectrum, while that of the shallow magnitudes is rather symmetric; (iv) a direct relationship between the b-value of the Gutenberg-Richter law and the multifractality of the magnitudes is suggested.

  7. Nonlinear site response in medium magnitude earthquakes near Parkfield, California

    USGS Publications Warehouse

    Rubinstein, Justin L.

    2011-01-01

    Careful analysis of strong-motion recordings of 13 medium magnitude earthquakes (3.7 ≤ M ≤ 6.5) in the Parkfield, California, area shows that very modest levels of shaking (approximately 3.5% of the acceleration of gravity) can produce observable changes in site response. Specifically, I observe a drop and subsequent recovery of the resonant frequency at sites that are part of the USGS Parkfield dense seismograph array (UPSAR) and Turkey Flat array. While further work is necessary to fully eliminate other models, given that these frequency shifts correlate with the strength of shaking at the Turkey Flat array and only appear for the strongest shaking levels at UPSAR, the most plausible explanation for them is that they are a result of nonlinear site response. Assuming this to be true, the observation of nonlinear site response in these medium magnitude events complements its earlier identification in larger local earthquakes (the 2003 M 6.5 San Simeon earthquake and the 2004 M 6 Parkfield earthquake).

  8. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991 the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, which success ratio would have been achieved in 53% of random trials with the null hypothesis.
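The null-hypothesis figures quoted above (8 of 10 successes in 2.87% of random trials; 5 of 9 in 53%) come from random assignment of the actual alarm windows. A simplified stand-in, assuming instead a fixed space-time alarm coverage fraction p (a hypothetical parameter, not one from the paper), treats the hit count as binomial:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that random alarms
    covering a fraction p of space-time catch at least k of n target quakes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# hypothetical coverage fractions; the paper's null instead randomly
# reassigns the actual alarm windows, so these p values are illustrative
for p in (0.3, 0.5, 0.7):
    print(f"p={p}: P(>=8 of 10) = {binom_tail(10, 8, p):.4f}, "
          f"P(>=5 of 9) = {binom_tail(9, 5, p):.4f}")
```

The smaller the tail probability under the null, the stronger the evidence that the algorithm outperforms chance; the paper's 2.87% versus 53% contrast is exactly this kind of comparison.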

  9. Exaggerated Claims About Earthquake Predictions

    NASA Astrophysics Data System (ADS)

    Kafka, Alan L.; Ebel, John E.

    2007-01-01

    The perennial promise of successful earthquake prediction captures the imagination of a public hungry for certainty in an uncertain world. Yet, given the lack of any reliable method of predicting earthquakes [e.g., Geller, 1997; Kagan and Jackson, 1996; Evans, 1997], seismologists regularly have to explain news stories of a supposedly successful earthquake prediction when it is far from clear just how successful that prediction actually was. When journalists and public relations offices report the latest `great discovery' regarding the prediction of earthquakes, seismologists are left with the much less glamorous task of explaining to the public the gap between the claimed success and the sober reality that there is no scientifically proven method of predicting earthquakes.

  10. Automatic detection and rapid determination of earthquake magnitude by wavelet multiscale analysis of the primary arrival

    NASA Astrophysics Data System (ADS)

    Dando, B.; Simons, F. J.; Allen, R. M.

    2006-12-01

    Earthquake early warning systems save lives. It is of great importance that networked systems of seismometers be equipped with reliable tools to make rapid determinations of earthquake magnitude in the few to tens of seconds before the damaging ground motion occurs. A new fully automated algorithm based on the discrete wavelet transform detects as well as analyzes the incoming first arrival with unmatched accuracy and precision, estimating the final magnitude to within a single unit from the first few seconds of the P wave. The curious observation that such brief segments of the seismogram may contain information about the final magnitude even of very large earthquakes, which occur on faults that may rupture over tens of seconds, is central to a debate in the seismological community which we hope to stimulate but cannot attempt to address within the scope of this paper. Wavelet coefficients of the seismogram can be determined extremely rapidly and efficiently by the fast lifting wavelet transform. Extracting amplitudes at individual scales is a very simple procedure, involving a mere handful of lines of computer code. Scale-dependent thresholded amplitudes derived from the wavelet transform of the first 3-4 seconds of an incoming seismic P arrival are predictive of earthquake magnitude, with errors of one magnitude unit for seismograms recorded up to 150 km away from the earthquake source. Our procedure is a simple yet extremely efficient tool for implementation on low-power recording stations. It provides an accurate and precise method of autonomously detecting the incoming P wave and predicting the magnitude of the source from the scale-dependent character of its amplitude well before the arrival of damaging ground motion. Provided a dense array of networked seismometers exists, our procedure should become the tool of choice for earthquake early warning systems worldwide.
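A minimal sketch of the lifting idea mentioned above, using the Haar wavelet for brevity; the paper's actual choice of wavelet, its thresholding rules, and the calibration of amplitudes to magnitude are not reproduced here:

```python
def haar_lifting(signal, levels):
    """In-place Haar transform via lifting: a predict step (detail) followed
    by an update step (smooth). Returns per-scale detail coefficient lists,
    finest scale first, plus the coarsest approximation."""
    s = list(signal)
    details = []
    for _ in range(levels):
        even, odd = s[0::2], s[1::2]
        d = [o - e for e, o in zip(even, odd)]      # predict step
        s = [e + di / 2 for e, di in zip(even, d)]  # update step
        details.append(d)
    return details, s

def scale_amplitudes(signal, levels):
    """Largest absolute detail coefficient at each scale -- the kind of
    scale-dependent amplitude the abstract uses as a magnitude predictor."""
    details, _ = haar_lifting(signal, levels)
    return [max(abs(c) for c in d) for d in details]

# toy 'P arrival': flat noise floor followed by a ramp onset, 32 samples
wave = [0.0] * 8 + [0.1 * i for i in range(24)]
print(scale_amplitudes(wave, 3))
```

Each lifting level halves the data and costs a handful of additions per sample, which is why the approach suits low-power field stations.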

  11. Great earthquakes of variable magnitude at the Cascadia subduction zone

    USGS Publications Warehouse

    Nelson, A.R.; Kelsey, H.M.; Witter, R.C.

    2006-01-01

    Comparison of histories of great earthquakes and accompanying tsunamis at eight coastal sites suggests plate-boundary ruptures of varying length, implying great earthquakes of variable magnitude at the Cascadia subduction zone. Inference of rupture length relies on degree of overlap on radiocarbon age ranges for earthquakes and tsunamis, and relative amounts of coseismic subsidence and heights of tsunamis. Written records of a tsunami in Japan provide the most conclusive evidence for rupture of much of the plate boundary during the earthquake of 26 January 1700. Cascadia stratigraphic evidence dating from about 1600 cal yr B.P., similar to that for the 1700 earthquake, implies a similarly long rupture with substantial subsidence and a high tsunami. Correlations are consistent with other long ruptures about 1350 cal yr B.P., 2500 cal yr B.P., 3400 cal yr B.P., 3800 cal yr B.P., 4400 cal yr B.P., and 4900 cal yr B.P. A rupture about 700-1100 cal yr B.P. was limited to the northern and central parts of the subduction zone, and a northern rupture about 2900 cal yr B.P. may have been similarly limited. Times of probable short ruptures in southern Cascadia include about 1100 cal yr B.P., 1700 cal yr B.P., 3200 cal yr B.P., 4200 cal yr B.P., 4600 cal yr B.P., and 4700 cal yr B.P. Rupture patterns suggest that the plate boundary in northern Cascadia usually breaks in long ruptures during the greatest earthquakes. Ruptures in southernmost Cascadia vary in length and recurrence intervals more than ruptures in northern Cascadia.

  12. Does low magnitude earthquake ground shaking cause landslides?

    NASA Astrophysics Data System (ADS)

    Brain, Matthew; Rosser, Nick; Vann Jones, Emma; Tunstall, Neil

    2015-04-01

    Estimating the magnitude of coseismic landslide strain accumulation at both local and regional scales is a key goal in understanding earthquake-triggered landslide distributions and landscape evolution, and in undertaking seismic risk assessment. Research in this field has primarily been carried out using the 'Newmark sliding block method' to model landslide behaviour; downslope movement of the landslide mass occurs when seismic ground accelerations are sufficient to overcome shear resistance at the landslide shear surface. The Newmark method has the advantage of simplicity, requiring only limited information on material strength properties, landslide geometry and coseismic ground motion. However, the underlying conceptual model assumes that shear strength characteristics (friction angle and cohesion) calculated using conventional strain-controlled monotonic shear tests are valid under dynamic conditions, and that values describing shear strength do not change as landslide shear strain accumulates. Recent experimental work has begun to question these assumptions, highlighting, for example, the importance of shear strain rate and changes in shear strength properties following seismic loading. However, such studies typically focus on a single earthquake event that is of sufficient magnitude to cause permanent strain accumulation; by doing so, they do not consider the potential effects that multiple low-magnitude ground shaking events can have on material strength. Since such events are more common in nature relative to high-magnitude shaking events, it is important to constrain their geomorphic effectiveness. Using an experimental laboratory approach, we present results that address this key question. We used a bespoke geotechnical testing apparatus, the Dynamic Back-Pressured Shear Box (DynBPS), that uniquely permits more realistic simulation of earthquake ground-shaking conditions within a hillslope. We tested both cohesive and granular materials, both of which

  13. Predicting Predictable: Accuracy and Reliability of Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2014-12-01

    Earthquake forecast/prediction is an uncertain profession. The famous Gutenberg-Richter relationship limits the magnitude range of a prediction to about one unit. Otherwise, the statistics of outcomes would be dominated by the smallest earthquakes and may be misleading when attributed to the largest earthquakes. Moreover, the intrinsic uncertainty of earthquake sizing allows self-deceptive picking of justification "just from below" the targeted magnitude range. This might be encouraging evidence but, by no means, can it be a "helpful" additive to the statistics of a rigid testing that determines reliability and efficiency of a forecast/prediction method. Usually, earthquake prediction is classified with respect to expectation time, while overlooking term-less identification of earthquake-prone areas as well as spatial accuracy. The forecasts are often made for a "cell" or "seismic region" whose area is not linked to the size of target earthquakes. This might be another source of a wrong choice in the parameterization of a forecast/prediction method and, eventually, of unsatisfactory performance in a real-time application. Summing up, prediction of the time and location of an earthquake of a certain magnitude range can be classified as follows.

    Classification of earthquake prediction accuracy:
      Temporal, in years: Long-term, 10; Intermediate-term, 1; Short-term, 0.01-0.1; Immediate, 0.001.
      Spatial, in source zone size (L): Long-range, up to 100; Middle-range, 5-10; Narrow-range, 2-3; Exact, 1.

    Note that the variety of possible combinations is much larger than the usually considered "short-term exact" one. In principle, such an accurate statement about an anticipated seismic extreme might be futile due to the complexities of the Earth's lithosphere, its blocks-and-faults structure, and the evidently nonlinear dynamics of the seismic process. The observed scaling of source size and preparation zone with earthquake magnitude implies exponential scales for

  14. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  15. VLF study of low magnitude Earthquakes (4.5

    NASA Astrophysics Data System (ADS)

    Wolbang, Daniel; Biernat, Helfried; Schwingenschuh, Konrad; Eichelberger, Hans; Prattes, Gustav; Besser, Bruno; Boudjada, Mohammed; Rozhnoi, Alexander; Solovieva, Maria; Biagi, Pier Francesco; Friedrich, Martin

    2014-05-01

    In the course of the European VLF/LF radio receiver network (International Network for Frontier Research on Earthquake Precursors, INFREP), radio signals in the frequency range from 10-50 kilohertz are received, continuously recorded (temporal resolution 20 seconds) and analyzed at the Graz/Austria node. The radio signals are generated by dedicated distributed transmitters and detected by INFREP receivers in Europe. If the signal crosses an earthquake preparation zone, we are in principle able to detect seismic activity provided the signal-to-noise ratio is high enough. The requirements to detect a seismic event with the radio link method are given by the magnitude M of the Earthquake (EQ), the EQ preparation zone and the Fresnel zone. As pointed out by Rozhnoi et al. (2009), the VLF methods are suitable for earthquakes M>5.0. Furthermore, the VLF/LF radio link only gets disturbed if it crosses the EQ preparation zone, which is described by Molchanov et al. (2008). In the frame of this project I analyze low seismicity EQs (M≤5.6) in south/eastern Europe in the time period 2011-2013. My emphasis is on two seismic events with magnitudes 5.6 and 4.8, which we are not able to adequately characterize using our single-parameter VLF method. I perform a fine structure analysis of the residua of various radio links crossing the area around these two EQs. Depending on the individual paths, not all radio links cross the EQ preparation zone directly, so a comparative study is possible. As a comparison I analyze with the same method the already well-described EQ of L'Aquila/Italy in 2009 with M=6.3 and radio links which cross the EQ preparation zone directly. In the course of this project we try to understand in more detail why it is so difficult to detect EQs with 4.5

  16. Sociological aspects of earthquake prediction

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Henry Spall talked recently with Denis Mileti, who is in the Department of Sociology, Colorado State University, Fort Collins, Colo. Dr. Mileti is a sociologist involved with research programs that study the socioeconomic impact of earthquake prediction.

  17. Maximum magnitude estimations of induced earthquakes at Paradox Valley, Colorado, from cumulative injection volume and geometry of seismicity clusters

    NASA Astrophysics Data System (ADS)

    Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.

    2015-01-01

    The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods
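The geometrical method sketched above can be illustrated by converting a penny-shaped fault dimension into a moment magnitude. The snippet below assumes the Eshelby circular-crack relation M0 = (16/7) Δσ r³ and a generic 3 MPa stress drop, neither of which is taken from the Paradox Valley study:

```python
import math

def max_moment_magnitude(radius_m, stress_drop_pa=3.0e6):
    """Mw for full rupture of a circular (penny-shaped) crack of the given
    radius: M0 = (16/7) * stress_drop * r**3 (Eshelby), then
    Mw = (2/3) * (log10(M0 in N*m) - 9.1). The 3 MPa stress drop is a
    generic crustal assumption, not a Paradox Valley value."""
    m0 = (16.0 / 7.0) * stress_drop_pa * radius_m**3
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# a seismicity cluster ~2 km across, read as a crack of radius ~1 km
print(f"Mw ~ {max_moment_magnitude(1000.0):.1f}")
```

A 1 km radius with a 3 MPa stress drop gives Mw ≈ 4.5; pushing the radius toward the few-kilometre scale of the larger clusters moves the bound toward the roughly MW 5 quoted in the abstract.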

  18. Magnitude and location of historical earthquakes in Japan and implications for the 1855 Ansei Edo earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    2005-01-01

    Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42 MJMA - 0.00887 Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^1/2, and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting plate intensity attenuation model where intensity is equal to -8.33 + 2.19 MJMA - 0.00550 Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting plate model. Using the subducting plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
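The two attenuation models above are directly usable as formulas. A small sketch evaluating them follows; the event parameters in the example are illustrative, not the paper's inversion results:

```python
import math

def ijma_crustal(m_jma, dist_km, depth_km):
    """Shallow-crustal intensity attenuation for Honshu (Bakun, 2005):
    I = -1.89 + 1.42*M - 0.00887*Dh - 1.66*log10(Dh), Dh = sqrt(D^2 + h^2)."""
    dh = math.hypot(dist_km, depth_km)
    return -1.89 + 1.42 * m_jma - 0.00887 * dh - 1.66 * math.log10(dh)

def ijma_subducting(m_jma, dist_km, depth_km):
    """Subducting-plate model from the same study:
    I = -8.33 + 2.19*M - 0.00550*Dh - 1.14*log10(Dh)."""
    dh = math.hypot(dist_km, depth_km)
    return -8.33 + 2.19 * m_jma - 0.00550 * dh - 1.14 * math.log10(dh)

# a 1923 Kanto-like event: MJMA 7.9 on the interface, 30 km deep, 50 km away
print(f"predicted IJMA: {ijma_subducting(7.9, 50.0, 30.0):.1f}")
```

Inverting these relations over many intensity assignments is what locates the intensity center and brackets MJMA for historical events such as the 1855 Ansei Edo earthquake.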

  19. Geochemical challenge to earthquake prediction.

    PubMed Central

    Wakita, H

    1996-01-01

    The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented. PMID:11607665

  20. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    Summary: To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M=6.65 (A=537 km²).
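As an illustration of the bilinear scaling, the sketch below uses the commonly quoted Hanks and Bakun (2002) coefficients (M = log10 A + 3.98 up to the corner area, M = (4/3) log10 A + 3.07 above it); the 2007 update actually used by the Committee may differ slightly, so treat the constants as assumptions:

```python
import math

def m_hanks_bakun(area_km2, corner_km2=537.0):
    """Bilinear magnitude-area scaling in the spirit of Hanks & Bakun (2002).
    Coefficients are the commonly quoted 2002 values, assumed here; they are
    not taken from the 2007 update the report cites."""
    if area_km2 <= corner_km2:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

# fault with L = 100 km and a seismogenic width W = 12 km
area = 100.0 * 12.0
print(f"A = {area:.0f} km^2 -> M = {m_hanks_bakun(area):.2f}")
```

The two branches meet continuously at the corner area, so the relation changes slope, not value, as ruptures grow past the corner.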

  1. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  2. Local magnitude determinations for intermountain seismic belt earthquakes from broadband digital data

    USGS Publications Warehouse

    Pechmann, J.C.; Nava, S.J.; Terra, F.M.; Bernier, J.C.

    2007-01-01

    The University of Utah Seismograph Stations (UUSS) earthquake catalogs for the Utah and Yellowstone National Park regions contain two types of size measurements: local magnitude (ML) and coda magnitude (MC), which is calibrated against ML. From 1962 through 1993, UUSS calculated ML values for southern and central Intermountain Seismic Belt earthquakes using maximum peak-to-peak (p-p) amplitudes on paper records from one to five Wood-Anderson (W-A) seismographs in Utah. For ML determinations of earthquakes since 1994, UUSS has utilized synthetic W-A seismograms from U.S. National Seismic Network and UUSS broadband digital telemetry stations in the region, which numbered 23 by the end of our study period on 30 June 2002. This change has greatly increased the percentage of earthquakes for which ML can be determined. It is now possible to determine ML for all M ≥ 3 earthquakes in the Utah and Yellowstone regions and earthquakes as small as M < 1 in some areas. To maintain continuity in the magnitudes in the UUSS earthquake catalogs, we determined empirical ML station corrections that minimize differences between MLs calculated from paper and synthetic W-A records. Application of these station corrections, in combination with distance corrections from Richter (1958) which have been in use at UUSS since 1962, produces ML values that do not show any significant distance dependence. ML determinations for the Utah and Yellowstone regions for 1981-2002 using our station corrections and Richter's distance corrections have provided a reliable data set for recalibrating the MC scales for these regions. Our revised ML values are consistent with available moment magnitude determinations for Intermountain Seismic Belt earthquakes. To facilitate automatic ML measurements, we analyzed the distribution of the times of maximum p-p amplitudes in synthetic W-A records. A 30-sec time window for maximum amplitudes, beginning 5 sec before the predicted Sg time, encompasses 95% of the

  3. Probability of a given-magnitude earthquake induced by a fluid injection

    NASA Astrophysics Data System (ADS)

    Shapiro, S. A.; Dinske, C.; Kummerow, J.

    2007-11-01

    Fluid injections in geothermal and hydrocarbon reservoirs induce small earthquakes (-3 < M < 2). Occasionally, however, earthquakes with larger magnitudes (M ~ 4) occur. We investigate magnitude distributions and show that for a constant injection pressure the probability of inducing an earthquake with a magnitude larger than a given value increases with injection time, following a bi-logarithmic law with a proportionality coefficient close to one. We find that the process of pressure diffusion in a poroelastic medium with randomly distributed sub-critical cracks obeying a Gutenberg-Richter relation explains our observations well. The magnitude distribution is mainly inherited from the statistics of pre-existing fracture systems. The number of earthquakes greater than a given magnitude also increases with the strength of the injection source and the tectonic activity of the injection site. Our formulation provides a way to estimate expected magnitudes of induced earthquakes. It can be used to avoid significant earthquakes by planning fluid injections accordingly.
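The bi-logarithmic growth described above, with unit slope in injection time, can be sketched using what was later formalized as the seismogenic index. All numerical values below (index, b-value, injection rate) are illustrative assumptions, not results from the paper:

```python
import math

def expected_count_above(m, t_days, sigma=-1.0, b=1.0, rate_m3_per_day=1000.0):
    """Expected number of induced events with magnitude > m after injecting at
    a constant rate for t days, in the seismogenic-index form
    log10 N = log10 V(t) + sigma - b*m. sigma, b, and the injection rate are
    illustrative assumptions."""
    v = rate_m3_per_day * t_days  # cumulative injected volume, m^3
    return 10 ** (math.log10(v) + sigma - b * m)

def prob_at_least_one(m, t_days, **kw):
    """Poisson probability of at least one event above magnitude m."""
    return 1.0 - math.exp(-expected_count_above(m, t_days, **kw))

for t in (10, 100, 1000):
    print(f"t={t:4d} d: P(M>3) = {prob_at_least_one(3.0, t):.3f}")
```

Because N grows linearly with injected volume, log10 N grows with log10 t with slope one at constant rate, reproducing the bi-logarithmic law with proportionality coefficient close to one that the abstract reports.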

  4. The politics of earthquake prediction

    SciTech Connect

    Olson, R.S.

    1989-01-01

    This book gives an account of the politics, scientific and public, generated from the Brady-Spence prediction of a massive earthquake to take place within several years in central Peru. Though the disaster did not happen, this examination of the events serves to highlight American scientific processes and the results of scientific interaction with the media and political bureaucracy.

  5. Predictability of repeating earthquakes near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Nadeau, Robert M.

    2012-07-01

    We analyse sequences of repeating microearthquakes that were identified by applying waveform coherency methods to data from the Parkfield High-Resolution Seismic Network. Because by definition all events in a sequence have similar magnitudes and locations, the temporal behaviour of these sequences is naturally isolated, which, coupled with the high occurrence rates of small events, makes these data ideal for studying interevent time distributions. To characterize the temporal predictability of these sequences, we perform retrospective forecast experiments using hundreds of earthquakes. We apply three variants of a simple algorithm that produces sequence-specific, time-varying hazard functions, and we find that the sequences are predictable. We discuss limitations of these data and, more generally, challenges in identifying repeating events, and we outline the potential implications of our results for understanding the occurrence of large earthquakes.

  6. Bayesian Predictive Distribution for the Magnitude of the Largest Aftershock

    NASA Astrophysics Data System (ADS)

    Shcherbakov, R.

    2014-12-01

    Aftershock sequences, which follow large earthquakes, last hundreds of days and are characterized by well defined frequency-magnitude and spatio-temporal distributions. The largest aftershocks in a sequence constitute significant hazard and can inflict additional damage to infrastructure. Therefore, the estimation of the magnitude of possible largest aftershocks in a sequence is of high importance. In this work, we propose a statistical model based on Bayesian analysis and extreme value statistics to describe the distribution of magnitudes of the largest aftershocks in a sequence. We derive an analytical expression for a Bayesian predictive distribution function for the magnitude of the largest expected aftershock and compute the corresponding confidence intervals. We assume that the occurrence of aftershocks can be modeled, to a good approximation, by a non-homogeneous Poisson process with a temporal event rate given by the modified Omori law. We also assume that the frequency-magnitude statistics of aftershocks can be approximated by Gutenberg-Richter scaling. We apply our analysis to 19 prominent aftershock sequences, which occurred in the last 30 years, in order to compute the Bayesian predictive distributions and the corresponding confidence intervals. In the analysis, we use the information of the early aftershocks in the sequences (in the first 1, 10, and 30 days after the main shock) to estimate retrospectively the confidence intervals for the magnitude of the expected largest aftershocks. We demonstrate by analysing 19 past sequences that in many cases we are able to constrain the magnitudes of the largest aftershocks. For example, this includes the analysis of the Darfield (Christchurch) aftershock sequence. The proposed analysis can be used for the earthquake hazard assessment and forecasting associated with the occurrence of large aftershocks. The improvement in instrumental data associated with early aftershocks can greatly enhance the analysis and
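The extreme-value core of the analysis above can be written in a plug-in (non-Bayesian) form: with Gutenberg-Richter magnitudes and Poisson occurrence, P(Mmax ≤ m) = exp(-N·10^(-b(m - mc))). The full Bayesian predictive distribution marginalizes over the uncertain rate and b-value; the sketch below uses fixed illustrative values instead:

```python
import math

def prob_max_below(m, n_expected, b=1.0, m_c=3.0):
    """Plug-in extreme-value CDF for the largest event: with GR magnitudes and
    Poisson occurrence, P(M_max <= m) = exp(-N * 10**(-b * (m - m_c))).
    The paper's Bayesian version marginalizes over uncertain N and b."""
    return math.exp(-n_expected * 10 ** (-b * (m - m_c)))

def magnitude_quantile(p, n_expected, b=1.0, m_c=3.0):
    """Invert the CDF: the magnitude not exceeded with probability p."""
    return m_c - math.log10(-math.log(p) / n_expected) / b

# 500 expected aftershocks above M3: 95% bound on the largest one
print(f"M_max(95%) = {magnitude_quantile(0.95, 500):.2f}")
```

Feeding in the rate observed over the first 1, 10, or 30 days, as the abstract describes, tightens N and hence the confidence interval on the largest expected aftershock.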

  7. Modeling Time Dependent Earthquake Magnitude Distributions Associated with Injection-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Maurer, J.; Segall, P.

    2015-12-01

    Understanding and predicting earthquake magnitudes from injection-induced seismicity is critically important for estimating hazard due to injection operations. A particular problem has been that the largest event often occurs post shut-in. A rigorous analysis would require modeling all stages of earthquake nucleation, propagation, and arrest, and not just initiation. We present a simple conceptual model for predicting the distribution of earthquake magnitudes during and following injection, building on the analysis of Segall & Lu (2015). The analysis requires several assumptions: (1) the distribution of source dimensions follows a Gutenberg-Richter distribution; (2) in environments where the background ratio of shear to effective normal stress is low, the size of induced events is limited by the volume perturbed by injection (e.g., Shapiro et al., 2013; McGarr, 2014); and (3) the perturbed volume can be approximated by diffusion in a homogeneous medium. Evidence for the second assumption comes from numerical studies that indicate the background ratio of shear to normal stress controls how far an earthquake rupture, once initiated, can grow (Dunham et al., 2011; Schmitt et al., submitted). We derive analytical expressions that give the rate of events of a given magnitude as the product of three terms: the time-dependent rate of nucleations, the probability of nucleating on a source of given size (from the Gutenberg-Richter distribution), and a time-dependent geometrical factor. We verify our results using simulations and demonstrate characteristics observed in real induced sequences, such as time-dependent b-values and the occurrence of the largest event post injection. We compare results to Segall & Lu (2015) as well as example datasets. Future work includes using 2D numerical simulations to test our results and assumptions; in particular, investigating how background shear stress and fault roughness control rupture extent.

  8. Earthquakes clustering based on the magnitude and the depths in Molluca Province

    SciTech Connect

    Wattimanela, H. J.; Pasaribu, U. S.; Indratno, S. W.; Puspito, A. N. T.

    2015-12-22

    In this paper, we present a model to classify the earthquakes that occurred in Molluca Province. We use the K-Means clustering method to classify the earthquakes based on their magnitude and depth. The results can be used for disaster mitigation and for designing evacuation routes in Molluca Province.
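
    The clustering step is simple enough to sketch with a dependency-free Lloyd's algorithm on hypothetical (magnitude, depth) pairs; the study's actual catalog, feature scaling, and choice of K are not specified here, so the events and the depth rescaling below are illustrative assumptions only.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on (magnitude, scaled depth) pairs.
    Features should be rescaled first so depth does not dominate."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each event to the nearest center (squared distance)
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # recompute each center as the mean of its group (keep it if empty)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical events: (magnitude, depth / 100 km) -- shallow-small vs deep-large.
events = [(4.1, 0.10), (4.3, 0.12), (4.0, 0.08),
          (6.2, 1.50), (6.5, 1.62), (6.0, 1.40)]
centers, groups = kmeans(events, 2)
print(sorted(len(g) for g in groups))
```

On this well-separated synthetic data the two recovered clusters correspond to the shallow/low-magnitude and deep/high-magnitude groups.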

  9. Earthquake prediction: Simple methods for complex phenomena

    NASA Astrophysics Data System (ADS)

    Luen, Bradley

    2010-09-01

    Earthquake predictions are often either based on stochastic models, or tested using stochastic models. Tests of predictions often tacitly assume predictions do not depend on past seismicity, which is false. We construct a naive predictor that, following each large earthquake, predicts another large earthquake will occur nearby soon. Because this "automatic alarm" strategy exploits clustering, it succeeds beyond "chance" according to a test that holds the predictions fixed. Some researchers try to remove clustering from earthquake catalogs and model the remaining events. There have been claims that the declustered catalogs are Poisson on the basis of statistical tests we show to be weak. Better tests show that declustered catalogs are not Poisson. In fact, there is evidence that events in declustered catalogs do not have exchangeable times given the locations, a necessary condition for the Poisson. If seismicity followed a stochastic process, an optimal predictor would turn on an alarm when the conditional intensity is high. The Epidemic-Type Aftershock Sequence (ETAS) model is a popular point process model that includes clustering. It has many parameters, but is still a simplification of seismicity. Estimating the model is difficult, and estimated parameters often give a non-stationary model. Even if the model is ETAS, temporal predictions based on the ETAS conditional intensity are not much better than those of magnitude-dependent automatic (MDA) alarms, a much simpler strategy with only one parameter instead of five. For a catalog of Southern Californian seismicity, ETAS predictions again offer only slight improvement over MDA alarms.
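
    The MDA alarm strategy is simple enough to sketch directly: each event triggers an alarm whose duration grows exponentially with magnitude, and success is measured by the fraction of later target events falling inside an alarm. The parameters u and alpha below are illustrative placeholders, not Luen's fitted values, and the tiny catalog is synthetic.

```python
def mda_alarm_windows(catalog, u=0.01, alpha=0.5):
    """Magnitude-dependent automatic alarms: each event (t, m) in the
    catalog triggers an alarm lasting u * 10**(alpha*m) days.
    Overlapping alarms are merged into single intervals."""
    windows = sorted((t, t + u * 10.0 ** (alpha * m)) for t, m in catalog)
    merged = []
    for start, end in windows:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def fraction_predicted(target_times, windows):
    """Fraction of target events whose origin time falls in an alarm."""
    hits = sum(any(s <= t <= e for s, e in windows) for t in target_times)
    return hits / len(target_times)

catalog = [(0.0, 5.0), (3.0, 6.0), (40.0, 5.5)]  # (days, magnitude)
wins = mda_alarm_windows(catalog)
print(wins)
print(fraction_predicted([4.0, 50.0], wins))
```

A proper evaluation would also track the total alarm time, since an alarm that is always on trivially "predicts" everything.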

  10. Microearthquake networks and earthquake prediction

    USGS Publications Warehouse

    Lee, W.H.K.; Steward, S. W.

    1979-01-01

    A microearthquake network is a group of highly sensitive seismographic stations designed primarily to record local earthquakes of magnitudes less than 3. Depending on the application, a microearthquake network may consist of several stations or as many as a few hundred. They are usually classified as either permanent or temporary. In a permanent network, the seismic signal from each station is telemetered to a central recording site to cut down on operating costs and to allow more efficient and up-to-date processing of the data. However, telemetering can restrict the location of sites because of the line-of-sight requirement for radio transmission or the need for telephone lines. Temporary networks are designed to be extremely portable and completely self-contained so that they can be deployed very quickly. They are most valuable for recording aftershocks of a major earthquake or for studies in remote areas.

  11. Earthquake prediction with electromagnetic phenomena

    NASA Astrophysics Data System (ADS)

    Hayakawa, Masashi

    2016-02-01

    Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, and it is considered one of the most important and urgent topics for humanity. If such short-term prediction is realized, casualties will be drastically reduced. Unlike conventional seismic measurement, we have proposed the use of electromagnetic phenomena as precursors to EQs, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reasons why we are interested in electromagnetics, the history of seismo-electromagnetics, ionospheric perturbations as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of practical science and pure science, and finally a brief summary.

  12. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2011-12-01

    We develop an intensity-attenuation relation for the central and eastern United States (CEUS) and estimate the magnitudes of the 1811-1812 New Madrid, MO and 1886 Charleston, SC earthquakes. This relation incorporates an unprecedented number of intensity observations, uses a simple but sufficient form, and minimizes residuals of predicted and observed log epicentral distance rather than maximizing the likelihood of an observed intensity. We constrain the relation with the modified Mercalli intensity dataset of the National Oceanic and Atmospheric Administration along with the 'Did You Feel It?' dataset of the U.S. Geological Survey through April 2011. We find that the new relation leads to lower magnitude estimates for the New Madrid earthquakes than many previous studies. Depending on the modified Mercalli intensity dataset used, the new relation results in estimates for the moment magnitudes of the December 16th, 1811, January 23rd, 1812, and February 7th, 1812 mainshocks and December 16th dawn aftershock of 6.6-6.9, 6.6-7.0, 6.9-7.3, and 6.4-6.8, respectively, with a magnitude uncertainty of ±0.4. We also estimate a magnitude of 6.7±0.3 for the 1886 Charleston, SC earthquake. We find a greater range of epistemic uncertainty when also accounting for multiple intensity-attenuation relations. The magnitude ranges for the December 16th, January 23rd, and February 7th mainshocks and December 16th dawn aftershock are 6.6-7.8, 6.6-7.6, 6.9-8.1, and 6.4-7.2, respectively. Relative to the 2008 national seismic hazard maps, our estimate of epistemic uncertainty increases the coefficient of variation of seismic hazard estimates by 46-60 percent for ground motions expected to be exceeded with a 2-percent probability in 50 years and by 39-48 percent for ground motions expected to be exceeded with a 10-percent probability in 50 years. The large epistemic uncertainty is due to the lack of large instrumental CEUS earthquakes, which are needed to determine the
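
    The inversion idea, estimating a magnitude from intensity observations via an attenuation relation, can be sketched as follows. The functional form and coefficients here are made-up placeholders for illustration, not the fitted Boyd & Cramer CEUS relation.

```python
import math

# Illustrative attenuation form  I = c0 + c1*M - c2*log10(R + 10);
# the coefficients are hypothetical, NOT the fitted CEUS values.
C0, C1, C2 = -1.0, 2.0, 3.0

def predicted_intensity(mag, r_km):
    """Intensity predicted at epicentral distance r_km for magnitude mag."""
    return C0 + C1 * mag - C2 * math.log10(r_km + 10.0)

def estimate_magnitude(observations):
    """Each (intensity, distance_km) pair inverts to a magnitude estimate
    M_i = (I_i - c0 + c2*log10(R_i + 10)) / c1; with the other
    coefficients held fixed, the least-squares estimate is their mean."""
    ms = [(i - C0 + C2 * math.log10(r + 10.0)) / C1 for i, r in observations]
    return sum(ms) / len(ms)

# Noise-free synthetic observations from a magnitude-6.8 source:
obs = [(predicted_intensity(6.8, r), r) for r in (20.0, 100.0, 400.0)]
print(round(estimate_magnitude(obs), 2))  # recovers 6.8
```

With real historical intensities the scatter of the per-observation estimates is what drives the magnitude uncertainties quoted in the abstract.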

  13. Dim prospects for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Geller, Robert J.

    I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].

  14. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity. The reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each occurred earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (>6.5R) can be predicted within a very good accuracy window (±1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds, and examined the earthquake hit rates (for a window of 3, i.e. ±1 day of the target date) and for >6.5R. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake.
We observe no improvement only when a planetary trigger coincided with

  15. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  16. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez, Capera A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental
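
    The magnitude side of this procedure (preferred magnitude as the median of the bootstrap magnitudes, with bounds enclosing 68% of them) can be sketched directly. The bootstrap sample below is synthetic, standing in for magnitudes recomputed from resampled intensity data sets.

```python
import random
import statistics

def bootstrap_magnitude(bootstrap_mags, level=0.68):
    """Preferred magnitude = median of the bootstrap magnitudes; the
    uncertainty bounds enclose `level` of the bootstrap distribution
    (percentile interval), mirroring the procedure in the abstract."""
    xs = sorted(bootstrap_mags)
    n = len(xs)
    lo = xs[int((0.5 - level / 2.0) * (n - 1))]
    hi = xs[int(round((0.5 + level / 2.0) * (n - 1)))]
    return statistics.median(xs), (lo, hi)

# Synthetic stand-in for magnitudes from bootstrap-resampled intensities:
rng = random.Random(1)
boot = [rng.gauss(6.0, 0.3) for _ in range(2000)]
m, (lo, hi) = bootstrap_magnitude(boot)
print(round(m, 2), round(lo, 2), round(hi, 2))
```

The location estimate works analogously but in two dimensions, contouring the spatial density of bootstrap epicenters rather than taking percentiles.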

  17. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  18. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    Seismically induced landslides are a major environmental effect of earthquakes, which may significantly contribute to related losses. Moreover, in paleoseismology landslide event sizes are an important proxy for the estimation of the intensity and magnitude of past earthquakes, and thus allow us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake occurs. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970) the combination and relative weight of the factors was calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked.
One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important

  19. The Magnitude 6.7 Northridge, California, Earthquake of January 17, 1994

    NASA Technical Reports Server (NTRS)

    Donnellan, A.

    1994-01-01

    The most damaging earthquake in the United States since 1906 struck northern Los Angeles on January 17, 1994. The magnitude 6.7 Northridge earthquake produced a maximum of more than 3 meters of reverse (up-dip) slip on a south-dipping thrust fault rooted under the San Fernando Valley and projecting north under the Santa Susana Mountains.

  20. Chile2015: Lévy Flight and Long-Range Correlation Analysis of Earthquake Magnitudes in Chile

    NASA Astrophysics Data System (ADS)

    Beccar-Varela, Maria P.; Gonzalez-Huizar, Hector; Mariani, Maria C.; Serpa, Laura F.; Tweneboah, Osei K.

    2016-07-01

    The stochastic Truncated Lévy Flight model and detrended fluctuation analysis (DFA) are used to investigate the temporal distribution of earthquake magnitudes in Chile. We show that the Lévy Flight is appropriate for modeling the time series of the magnitudes of the earthquakes. Furthermore, DFA shows that these events present memory effects, suggesting that the magnitude of impending earthquakes depends on the magnitudes of previous earthquakes. Based on this dependency, we use a non-linear regression to estimate the magnitude of the 2015 M8.3 Illapel earthquake from the magnitudes of the previous events.
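
    DFA itself is straightforward to implement: integrate the demeaned series, detrend it piecewise over boxes of increasing size, and read the scaling exponent alpha off the log-log slope of the fluctuation function (alpha near 0.5 means no memory; larger values indicate long-range correlation). This is a generic first-order DFA sketch on synthetic noise, not the authors' code or data.

```python
import math
import random

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """First-order detrended fluctuation analysis.  Returns the scaling
    exponent alpha: ~0.5 for uncorrelated data, >0.5 for long-range
    positive correlation."""
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:                       # integrate the demeaned series
        s += v - mean
        profile.append(s)
    pts = []
    for n in scales:
        boxes = len(profile) // n
        sq = 0.0
        for b in range(boxes):        # linear detrend within each box
            seg = profile[b * n:(b + 1) * n]
            tbar, ybar = (n - 1) / 2.0, sum(seg) / n
            denom = sum((t - tbar) ** 2 for t in range(n))
            beta = sum((t - tbar) * (y - ybar) for t, y in enumerate(seg)) / denom
            sq += sum((y - ybar - beta * (t - tbar)) ** 2 for t, y in enumerate(seg))
        pts.append((math.log(n), math.log(math.sqrt(sq / (boxes * n)))))
    lx, ly = [p[0] for p in pts], [p[1] for p in pts]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    # alpha = slope of log F(n) versus log n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
print(round(dfa_alpha(white), 2))  # close to 0.5 for memoryless noise
```

Applying the same estimator to a magnitude time series with alpha clearly above 0.5 is what supports the memory-effect claim in the abstract.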

  1. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  2. The ethics of earthquake prediction.

    PubMed

    Sol, Ayhan; Turan, Halil

    2004-10-01

    Scientists' responsibility to inform the public about their results may conflict with their responsibility not to cause social disturbance by the communication of these results. A study of the well-known Brady-Spence and Iben Browning earthquake predictions illustrates this conflict in the publication of scientifically unwarranted predictions. Furthermore, a public policy that considers public sensitivity caused by such publications as an opportunity to promote public awareness is ethically problematic from (i) a refined consequentialist point of view that any means cannot be justified by any ends, and (ii) a rights view according to which individuals should never be treated as a mere means to ends. The Parkfield experiment, the so-called paradigm case of cooperation between natural and social scientists and the political authorities in hazard management and risk communication, is also open to similar ethical criticism. For the people in the Parkfield area were not informed that the whole experiment was based on a contested seismological paradigm.

  3. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  4. Maximum Magnitude and Recurrence Interval for the Large Earthquakes in the Central and Eastern United States

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Hu, C.

    2012-12-01

    Maximum magnitude and recurrence interval of large earthquakes are key parameters for seismic hazard assessment in the central and eastern United States. Determination of these two parameters is quite difficult in the region, however. For example, the estimated maximum magnitudes of the 1811-12 New Madrid sequence are in the range of M6.6 to M8.2, whereas the estimated recurrence intervals are in the range of about 500 to several thousand years. These large variations of maximum magnitude and recurrence interval for large earthquakes lead to significant variation in estimated seismic hazards in the central and eastern United States. Several approaches are being used to estimate the magnitudes and recurrence intervals, such as historical intensity analysis, geodetic data analysis, and paleo-seismic investigation. We will discuss the approaches that are currently being used to estimate the maximum magnitude and recurrence interval of large earthquakes in the central United States.

  5. Maximum earthquake magnitudes along different sections of the North Anatolian fault zone

    NASA Astrophysics Data System (ADS)

    Bohnhoff, Marco; Martínez-Garzón, Patricia; Bulut, Fatih; Stierle, Eva; Ben-Zion, Yehuda

    2016-04-01

    Constraining the maximum likely magnitude of future earthquakes on continental transform faults has fundamental consequences for the expected seismic hazard. Since the recurrence time for those earthquakes is typically longer than a century, such estimates rely primarily on well-documented historical earthquake catalogs, when available. Here we discuss the maximum observed earthquake magnitudes along different sections of the North Anatolian Fault Zone (NAFZ) in relation to the age of the fault activity, cumulative offset, slip rate and maximum length of coherent fault segments. The findings are based on a newly compiled catalog of historical earthquakes in the region, using the extensive literary sources that exist owing to the long civilization record. We find that the largest M7.8-8.0 earthquakes are exclusively observed along the older eastern part of the NAFZ that also has longer coherent fault segments. In contrast, the maximum observed events on the younger western part where the fault branches into two or more strands are smaller. No first-order relations between maximum magnitudes and fault offset or slip rates are found. The results suggest that the maximum expected earthquake magnitude in the densely populated Marmara-Istanbul region would probably not exceed M7.5. The findings are consistent with available knowledge for the San Andreas Fault and Dead Sea Transform, and can help in estimating hazard potential associated with different sections of large transform faults.

  6. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief of the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bound by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and is periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  7. Object file continuity predicts attentional blink magnitude.

    PubMed

    Kellie, Frances J; Shapiro, Kimron L

    2004-05-01

    When asked to identify targets embedded within a rapid consecutive stream of visual stimuli, observers are less able to identify the second target (T2) when it is presented within half a second of the first (T1); this deficit has been termed the attentional blink (AB). Rapid serial visual presentation methodology was used to investigate the relationship between the AB and object files (episodic representations implicated in object identification and perceptual constancy). An inverse linear relationship was found between the degree of object file continuity and AB magnitude. An important locus of object file continuity was the intervening stream items between T1 and T2. The results are discussed in terms of the heuristic of the object file to preserve limited attentional capacity.

  8. The magnitude 6.7 Northridge, California, earthquake of 17 January 1994

    USGS Publications Warehouse

    Jones, L.; Aki, K.; Boore, D.; Celebi, M.; Donnellan, A.; Hall, J.; Harris, R.; Hauksson, E.; Heaton, T.; Hough, S.; Hudnut, K.; Hutton, K.; Johnston, M.; Joyner, W.; Kanamori, H.; Marshall, G.; Michael, A.; Mori, J.; Murray, M.; Ponti, D.; Reasenberg, P.; Schwartz, D.; Seeber, L.; Shakal, A.; Simpson, R.; Thio, H.; Tinsley, J.; Todorovska, M.; Trifunac, M.; Wald, D.; Zoback, M.L.

    1994-01-01

    The most costly American earthquake since 1906 struck Los Angeles on 17 January 1994. The magnitude 6.7 Northridge earthquake resulted from more than 3 meters of reverse slip on a 15-kilometer-long south-dipping thrust fault that raised the Santa Susana mountains by as much as 70 centimeters. The fault appears to be truncated by the fault that broke in the 1971 San Fernando earthquake at a depth of 8 kilometers. Of these two events, the Northridge earthquake caused many times more damage, primarily because its causative fault is directly under the city. Many types of structures were damaged, but the fracture of welds in steel-frame buildings was the greatest surprise. The Northridge earthquake emphasizes the hazard posed to Los Angeles by concealed thrust faults and the potential for strong ground shaking in moderate earthquakes.

  9. Earthquake Probabilities and Magnitude Distribution in 100a along the Haiyuan Fault, northwestern China

    NASA Astrophysics Data System (ADS)

    Ran, H.

    2004-12-01

    The Haiyuan fault is a major seismogenic fault in north-central China. One of the most devastating great earthquakes of the 20th century occurred near Haiyuan in northwestern China on 16 December 1920. More than 220,000 people were killed and thousands of towns and villages were destroyed during this devastating earthquake. A 230-km-long left-lateral surface rupture zone formed along the Haiyuan fault during the earthquake, with a maximum left-lateral displacement of 10 m. In recent years, researchers have studied the paleoseismology of the Haiyuan fault and revealed numerous paleoearthquake events. All available information allows a more reliable analysis of earthquake recurrence intervals and earthquake rupture patterns along the Haiyuan fault. Based on fault geometry, segmentation pattern, and paleoearthquake events along the Haiyuan fault, we can identify three scales of earthquake rupture: rupture of one segment, cascade rupture of two segments, and cascade rupture of the entire fault (three segments), and obtain the earthquake recurrence intervals for each scale of rupture. The 100-year earthquake probability and magnitude distribution along the Haiyuan fault can then be obtained through a weighted computation that applies the paleoseismological information mentioned above, uses Poisson and Brownian passage time models, and considers the different rupture patterns. The result shows that the earthquake probability along the Haiyuan fault is about 0.035 in 100 years.
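
    The Poisson part of the weighted computation described above can be sketched in a few lines. The recurrence interval used here is a hypothetical round number chosen only to reproduce the order of the quoted probability, not a value from the study.

```python
import math

def poisson_rupture_probability(recurrence_interval_yr, window_yr=100.0):
    """Probability of at least one rupture in the time window, assuming
    ruptures arrive as a Poisson process with the given mean recurrence
    interval: P = 1 - exp(-window/interval)."""
    return 1.0 - math.exp(-window_yr / recurrence_interval_yr)

# A hypothetical mean recurrence interval of ~2800 years yields a
# 100-year probability of roughly 0.035, the order quoted above.
p = poisson_rupture_probability(2800.0)
```

    In practice the study weights such estimates across rupture patterns and also uses a Brownian passage time model, which, unlike the memoryless Poisson model, depends on the time elapsed since the last event.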

  10. Coseismic and postseismic slip of the 2011 magnitude-9 Tohoku-Oki earthquake.

    PubMed

    Ozawa, Shinzaburo; Nishimura, Takuya; Suito, Hisashi; Kobayashi, Tomokazu; Tobita, Mikio; Imakiire, Tetsuro

    2011-06-15

    Most large earthquakes occur along an oceanic trench, where an oceanic plate subducts beneath a continental plate. Massive earthquakes with a moment magnitude, Mw, of nine have been known to occur in only a few areas, including Chile, Alaska, Kamchatka and Sumatra. No historical records exist of a Mw = 9 earthquake along the Japan trench, where the Pacific plate subducts beneath the Okhotsk plate, with the possible exception of the AD 869 Jogan earthquake, the magnitude of which has not been well constrained. However, the strain accumulation rate estimated there from recent geodetic observations is much higher than the average strain rate released in previous interplate earthquakes. This finding raises the question of how such areas release the accumulated strain. A megathrust earthquake with Mw = 9.0 (hereafter referred to as the Tohoku-Oki earthquake) occurred on 11 March 2011, rupturing the plate boundary off the Pacific coast of northeastern Japan. Here we report the distributions of the coseismic slip and postseismic slip as determined from ground displacement detected using a network based on the Global Positioning System. The coseismic slip area extends approximately 400 km along the Japan trench, matching the area of the pre-seismic locked zone. The afterslip has begun to overlap the coseismic slip area and extends into the surrounding region. In particular, the afterslip area reached a depth of approximately 100 km, with Mw = 8.3, on 25 March 2011. Because the Tohoku-Oki earthquake released the strain accumulated for several hundred years, the paradox of the strain budget imbalance may be partly resolved. This earthquake reminds us of the potential for Mw ≈ 9 earthquakes to occur along other trench systems, even if no past evidence of such events exists. Therefore, it is imperative that strain accumulation be monitored using a space geodetic technique to assess earthquake potential.

  11. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
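
    The interevent-time comparison described above rests on a property of the Poisson model: interevent times are exponentially distributed. A minimal sketch of that check, using synthetic event times rather than the class's earthquake or blockquake data:

```python
import math
import random

def interevent_times(event_times):
    """Differences between consecutive event times."""
    ts = sorted(event_times)
    return [b - a for a, b in zip(ts, ts[1:])]

# Synthetic Poisson catalog: exponential waiting times at a chosen rate.
random.seed(0)
rate = 0.5                      # events per unit time (arbitrary)
t, times = 0.0, []
for _ in range(5000):
    t += random.expovariate(rate)
    times.append(t)

dts = interevent_times(times)
mean_dt = sum(dts) / len(dts)   # should approach 1/rate = 2.0
```

    Binning `dts` into a histogram and comparing it with the exponential density rate*exp(-rate*t) reproduces, in code, the classroom comparison of earthquake and blockquake sequences.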

  12. The Magnitude Frequency Distribution of Induced Earthquakes and Its Implications for Crustal Heterogeneity and Hazard

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.

    2015-12-01

    Earthquake activity in the central United States has increased dramatically since 2009, principally driven by injection of wastewater coproduced with oil and gas. The elevation of pore pressure from the collective influence of many disposal wells has created an unintended experiment that probes both the state of stress and the architecture of the fluid plumbing and fault systems through the earthquakes it induces. These earthquakes primarily release tectonic stress rather than accommodation stresses from injection. Results to date suggest that the aggregated magnitude-frequency distribution (MFD) of these earthquakes differs from that of natural tectonic earthquakes in the same region, for which the b-value is ~1.0. In Kansas, Oklahoma, and Texas alone, more than 1,100 earthquakes of Mw ≥ 3 occurred between January 2014 and June 2015, but only 32 were Mw ≥ 4 and none were as large as Mw 5. Why is this so? Either the b-value is high (>1.5) or the MFD deviates from log-linear form at large magnitude. Where catalogs from local networks are available, such as in southern Kansas, b-values are normal (~1.0) for small-magnitude events (M < 3). The deficit in larger-magnitude events could be an artifact of a short observation period, or could reflect a decreased potential for large earthquakes. According to the prevailing paradigm, injection will induce an earthquake when (1) the pressure change encounters a preexisting fault favorably oriented in the tectonic stress field; and (2) the pore-pressure perturbation at the hypocenter is sufficient to overcome the frictional strength of the fault. Most induced earthquakes occur where the injection pressure has attenuated to a small fraction of the seismic stress drop, implying that the nucleation point was highly stressed. The population statistics of faults satisfying (1) could be the cause of this MFD if there are many small faults (dimension < 1 km) and few large ones in a critically stressed crust.
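
    The deficit argument can be checked with Gutenberg-Richter arithmetic. The counts below come from the abstract; the code simply shows what b-value they imply under a log-linear MFD.

```python
import math

def gr_expected_count(n_above_m0, b, m0, m1):
    """Expected number of events with magnitude >= m1, given n_above_m0
    events with magnitude >= m0 under the Gutenberg-Richter relation
    log10(N) = a - b*M."""
    return n_above_m0 * 10.0 ** (-b * (m1 - m0))

# With b = 1.0, the 1,100 observed Mw >= 3 events would imply ~110 of
# Mw >= 4; only 32 were observed.
expected_if_b1 = gr_expected_count(1100, 1.0, 3.0, 4.0)

# The b-value that reconciles the two counts under a log-linear MFD:
implied_b = math.log10(1100 / 32)   # ~1.5
```

    This is exactly the dichotomy the abstract poses: either b is genuinely ~1.5 at these magnitudes, or the log-linear form breaks down above some magnitude.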

  13. A Probabilistic Estimate of the Most Perceptible Earthquake Magnitudes in the NW Himalaya and Adjoining Regions

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Koravos, G. Ch.; Tsapanos, T. M.; Vougiouka, G. E.

    2015-02-01

    NW Himalaya and its neighboring region (25°-40°N and 65°-85°E) is one of the most seismically hazardous regions in the Indian subcontinent, a region that has historically experienced large to great damaging earthquakes. In the present study, the most perceptible earthquake magnitudes, Mp, are estimated for intensity I = VII, horizontal peak ground acceleration a = 300 cm/s2 and horizontal peak ground velocity v = 10 cm/s in 28 seismogenic zones using the two earthquake recurrence models of Kijko and Sellevoll (Bulletin of the Seismological Society of America 82(1):120-134, 1992) and Gumbel's third asymptotic distribution of extremes (GIII). Both methods deal with maximum magnitudes. The earthquake perceptibility is calculated by combining earthquake recurrence models with ground motion attenuation relations at a particular level of intensity, acceleration and velocity. The estimated results reveal that the values of Mp for velocity v = 10 cm/s show higher estimates than the corresponding values for intensity I = VII and acceleration a = 300 cm/s2. It is also observed that differences in perceptible magnitudes calculated by the Kijko-Sellevoll method and GIII statistics show significantly high values, up to 0.7, 0.6 and 1.7 for intensity, acceleration and velocity, respectively, revealing the importance of earthquake recurrence model selection. The estimated most perceptible earthquake magnitudes, Mp, in the present study vary from MW 5.1 to 7.7 in the entire zone of the study area. Results of perceptible magnitudes are also represented in the form of spatial maps in 28 seismogenic zones for the aforementioned threshold levels of intensity, acceleration and velocity, estimated from two recurrence models. The spatial maps show that the Quetta of Pakistan, the Hindukush-Pamir Himalaya, the Caucasus mountain belt and the Himalayan frontal thrust belt (Kashmir-Kangra-Uttarkashi-Chamoli regions) exhibit higher values of the most perceptible earthquake magnitudes ( M

  14. Model parameter estimation bias induced by earthquake magnitude cut-off

    NASA Astrophysics Data System (ADS)

    Harte, D. S.

    2016-02-01

    We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models. First, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude. Secondly, the assumption in the Gutenberg-Richter relationship that numbers of events increase exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.

  15. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

  16. The energy-magnitude scaling law for Ms ≤ 5.5 earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2015-04-01

    The scaling law of seismic radiation energy, Es, versus surface-wave magnitude, Ms, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with Ms > 5.5. In this review study, we examine whether this law is valid for 0 < Ms ≤ 5.5 using earthquakes occurring in different regions. A comparison of the data points of log(Es) versus Ms with Gutenberg and Richter's law leads to the conclusion that the law is still valid for earthquakes with 0 < Ms ≤ 5.5.
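
    The Gutenberg-Richter (1956) relation under review is commonly written log10(Es) = 11.8 + 1.5*Ms with Es in ergs; unit conventions vary between papers, so treat the constant as an assumption of this sketch.

```python
def radiated_energy_ergs(ms):
    """Gutenberg-Richter (1956) energy-magnitude relation:
    log10(Es) = 11.8 + 1.5*Ms, with Es in ergs."""
    return 10.0 ** (11.8 + 1.5 * ms)

# Each unit of Ms corresponds to a factor of 10**1.5 (~31.6) in energy.
ratio = radiated_energy_ergs(5.5) / radiated_energy_ergs(4.5)
```

    The review's question is whether data points of log(Es) versus Ms for small events continue to fall on this line below Ms = 5.5.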

  17. Paleo-earthquakes of diverse magnitude recorded at the Salt Lake site, the Haiyuan Fault, China

    NASA Astrophysics Data System (ADS)

    Liu, J.; Shao, Y.; Klinger, Y.; Xie, K.; Yuan, D.; Lei, Z.

    2013-12-01

    Paleoseismology routinely provides fundamental data for earthquake recurrence models by revealing past ground-breaking events that stopped at different levels in layered soft sediments. Paleo-earthquakes recognized in trenches are often unknown in size, vaguely defined as surface-breaking events, but often explicitly or implicitly assumed to be similar in size when calculating the earthquake recurrence interval in seismic hazard assessment of the studied fault. Here, we show data that challenge this basic underlying premise. At the Salt Lake site on the active left-lateral Haiyuan fault, northeastern Tibetan plateau, a sequence of remarkably high-resolution stratigraphy recorded at least four events since 1500 A.D., constrained by AMS C14 dating. A comparison with regional historical earthquake accounts shows that they are a mix of events of disparate magnitudes. Except for the most recent earthquake of M~8 in 1920 A.D., the three earlier events, which occurred in 1760 A.D., 1638 A.D., and 1597 A.D., respectively, are smaller in magnitude, M~6 to M~7. Our results thus show that events with an order-of-magnitude difference in rupture length and seismic moment can be recorded at a single site, contrary to the conventional definition of the paleoseismic recurrence interval, which assumes a simple large characteristic magnitude for recurring events.

  18. How to assess magnitudes of paleo-earthquakes from multiple observations

    NASA Astrophysics Data System (ADS)

    Hintersberger, Esther; Decker, Kurt

    2016-04-01

    An important aspect of fault characterisation regarding seismic hazard assessment is paleo-earthquake magnitudes. Especially in regions with low or moderate seismicity, paleo-magnitudes are normally much larger than those of historical earthquakes and therefore provide essential information about the seismic potential and expected maximum magnitudes of a certain region. In general, these paleo-earthquake magnitudes are based either on surface rupture length or on surface displacement observed at trenching sites. Several well-established correlations provide the possibility to link the observed surface displacement to a certain magnitude. However, the combination of more than one observation is still rare and not well established. We present here a method based on a probabilistic approach proposed by Biasi and Weldon (2006) to combine several observations to better constrain the possible magnitude range of a paleo-earthquake. Extrapolating the approach of Biasi and Weldon (2006), the single-observation probability density functions (PDFs) are assumed to be independent of each other. Following this line, the common PDF for all observed surface displacements generated by one earthquake is the product of all single-displacement PDFs. In order to test our method, we use surface displacement data for modern earthquakes, where magnitudes have been determined from instrumental records. For randomly selected "observations", we calculated the associated PDFs for each "observation point". We then combined the PDFs into one common PDF for an increasing number of "observations". Plotting the most probable magnitudes against the number of combined "observations", the resultant range of most probable magnitudes is very close to the magnitude derived by instrumental methods. Testing our method with real trenching observations, we used the results of a paleoseismological investigation within the Vienna Pull-Apart Basin (Austria), where three trenches were opened along the normal
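
    The multiplication of single-observation PDFs described above can be sketched as follows. The displacement-to-magnitude regression and the 0.3-unit scatter are illustrative placeholders (the regression is patterned after Wells and Coppersmith, 1994), not the relations used by the authors.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def magnitude_from_displacement(d_m):
    # Illustrative regression, patterned after Wells & Coppersmith (1994):
    # M = 6.93 + 0.82 * log10(average displacement in metres).
    return 6.93 + 0.82 * math.log10(d_m)

def combined_magnitude_pdf(displacements_m, mag_grid, sigma=0.3):
    """Product of the single-observation magnitude PDFs (assumed
    independent, as in the extrapolation of Biasi and Weldon, 2006),
    renormalized over the magnitude grid."""
    pdf = [1.0] * len(mag_grid)
    for d in displacements_m:
        mu = magnitude_from_displacement(d)
        pdf = [p * gaussian_pdf(m, mu, sigma) for p, m in zip(pdf, mag_grid)]
    total = sum(pdf)
    return [p / total for p in pdf]

grid = [5.0 + 0.01 * i for i in range(301)]            # M 5.0 .. 8.0
pdf = combined_magnitude_pdf([1.2, 0.8, 1.5], grid)    # three trench observations
best = grid[pdf.index(max(pdf))]                       # most probable magnitude
```

    As more observations are multiplied in, the combined PDF narrows around a common magnitude, which is the behaviour the authors exploit when comparing against instrumentally determined magnitudes.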

  19. Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.

    2011-12-01

    It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience who may not necessarily be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using "snapshot" static images to represent earthquake data is now becoming obsolete, and the favored venue to explain complex wave propagation inside the solid earth and interactions among earthquakes is now visualizations that include auditory information. Here, we convert seismic data into visualizations that include sounds, the latter being a technique known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc's QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc).
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not

  20. Automatic detection and rapid determination of earthquake magnitude by wavelet multiscale analysis of the primary arrival

    NASA Astrophysics Data System (ADS)

    Simons, Frederik J.; Dando, Ben D. E.; Allen, Richard M.

    2006-10-01

    Earthquake early warning systems must save lives. It is of great importance that networked systems of seismometers be equipped with reliable tools to make rapid determinations of earthquake magnitude in the few to tens of seconds before the damaging ground motion occurs. A new fully automated algorithm based on the discrete wavelet transform detects as well as analyzes the incoming first arrival with great accuracy and precision, estimating the final magnitude to within a single unit from the first few seconds of the P wave.

  1. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; the consideration of such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of the Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering the area in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
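
    The Ogata and Katsura (1993) model referred to above represents the observed magnitude-frequency distribution as the Gutenberg-Richter law multiplied by a detection-rate function; a cumulative normal is a common choice for that function. The parameter values below are hypothetical, chosen only to illustrate the weekday/weekend contrast.

```python
import math

def detection_rate(m, mu, sigma):
    """Cumulative-normal detection rate q(M); mu is the magnitude at
    which 50% of earthquakes are detected."""
    return 0.5 * (1.0 + math.erf((m - mu) / (sigma * math.sqrt(2.0))))

def observed_frequency(m, a, b, mu, sigma):
    """Gutenberg-Richter density scaled by the detection rate."""
    return a * 10.0 ** (-b * m) * detection_rate(m, mu, sigma)

def mc_at_99pct(mu, sigma, step=0.01):
    """Smallest magnitude (on a coarse scan) with >= 99% detection,
    one simple proxy for the completeness magnitude Mc."""
    m = mu
    while detection_rate(m, mu, sigma) < 0.99:
        m += step
    return m

# Hypothetical 50%-detection magnitudes: noisier weekdays vs. weekends.
mc_weekday = mc_at_99pct(1.0, 0.25)
mc_weekend = mc_at_99pct(0.8, 0.25)
```

    The study's contribution is to let mu vary smoothly over the days of the week (and hours of the day) within a Bayesian smoothing framework, rather than using fixed values as in this sketch.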

  2. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587-1996)

    NASA Astrophysics Data System (ADS)

    Beauval, Céline; Yepes, Hugo; Bakun, William H.; Egred, José; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-06-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower magnitude but shallower and potentially more destructive earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (~2.5 million inhabitants). A total population of ~6 million inhabitants currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory are not available for periods earlier than 1990 (the beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied in order to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587-1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). The intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity >=VI) and 117 (Riobamba, 1797, Intensity >=III). The bootstrap resampling technique is coupled to the B&W method for deriving geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extension of the area delineating the intensity centre location at the 67 per cent confidence level (±1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching IX, X and XI degrees. Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding

  3. Intensity, magnitude, location and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, Walter; Hough, Susan; Martin, Stacey; Bilham, Roger

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly
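
    In outline, the Bakun and Wentworth (1997) procedure inverts each site intensity for magnitude at a trial epicentre and keeps the trial point where the site magnitudes agree best. The sketch below uses invented attenuation coefficients, not the relations derived in this study.

```python
import math

# Invented coefficients for an attenuation relation of Bakun-Wentworth
# form: I = C0 + C1*M - C2*log10(r) - C3*r, with r in km.
C0, C1, C2, C3 = 1.0, 1.5, 2.5, 0.004
DEPTH_KM = 5.0   # nominal depth added to epicentral distance

def magnitude_from_intensity(intensity, r_km):
    """Invert the attenuation relation for magnitude at a single site."""
    return (intensity - C0 + C2 * math.log10(r_km) + C3 * r_km) / C1

def grid_search(sites, trial_points):
    """For each trial epicentre, average the per-site magnitudes and
    score their rms spread; return the best (point, magnitude).
    sites: iterable of (x_km, y_km, intensity) tuples."""
    best = None
    for tx, ty in trial_points:
        mags = [magnitude_from_intensity(i, math.hypot(x - tx, y - ty) + DEPTH_KM)
                for x, y, i in sites]
        mean = sum(mags) / len(mags)
        rms = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
        if best is None or rms < best[0]:
            best = (rms, (tx, ty), mean)
    return best[1], best[2]
```

    The bootstrap confidence contours mentioned in the abstract come from repeating such a search on resampled subsets of the intensity observations.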

  5. Locations and magnitudes of earthquakes in Central Asia from seismic intensity data

    NASA Astrophysics Data System (ADS)

    Bindi, D.; Parolai, S.; Gómez-Capera, A.; Locati, M.; Kalmetyeva, Z.; Mikhailova, N.

    2014-01-01

    We apply the Bakun and Wentworth (Bull Seism Soc Am 87:1502-1521, 1997) method to determine the location and magnitude of earthquakes that occurred in Central Asia, using MSK-64 intensity assignments. The attenuation model previously derived and validated by Bindi et al. (Geophys J Int, 2013) is used to analyse 21 earthquakes that occurred over the period 1885-1964, and the estimated locations and magnitudes are compared to values available in the literature. Bootstrap analyses are performed to estimate the confidence intervals of the intensity magnitudes, as well as to quantify the location uncertainty. The analyses of seven earthquakes significant for hazard assessment are presented in detail, including three large historical earthquakes that struck the northern Tien-Shan between the end of the nineteenth and the beginning of the twentieth centuries: the 1887 M 7.3 Verny, the 1889 M 8.3 Chilik and the 1911 M 8.2 Kemin earthquakes. For the 1911 Kemin earthquake, the magnitude values estimated from intensity data (i.e. MILH = 7.8 and MIW = 7.6, considering surface-wave and moment magnitude, respectively) are lower than the value M = 8.2 listed in the considered catalog. These values are more in agreement with the value MS = 7.8 revised by Abe and Noguchi (Phys Earth Planet In, 33:1-11, 1983b) for the surface-wave magnitude. For the Kemin earthquake, the distribution of the bootstrap solutions for the intensity centre reveals two minima, indicating that the distribution of intensity assignments does not constrain a unique solution. This is in agreement with the complex source rupture history of the Kemin earthquake, which involved several fault segments with different strike orientations, dipping angles and focal mechanisms (e.g. Delvaux et al. in Russ Geol Geophys 42:1167-1177, 2001; Arrowsmith et al. in Eos Trans Am Geophys Union 86(52), 2005). Two possible locations for the intensity centre are obtained. The first is located on the easternmost sub-faults (i

  6. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional M{sub o} or regional M{sub w}. Conceptually, the method combines an earthquake source model with an attenuation model and a geometrical-spreading term that account for propagation, so that regional amplitudes of any phase and frequency can be used. Amplitudes are corrected to yield a source term from which the seismic moment can be estimated. Moment magnitudes can then be determined reliably from whatever sets of phase amplitudes are observed, rather than from predetermined ones, and afterwards averaged to estimate this parameter robustly. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional M{sub o} to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find the magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields the more physically meaningful quantity of seismic moment, which can be recast as M{sub w}. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
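The final step, recasting an averaged seismic moment as M{sub w}, follows the standard moment-magnitude definition (IASPEI convention, M0 in N·m). The per-phase moment values in the averaging helper are hypothetical:

```python
import math

def moment_to_mw(m0_nm):
    """Standard moment-magnitude conversion, M0 in newton-meters:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def average_mw(m0_estimates):
    """Average the Mw values implied by per-phase moment estimates,
    mirroring the paper's robust averaging over many phase/frequency
    observations (inputs here are hypothetical)."""
    return sum(moment_to_mw(m0) for m0 in m0_estimates) / len(m0_estimates)
```

For example, a moment of 10^19.6 N·m maps to Mw 7.0; averaging in magnitude rather than moment down-weights single anomalous amplitude observations.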

  7. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip-rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D(c) apply to faults in nature. However, scaling of D(c) is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from the sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probability result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs as a function of time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
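The Omori aftershock decay law that the model reproduces can be written out directly; the K, c and p values below are illustrative, not parameters fitted to any catalog:

```python
def omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)**p, with t in
    days after the mainshock. K, c, p here are illustrative values only."""
    return K / (c + t) ** p

def expected_aftershocks(t1, t2, K=100.0, c=0.05, p=1.1):
    """Expected count between times t1 and t2: closed-form integral of the
    rate (valid for p != 1)."""
    antideriv = lambda t: (c + t) ** (1.0 - p) / (1.0 - p)
    return K * (antideriv(t2) - antideriv(t1))
```

In the rate-and-state seismicity formulation these parameters acquire physical meaning, e.g. c reflects the stress step and background stressing rate rather than being a free fitting constant.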

  8. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models, which were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from the tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicates that the authors of SHARE were aware of the potentially higher seismic activity in these zones. For one group, the tectonic moment rate is lower than the seismic moment rates forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from the tectonic moment rate, but lower than what the historical data show. For the other two
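One common way to turn geodetic strain rates into a tectonic moment rate is the Savage-Simpson formula; whether this paper uses exactly that variant is not stated here, and the shear modulus and seismogenic thickness below are assumed values, not those of the SHARE or GSRM models:

```python
def tectonic_moment_rate(eps1, eps2, area_m2, mu=3.3e10, h_m=11e3):
    """Savage-Simpson style tectonic moment rate (N·m per year) from the
    principal surface strain rates eps1, eps2 (per year):

        M0_rate = 2 * mu * H * A * max(|eps1|, |eps2|, |eps1 + eps2|)

    mu (shear modulus, Pa) and H (seismogenic thickness, m) are assumed
    illustrative values."""
    e_max = max(abs(eps1), abs(eps2), abs(eps1 + eps2))
    return 2.0 * mu * h_m * area_m2 * e_max

# Example: a 100 km x 100 km zone straining at 1e-8 per year
rate = tectonic_moment_rate(1e-8, 0.0, 1e10)
```

Comparing such a tectonic rate against the moment rate summed from a zone's magnitude-frequency distribution is what drives the abstract's high/low comparisons between strain-based and SHARE forecasts.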

  9. Depth dependence of earthquake frequency-magnitude distributions in California: Implications for rupture initiation

    USGS Publications Warehouse

    Mori, J.; Abercrombie, R.E.

    1997-01-01

    Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km), conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more small earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate the probability of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event than the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
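A factor of roughly 18 follows from the Gutenberg-Richter relation alone. The b-values below are assumptions chosen only to illustrate the calculation, not the paper's fitted depth-dependent values:

```python
def growth_probability_ratio(b_shallow, b_deep, dm):
    """Under Gutenberg-Richter, P(final M >= m0 + dm | initiation at m0)
    scales as 10**(-b * dm), so the deep-vs-shallow ratio of growth
    probabilities is 10**((b_shallow - b_deep) * dm)."""
    return 10.0 ** ((b_shallow - b_deep) * dm)

# Assumed b-values; with dm = 3.5 (an M2 initiation growing to M5.5) this
# lands in the neighborhood of the ~18x factor quoted in the abstract.
ratio = growth_probability_ratio(b_shallow=1.00, b_deep=0.64, dm=3.5)
```

The key point is that even a modest depth contrast in b translates, over 3.5 magnitude units, into an order-of-magnitude difference in the odds that a small initiation becomes a large event.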

  10. Fault-Zone Maturity Defines Maximum Earthquake Magnitude: The case of the North Anatolian Fault Zone

    NASA Astrophysics Data System (ADS)

    Bohnhoff, Marco; Bulut, Fatih; Stierle, Eva; Martinez-Garzon, Patricia; Ben-Zion, Yehuda

    2015-04-01

    Estimating the maximum likely magnitude of future earthquakes on transform faults near large metropolitan areas has fundamental consequences for the expected hazard. Here we show that the maximum earthquakes on different sections of the North Anatolian Fault Zone (NAFZ) scale with the duration of fault zone activity, cumulative offset and length of individual fault segments. The findings are based on a compiled catalogue of historical earthquakes in the region, drawing on the extensive written sources that exist due to the long civilization record. We find that the largest earthquakes (M~8) are exclusively observed along the well-developed part of the fault zone in the east. In contrast, the western part is still in a juvenile or transitional stage, with historical earthquakes not exceeding M=7.4. This bounds the current seismic hazard for NW Turkey and its largest regional population and economic center, Istanbul. Our findings for the NAFZ are consistent with data from two other major transform faults, the San Andreas Fault in California and the Dead Sea Transform in the Middle East. The results indicate that maximum earthquake magnitudes generally scale with fault-zone evolution.

  11. Earthquake source inversion of tsunami runup prediction

    NASA Astrophysics Data System (ADS)

    Sekar, Anusha

    Our goal is to study two inverse problems: using seismic data to invert for earthquake parameters, and using tide gauge data to invert for earthquake parameters. We focus on the feasibility of using a combination of these inverse problems to improve tsunami runup prediction. A considerable part of the thesis is devoted to studying the seismic forward operator and its modeling using immersed interface methods. We develop an immersed interface method for solving the variable-coefficient advection equation in one dimension with a propagating singularity and prove a convergence result for this method. We also prove a convergence result for the one-dimensional acoustic system of partial differential equations solved using immersed interface methods with internal boundary conditions. Such systems form the building blocks of the numerical model of the earthquake. For a simple earthquake-tsunami model, we observe a variety of possibilities in the recovery of the earthquake parameters and in tsunami runup prediction. In some cases the data are insufficient either to invert for the earthquake parameters or to predict the runup. When more data are added, we are able to resolve the earthquake parameters with enough accuracy to predict the runup. We expect that this variety will hold in a real-world three-dimensional geometry as well.

  12. Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake.

    PubMed

    Hill, D P; Reasenberg, P A; Michael, A; Arabaz, W J; Beroza, G; Brumbaugh, D; Brune, J N; Castro, R; Davis, S; Depolo, D; Ellsworth, W L; Gomberg, J; Harmsen, S; House, L; Jackson, S M; Johnston, M J; Jones, L; Keller, R; Malone, S; Munguia, L; Nava, S; Pechmann, J C; Sanford, A; Simpson, R W; Smith, R B; Stark, M; Stickney, M; Vidal, A; Walter, S; Wong, V; Zollweg, J

    1993-06-11

    The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma). PMID:17810202

  13. Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake

    USGS Publications Warehouse

    Hill, D.P.; Reasenberg, P.A.; Michael, A.; Arabaz, W.J.; Beroza, G.; Brumbaugh, D.; Brune, J.N.; Castro, R.; Davis, S.; Depolo, D.; Ellsworth, W.L.; Gomberg, J.; Harmsen, S.; House, L.; Jackson, S.M.; Johnston, M.J.S.; Jones, L.; Keller, Rebecca Hylton; Malone, S.; Munguia, L.; Nava, S.; Pechmann, J.C.; Sanford, A.; Simpson, R.W.; Smith, R.B.; Stark, M.; Stickney, M.; Vidal, A.; Walter, S.; Wong, V.; Zollweg, J.

    1993-01-01

    The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma).

  14. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  15. Reconstructing the magnitude for Earth's greatest earthquakes with microfossil measures of sudden coastal subsidence

    NASA Astrophysics Data System (ADS)

    Engelhart, S. E.; Horton, B. P.; Nelson, A. R.; Wang, K.; Wang, P.; Witter, R. C.; Hawkes, A.

    2012-12-01

    Tidal marsh sediments in estuaries along the Cascadia coast archive stratigraphic evidence of Holocene great earthquakes (magnitude 8-9) that record abrupt relative sea-level (RSL) changes. Quantitative microfossil-based RSL reconstructions produce precise estimates of sudden coastal subsidence or uplift during great earthquakes because of the strong relationship between species distributions and elevation within the intertidal zone. We have developed a regional foraminiferal transfer function that is validated against simulated coseismic subsidence from a marsh transplant experiment, demonstrating accuracy to within 5 cm. Two case studies demonstrate the utility of high-precision microfossil-based RSL reconstructions at the Cascadia subduction zone. One approach in early Cascadia paleoseismic research was to describe the stratigraphic evidence of the great AD 1700 earthquake and then assume that earlier earthquakes were of similar magnitude. All but the most recent (transfer-function) estimates of the amount of coseismic subsidence at Cascadia are too imprecise (errors of >±0.5 m) to distinguish, for example, coseismic from postseismic land-level movements, or to infer differences in the amounts of subsidence or uplift from one earthquake cycle to the next. Reconstructions of RSL rise from stratigraphic records at multiple locations for the four most recent earthquake cycles show variability in the amount of coseismic subsidence. The penultimate earthquake at Siletz Bay, around 800 to 900 years ago, produced one-third of the coseismic subsidence of the AD 1700 earthquake. Most earthquake rupture models have used a uniform-slip distribution along the megathrust to explain poorly constrained paleoseismic estimates of coastal subsidence during the AD 1700 Cascadia earthquake. Here, we test models of heterogeneous slip for the AD 1700 Cascadia earthquake that are similar to the slip distributions inferred for instrumentally recorded great subduction earthquakes worldwide. We use

  16. Foreshocks Are Not Predictive of Future Earthquake Size

    NASA Astrophysics Data System (ADS)

    Page, M. T.; Felzer, K. R.; Michael, A. J.

    2014-12-01

    The standard model for the origin of foreshocks is that they are earthquakes that trigger aftershocks larger than themselves (Reasenberg and Jones, 1989). This can be formally expressed in terms of a cascade model. In this model, aftershock magnitudes follow the Gutenberg-Richter magnitude-frequency distribution, regardless of the size of the triggering earthquake, and aftershock timing and productivity follow Omori-Utsu scaling. An alternative hypothesis is that foreshocks are triggered incidentally by a nucleation process, such as pre-slip, that scales with mainshock size. If this were the case, foreshocks would potentially have predictive power of the mainshock magnitude. A number of predictions can be made from the cascade model, including the fraction of earthquakes that are foreshocks to larger events, the distribution of differences between foreshock and mainshock magnitudes, and the distribution of time lags between foreshocks and mainshocks. The last should follow the inverse Omori law, which will cause the appearance of an accelerating seismicity rate if multiple foreshock sequences are stacked (Helmstetter and Sornette, 2003). All of these predictions are consistent with observations (Helmstetter and Sornette, 2003; Felzer et al. 2004). If foreshocks were to scale with mainshock size, this would be strong evidence against the cascade model. Recently, Bouchon et al. (2013) claimed that the expected acceleration in stacked foreshock sequences before interplate earthquakes is higher prior to M≥6.5 mainshocks than smaller mainshocks. Our re-analysis fails to support the statistical significance of their results. In particular, we find that their catalogs are not complete to the level assumed, and their ETAS model underestimates inverse Omori behavior. To conclude, seismicity data to date is consistent with the hypothesis that the nucleation process is the same for earthquakes of all sizes.
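The cascade model's central claim, that aftershock magnitudes follow Gutenberg-Richter regardless of the trigger's size, can be illustrated with a toy Monte Carlo. The fixed per-sequence productivity is an arbitrary simplification that ignores Omori timing and magnitude-dependent productivity:

```python
import math
import random

def sample_gr(rng, m_min=2.0, b=1.0):
    """Inverse-CDF draw from Gutenberg-Richter: P(M > m) = 10**(-b*(m - m_min))."""
    return m_min - math.log10(1.0 - rng.random()) / b

def foreshock_fraction(n_sequences=20000, n_aftershocks=5, seed=1):
    """Cascade-model toy: every event spawns aftershocks drawn from the same
    G-R law regardless of its own size, and is counted as a 'foreshock' if
    any aftershock exceeds it. The fixed productivity of 5 per sequence is
    an arbitrary illustrative choice."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sequences):
        trigger = sample_gr(rng)
        if any(sample_gr(rng) > trigger for _ in range(n_aftershocks)):
            hits += 1
    return hits / n_sequences
```

By symmetry, with k aftershocks drawn i.i.d. from the same law the expected fraction is k/(k+1) regardless of b, the signature of a model in which a trigger carries no information about the size of what it triggers.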

  17. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power-law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
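A minimal sketch of the two equally weighted relations, using the coefficient values commonly quoted for Ellsworth-B and the Hanks-Bakun bilinear form (assumed here, not taken from the report itself):

```python
import math

def mw_ellsworth_b(area_km2):
    """Ellsworth-B (WGCEP 2003): Mw = 4.2 + log10(A), A in km^2."""
    return 4.2 + math.log10(area_km2)

def mw_hanks_bakun(area_km2):
    """Hanks-Bakun bilinear form with commonly quoted coefficients:
    Mw = 3.98 + log10(A) for A <= 537 km^2, else 3.07 + (4/3)*log10(A)."""
    if area_km2 <= 537.0:
        return 3.98 + math.log10(area_km2)
    return 3.07 + (4.0 / 3.0) * math.log10(area_km2)

def mw_weighted(area_km2):
    """Equal-weight average of the two relations, as the Working Group did."""
    return 0.5 * mw_ellsworth_b(area_km2) + 0.5 * mw_hanks_bakun(area_km2)
```

Note the two branches of the bilinear relation join at A = 537 km^2, the slope-change point the abstract refers to; above it, magnitude grows faster with area (slope 4/3 rather than 1).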

  18. Magnitude Problems in Historical Earthquake Catalogs and Their Impact on Seismic Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Mahdyiar, M.; Shen-Tu, B.; Shabestari, K.; Guin, J.

    2010-12-01

    A reliable historical earthquake catalog is a critical component of any regional seismic hazard analysis. In Europe, a number of historical earthquake catalogs have been compiled and used in constructing national or regional seismic hazard maps, for instance, the Switzerland ECOS catalog by the Swiss Seismological Service (2002), the Italy CPTI catalog by the CPTI Working Group (2004), the Greece catalog by Papazachos et al. (2007), the CENEC (central, northern and northwestern Europe) catalog by Grünthal et al. (2009), the Turkey catalog by Kalafat et al. (2007), and the GSHAP catalog by the Global Seismic Hazard Assessment Program (1999). These catalogs spatially overlap with each other to a large extent and employ a uniform magnitude scale (Mw). A careful review of these catalogs has revealed significant magnitude problems which can substantially impact regional seismic hazard assessment: 1) Magnitudes for the same earthquakes in different catalogs are discrepant. Such discrepancies are mainly driven by the different regression relationships used to convert other magnitude scales or intensity into Mw. One consequence is that the magnitudes of many events in one catalog are systematically biased higher or lower with respect to those in another catalog. For example, the magnitudes of large historical earthquakes in the Italy CPTI catalog are systematically higher than those in the Switzerland ECOS catalog. 2) An abnormally high frequency of large-magnitude events is observed for some time periods in which intensities are the main available data. This phenomenon is observed in the Italy CPTI catalog for the period 1870 to 1930, and may be due to biased conversion from intensity to magnitude. 3) A systematic bias in magnitude results in biased estimates of the a- and b-values of the Gutenberg-Richter magnitude-frequency relationships. It also affects the determination of upper-bound magnitudes for various seismic source zones. All of these issues can lead to skewed seismic hazard results, or inconsistent

  19. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed Central

    Tullis, T E

    1996-01-01

    The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space, and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact, the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with the negative logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely route to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668
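The reported log-linear growth of moment rate with the negative log of time-to-failure suggests a simple retrospective estimator: grid-search the failure time that makes the observed points most linear. This illustration is our own, not the paper's procedure, and the synthetic data are hypothetical:

```python
import math

def fit_failure_time(times, log_rates, t_grid):
    """Grid search for the failure time t_f that best linearizes
    log(rate) = a + s * (-log10(t_f - t)), fitting slope s and intercept a
    by least squares at each candidate t_f."""
    best_misfit, best_tf = float("inf"), None
    for tf in t_grid:
        if tf <= max(times):
            continue  # failure time must lie after the last observation
        x = [-math.log10(tf - t) for t in times]
        n = float(len(x))
        mx, my = sum(x) / n, sum(log_rates) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, log_rates))
        s = sxy / sxx
        a = my - s * mx
        misfit = sum((yi - (a + s * xi)) ** 2 for xi, yi in zip(x, log_rates))
        if misfit < best_misfit:
            best_misfit, best_tf = misfit, tf
    return best_tf

# Synthetic check: data generated with true t_f = 10, slope 1, intercept 2
times = [float(t) for t in range(1, 10)]
log_rates = [2.0 - math.log10(10.0 - t) for t in times]
t_hat = fit_failure_time(times, log_rates, [9.2 + 0.05 * i for i in range(30)])
```

The unresolved question raised in the abstract is precisely whether real, discrete microseismicity would trace this curve cleanly enough for such an extrapolation to be useful prospectively.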

  20. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, the historical record reveals that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate this potential seismic hazard, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM; 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of seismograms recorded on broadband seismometers shows less scatter than the peak velocity. The scatter of the peak displacement and that of the peak velocity from accelerograms are similar to each other. The peak displacement of seismograms differs from that of accelerograms, which means that separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave is estimated using two low-pass filters, at 3 Hz and 10 Hz; the 10 Hz low-pass filter yields better estimates than the 3 Hz filter. Most of the peak amplitudes and maximum predominant periods are measured within 1 s of triggering.
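The maximum predominant period can be computed with the recursive τp estimator of Allen and Kanamori (2003); whether this study uses exactly that estimator is an assumption on our part, and the smoothing constant below is illustrative:

```python
import math

def tau_p_max(samples, dt, alpha=0.99):
    """Recursive predominant-period estimator:

        tau_i = 2*pi*sqrt(X_i / D_i)
        X_i = alpha*X_{i-1} + x_i**2
        D_i = alpha*D_{i-1} + v_i**2

    where v is the finite-difference derivative of the (pre-filtered)
    vertical trace; returns the maximum tau over the record. alpha is an
    illustrative smoothing constant, not a value from this study."""
    X = D = 0.0
    tp_max = 0.0
    prev = samples[0]
    for x in samples[1:]:
        v = (x - prev) / dt
        prev = x
        X = alpha * X + x * x
        D = alpha * D + v * v
        if D > 0.0:
            tp_max = max(tp_max, 2.0 * math.pi * math.sqrt(X / D))
    return tp_max

# Sanity check on a pure 1-second-period sinusoid sampled at 200 Hz
dt = 0.005
signal = [math.sin(2.0 * math.pi * i * dt / 1.0) for i in range(2000)]
tp = tau_p_max(signal, dt)
```

Because the estimator divides smoothed displacement energy by smoothed velocity energy, the low-pass filter choice (3 Hz versus 10 Hz in the study) directly controls how much high-frequency energy deflates τp.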

  1. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body‐wave onset and the arrival time of the peak high‐frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2logTop for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high‐frequency (>2  Hz) data, the root mean square (rms) residual between Mw and MTop(M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high‐frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower‐frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200  km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high‐frequency (>2  Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
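The proportionality Mw ∝ 2 log10(Top) reported above can be turned into an estimator once an intercept is calibrated; the constant c below is a hypothetical placeholder, not the paper's fitted value:

```python
import math

def m_top(t_op_seconds, c=6.0):
    """Magnitude from the onset-to-peak time Top via M = 2*log10(Top) + c.
    The slope of 2 is the proportionality reported in the abstract; the
    intercept c is a hypothetical calibration constant."""
    return 2.0 * math.log10(t_op_seconds) + c
```

With this form, a tenfold increase in Top raises the estimate by exactly two magnitude units, which is the expected scaling if Top tracks source dimension and stress drop is scale invariant.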

  2. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% for both M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies significance levels of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faulting. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  3. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    NASA Astrophysics Data System (ADS)

    Noda, Shunta; Ellsworth, William L.

    2016-09-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  4. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  5. Seismomagnetic observation during the 8 July 1986 magnitude 5.9 North Palm Springs earthquake

    USGS Publications Warehouse

    Johnston, M.J.S.; Mueller, R.J.

    1987-01-01

    A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period.

  6. HYPOELLIPSE; a computer program for determining local earthquake hypocentral parameters, magnitude, and first-motion pattern

    USGS Publications Warehouse

    Lahr, John C.

    1999-01-01

    This report provides Fortran source code and program manuals for HYPOELLIPSE, a computer program for determining hypocenters and magnitudes of near regional earthquakes and the ellipsoids that enclose the 68-percent confidence volumes of the computed hypocenters. HYPOELLIPSE was developed to meet the needs of U.S. Geological Survey (USGS) scientists studying crustal and sub-crustal earthquakes recorded by a sparse regional seismograph network. The program was extended to locate hypocenters of volcanic earthquakes recorded by seismographs distributed on and around the volcanic edifice, at elevations above and below the hypocenter. HYPOELLIPSE was used to locate events recorded by the USGS southern Alaska seismograph network from October 1971 to the early 1990s. Both UNIX and PC/DOS versions of the source code of the program are provided along with sample runs.

  7. Earthquake Magnitude Prediction Using Artificial Neural Network in Northern Red Sea Area

    NASA Astrophysics Data System (ADS)

    Alarifi, A. S.; Alarifi, N. S.

    2009-12-01

    Earthquakes are natural hazards that do not happen very often, but they may cause huge losses in life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16,000 events cover a time span of 1970 to 2009; magnitudes range from greater than 0 to less than 7.2, while depths range from greater than 0 to less than 100 km. We propose a new artificial-intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a new feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for other areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as moving averages over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods: it achieves an average absolute error of 2.6%, compared with 3.8%, 7.3%, and 6.17% for the moving average, linear regression, and cubic regression, respectively. 
In this work, we show an analysis
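The comparison described above can be illustrated with a toy version: a small feed-forward network (one hidden layer, trained by plain gradient descent) against the moving-average baseline. The features, network size, training scheme, and synthetic catalog here are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
mags = 3.0 + 1.5 * rng.random(500)                 # synthetic stand-in catalog
X = np.array([mags[i:i + 3] for i in range(len(mags) - 3)])  # last 3 events
y = mags[3:]                                       # next event's magnitude
Xc = X - X.mean()                                  # crude feature centering

# One hidden layer, tanh activation, full-batch gradient descent on MSE.
hidden, lr = 8, 0.05
W1 = rng.normal(0, 0.3, (3, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.3, (hidden, 1)); b2 = np.zeros(1)
for _ in range(1500):
    H = np.tanh(Xc @ W1 + b1)
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
    W2 -= lr * (H.T @ err[:, None]) / len(y)
    b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * (Xc.T @ dH) / len(y)
    b1 -= lr * dH.mean(axis=0)

mae_nn = np.abs((np.tanh(Xc @ W1 + b1) @ W2 + b2).ravel() - y).mean()
mae_ma = np.abs(X.mean(axis=1) - y).mean()         # moving-average baseline
print(round(mae_nn, 3), round(mae_ma, 3))
```

On real catalog features the network can exploit structure the moving average cannot; on this purely random stand-in both converge to similar errors, which is the expected null result.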

  8. Intermediate- and long-term earthquake prediction.

    PubMed

    Sykes, L R

    1996-04-30

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study.

  9. Intermediate- and long-term earthquake prediction.

    PubMed Central

    Sykes, L R

    1996-01-01

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study. PMID:11607658

  10. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors; Journal compilation © 2006 RAS.
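A Bakun & Wentworth-style grid search of the kind described above can be sketched as follows; the attenuation coefficients and the synthetic intensity data are illustrative, not the models fitted for France:

```python
import numpy as np

# Hypothetical intensity attenuation model MMI = c0 + c1*M - c2*log10(Delta);
# the coefficients are illustrative, not those fitted for France.
c0, c1, c2 = 1.0, 1.5, 3.0

def intensity_magnitude(trial_xy, sites, mmi):
    """For one trial epicenter, MI is the mean of the site magnitudes
    implied by the model; the rms misfit scores the trial location."""
    d = np.hypot(sites[:, 0] - trial_xy[0], sites[:, 1] - trial_xy[1])
    d = np.maximum(d, 1.0)                 # avoid log10(0) at the epicenter
    mi = ((mmi - c0 + c2 * np.log10(d)) / c1).mean()
    rms = np.sqrt(np.mean((mmi - (c0 + c1 * mi - c2 * np.log10(d))) ** 2))
    return mi, rms

# Synthetic data: 30 sites around a true epicenter at (0, 0) with M = 6.0.
rng = np.random.default_rng(2)
sites = rng.uniform(-100.0, 100.0, (30, 2))        # site coordinates, km
d_true = np.maximum(np.hypot(sites[:, 0], sites[:, 1]), 1.0)
mmi = c0 + c1 * 6.0 - c2 * np.log10(d_true)

# Grid search: the preferred epicenter minimizes the rms intensity misfit.
grid = [(x, y) for x in range(-50, 51, 10) for y in range(-50, 51, 10)]
best = min(grid, key=lambda p: intensity_magnitude(np.array(p), sites, mmi)[1])
mi_best, rms_best = intensity_magnitude(np.array(best), sites, mmi)
print(best, round(mi_best, 2))
```

Repeating the search over bootstrap resamples of the intensity assignments, as the abstract describes, yields the confidence regions for location.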

  11. Earthquake frequency-magnitude distribution and fractal dimension in mainland Southeast Asia

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi; Choowong, Montri

    2014-12-01

    The 2004 Sumatra and 2011 Tohoku earthquakes highlighted the need for a more accurate understanding of earthquake characteristics in both regions. In this study, the a and b values of the frequency-magnitude distribution (FMD) and the fractal dimension (Dc) were investigated simultaneously for 13 seismic source zones recognized in mainland Southeast Asia (MLSEA). Using the complete part of the earthquake dataset, the calculated b and Dc values were found to imply variations in seismotectonic stress. The Dc-b and Dc-(a/b) relationships were investigated to categorize the level of earthquake hazard of individual seismic source zones; the calibration curves illustrate a negative correlation between the Dc and b values (Dc = 2.80 - 1.22b) and a positive correlation between the Dc and a/b ratios (Dc = 0.27(a/b) - 0.01), with similar regression coefficients (R² = 0.65 to 0.68) for both regressions. According to the obtained relationships, the Hsenwi-Nanting and Red River fault zones revealed low stress accumulations. Conversely, the Sumatra-Andaman interplate and intraslab zones, the Andaman Basin, and the Sumatra fault zone were identified as high-tectonic-stress regions that may pose risks of generating large earthquakes in the future.
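The b value that enters relationships like Dc = 2.80 − 1.22b is commonly estimated with Aki's maximum-likelihood formula; a small sketch on a synthetic Gutenberg-Richter catalog (the abstract does not state which estimator the authors used):

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b value for events at or above the
    completeness magnitude mc, binned at interval dm (dm=0 if unbinned)."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with a true b value of 1.0: above the
# completeness magnitude, magnitudes are exponential with scale log10(e)/b.
rng = np.random.default_rng(3)
mags = 2.0 + rng.exponential(np.log10(np.e) / 1.0, 5000)
b = b_value(mags, mc=2.0, dm=0.0)      # continuous magnitudes, no binning

# Fractal dimension implied by the paper's empirical calibration curve:
dc = 2.80 - 1.22 * b
print(round(b, 2), round(dc, 2))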

  12. 76 FR 19123 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Earthquake Rupture Forecast (UCERF3) in summer 2012, on the recent earthquake and tsunami in Japan, and on ... U.S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. ... Earthquake Prediction Evaluation Council (NEPEC) will hold a 1-day meeting on April 16, 2011. The ...

  13. A moment-tensor catalog for intermediate magnitude earthquakes in Mexico

    NASA Astrophysics Data System (ADS)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Martínez-Peláez, Liliana; Franco, Sara; Iglesias Mendoza, Arturo

    2016-04-01

    Located among five tectonic plates, Mexico is one of the world's most seismically active regions. Earthquake focal mechanisms provide important information on the active tectonics. A widespread technique for estimating the earthquake magnitude and focal mechanism is the inversion for the moment tensor, obtained by minimizing a misfit function that measures the difference between synthetic and observed seismograms. An important element in the estimation of the moment tensor is an appropriate velocity model, which allows for the calculation of accurate Green's functions, so that the differences between observed and synthetic seismograms are due to the source of the earthquake rather than the velocity model. However, calculating accurate synthetic seismograms becomes progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes (M > 5.0) excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these events, using 1D velocity models to compute Green's functions works well, and they are well characterized by the seismic moment tensors reported in global catalogs (e.g., USGS fast moment tensor solutions and GCMT). The opposite occurs for small and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle. Accurately modeling the Green's functions for the smaller events in a large heterogeneous area requires 3D or regionalized 1D models. To obtain a rapid estimate of earthquake magnitude, the National Seismological Service in Mexico (Servicio Sismológico Nacional, SSN) automatically calculates seismic moment tensors for events in Mexican territory (Franco et al., 2002; Nolasco-Carteño, 2006). However, for intermediate-magnitude and small earthquakes the signal-to-noise ratio can be low at many of the seismic stations, and without careful selection and filtering of the data, obtaining a stable focal mechanism
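Because the moment tensor enters the seismograms linearly, the inversion described above reduces to least squares once Green's functions are available. A schematic sketch with random stand-in Green's functions (real applications use functions computed from a velocity model):

```python
import numpy as np

# The moment-tensor inverse problem is linear in the six independent tensor
# components: d = G @ m.  The Green's functions G here are random stand-ins;
# real ones come from a (1-D or 3-D) velocity model.
rng = np.random.default_rng(4)
n_samples = 600
G = rng.normal(size=(n_samples, 6))
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])  # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + 0.05 * rng.normal(size=n_samples)   # noisy "observations"

# Least-squares solution minimizing ||d - G m||^2:
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))
```

With poor signal-to-noise or an inaccurate velocity model, the columns of G are effectively wrong, and the recovered mechanism becomes unstable, which is the difficulty the abstract points out for the smaller events.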

  14. Frequency-magnitude statistics and spatial correlation dimensions of earthquakes at Long Valley caldera, California

    USGS Publications Warehouse

    Barton, D.J.; Foulger, G.R.; Henderson, J.R.; Julian, B.R.

    1999-01-01

    Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.

  15. Prospects for earthquake prediction and control

    USGS Publications Warehouse

    Healy, J.H.; Lee, W.H.K.; Pakiser, L.C.; Raleigh, C.B.; Wood, M.D.

    1972-01-01

    The San Andreas fault is viewed, according to the concepts of seafloor spreading and plate tectonics, as a transform fault that separates the Pacific and North American plates and along which relative movements of 2 to 6 cm/year have been taking place. The resulting strain can be released by creep, by earthquakes of moderate size, or (as near San Francisco and Los Angeles) by great earthquakes. Microearthquakes, as mapped by a dense seismograph network in central California, generally coincide with zones of the San Andreas fault system that are creeping. Microearthquakes are few and scattered in zones where elastic energy is being stored. Changes in the rate of strain, as recorded by tiltmeter arrays, have been observed before several earthquakes of about magnitude 4. Changes in fluid pressure may control timing of seismic activity and make it possible to control natural earthquakes by controlling variations in fluid pressure in fault zones. An experiment in earthquake control is underway at the Rangely oil field in Colorado, where the rates of fluid injection and withdrawal in experimental wells are being controlled. © 1972.

  16. New data about small-magnitude earthquakes of the ultraslow-spreading Gakkel Ridge, Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Morozov, Alexey N.; Vaganova, Natalya V.; Ivanova, Ekaterina V.; Konechnaya, Yana V.; Fedorenko, Irina V.; Mikhaylova, Yana A.

    2016-01-01

    At the present time, detailed bathymetric, gravimetric, magnetometric, petrological, and seismic (mb > 4) data are available for the Gakkel Ridge. However, so far not enough information has been obtained on the distribution of small-magnitude earthquakes (or microearthquakes) within the ridge area, due to the absence of a suitable observation system. With the ZFI seismic station (80.8° N, 47.7° E), operating since 2011 on the Franz Josef Land Archipelago, we can now register small-magnitude earthquakes down to ML 1.5 within the Gakkel Ridge area. This article presents the results and analysis of ZFI station seismic monitoring for the period from December 2011 to January 2015. In order to improve the accuracy of earthquake epicenter locations, velocity models and regional seismic-phase travel times for spreading ridges within the Euro-Arctic Region have been calculated. The Gakkel Ridge is seismically active despite having the lowest spreading velocity among global mid-ocean ridges. Quiet periods alternate with periods of higher seismic activity. Earthquake epicenters are unevenly distributed across the area. Most of the epicenters are confined to the Sparsely Magmatic Zone (SMZ), more specifically to the area between 1.5° E and 19.0° E. We hypothesize that the concentration of most earthquakes in the SMZ segment can be explained by the amagmatic character of spreading in this segment. The structuring of this part of the ridge is characterized by the prevalence of tectonic processes, not magmatic or metamorphic ones.

  17. Stress drop in the sources of intermediate-magnitude earthquakes in northern Tien Shan

    NASA Astrophysics Data System (ADS)

    Sycheva, N. A.; Bogomolov, L. M.

    2014-05-01

    The paper is devoted to estimating the dynamic parameters of 14 earthquakes of intermediate magnitude (energy class 11 to 14) that occurred in the northern Tien Shan. To obtain estimates of these parameters, including the stress drop, which could then be applied in crustal stress reconstruction by the technique suggested by Yu.L. Rebetsky (Schmidt Institute of Physics of the Earth, Russian Academy of Sciences), we have improved the algorithms and programs for calculating the spectra of the seismograms. The updated code accounts for site responses and spectral transformations during the propagation of seismic waves through the medium (the effect of finite Q-factor). By applying the new approach to the analysis of seismograms recorded by the KNET seismic network, we calculated the source radii (Brune radius), scalar seismic moment, and stress drop (release) for the 14 earthquakes studied. The analysis revealed scatter in the source radii and stress drops even among earthquakes of almost identical energy class. The stress drop for different earthquakes ranges from 1 to 75 bar. We have also determined the focal mechanisms and stress regime of the Earth's crust. It is worth noting that during the period considered, strong seismic events with energy class above 14 were absent within the segment covered by the KNET stations.
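The Brune source radius and stress drop mentioned above follow from standard formulas; a short sketch with illustrative numbers (the corner frequency, shear-wave speed, and magnitude are assumptions, not values from the paper):

```python
import math

def brune_radius(fc_hz, beta_m_s=3500.0):
    """Brune source radius from the S-wave corner frequency
    (shear-wave speed of 3.5 km/s assumed)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop(m0_nm, radius_m):
    """Static stress drop for a circular source, 7*M0/(16*r^3), in Pa."""
    return 7.0 * m0_nm / (16.0 * radius_m ** 3)

# Illustrative event: Mw 4.0 (Hanks-Kanamori: log10 M0 = 1.5*Mw + 9.1)
# with an assumed corner frequency of 2 Hz.
m0 = 10 ** (1.5 * 4.0 + 9.1)          # seismic moment, N*m
r = brune_radius(2.0)                  # source radius, m
sd_bar = stress_drop(m0, r) / 1e5      # 1 bar = 1e5 Pa
print(round(r), round(sd_bar, 1))
```

Because the radius enters the stress drop cubed, modest scatter in measured corner frequencies translates into the order-of-magnitude scatter in stress drop that the abstract reports.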

  18. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.

  19. Earthquake Magnitude: A Teaching Module for the Spreadsheets Across the Curriculum Initiative

    NASA Astrophysics Data System (ADS)

    Wetzel, L. R.; Vacher, H. L.

    2006-12-01

    Spreadsheets Across the Curriculum (SSAC) is a library of computer-based activities designed to reinforce or teach quantitative-literacy or mathematics concepts and skills in context. Each activity (called a "module" in the SSAC project) consists of a PowerPoint presentation with embedded Excel spreadsheets. Each module focuses on one or more problems for students to solve. Each student works through a presentation, thinks about the in-context problem, figures out how to solve it mathematically, and builds the spreadsheets to calculate and examine answers. The emphasis is on mathematical problem solving. The intention is for the in-context problems to span the entire range of subjects where quantitative thinking, number sense, and math non-anxiety are relevant. The self-contained modules aim to teach quantitative concepts and skills in a wide variety of disciplines (e.g., health care, finance, biology, and geology). For example, in the Earthquake Magnitude module students create spreadsheets and graphs to explore earthquake magnitude scales, wave amplitude, and energy release. In particular, students realize that earthquake magnitude scales are logarithmic. Because each step in magnitude represents a 10-fold increase in wave amplitude and approximately a 30-fold increase in energy release, large earthquakes are much more powerful than small earthquakes. The module has been used as laboratory and take-home exercises in small structural geology and solid earth geophysics courses with upper-level undergraduates. Anonymous pre- and post-tests assessed students' familiarity with Excel as well as other quantitative skills. The SSAC library consists of 27 modules created by a community of educators who met for one-week "module-making workshops" in Olympia, Washington, in July of 2005 and 2006. 
The educators designed the modules at the workshops both to use in their own classrooms and to make available for others to adopt and adapt at other locations and in other classes.
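The logarithmic-scale point the module teaches can be checked directly with the Gutenberg-Richter energy-magnitude relation (log10 E = 1.5M + 4.8, E in joules):

```python
def energy_joules(m):
    """Gutenberg-Richter energy-magnitude relation: log10 E = 1.5*M + 4.8."""
    return 10 ** (1.5 * m + 4.8)

# One magnitude step: 10x in wave amplitude, ~31.6x in radiated energy
# (10**1.5), which is the "approximately 30-fold" figure quoted above.
amp_ratio = 10 ** (7.0 - 6.0)
energy_ratio = energy_joules(7.0) / energy_joules(6.0)
print(amp_ratio, round(energy_ratio, 1))
```
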

  20. Instrumental magnitude constraints for the 1889 Chilik and the 1887 Verny earthquake, Central Asia

    NASA Astrophysics Data System (ADS)

    Krueger, Frank; Kulikova, Galina; Landgraf, Angela

    2016-04-01

    A series of four large earthquakes hit the continental collision region north of Lake Issyk Kul in the years 1885, 1887, 1889 and 1911, with magnitudes above 6.9. The largest event was the Chilik earthquake of July 11, 1889, with M 8.3 based on macroseismic intensities, recently confirmed by Bindi et al. (2013). Despite the existence of several juvenile fault scarps in the epicentral region, no through-going surface rupture of the appropriate scale has been located. A rupture length of ~200 km and slip of ~10 m are expected for M 8.3 (Blaser et al., 2010). The lack of highly concentrated epicentral intensities requires a hypocenter depth of 40 km, located in the lower crust. Late coda envelope amplitude comparison of modern events in Central Asia recorded at stations in Northern Germany with the reproduction of a Rebeur-Paschwitz pendulum seismogram recorded at Wilhelmshaven results in a magnitude estimate of Mw 8.0-8.5. Amplitude comparison of long-period surface waves measured on magnetograms at two British geomagnetic observatories favors a magnitude of Mw 8.0. Both can be made consistent if a station site factor of 2-4 for the Wilhelmshaven station is applied (for which indications exist). A truly deep centroid depth (h > 40 km) is unlikely (from coda amplitude scaling), and a shallow rupture of appropriate length has not yet been discovered. Both arguments point to a possible lower-crust contribution to the seismic moment. Magnetogram amplitudes for the June 8, 1887, Verny earthquake point to a magnitude of M ~7.5-7.6 (preliminary).
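The quoted rupture parameters can be cross-checked against the standard moment-magnitude relation; the rigidity and fault width below are illustrative assumptions, not values from the abstract:

```python
# Cross-check of the quoted rupture parameters for M 8.3, assuming the
# Hanks-Kanamori relation and a rectangular fault; rigidity and width
# are illustrative assumptions.
mu = 3.0e10                     # crustal rigidity, Pa (assumed)
L, W = 200e3, 50e3              # length from the abstract; width assumed (m)
m0 = 10 ** (1.5 * 8.3 + 9.1)    # seismic moment for Mw 8.3, N*m
slip = m0 / (mu * L * W)        # average slip from M0 = mu*L*W*D
print(round(slip, 1))
```

The result is of the same order as the expected ~10 m of slip, consistent with the scale of rupture the abstract argues should be visible at the surface if the event were shallow.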

  1. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. The high spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b values (≥ ~5). We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (MW < 1.5, MW ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by

  2. A radon detector for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Dacey, James

    2010-04-01

    Recent events in Haiti and Chile remind us of the devastation that can be wrought by an earthquake, especially when it strikes without warning. For centuries, people living in seismically active regions have reported a number of strange occurrences immediately prior to a quake, including unexpected weather phenomena and even unusual behaviour among animals. In more recent times, some scientists have suggested other precursors, such as sporadic bursts of electromagnetic radiation from the fault zone. Unfortunately, none of these suggestions has led to a robust, scientific method for earthquake prediction. Now, however, a group of physicists, led by physics Nobel laureate Georges Charpak, has developed a new detector that could measure one of the more testable earthquake precursors - the suggestion that radon gas is released from fault zones prior to earth slipping, writes James Dacey.

  3. Real-Time Estimation of Earthquake Location, Magnitude and Rapid Shake map Computation for the Campania Region, Southern Italy

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Convertito, V.; de Matteis, R.; Iannaccone, G.; Lancieri, M.; Lomax, A.; Satriano, C.

    2005-12-01

    introducing an evolutionary strategy aimed at obtaining a progressively refined estimate of the maximum-probability volume as time goes on. The real-time magnitude estimate will take advantage of the high spatial density of the network in the source region and the wide dynamic range of the installed instruments. Based on offline analysis of high-quality strong-motion databases recorded in Italy and worldwide, several methods will be checked and validated, using different observed quantities (peak amplitude, dominant frequency, squared velocity integral, etc.) measured on seismograms as a function of time. Following the ElarmS methodology (Allen, 2004), peak ground attenuation relations can be used to predict the distribution of maximum ground shaking, as updated estimates of earthquake location and magnitude become progressively available from the Early Warning system starting from the time of first P-wave detection. As measurements of peak ground quantities for the current earthquake become available from the network, these values are progressively used to adjust an "ad hoc" attenuation relation determined for the Campania region using the stochastic approach proposed by Boore (1993).
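A ground-motion prediction equation of the common form used by such systems can be sketched as below; the coefficients are purely illustrative, not the relations cited in the abstract:

```python
import math

# Illustrative GMPE of the common form log10(PGA) = a + b*M - c*log10(R);
# the coefficients below are made up for demonstration only.
a, b, c = -2.0, 0.5, 1.0

def predict_pga_g(mag, dist_km):
    """Median PGA (in g) at hypocentral distance dist_km for magnitude mag."""
    return 10 ** (a + b * mag - c * math.log10(max(dist_km, 1.0)))

# As the early-warning magnitude estimate is refined, the predicted
# shaking at a site 30 km away updates accordingly:
for m in (5.0, 5.5, 6.0):
    print(m, round(predict_pga_g(m, 30.0), 3))
```

This is the sense in which an evolving magnitude estimate drives an evolving shake map: each magnitude update propagates through the attenuation relation to every site.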

  4. Earthquake prediction in seismogenic areas of the Iberian Peninsula based on computational intelligence

    NASA Astrophysics Data System (ADS)

    Morales-Esteban, A.; Martínez-Álvarez, F.; Reyes, J.

    2013-05-01

    A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only very few and very recent studies have been conducted on earthquake prediction. Two kinds of predictions are provided in this study: (a) the probability of an earthquake of magnitude equal to or larger than a preset threshold occurring within the next 7 days; and (b) the probability of an earthquake within a limited magnitude interval occurring during the next 7 days. First, the physical fundamentals related to earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the configuration chosen is justified. Then, the ANNs were trained in both areas: the Alborán Sea and the Western Azores-Gibraltar fault. Later, the ANNs were tested in both areas for a period of time immediately subsequent to the training period. Statistical tests are provided showing meaningful results. Finally, the ANNs were compared to other well-known classifiers, showing quantitatively and qualitatively better results. The authors expect that the results obtained will encourage researchers to conduct further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; ANNs prove particularly reliable for earthquake prediction; geophysical information modeling the soil behavior is used as the ANN's input data; successful analysis of one region with large seismic activity.

  5. Earthquakes of moderate magnitude recorded at the Salt Lake paleoseismic site on the Haiyuan Fault, China

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Shao, Yanxiu; Xie, Kejia; Klinger, Yann; Lei, Zhongsheng; Yuan, Daoyang

    2013-04-01

    The active left-lateral Haiyuan fault is one of the major continental strike-slip faults in the Tibetan Plateau. The last large earthquake to occur on the fault was the great 1920 M~8 Haiyuan earthquake, with a 230-km-long surface rupture and a maximum surface slip of 11 m (Zhang et al., 1987). Much less is known about its earthquake recurrence behavior. We present preliminary results of a paleoseismic study at the Salt Lake site, at a shortcut pull-apart basin, within the section that broke in 1920. 3D excavation at the site exposed 7 m of fine-grained, layered stratigraphy and ample evidence of 6-7 paleoseismic events. AMS dating of charcoal fragments constrains the events to the past 3600 years. Of these, the youngest 3-4 events are recorded in the top 2.5 m section of distinctive thinly layered stratigraphy of alternating reddish well-sorted granule sand and light gray silty fine sand. The section has been deposited since ~1550 A.D., suggesting 3-4 events occurred during the past 400 years and an average recurrence interval of less than 150 years, surprisingly short for the Haiyuan fault, with a slip rate of arguably ~10 mm/yr or less. A comparison of the paleoseismic with the historical earthquake record is possible for the Haiyuan area, a region with written accounts of earthquake effects dating back to 1000 A.D. Between 1600 A.D. and the present, each of the four paleoseismic events can be correlated to one historically recorded event, within the uncertainties of the paleoseismic age ranges. Nonetheless, these events were definitely not 1920-type large earthquakes, because their shaking effects were recorded only locally, rather than regionally. More and more studies show that M5 to 6 events are capable of causing ground deformation. Our results indicate that it can be misleading to simply use the time between consecutive events as the recurrence interval at a single paleoseismic site, without information on event size. 
Mixed events of different magnitudes in the

  6. Influence of weak motion data on the magnitude dependence of the PGA prediction model in Austria

    NASA Astrophysics Data System (ADS)

    Jia, Yan

    2015-04-01

    Data recorded by the STS2 sensors of the Austrian Seismic Network were differentiated and used to derive the PGA prediction model for Austria (Jia and Lenhardt, 2010). Before using this model in our hazard assessment and real-time shakemap, it is necessary to validate it and understand it in depth. In this paper, the influence of weak-motion data on the magnitude dependence of our prediction model was studied. In addition, spatial PGA residuals between measurements and predictions were investigated. A total of 127 earthquakes with magnitudes between 3 and 5.4 were used to derive the PGA prediction model published in 2011. Unfortunately, 90% of the PGA measurements used came from events with a magnitude smaller than 4. Only ten quakes have a magnitude larger than 4, which is the magnitude range most important for hazard assessment. In this investigation, the 127 earthquakes were divided into two groups: the first includes only events with a magnitude smaller than 4, while the second contains quakes with a magnitude larger than 4. Using the same model formulation for PGA attenuation as in 2011, the model coefficients were inverted from the measurements of each group and compared to those based on the complete data set. The group of weak quakes returned results differing only slightly from those based on all 127 events, while the group of strong quakes (ml > 4) gave a greater magnitude dependence than the model published in 2011. The distance coefficients stayed nearly unchanged across all three inversions. As a second step, spatial PGA residuals between the measurements and the predictions of our model were investigated. As explained in Jia and Lenhardt (2013), there are some differences in site amplification between western and eastern Austria. For a fair comparison, residuals were normalized for each station before the investigation. Then normalized

  7. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    NASA Astrophysics Data System (ADS)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula, and Shikoku by borehole strainmeters carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, estimated from only several minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic-wave data. The moment magnitude can be estimated about 6 minutes after the origin time, and 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. By contrast, the preliminary magnitude based on seismic waves, announced by the Japan Meteorological Agency just after the earthquake, was 7.9. Coseismic strain steps are generally considered less reliable than seismic waves and GPS data. However, our results show that coseismic strain steps observed by carefully installed and monitored borehole strainmeters are reliable enough to determine earthquake magnitude precisely and rapidly. Several methods are currently being proposed for grasping the magnitude of a great earthquake earlier, in order to reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.
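    The conversion from an inverted seismic moment to the reported moment magnitude can be sketched with the standard Hanks-Kanamori relation (a generic formula, not the authors' inversion code; the example moment is illustrative):

    ```python
    import math

    def moment_magnitude(m0_newton_meters):
        # Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m
        return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

    # a seismic moment of ~1.4e22 N*m corresponds to roughly Mw 8.7
    print(round(moment_magnitude(1.4e22), 1))
    ```

    Because Mw depends only logarithmically on M0, a moment estimate good to a factor of two already pins the magnitude to within about 0.2 units, which is why a few minutes of strain data can suffice.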

  8. Testing time-predictable earthquake recurrence by direct measurement of strain accumulation and release.

    PubMed

    Murray, Jessica; Segall, Paul

    2002-09-19

    Probabilistic estimates of earthquake hazard use various models for the temporal distribution of earthquakes, including the 'time-predictable' recurrence model formulated by Shimazaki and Nakata (which incorporates the concept of elastic rebound described as early as 1910 by H. F. Reid). This model states that an earthquake occurs when the fault recovers the stress relieved in the most recent earthquake. Unlike time-independent models (for example, Poisson probability), the time-predictable model is thought to encompass some of the physics behind the earthquake cycle, in that earthquake probability increases with time. The time-predictable model is therefore often preferred when adequate data are available, and it is incorporated in hazard predictions for many earthquake-prone regions, including northern California, southern California, New Zealand and Japan. Here we show that the model fails in what should be an ideal locale for its application: Parkfield, California. We estimate rigorous bounds on the predicted recurrence time of the magnitude ~6 1966 Parkfield earthquake through inversion of geodetic measurements, and we show that, according to the time-predictable model, another earthquake should have occurred by 1987. The model's poor performance in a relatively simple tectonic setting does not bode well for its successful application to the many areas of the world characterized by complex fault interactions.
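    The time-predictable prediction reduces to a simple ratio: the next event is "due" once steady loading has restored the slip released by the last one. A minimal numeric sketch (the slip and loading-rate values below are illustrative placeholders, not the paper's inversion results):

    ```python
    def time_predictable_recurrence(coseismic_slip_m, slip_rate_m_per_yr):
        # time-predictable model: recurrence interval = slip released in the
        # last event divided by the interseismic loading (slip) rate
        return coseismic_slip_m / slip_rate_m_per_yr

    # e.g. ~0.5 m of coseismic slip reloaded at ~25 mm/yr -> ~20 yr,
    # i.e. an event in 1966 would be "due" again by the mid-1980s
    print(round(time_predictable_recurrence(0.5, 0.025)))
    ```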

  9. Earthquake prediction activities and Damavand earthquake precursor test site in Iran

    NASA Astrophysics Data System (ADS)

    Mokhtari, Mohammad

    2010-01-01

    Iran has long been known as one of the most seismically active areas of the world, and it frequently suffers destructive and catastrophic earthquakes that cause heavy loss of human life and widespread damage. The Alborz region in the northern part of Iran is an active EW-trending mountain belt about 100 km wide and 600 km long. The Alborz range is bounded by the Talesh Mountains to the west and the Kopet Dagh Mountains to the east and consists of several sedimentary and volcanic layers of Cambrian to Eocene age that were deformed during the late Cenozoic collision. Several active faults affect the central Alborz; the main ones are the North Tehran and Mosha faults. The Mosha fault is one of the major active faults in the central Alborz, as shown by its strong historical seismicity and its clear morphological signature. Situated in the vicinity of Tehran, this 150-km-long, N100°E-trending fault represents an important potential seismic source. For earthquake monitoring and possible future prediction/precursory purposes, a test site has been established in the Alborz mountain region. The proximity to the capital of Iran with its high population density, low-frequency but high-magnitude earthquake occurrence, and active faults with their historical earthquake events were the main criteria for this selection. In addition, within the test site there are hot springs and deep water wells that can be used for physico-chemical and radon-gas analysis in earthquake precursory studies. The present activities include magnetic measurements; application of the methodology for identification of seismogenic nodes for earthquakes of M ≥ 6.0 in the Alborz region developed by the International Institute of Earthquake Prediction Theory and Mathematical Geophysics, Russian Academy of Sciences, Moscow (IIEPT&MG RAS); a feasibility study using a dense seismic network for identification of future locations of seismic monitoring stations and application

  10. Extreme magnitude earthquakes and their economical impact: The Mexico City case

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Mario, C.

    2005-12-01

    The consequences (estimated from the human and economic losses) of the recent worldwide occurrence of extreme-magnitude (for the region under consideration) earthquakes, such as the 19 September 1985 Mexico earthquake (Richter magnitude Ms 8.1, moment magnitude Mw 8.01) or the 26 December 2004 Indonesia earthquake (Ms 9.4, Mw 9.3), stress the importance of performing seismic hazard analyses that specifically incorporate this possibility. Here we present and apply a methodology, based on plausible extreme seismic scenarios and the computation of their associated synthetic accelerograms, to estimate the seismic hazard on Mexico City (MC) stiff and compressible surficial soils. The uncertainties about the characteristics of the potential finite seismic sources, as well as those related to the dynamic properties of MC compressible soils, are taken into account. The economic consequences (i.e., the seismic risk = seismic hazard x economic cost) implicit in the seismic coefficients proposed in MC seismic codes before (1976) and after the 1985 earthquake (2004) are analyzed. Based on the latter and on an acceptable-risk criterion, a maximum seismic coefficient (MSC) of 1.4g (g = 9.81 m/s2) of the elastic acceleration design spectra (5 percent damping), which has a probability of exceedance of 2.4 x 10-4, seems appropriate for analyzing the seismic behavior of infrastructure located on MC compressible soils, if extreme Mw 8.5 subduction thrust earthquakes (similar to the one that occurred on 19 September 1985, with an observed equivalent MSC of 1g) occur in the next 50 years.
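    If the quoted probability of exceedance of 2.4 x 10-4 is read as an annual rate (an assumption on our part; the abstract does not state its time basis), the implied 50-year exceedance probability under a Poisson occurrence model is:

    ```python
    import math

    def poisson_exceedance(annual_rate, years):
        # probability of at least one exceedance in `years`
        # under a Poisson (time-independent) occurrence model
        return 1.0 - math.exp(-annual_rate * years)

    # assumed annual rate of 2.4e-4 over a 50-year exposure window
    print(round(poisson_exceedance(2.4e-4, 50), 4))
    ```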

  11. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  12. Is It Possible to Predict Strong Earthquakes?

    NASA Astrophysics Data System (ADS)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with subsequent processing of brain activity signals generated in specific types of laboratory rats that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using groundwater sodium-ion concentration data from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of recent scientific reports confirming that rodents sense imminent earthquakes and the population-genetic model of Kirschvink (Bull. Seismol. Soc. Am. 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal-animal-behavior observations, enables one to apply the standard "input-sensor-response" approach to determine which input signals trigger specific seismic escape brain activity responses.

  13. A local earthquake coda magnitude and its relation to duration, moment M0, and local Richter magnitude ML

    NASA Technical Reports Server (NTRS)

    Suteau, A. M.; Whitcomb, J. H.

    1977-01-01

    A relationship was found between the seismic moment, M0, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming that the end of the coda is composed of backscattered surface waves due to lateral heterogeneity in the shallow crust, following Aki. Using the linear relationship between the logarithm of M0 and the local Richter magnitude ML, a relationship between ML and t was found. This relationship was used to calculate a coda magnitude MC, which was compared to ML for Southern California earthquakes that occurred during the period from 1972 to 1975.
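    Duration-based coda magnitudes of this kind take the generic form MC = a + b*log10(t). The sketch below uses coefficients similar to the classic Lee et al. (1972) California relation as placeholders, not the values fitted in this paper, and omits the distance term often included in practice:

    ```python
    import math

    def coda_magnitude(duration_s, a=-0.87, b=2.0):
        # generic coda-duration magnitude MC = a + b*log10(t);
        # a and b are illustrative placeholders (regionally fitted in practice)
        return a + b * math.log10(duration_s)

    # a 100-second coda duration maps to roughly magnitude 3 with these coefficients
    print(round(coda_magnitude(100.0), 2))
    ```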

  14. On the earthquake predictability of fault interaction models

    PubMed Central

    Marzocchi, W; Melini, D

    2014-01-01

    Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex post by a physics-based model in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems rooted in the physics of fault interaction through Coulomb stress changes, such modeling often does not significantly increase earthquake predictability. The earthquake predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643

  15. The 2009 earthquake, magnitude mb 4.8, in the Pantanal Wetlands, west-central Brazil.

    PubMed

    Dias, Fábio L; Assumpção, Marcelo; Facincani, Edna M; França, George S; Assine, Mario L; Paranhos, Antônio C; Gamarra, Roberto M

    2016-09-01

    The main goal of this paper is to characterize the Coxim earthquake that occurred on June 15, 2009 in the Pantanal Basin and to discuss the relationship between its faulting mechanism and the Transbrasiliano Lineament. The earthquake had a maximum intensity of MM V, causing damage to farm houses, and was felt in several surrounding cities, including Campo Grande and Goiânia. The event had a magnitude of mb 4.8 and a depth of 6 km, i.e., it occurred in the upper crust, within the basement and 5 km below the Cenozoic sedimentary cover. The mechanism, thrust faulting with a lateral component, was obtained from P-wave first-motion polarities and confirmed by regional waveform modelling. The two nodal planes have orientations (strike/dip) of 300°/55° and 180°/55°, and the orientation of the P-axis is approximately NE-SW. The results are similar to those for the Pantanal earthquake of 1964, with mb 5.4 and a NE-SW compressional axis. Both events show that the Pantanal Basin is a seismically active area under compressional stress. The focal mechanisms of the 1964 and 2009 events have no nodal plane that could be directly associated with the main SW-NE-trending Transbrasiliano system, indicating that a direct link between the Transbrasiliano and the seismicity in the Pantanal Basin is improbable. PMID:27580359

  17. Collective properties of injection-induced earthquake sequences: 2. Spatiotemporal evolution and magnitude frequency distributions

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Suckale, Jenny; Huang, Yihe

    2016-05-01

    Probabilistic seismic hazard assessment for induced seismicity depends on reliable estimates of the locations, rate, and magnitude frequency properties of earthquake sequences. The purpose of this paper is to investigate how variations in these properties emerge from interactions between an evolving fluid pressure distribution and the mechanics of rupture on heterogeneous faults. We use an earthquake sequence model, developed in the first part of this two-part series, that computes pore pressure evolution, hypocenter locations, and rupture lengths for earthquakes triggered on 1-D faults with spatially correlated shear stress. We first consider characteristic features that emerge from a range of generic injection scenarios and then focus on the 2010-2011 sequence of earthquakes linked to wastewater disposal into two wells near the towns of Guy and Greenbrier, Arkansas. Simulations indicate that one reason for an increase of the Gutenberg-Richter b value for induced earthquakes is the different rates of reduction of static and residual strength as fluid pressure rises. This promotes fault rupture at lower stress than in equivalent tectonic events. Further, b value is shown to decrease with time (the induced seismicity analog of b value reduction toward the end of the seismic cycle) and to be higher on faults with lower initial shear stress. This suggests that faults in the same stress field that have different orientations, and therefore different levels of resolved shear stress, should exhibit seismicity with different b values. A deficit of large-magnitude events is noted when injection occurs directly onto a fault, and this is shown to depend on the geometry of the pressure plume. Finally, we develop models of the Guy-Greenbrier sequence that capture approximately the onset, rise and fall, and southwest migration of seismicity on the Guy-Greenbrier fault. Constrained by the migration rate, we estimate the permeability of a 10 m thick critically stressed basement

  18. 77 FR 53225 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: Department of the... National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1-1/2 day meeting on September 17 and 18, 2012, at the U.S. Geological Survey National Earthquake Information Center (NEIC),...

  19. Spatial variations in the frequency-magnitude distribution of earthquakes at Mount Pinatubo volcano

    USGS Publications Warehouse

    Sanchez, J.J.; McNutt, S.R.; Power, J.A.; Wyss, M.

    2004-01-01

    The frequency-magnitude distribution of earthquakes measured by the b-value is mapped in two and three dimensions at Mount Pinatubo, Philippines, to a depth of 14 km below the summit. We analyzed 1406 well-located earthquakes with magnitudes MD ≥ 0.73, recorded from late June through August 1991, using the maximum likelihood method. We found that b-values are higher than normal (b = 1.0) and range between b = 1.0 and b = 1.8. The computed b-values are lower in the areas adjacent to and west-southwest of the vent, whereas two prominent regions of anomalously high b-values (b ≈ 1.7) are resolved, one located 2 km northeast of the vent between 0 and 4 km depth and a second located 5 km southeast of the vent below 8 km depth. The statistical differences between selected regions of low and high b-values are established at the 99% confidence level. The high b-value anomalies are spatially well correlated with low-velocity anomalies derived from earlier P-wave travel-time tomography studies. Our dataset was not suitable for analyzing changes in b-values as a function of time. We infer that the high b-value anomalies around Mount Pinatubo are regions of increased crack density, and/or high pore pressure, related to the presence of nearby magma bodies.
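    The maximum-likelihood b-value used in this kind of mapping is typically the Aki (1965) estimator with Utsu's correction for magnitude binning. A generic sketch of the estimator (not the authors' mapping code):

    ```python
    import math

    def b_value_mle(mags, m_min, dm=0.1):
        # Aki (1965) maximum-likelihood b-value with Utsu's binning correction:
        # b = log10(e) / (mean(M) - (m_min - dm/2)),
        # where m_min is the completeness magnitude and dm the bin width
        m = [x for x in mags if x >= m_min]
        mean_m = sum(m) / len(m)
        return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))
    ```

    In spatial b-value mapping, this estimator is applied repeatedly to the N earthquakes nearest each grid node, and confidence intervals are attached via Shi and Bolt (1982)-style standard errors or bootstrap resampling.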

  20. Recurrence quantification analysis for detecting dynamical changes in earthquake magnitude time series

    NASA Astrophysics Data System (ADS)

    Lin, Min; Zhao, Gang; Wang, Gang

    2015-12-01

    In this study, recurrence plot (RP) and recurrence quantification analysis (RQA) techniques are applied to a magnitude time series composed of seismic events that occurred in the California region. Using bootstrapping techniques, we give a statistical test of the RQA for detecting dynamical transitions. We find different patterns in the RPs of the magnitude time series before and after the M6.1 Joshua Tree earthquake. The RQA measures determinism (DET) and laminarity (LAM), which quantify order, also show peculiar behaviors at given confidence levels. The DET and LAM values of the recurrence-based complexity measure increase significantly to a large value at the main shock and then gradually recover to small values afterward. The main shock and its aftershock sequence trigger a temporary growth in the order and complexity of the deterministic structure in the RP of seismic activity. This implies that the onset of a strong earthquake is reflected in a sharp, large, simultaneous change in the RQA measures.
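    The RP and DET quantities described above can be sketched for a scalar series using the standard definitions (a generic implementation, not the authors' code; for brevity the line of identity is kept in the count, though it is often excluded in practice):

    ```python
    import numpy as np

    def recurrence_plot(x, eps):
        # binary recurrence matrix: R[i, j] = 1 where |x_i - x_j| < eps
        d = np.abs(x[:, None] - x[None, :])
        return (d < eps).astype(int)

    def determinism(R, lmin=2):
        # DET: fraction of recurrence points lying on diagonal
        # line segments of length >= lmin
        n = R.shape[0]
        lengths = []
        for k in range(-(n - 1), n):
            run = 0
            for v in np.diagonal(R, k):
                if v:
                    run += 1
                elif run:
                    lengths.append(run)
                    run = 0
            if run:
                lengths.append(run)
        total = sum(lengths)
        long_runs = sum(l for l in lengths if l >= lmin)
        return long_runs / total if total else 0.0

    # a perfectly periodic series yields DET = 1.0
    x = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
    print(determinism(recurrence_plot(x, 0.5)))
    ```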

  1. Magnitudes and Moment-Duration Scaling of Low-Frequency Earthquakes Beneath Southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A.; Rubin, A. M.; Savard, G.; Chuang, L. Y.

    2015-12-01

    We employ 130 low-frequency-earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 300,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P- and S-waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. The high spatio-temporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatio-temporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 hours of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone, less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b-values ≥ 6. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges MW < 1.5 and MW ≥ 2.0. LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in dimension and that moment variation is dominated by slip. This behaviour implies

  2. A Comprehensive Mathematical Model for the Correlation of Earthquake Magnitude with Geochemical Measurements. A Case Study: the Nisyros Volcano in Greece

    SciTech Connect

    Verros, G. D.; Latsos, T.; Liolios, C.; Anagnostou, K. E.

    2009-08-13

    A comprehensive mathematical model for the correlation of geological phenomena such as earthquake magnitude with geochemical measurements is presented in this work. This model is validated against measurements, well established in the literature, of ²²⁰Rn/²²²Rn in the fumarolic gases of Nisyros Island, Aegean Sea, Greece. It is believed that this model may be further used to develop a generalized methodology for the prediction of geological phenomena such as earthquakes and volcanic eruptions in the vicinity of Nisyros Island.

  3. Maximum Magnitude and Probabilities of Induced Earthquakes in California Geothermal Fields: Applications for a Science-Based Decision Framework

    NASA Astrophysics Data System (ADS)

    Weiser, Deborah Anne

    Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess whether there are induced earthquakes in California geothermal fields; three sites show clear induced seismicity: Brawley, The Geysers, and the Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville, and little to no evidence for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and to address uncertainties through simulations. I test whether an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during the pumping period is consistent with the past earthquake record, or whether injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike the hazard from tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified, and I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.

  4. Re-examination of Magnitude of the AD 869 Jogan Earthquake, a Possible Predecessor of the 2011 Tohoku Earthquake, from Tsunami Deposit Distribution and Computed Inundation Distances

    NASA Astrophysics Data System (ADS)

    Namegaya, Y.; Satake, K.

    2012-12-01

    We re-examined the magnitude of the AD 869 Jogan earthquake by comparing the inland limit of tsunami deposits with computed inundation distances for various fault models. The 869 tsunami deposits are distributed 3-4 km inland from the estimated past shorelines on the Ishinomaki and Sendai plains (Shishikura et al., 2007, Annual Report on Active Fault and Paleoearthquake Researches; Sawai et al., 2007, ibid). In previous studies (Satake et al., 2008; Namegaya et al., 2010, ibid), we assumed 14 fault models for the Jogan earthquake, including an outer-rise normal fault, a tsunami earthquake, interplate earthquakes, and an active fault in Sendai Bay. The computed inundation area from an interplate earthquake with Mw 8.4 (length: 200 km, width: 100 km, slip: 7 m) covers the distribution of tsunami deposits on the Ishinomaki and Sendai plains. However, the previous studies yielded a minimum magnitude, because they assumed that the inland limit of the tsunami deposits and the computed inundation limit were the same. A post-2011 field survey indicates that the 2011 tsunami inundation distance was about 1.6 times the inland limit of the tsunami deposits (e.g., Goto et al., 2011, Marine Geology). In this study, we computed tsunami inundation areas for interplate earthquakes of different magnitudes, fault lengths, and slip amounts. The moment magnitude ranges from 8.0 to 8.7, the fault length from 100 to 400 km, and the slip from 3 to 9 m; the fault width is fixed at 100 km. The distance ratios of computed inundation to the inland limit of tsunami deposits (Inundation to Deposit Ratio, or IDR) were calculated along 8 transects on the Sendai and Ishinomaki plains. The results show that IDR increases with magnitude up to Mw = 8.4, where IDR becomes one, i.e., the computed inundation roughly coincides with the inland limit of the tsunami deposits. IDR continues to increase for larger magnitudes, but at a much smaller rate. 
This confirms that the magnitude of the 869 Jogan earthquake was at least 8.4, but it could
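    The moment magnitude implied by a given fault geometry can be checked with M0 = rigidity x area x slip and the Hanks-Kanamori relation. The sketch below assumes a rigidity of 30 GPa (a common crustal value; the paper does not state the value it used, so the result differs slightly from the quoted Mw 8.4):

    ```python
    import math

    def fault_moment_magnitude(length_m, width_m, slip_m, mu=3.0e10):
        # seismic moment M0 = mu * fault area * average slip (N*m),
        # assuming rigidity mu = 30 GPa
        m0 = mu * length_m * width_m * slip_m
        # Hanks & Kanamori (1979) moment magnitude
        return (2.0 / 3.0) * (math.log10(m0) - 9.1)

    # the preferred interplate model above: 200 km x 100 km with 7 m slip
    print(round(fault_moment_magnitude(200e3, 100e3, 7.0), 2))
    ```

    With mu = 30 GPa this gives Mw ≈ 8.35, close to the Mw 8.4 quoted above; the exact value depends on the assumed rigidity.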

  5. Persistency of rupture directivity in moderate-magnitude earthquakes in Italy: Implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Rovelli, A.; Calderoni, G.

    2012-12-01

    A simple method based on EGF deconvolution in the frequency domain is applied to detect the occurrence of unilateral ruptures in recent damaging earthquakes in Italy. The spectral ratio between event pairs with different magnitudes at individual stations shows large azimuthal variations above the corner frequency when the target event is affected by source directivity and the EGF is not, or vice versa. The analysis is applied to seismograms and accelerograms recorded during the seismic sequence following the 20 May 2012, Mw 5.6 main shock in Emilia, northern Italy; the 6 April 2009, Mw 6.1 earthquake of L'Aquila, central Italy; and the 26 September 1997, Mw 5.7 and 6.0 shocks in Umbria-Marche, central Italy. Events of each seismic sequence are selected as having consistent focal mechanisms, and the station selection obeys the constraint of a similar source-to-receiver path for the event pairs. The analyzed L'Aquila data set consists of 962 broad-band seismograms from 69 normal-faulting earthquakes (3.3 ≤ MW ≤ 6.1, according to Herrmann et al., 2011); stations are selected in the distance range 100 to 250 km to minimize differences in propagation paths. The seismogram analysis reveals that strong along-strike (toward SE) source directivity characterized all three Mw > 5.0 shocks. Source directivity was also persistent down to the smallest magnitudes: 65% of the earthquakes under study showed evidence of directivity toward SE, whereas only one (Mw 3.7) event showed directivity in the opposite direction. The Mw 5.6 main shock of 20 May 2012 in Emilia also shows large azimuthal spectral variations indicating unilateral rupture propagation toward SE. According to the reconstructed geometry of the thrust-fault plane, the inferred directivity direction suggests top-down rupture propagation. The analysis of the Emilia aftershock sequence is in progress. The third seismic sequence, dated 1997-1998, occurred in the northern Apennines and, similarly

  6. Location and local magnitude of the Tocopilla earthquake sequence of Northern Chile

    NASA Astrophysics Data System (ADS)

    Fuenzalida, A.; Lancieri, M.; Madariaga, R. I.; Sobiesiak, M.

    2010-12-01

    The Northern Chile gap is generally considered to be the site of the next megathrust event in Chile. The Tocopilla earthquake of 14 November 2007 (Mw 7.8) and its aftershock series broke the southern end of this gap. The Tocopilla event ruptured a narrow strip 120 km long, with a width estimated at 30 km (Peyrat et al.; Delouis et al., 2009). The aftershock sequence comprises five large thrust events with magnitudes greater than 6. The main aftershock, of Mw 6.7, occurred on November 15 at 15:06 (UTC), seawards of the Mejillones Peninsula. One month later, on December 16, 2007, a strong (Mw 6.8) intraplate event with a slab-push mechanism occurred near the bottom of the rupture zone. These events represent a unique opportunity for the study of earthquakes in Northern Chile because of the quantity and quality of the available data. In the epicentral area, the IPOC network was deployed by GFZ, CNRS/INSU and DGF before the main event. This is a digital, continuously recording network equipped with both strong-motion and broad-band instruments. On 29 November 2007 a second network, named “Task Force” (TF), was deployed by GFZ to study the aftershocks. This dense network, installed near the Mejillones Peninsula, is composed of 20 short-period instruments. The slab-push event of 16 December 2007 occurred in the middle of the area covered by the TF network. Aftershocks were detected using an automatic procedure and manually revised in order to pick P and S arrivals. In the 14-28 November period, we detected 635 events recorded by the IPOC network; a further 552 events were detected between 29 November and 16 December, before the slab-push event, using the TF network. The events were located in a vertically layered velocity model (Husen et al., 1999) with the NLLoc software of Lomax et al. From the broadband data we estimated the moment magnitude from the displacement spectra of the events. From the short-period instruments we evaluated local magnitudes using the

  7. Prediction of Earthquakes by Lunar Cycles

    NASA Astrophysics Data System (ADS)

    Rodriguez, G.

    2007-05-01

    Prediction of Earthquakes by Lunar Cycles. Author: Guillermo Rodriguez Rodriguez; affiliation: geophysicist and astrophysicist, retired. I have presented this idea at many meetings of EGS, UGS and IUGG (1995), starting from 1980, 1982 and 1983, and at AGU 2002 in Washington and the 2003 assembly in Nice. I use three levels of approximation in time. First, earthquakes happen on the same day of the year every 18 or 19 years (the Saros cycle), sometimes in the same place and sometimes somewhere very far away; at other times of year the cycle can be 14, 26 or 32 years, or multiples of 18.61 years, especially 55, 93, 150, 224 or 300 years. This gives the day of the year. Second, over the cycle of one lunation (days from the date of the new moon), the great earthquakes happen at different intervals of days in successive lunations (approximately one month apart), as can be seen in the enclosed graphic. This gives the day of the month. Third, I have found that approximately every 28 days the same hour and minute repeat, at the same longitude and the same latitude, for all earthquakes, including the small ones. This is very important because we could then propose the simple precaution of waiting in the streets or squares, although sometimes the cycles can be longer or shorter. This is my particular way of applying the scientific method. As a consequence of the first and second principles, we can look for the correlation between years separated by cycles of the first type, for example 1984 and 2002 or 2003 and consecutive years, including 2007. For 30 years I have examined the dates. I carry the method in my subconscious, but I have not been able to cast it in a scientific formalism.

  8. An evaluation of the seismic-window theory for earthquake prediction.

    USGS Publications Warehouse

    McNutt, M.; Heaton, T.H.

    1981-01-01

    Reports studies designed to determine whether earthquakes in the San Francisco Bay area respond to a fortnightly fluctuation in tidal amplitude. It does not appear that the tide is capable of triggering earthquakes, and in particular the seismic window theory fails as a relevant method of earthquake prediction. -J.Clayton

  9. Trans Alaska Pipeline Design Accommodates November 3, 2002, Magnitude 7.9 Earthquake

    NASA Astrophysics Data System (ADS)

    Cluff, L. S.; Slemmons, D. B.

    2002-12-01

    During the early 1970s, a 48-inch-diameter pipeline was proposed to bring crude oil from Prudhoe Bay to the Port of Valdez, Alaska, traversing 1280 km of spectacular wilderness country, three mountain ranges, and four active faults. Detailed fault rupture evaluations were completed by a team of earthquake geologists, led by co-Project Directors Lloyd S. Cluff and David B. Slemmons, for the Alyeska Pipeline Service Company. The comprehensive studies of the entire proposed pipeline route concluded that four active faults would require special design to protect the integrity of the pipeline. The Denali fault, the most active of the four, which traverses east-west near the center of the Alaska Range, was determined to be the most dangerous. The Denali fault was assessed to have the potential of releasing a magnitude 8.0 earthquake from a rupture estimated to extend more than 250 km, with surface rupture ranging from a few feet to a maximum of 30 feet horizontal and 8 feet vertical. The recommended design at the pipeline fault crossing was 20 feet horizontal and 5 feet vertical. The design engineers, Nathan M. Newmark, William J. Hall, and Jim Maple, assisted by Douglas Nyman, the pipeline's seismic design coordinator, developed an innovative design consisting of very long concrete footings coated with Teflon that would allow the footings to move beneath the pipeline and the pipeline to slide freely, extending, compressing, or shifting laterally to accommodate the expected fault rupture. On November 3, the M 7.9 earthquake on the Denali fault ruptured west to east along strike for at least 270 km. At the pipeline fault crossing, surface displacement and related fault deformation of 12.5 feet horizontal and 2.5 feet vertical occurred. The rupture caused the pipe to slide sideways on the Teflon-coated footings without losing its structural integrity or spilling oil. There were some areas of minor damage, but the pipeline was resilient and performed as the design

  10. Moment magnitude, local magnitude and corner frequency of small earthquakes nucleating along a low angle normal fault in the Upper Tiber valley (Italy)

    NASA Astrophysics Data System (ADS)

    Munafo, I.; Malagnini, L.; Chiaraluce, L.; Valoroso, L.

    2015-12-01

    The relation between moment magnitude (MW) and local magnitude (ML) is still a debated issue (Bath, 1966, 1981; Ristau et al., 2003, 2005). Theoretical considerations and empirical observations show that, in the magnitude range between 3 and 5, MW and ML scale 1:1, whilst for smaller magnitudes this 1:1 scaling breaks down (Bethmann et al. 2011). To investigate this, we analyzed the source parameters of about 1,500 well-located small earthquakes (30,000 waveforms) that occurred in the Upper Tiber Valley (Northern Apennines) in the range -1.5≤ML≤3.8. Among these earthquakes are 300 events that repeatedly ruptured the same fault patch, generally twice within a short time interval (less than 24 hours; Chiaraluce et al., 2007). We use high-resolution short-period and broadband recordings acquired between 2010 and 2014 by 50 permanent seismic stations deployed to monitor the activity of a regional low angle normal fault (named the Alto Tiberina fault, ATF) in the framework of the Alto Tiberina Near Fault Observatory project (TABOO; Chiaraluce et al., 2014). For this study the direct determination of MW for small earthquakes is essential, but unfortunately the computation of MW for small earthquakes (MW < 3) is not a routine procedure in seismology. We apply the contributions of source, site, and crustal attenuation computed for this area in order to obtain precise spectral corrections to be used in the calculation of small-earthquake spectral plateaus. The aim of this analysis is to obtain moment magnitudes of small events through a procedure that uses our previously calibrated crustal attenuation parameters (geometrical spreading g(r), quality factor Q(f), and the residual parameter k) to correct for path effects. We determine the MW-ML relationships in two selected fault zones (on-fault and fault-hanging-wall) of the ATF by an orthogonal regression analysis, providing a semi-automatic and robust procedure for moment magnitude determination within a

  11. Field survey of earthquake effects from the magnitude 4.0 southern Maine earthquake of October 16, 2012

    USGS Publications Warehouse

    Radakovich, Amy L.; Fergusen, Alex J.; Boatwright, John

    2016-06-02

    The magnitude 4.0 earthquake that occurred on October 16, 2012, near Hollis Center and Waterboro in southwestern Maine surprised and startled local residents but caused only minor damage. A two-person U.S. Geological Survey (USGS) team was sent to Maine to conduct an intensity survey and document the damage. The only damage we observed was the failure of a chimney and plaster cracks in two buildings in East and North Waterboro, 6 kilometers (km) west of the epicenter. We photographed the damage and interviewed residents to determine the intensity distribution in the epicentral area. The damage and shaking reports are consistent with a maximum Modified Mercalli Intensity (MMI) of 5–6 for an area 1–8 km west of the epicenter, slightly higher than the maximum Community Decimal Intensity (CDI) of 5 determined by the USGS “Did You Feel It?” Web site. The area of strong shaking in East Waterboro corresponds to updip rupture on a fault plane that dips steeply east. 

  12. Material contrast does not predict earthquake rupture propagation direction

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    2005-01-01

    Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.

  13. Generalized multidimensional earthquake frequency distributions consistent with Non-Extensive Statistical Physics: An appraisal of the universality in the interdependence of magnitude, interevent time and interevent distance

    NASA Astrophysics Data System (ADS)

    Tzanis, Andreas; Vallianatos, Philippos; Efstathiou, Angeliki

    2013-04-01

    It is well known that earthquake frequency is related to earthquake magnitude via a simple linear relationship of the form logN = a - bM, where N is the number of earthquakes in a specified time interval; this is the famous Gutenberg - Richter (G-R) law. The generally accepted interpretation of the G-R law is that it expresses the statistical behaviour of a fractal active tectonic grain (active faulting). The relationship between the constant b and the fractal dimension of the tectonic grain has been demonstrated in various ways. The story told by the G-R law is, nevertheless, incomplete. It is now accepted that the active tectonic grain comprises a critical complex system, although it has not yet been established whether it is stationary (Self-Organized Critical), evolutionary (Self-Organizing Critical), or a time-varying blend of both. At any rate, critical systems are characterized by complexity and strong interactions between near and distant neighbours. This, in turn, implies that the self-organization of earthquake occurrence should be manifested in certain statistical behaviour of its temporal and spatial dependence. A strong line of evidence suggests that the G-R law is a limiting case of a more general frequency-magnitude distribution, which is properly expressed in terms of Non-Extensive Statistical Physics (NESP) on the basis of the Tsallis entropy; this is a natural context, particularly suitable for the description of complex systems. A measure of temporal dependence in earthquake occurrence is the time lapsed between consecutive events above a magnitude threshold over a given area (the interevent time). A corresponding measure of spatial dependence is the hypocentral distance between consecutive events above a magnitude threshold over a given area (the interevent distance). The statistics of earthquake frequency vs. interevent time have been studied by several researchers and have been shown to comply with the predictions of the NESP formalism. There's also
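    The b in the G-R law logN = a - bM has a standard maximum-likelihood estimator (Aki, 1965, with Utsu's correction for binned magnitudes). As a minimal illustrative sketch, not part of the paper:

```python
import math

def b_value_aki(mags, mc, dm=0.1):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965).

    mags: magnitudes; mc: completeness threshold; dm: magnitude
    binning interval (Utsu's correction; use dm=0 for continuous data).
    """
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    # beta = 1 / (mean(M) - (Mc - dm/2)); b = beta * log10(e)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

For a synthetic catalogue drawn from a G-R distribution with b = 1, the estimator recovers a value close to 1.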

  14. An earthquake-like magnitude-frequency distribution of slow slip in northern Cascadia

    NASA Astrophysics Data System (ADS)

    Wech, Aaron G.; Creager, Kenneth C.; Houston, Heidi; Vidale, John E.

    2010-11-01

    Major episodic tremor and slip (ETS) events with Mw 6.4 to 6.7 repeat every 15 ± 2 months within the Cascadia subduction zone under the Olympic Peninsula. Although these major ETS events are observed to release strain, smaller “tremor swarms” without detectable geodetic deformation are more frequent. An automatic search from 2006-2009 reveals 20,000 five-minute windows containing tremor which cluster in space and time into 96 tremor swarms. The 93 inter-ETS tremor swarms account for 45% of the total duration of tremor detection during the last three ETS cycles. The number of tremor swarms, N, exceeding duration τ follows a power-law distribution N ∝ τ^-0.66. If duration is proportional to moment release, the slip inferred from these swarms follows a standard Gutenberg-Richter logarithmic frequency-magnitude relation, with the major ETS events and smaller inter-ETS swarms lying on the same trend. This relationship implies that 1) inter-ETS slip is fundamentally similar to the major events, just smaller and more frequent; and 2) despite fundamental differences in moment-duration scaling, the slow slip magnitude-frequency distribution is the same as normal earthquakes with a b-value of 1.
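    An exponent like the -0.66 in N ∝ τ^-0.66 can be estimated from a catalogue of swarm durations by fitting the empirical exceedance counts on log-log axes. A minimal sketch; the least-squares fitting choice is an assumption for illustration, not the authors' method:

```python
import math

def exceedance_slope(durations):
    """Fit p in N(>=tau) ~ tau**(-p) by least squares on the
    log-log survival counts of a list of positive durations."""
    d = sorted(durations)
    n = len(d)
    xs = [math.log10(t) for t in d]
    ys = [math.log10(n - i) for i in range(n)]  # N(>= d[i]) = n - i
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den  # sign flipped so the exponent p comes out positive
```

Applied to Pareto-distributed synthetic durations with exponent 0.66, the fit returns a value near 0.66 (the sparse extreme tail adds some scatter).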

  15. Magnitude Uncertainty and Ground Motion Simulations of the 1811-1812 New Madrid Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Graves, R. W.; Olsen, K. B.; Boyd, O. S.; Hartzell, S.; Ni, S.; Somerville, P. G.; Williams, R. A.; Zhong, J.

    2011-12-01

    We present a study of a set of three-dimensional earthquake simulation scenarios in the New Madrid Seismic Zone (NMSZ). This is a collaboration among three simulation groups with different numerical modeling approaches and computational capabilities. The study area covers a portion of the Central United States (~400,000 km2) centered on the New Madrid seismic zone, which includes several metropolitan areas such as Memphis, TN and St. Louis, MO. We computed synthetic seismograms to a frequency of 1 Hz by using a regional 3D velocity model (Ramirez-Guzman et al., 2010), two different kinematic source generation approaches (Graves et al., 2010; Liu et al., 2006) and one methodology where sources were generated using dynamic rupture simulations (Olsen et al., 2009). The set of 21 hypothetical earthquakes included different magnitudes (Mw 7, 7.6 and 7.7) and epicenters for two faults associated with the seismicity trends in the NMSZ: the Axial (Cottonwood Grove) and the Reelfoot faults. Broadband synthetic seismograms were generated by combining high frequency synthetics computed in a one-dimensional velocity model with the low frequency motions at a crossover frequency of 1 Hz. Our analysis indicates that about 3 to 6 million people living near the fault ruptures would experience Mercalli intensities from VI to VIII if events similar to those of the early nineteenth century occurred today. In addition, the analysis demonstrates the importance of 3D geologic structures, such as the Reelfoot Rift and the Mississippi Embayment, which can channel and focus the radiated wave energy, and rupture directivity effects, which can strongly amplify motions in the forward direction of the ruptures. Both of these effects have a significant impact on the pattern and level of the simulated intensities, which suggests an increased uncertainty in the magnitude estimates of the 1811-1812 sequence based only on historic intensity reports. We conclude that additional constraints such as

  16. Current affairs in earthquake prediction in Japan

    NASA Astrophysics Data System (ADS)

    Uyeda, Seiya

    2015-12-01

    As of mid-2014, the main organizations of the earthquake (EQ hereafter) prediction program, including the Seismological Society of Japan (SSJ) and the MEXT Headquarters for EQ Research Promotion, hold the official position that they neither can nor want to make any short-term prediction. It is an extraordinary stance for responsible authorities when the nation, after the devastating 2011 M9 Tohoku EQ, most urgently needs whatever information may exist on forthcoming EQs. Japan's national project for EQ prediction started in 1965, but it has had no success, mainly because of the failure to capture precursors. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this stance has been further fortified by the 2011 M9 Tohoku mega-quake. This paper tries to explain how this situation came about and suggests that it may in fact be a legitimate one which should have come a long time ago. Actually, substantial positive changes are taking place now. Some promising signs are arising even from the cooperation of researchers with private sectors, and there is a move to establish an "EQ Prediction Society of Japan". From now on, maintaining high scientific standards in EQ prediction will be of crucial importance.

  17. A Study of Low-Frequency Earthquake Magnitudes in Northern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Chuang, L. Y.; Bostock, M. G.

    2015-12-01

    Tectonic tremor and low frequency earthquakes (LFE) have been extensively studied in recent years in northern Washington and southern Vancouver Island (VI). However, far less attention has been directed to northern VI, where the behavior of tremor and LFEs is less well documented. We investigate LFE properties in this latter region by assembling templates using data from the POLARIS-NVI and Sea-JADE experiments. The POLARIS-NVI experiment comprised 27 broadband seismometers arranged along two mutually perpendicular arms with an aperture of ~60 km centered near station WOS (lat. 50.16, lon. -126.57). It recorded two ETS events in June 2006 and May 2007, each with a duration of less than a week. For these two episodes, we constructed 68 independent, high signal-to-noise ratio LFE templates representing spatially distinct asperities on the plate boundary in NVI, along with a catalogue of more than 30,000 detections. A second template set is being prepared from the complementary 2014 Sea-JADE data. The precisely located LFE templates represent simple direct P-waves and S-waves at many stations, thereby enabling magnitude estimation of individual detections. After correcting for radiation pattern, 1-D geometrical spreading, attenuation and free-surface magnification, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single LFE template. LFE magnitudes range up to 2.54 and, as in southern VI, are characterized by high b-values (b~8). In addition, we will quantify LFE moment-duration scaling and compare with southern Vancouver Island, where LFE moments appear to be controlled by slip, largely independent of fault area.
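    The joint solve for path corrections and magnitudes can be pictured as a linear inverse problem. A toy dense version, under the assumed model obs[i, j] = mag[i] + path[j] with a zero-mean constraint on the path terms to remove the trade-off (both are illustrative assumptions, not the authors' exact parameterization):

```python
import numpy as np

def invert_magnitudes(obs):
    """Jointly solve obs[i, j] ~ mag[i] + path[j] by least squares.

    obs: (n_events, n_stations) array of corrected log-amplitudes.
    Returns (event magnitudes, station/path corrections)."""
    ne, ns = obs.shape
    rows, rhs = [], []
    for i in range(ne):
        for j in range(ns):
            r = np.zeros(ne + ns)
            r[i] = 1.0        # event magnitude term
            r[ne + j] = 1.0   # station/path correction term
            rows.append(r)
            rhs.append(obs[i, j])
    # constraint: path corrections sum to zero (fixes the offset trade-off)
    c = np.zeros(ne + ns)
    c[ne:] = 1.0
    rows.append(c)
    rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:ne], sol[ne:]
```

At the scale described in the abstract the same normal equations would be assembled with sparse matrices rather than dense rows.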

  18. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562,840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
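    Empirical conversion models between magnitude scales are often fit with orthogonal (total) least squares, since both scales carry measurement error. A minimal sketch of such a line fit, for illustration only and not taken from the released software:

```python
import math

def total_least_squares(x, y):
    """Orthogonal (total) least-squares line y = a*x + c, treating
    errors in both magnitude scales as comparable.

    The slope is the direction of the first principal axis of the
    (x, y) scatter; assumes the two scales are sampled in pairs."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / n
    syy = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    a = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return a, my - a * mx
```

Unlike ordinary regression of y on x, the fitted line is the same regardless of which scale is treated as the predictor, which is why this family of fits is preferred for scale harmonization.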

  20. A simple approach to estimate earthquake magnitude from the arrival time of the peak acceleration amplitude

    NASA Astrophysics Data System (ADS)

    Noda, S.; Yamamoto, S.

    2014-12-01

    In order for Earthquake Early Warning (EEW) to be effective, rapid determination of magnitude (M) is important. At present, although a number of methods have been suggested, none can accurately determine M for extremely large events (ELE) in an EEW context. To address this problem, we use a simple approach derived from the fact that the time difference (Top) from the onset of the body wave to the arrival time of its peak acceleration amplitude scales with M. To test this approach, as a first step we use 15,172 accelerograms of regional earthquakes (mostly M4-7 events) from K-NET. Top is defined by analyzing the S wave in this step. The S onsets are calculated by adding the theoretical S-P times to the manually picked P onsets. As a result, logTop is confirmed to correlate strongly with Mw, especially in the higher frequency band (> 2 Hz). The RMS of residuals between Mw and the M estimated in this step is less than 0.5. In the case of the 2011 Tohoku earthquake, M is estimated to be 9.01 at 150 seconds after the initiation of the event. To increase the number of ELE data, as a second step we add teleseismic high-frequency P-wave records to the analysis. According to the results of various back-projection analyses, we consider the teleseismic P waves to contain information on the entire rupture process. The BHZ channel data of the Global Seismographic Network for 24 events are used in this step. Data at 2-4 Hz from stations in the epicentral distance range of 30-85 degrees are used following the method of Hara [2007]. All P onsets are manually picked. Top obtained from the teleseismic data shows good correlation with Mw, complementing that obtained from the regional data. We conclude that the proposed approach is quite useful for estimating reliable M for EEW, even for ELE.
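    The calibration implied by "logTop scales with M" amounts to a one-parameter-pair line fit, Mw = p·log10(Top) + q, estimated from a catalogue and then inverted for new events. A minimal sketch; the abstract reports only that the correlation exists with RMS residuals under 0.5, so the functional form and any coefficients below are illustrative assumptions:

```python
import math

def fit_top_scaling(tops, mws):
    """Least-squares calibration of Mw = p*log10(Top) + q from
    paired lists of Top (seconds) and moment magnitudes."""
    xs = [math.log10(t) for t in tops]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(mws) / n
    p = (sum((x - mx) * (m - my) for x, m in zip(xs, mws))
         / sum((x - mx) ** 2 for x in xs))
    return p, my - p * mx

def estimate_mw(top, p, q):
    """Invert the calibrated scaling for a newly observed Top."""
    return p * math.log10(top) + q
```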

  1. Seismic Versus Aseismic Slip and Maximum Induced Earthquake Magnitude in Models of Faults Stimulated by Fluid Injection

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Cappa, F.; Galis, M.; Mai, P. M.

    2015-12-01

    The assessment of earthquake hazard induced by fluid injection or withdrawal could be advanced by understanding what controls the maximum magnitude of induced seismicity (Mmax) and the conditions leading to aseismic instead of seismic slip. This is particularly critical for the viability of renewable energy extraction through engineered geothermal systems, which aim at enhancing permeability through controlled fault slip. Existing empirical relations and models for Mmax lack a link between rupture size and the characteristics of the triggering stress perturbation based on earthquake physics. We aim at filling this gap by extending results on the nucleation and arrest of dynamic rupture. We previously derived theoretical relations based on fracture mechanics between properties of overstressed nucleation regions (size, shape and overstress level), the ability of dynamic ruptures to either stop spontaneously or run away, and the final size of stopping ruptures. We verified these relations by comparison to 3D dynamic rupture simulations under slip-weakening friction and to laboratory experiments of frictional sliding nucleated by localized stresses. Here, we extend these results to the induced seismicity context by considering the effect of pressure perturbations resulting from fluid injection, evaluated by hydromechanical modeling. We address the following question: given the amplitude and spatial extent of a fluid pressure perturbation, background stress and fracture energy on a fault, does a nucleated rupture stop spontaneously at some distance from the pressure perturbation region or does it grow away until it reaches the limits of the fault? We present fracture mechanics predictions of the rupture arrest length in this context, and compare them to results of 3D dynamic rupture simulations. We also conduct a systematic study of the effect of localized fluid pressure perturbations on faults governed by rate-and-state friction. We investigate whether injection

  2. The middle-long term prediction of the February 3, 1996 Lijiang earthquake (MS=7) by the "criterion of activity in quiescence"

    NASA Astrophysics Data System (ADS)

    Zeng-Jian, Guo; Bao-Yan, Qin

    2000-07-01

    Earthquake activity in history is characterized by active and quiet periods. During a quiet period, a place where an MS≥6 earthquake occurs indicates greater stored elastic energy and faster energy accumulation there. When an active period of large earthquakes arrives over a wide region, earthquakes of magnitude 7 or more often occur at the places where MS≥6 events occurred during the preceding quiet period. We call this judgement for predicting large earthquakes the "criterion of activity in quiescence". The criterion is relatively effective for predicting the location of large earthquakes; in general, the epicentral prediction error is no more than 100 km. According to the criterion, we successfully made a middle-term prediction of the 1996 Lijiang earthquake in Yunnan Province, with a location error of about 50 km. In addition, the 1994 Taiwan Strait earthquake (MS=7.3), the 1995 Yunnan-Myanmar border earthquake (MS=7.2) and the Mani earthquake (MS=7.9) in northern Tibet are consistent with retrospective predictions by the "criterion of activity in quiescence". The windows of "activity in quiescence" identified statistically by us are 1940-1945, 1958-1961 and 1979-1986. When using the criterion to predict large earthquakes in mainland China, the defining "activity in quiescence" event has magnitude 6 or more; for the Himalayan seismic belt, the Pacific seismic belt and the northwestern boundary seismic belt of Xinjiang, the defining event has magnitude 7, corresponding to a future earthquake of magnitude well above 7. In regions where a large earthquake (MS=7) is not tectonically and historically possible, the criterion of activity in quiescence is not effective.
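    The screening step of the criterion can be sketched as a simple filter over a catalogue, using the quiet windows quoted in the abstract (1940-1945, 1958-1961, 1979-1986). The tuple layout of the catalogue records is an assumption for illustration:

```python
# Quiet windows identified statistically in the abstract (inclusive years).
QUIET_WINDOWS = [(1940, 1945), (1958, 1961), (1979, 1986)]

def candidate_sites(catalog, ms_threshold=6.0):
    """Flag epicentres of MS >= threshold events that occurred inside a
    quiet window as candidate sites for a future magnitude >= 7 event.

    catalog: iterable of (year, lat, lon, ms) tuples (assumed layout).
    """
    sites = []
    for year, lat, lon, ms in catalog:
        if ms >= ms_threshold and any(a <= year <= b for a, b in QUIET_WINDOWS):
            sites.append((lat, lon))
    return sites
```

Per the abstract, a flagged site localizes a future large event only to within roughly 100 km; the criterion says nothing about timing beyond the arrival of the next active period.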

  3. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    SciTech Connect

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-06-20

    A new approach to determining magnitude from the relationship between displacement amplitude (A), epicentral distance (Δ) and the duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or teleseismic P waves with periods in the range of 10-60 seconds. In this research we developed a new approach that determines the displacement amplitude and the duration of high-frequency radiation from near-source records. The duration of high-frequency radiation is determined from half the period of the P wave on the displacement seismogram. This is because the rupture process is very complex at near distances: the P wave mixes with other waves (the S wave) before the duration is complete, so it is difficult to isolate the end of the P wave. Applying this to 68 earthquakes recorded at station CISI, Garut, West Java, the following relationship is obtained: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in metres, Δ in km and t in seconds. The moment magnitude from this new approach is quite reliable and faster to compute, and therefore useful for early warning.
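    The published regression can be applied directly once A, Δ and t are measured. A minimal sketch using the coefficients reported in the abstract (A in metres, Δ in km, t in seconds; the function name is my own):

```python
import math

def moment_magnitude(amp_m, dist_km, duration_s):
    """Mw from the regression in the abstract, fit to 68 events
    recorded at station CISI (Garut, West Java):
    Mw = 0.78 log(A) + 0.83 log(D) + 0.69 log(t) + 6.46."""
    return (0.78 * math.log10(amp_m)
            + 0.83 * math.log10(dist_km)
            + 0.69 * math.log10(duration_s)
            + 6.46)
```

For example, a displacement amplitude of 1e-4 m at 100 km with a 10 s high-frequency duration gives Mw = 0.78·(-4) + 0.83·2 + 0.69·1 + 6.46 = 5.69.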

  4. The 2011 magnitude 9.0 Tohoku-Oki earthquake: mosaicking the megathrust from seconds to centuries.

    PubMed

    Simons, Mark; Minson, Sarah E; Sladen, Anthony; Ortega, Francisco; Jiang, Junle; Owen, Susan E; Meng, Lingsen; Ampuero, Jean-Paul; Wei, Shengji; Chu, Risheng; Helmberger, Donald V; Kanamori, Hiroo; Hetland, Eric; Moore, Angelyn W; Webb, Frank H

    2011-06-17

    Geophysical observations from the 2011 moment magnitude (Mw) 9.0 Tohoku-Oki, Japan earthquake allow exploration of a rare large event along a subduction megathrust. Models for this event indicate that the distribution of coseismic fault slip exceeded 50 meters in places. Sources of high-frequency seismic waves delineate the edges of the deepest portions of coseismic slip and do not simply correlate with the locations of peak slip. Relative to the Mw 8.8 2010 Maule, Chile earthquake, the Tohoku-Oki earthquake was deficient in high-frequency seismic radiation, a difference that we attribute to its relatively shallow depth. Estimates of total fault slip and surface secular strain accumulation on millennial time scales suggest the need to consider the potential for a future large earthquake just south of this event. PMID:21596953

  5. The magnitude of events following a strong earthquake: a pattern recognition approach applied to Italian seismicity

    NASA Astrophysics Data System (ADS)

    Gentili, Stefania; Di Giovambattista, Rita

    2016-04-01

    In this study, we propose an analysis of the earthquake clusters that occurred in Italy from 1980 to 2015. In particular, given a strong earthquake, we are interested in identifying statistical clues that forecast whether a subsequent strong earthquake will follow. We apply a pattern recognition approach to verify the possible precursors of a following strong earthquake. Part of the analysis is based on observation of the cluster during the first hours/days after the first large event. The features adopted are, among others, the number of earthquakes, the radiated energy and the equivalent source area. The other part of the analysis is based on the characteristics of the first strong earthquake, such as its magnitude, depth, focal mechanism, and the tectonic position of the source zone. The location of the cluster inside the Italian territory is of particular interest. In order to characterize the precursors depending on the cluster type, we used decision trees as classifiers on each precursor separately. The performances of the classification are tested by the leave-one-out method. The analysis is done using different time spans after the first strong earthquake, in order to simulate the increase of information available as time passes during the seismic clusters. The performances are assessed in terms of precision, recall and goodness of the single classifiers, and the ROC graph is shown.
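    A single-precursor decision tree evaluated by leave-one-out can be sketched, at its simplest, as a depth-1 tree (a threshold "stump") on one feature. The stump and the toy data below are illustrative assumptions, not the paper's classifiers:

```python
def best_stump(xs, ys):
    """Depth-1 decision tree on one precursor feature: predict class 1
    when x >= t. Returns the threshold t with fewest training errors."""
    best_t, best_err = None, len(ys) + 1
    for t in sorted(set(xs)):
        err = sum((x >= t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def loo_accuracy(xs, ys):
    """Leave-one-out evaluation: refit the stump with each sample held
    out and score the prediction on that held-out sample."""
    hits = 0
    for i in range(len(xs)):
        t = best_stump(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        hits += (xs[i] >= t) == ys[i]
    return hits / len(xs)
```

The paper's evaluation additionally reports precision, recall and ROC graphs; accuracy is used here only to keep the sketch short.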

  6. On the modified Mercalli intensities and magnitudes of the 1811-1812 New Madrid earthquakes

    USGS Publications Warehouse

    Hough, S.E.; Armbruster, J.G.; Seeber, L.; Hough, J.F.

    2000-01-01

    We reexamine original felt reports from the 1811-1812 New Madrid earthquakes and determine revised isoseismal maps for the three principal mainshocks. In many cases we interpret lower values than those assigned by earlier studies. In some cases the revisions result from an interpretation of original felt reports with an appreciation for site response issues. Additionally, earlier studies had assigned modified Mercalli intensity (MMI) values of V-VII to a substantial number of reports that we conclude do not describe damage commensurate with intensities this high. We investigate several approaches to contouring the MMI values using both analytical and subjective methods. For the first mainshock at 02:15 LT on December 16, 1811, our preferred contouring yields M ∼7.2-7.3 using the area-moment regressions of Johnston [1996]. For the mainshocks at 08:00 LT on January 23, 1812, and 03:45 LT on February 7, 1812, we obtain M ∼7.0 and M ∼7.4-7.5, respectively. Our magnitude for the February mainshock is consistent with the established geometry of the Reelfoot fault, which all evidence suggests to have been the causative structure for this event. We note that the inference of lower magnitudes for the New Madrid events implies that site response plays a significant role in controlling seismic hazard at alluvial sites in the central and eastern United States. We also note that our results suggest that thrusting may have been the dominant mechanism of faulting associated with the 1811-1812 sequence. Copyright 2000 by the American Geophysical Union.

  7. Multidimensional earthquake frequency distributions consistent with self-organization of complex systems: The interdependence of magnitude, interevent time and interevent distance

    NASA Astrophysics Data System (ADS)

    Tzanis, A.; Vallianatos, F.

    2012-04-01

    the G-R law predicts, but also to the interevent time and distance by means of well defined power-laws. We also demonstrate that interevent time and distance are not independent of each other, but also interrelated by means of well defined power-laws. We argue that these relationships are universal and valid for both local and regional tectonic grains and seismicity patterns. Eventually, we argue that the four-dimensional hypercube formed by the joint distribution of earthquake frequency, magnitude, interevent time and interevent distance comprises a generalized distribution of the G-R type which epitomizes the temporal and spatial interdependence of earthquake activity, consistent with expectation for a stationary or evolutionary critical system. Finally, we attempt to discuss the emerging generalized frequency distribution in terms of non-extensive statistical physics. Acknowledgments. This work was partly supported by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project "Integrated understanding of Seismicity, using innovative methodologies of Fracture Mechanics along with Earthquake and Non-Extensive Statistical Physics - Application to the geodynamic system of the Hellenic Arc - SEISMO FEAR HELLARC".
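
    The Gutenberg-Richter (G-R) law invoked above, log10 N(≥M) = a − bM, can be fitted from a catalogue with the standard maximum-likelihood b-value estimator (Aki, 1965). The sketch below applies it to a synthetic catalogue; the magnitudes are illustrative and unrelated to the paper's data.

```python
import math
import random

# Sketch of the Gutenberg-Richter frequency-magnitude law:
# log10 N(>=M) = a - b*M.  The maximum-likelihood b-value estimator
# (Aki, 1965) is b = log10(e) / (mean(M) - Mc), with Mc the catalogue
# completeness magnitude.  Synthetic data for illustration only.

def b_value(mags, mc):
    """Maximum-likelihood Gutenberg-Richter b-value for magnitudes >= mc."""
    m = [x for x in mags if x >= mc]
    return math.log10(math.e) / (sum(m) / len(m) - mc)

# A G-R catalogue with b = 1.0 has magnitude excesses (M - Mc) that are
# exponentially distributed with rate b*ln(10):
random.seed(0)
mc, b_true = 2.0, 1.0
mags = [mc + random.expovariate(b_true * math.log(10)) for _ in range(20000)]
b_est = b_value(mags, mc)
print(round(b_est, 2))  # close to 1.0 for a large sample
```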

  8. Analysis of seismic magnitude differentials (mb - Mw) across megathrust faults in the vicinity of recent great earthquakes

    NASA Astrophysics Data System (ADS)

    Rushing, Teresa M.; Lay, Thorne

    2012-12-01

    Spatial variations in underthrusting earthquake seismic magnitude differentials (mb - Mw) are examined for plate boundary megathrusts in the vicinity of the 26 December 2004 Sumatra-Andaman (Mw 9.2), 2010 Maule, Chile (Mw 8.8), and 11 March 2011 Tohoku, Japan (Mw 9.1) great earthquakes. The magnitude differentials, corrected for ω-squared source spectrum dependence on seismic moment, provide a first-order probe of spatial variations of frequency-dependent seismic radiation. This is motivated by observations that the three great earthquakes all have coherent short-period radiation from the down-dip portions of their ruptures as imaged through back-projections, but little coherent short-period energy from the shallower regions where large coseismic slip occurred. While there is substantial scatter in the magnitude measures, all three regions display some increase in the relative strength of short-period seismic waves with depth, with the pattern being strongest for Sumatra and Japan, where the deeper portion of the seismogenic zone is below the overriding crust. Other regions such as the Kuril Islands, Aleutians, Peru, and southern Sumatra/Sumba show little, if any, depth pattern in the magnitude differentials. Variations in material and frictional properties over particularly wide seismogenic megathrusts likely produce the depth dependence observed in both mb - Mw residuals and great earthquake seismic radiation.

  9. Upper-plate controls on co-seismic slip in the 2011 magnitude 9.0 Tohoku-oki earthquake.

    PubMed

    Bassett, Dan; Sandwell, David T; Fialko, Yuri; Watts, Anthony B

    2016-03-01

    The March 2011 Tohoku-oki earthquake was only the second giant (moment magnitude Mw ≥ 9.0) earthquake to occur in the last 50 years and is the most recent to be recorded using modern geophysical techniques. Available data place high-resolution constraints on the kinematics of earthquake rupture, which have challenged prior knowledge about how much a fault can slip in a single earthquake and the seismic potential of a partially coupled megathrust interface. But it is not clear what physical or structural characteristics controlled either the rupture extent or the amplitude of slip in this earthquake. Here we use residual topography and gravity anomalies to constrain the geological structure of the overthrusting (upper) plate offshore northeast Japan. These data reveal an abrupt southwest-northeast-striking boundary in upper-plate structure, across which gravity modelling indicates a south-to-north increase in the density of rocks overlying the megathrust of 150-200 kilograms per cubic metre. We suggest that this boundary represents the offshore continuation of the Median Tectonic Line, which onshore juxtaposes geological terranes composed of granite batholiths (in the north) and accretionary complexes (in the south). The megathrust north of the Median Tectonic Line is interseismically locked, has a history of large earthquakes (18 with Mw > 7 since 1896) and produced peak slip exceeding 40 metres in the Tohoku-oki earthquake. In contrast, the megathrust south of this boundary has higher rates of interseismic creep, has not generated an earthquake with MJ > 7 (local magnitude estimated by the Japan Meteorological Agency) since 1923, and experienced relatively minor (if any) co-seismic slip in 2011. We propose that the structure and frictional properties of the overthrusting plate control megathrust coupling and seismogenic behaviour in northeast Japan. PMID:26935698

  12. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain sets of 48 and 20 modified Mercalli intensity values for the two aftershocks, respectively. For the dawn aftershock, we infer an Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer an Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.
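
    A magnitude-from-intensity estimate of the kind attributed to Bakun et al. (2002) can be sketched as a grid search over magnitude against an assumed intensity attenuation model. The coefficients and observations below are placeholders, not the published central/eastern North America calibration.

```python
import math

# Hedged sketch of an intensity-based magnitude estimate: assume an
# intensity attenuation model MMI = C0 + C1*M - C2*log10(r) - C3*r,
# then find the magnitude that best fits the observed MMI values.
# Coefficients and observations are illustrative placeholders only.

C0, C1, C2, C3 = 1.0, 1.5, 2.5, 0.002  # illustrative, NOT a calibration

def predicted_mmi(mag, dist_km):
    return C0 + C1 * mag - C2 * math.log10(dist_km) - C3 * dist_km

def intensity_magnitude(obs):
    """Grid-search the magnitude minimizing RMS intensity misfit.

    obs: list of (mmi, epicentral_distance_km) pairs."""
    best_m, best_rms = None, float("inf")
    for i in range(400, 851):          # magnitudes 4.00 .. 8.50
        m = i / 100.0
        rms = math.sqrt(sum((mmi - predicted_mmi(m, d)) ** 2
                            for mmi, d in obs) / len(obs))
        if rms < best_rms:
            best_m, best_rms = m, rms
    return best_m

# Synthetic observations generated from the same model at M 6.1:
obs = [(predicted_mmi(6.1, d), d) for d in (50, 120, 300, 600)]
print(intensity_magnitude(obs))  # → 6.1
```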

  13. Earthquake Early Warning of East China Sea: the study on the experimental relationship between the Predominant Period and the Magnitude

    NASA Astrophysics Data System (ADS)

    Li, Y.; Xue, M.; Ren, Y.; Zhu, A.

    2011-12-01

    Based on seismic hazard analysis in the Huadong region (including Shanghai, Zhejiang Province, and Jiangsu Province), it is necessary to build an earthquake early warning system in this region. This has become even more urgent because of the rapid development of high-speed rail lines surrounding the area. In this study, we use historical earthquake data to build an experimental relationship between the predominant period and the magnitude for the Huadong region. We combined the ISC earthquake catalogue with the catalogue from the Huadong sub-network of the Chinese Digital Seismic Network from 1999 to 2008. We manually examined all the local and regional events and selected a total of 117 earthquakes with high signal-to-noise ratios. Based on the analysis of seismic data from station SSE, we investigate the influence of different parameters such as the eigenfunction, the length of the STA/LTA window, and the threshold on P-wave auto-triggering, and determine the appropriate values of these parameters for the Huadong region. By testing different values of the parameters, we obtain different linear relationships between the predominant period and the magnitude; we then determine the optimal set of parameters through error analysis for magnitude estimation. A good linear fit between the predominant period and magnitude is obtained. In addition, we use the average predominant period instead of the predominant period of the first few seconds to build the experimental relationship between the average predominant period and the magnitude. Compared with using the predominant period directly, the linear fit using the average predominant period shows a better correlation between periods and magnitudes and a better error distribution in magnitude estimation.
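
    The STA/LTA-style P-wave auto-triggering tuned in this study can be sketched as follows; the window lengths, the characteristic function (here simply the squared amplitude) and the threshold are illustrative choices, not the values determined for the Huadong region.

```python
# Minimal STA/LTA trigger sketch.  All parameter values are
# illustrative, not the study's tuned settings.

def sta_lta_trigger(samples, n_sta, n_lta, threshold):
    """Return the first index where STA/LTA exceeds threshold, else None."""
    cf = [x * x for x in samples]            # characteristic function
    for i in range(n_lta, len(cf)):
        sta = sum(cf[i - n_sta:i]) / n_sta   # short-term average
        lta = sum(cf[i - n_lta:i]) / n_lta   # long-term average
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

# Synthetic trace: low-amplitude noise, then a sudden arrival at sample 100.
trace = [0.1 if i < 100 else 1.0 for i in range(200)]
print(sta_lta_trigger(trace, n_sta=5, n_lta=50, threshold=4.0))  # → 101
```

In practice the characteristic function, window lengths and threshold trade detection speed against false triggers, which is the tuning problem the abstract describes.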

  14. An updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized M w magnitudes

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Chen, Kuei-Pao; Tsai, Yi-Ben

    2016-03-01

    The main goal of this study was to develop an updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized Mw magnitudes that are compatible with the Harvard Mw. We hope that such a catalog of earthquakes will provide a fundamental database for definitive studies of the distribution of earthquakes in Taiwan as a function of space, time, and magnitude, as well as for realistic assessments of seismic hazards in Taiwan. In this study, for completeness and consistency, we start with a previously published catalog of earthquakes from 1900 to 2006 with homogenized Mw magnitudes. We update the earthquake data through 2014 and supplement the database with 188 additional events for the time period of 1900-1935 that were found in the literature. The additional data lowered the magnitude threshold of the catalog from Mw 5.5 to Mw 5.0. The broadband-based Harvard Mw, United States Geological Survey (USGS) M, and Broadband Array in Taiwan for Seismology (BATS) Mw are preferred in this study. Accordingly, we use empirical relationships with the Harvard Mw to transform our old converted Mw values to new converted Mw values and to transform the original BATS Mw values to converted BATS Mw values. For individual events, the adopted Mw is chosen in the following order: Harvard Mw > USGS M > converted BATS Mw > new converted Mw. Finally, we discover that use of the adopted Mw removes a data gap at magnitudes greater than or equal to 5.0 in the original catalog during 1985-1991. The new catalog is now complete for Mw ≥ 5.0 and significantly improves the quality of data for definitive studies of seismicity patterns, as well as for realistic assessment of seismic hazards in Taiwan.
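
    The adopted-Mw selection order stated above can be sketched as a simple priority lookup; the dictionary field names are hypothetical, chosen only for illustration.

```python
# Sketch of the adopted-Mw selection order stated in the abstract:
# Harvard Mw > USGS M > converted BATS Mw > new converted Mw.
# The dictionary keys are hypothetical field names.

PRIORITY = ["harvard_mw", "usgs_m", "converted_bats_mw", "new_converted_mw"]

def adopted_mw(event):
    """Return the highest-priority magnitude available for one event."""
    for key in PRIORITY:
        if event.get(key) is not None:
            return event[key]
    return None

events = [
    {"harvard_mw": 6.2, "usgs_m": 6.1},
    {"usgs_m": 5.4, "new_converted_mw": 5.2},
    {"new_converted_mw": 5.0},
]
print([adopted_mw(e) for e in events])  # → [6.2, 5.4, 5.0]
```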

  15. User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes

    USGS Publications Warehouse

    Klein, Fred W.

    2002-01-01

    Hypoinverse is a computer program that processes files of seismic station data for an earthquake (like P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version and supersedes the earlier documents. It serves as a detailed user's guide to the current version running on Unix and VAX-Alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of

  17. vS30, κ, regional attenuation and Mw from accelerograms: application to magnitude 3-5 French earthquakes

    NASA Astrophysics Data System (ADS)

    Drouet, Stéphane; Cotton, Fabrice; Guéguen, Philippe

    2010-08-01

    We investigate recordings from weak to moderate earthquakes, with magnitudes ranging between about 3 and 5, recorded by the French Accelerometric Network. S-wave spectra are modelled as a product of source, propagation and site terms. Inverting large data sets of multiple earthquakes recorded at multiple stations allows us to separate the three contributions. Source parameters such as moment magnitude, corner frequency and stress drop are estimated for each earthquake. We provide the first complete and homogeneous catalogue of moment magnitudes for France, for the events with magnitude greater than 3 that occurred between 1996 and 2006. Stress drops are found to be regionally dependent as well as magnitude dependent, and range from about 0.1 MPa (1 bar) to about 30 MPa (300 bars). The attenuation parameters show that, in France on a nationwide scale, variations of attenuation properties do exist. Site transfer functions are also computed, giving the level of amplification at different frequencies with respect to the response of a generic rock site (characterized by an average 30 m S-wave velocity, vs30, of 2000 m s-1). From these site terms, we compute the high-frequency fall-off parameter κ [modelled as exp(-πκf), with f the frequency] for 76 stations. We also determine the vs30 of rock stations and show the κ-vs30 relationship for 21 rock stations.
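
    The high-frequency fall-off term described above multiplies the site spectrum by exp(-πκf). A minimal sketch, with an illustrative κ value (not one of the study's station estimates):

```python
import math

# The kappa site term attenuates the high-frequency end of the
# acceleration spectrum: amplitude(f) is scaled by exp(-pi*kappa*f).
# The kappa value and frequencies below are illustrative only.

def kappa_factor(freq, kappa):
    """High-frequency spectral fall-off exp(-pi*kappa*f)."""
    return math.exp(-math.pi * kappa * freq)

kappa = 0.03  # seconds; a rock-site order of magnitude, for illustration
for f in (1.0, 10.0, 30.0):
    print(f, round(kappa_factor(f, kappa), 3))
```

Larger κ (softer or more attenuating near-surface material) suppresses high frequencies more strongly, which is why κ correlates with vs30 in the abstract.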

  18. Time-predictable model applicability for earthquake occurrence in northeast India and vicinity

    NASA Astrophysics Data System (ADS)

    Panthi, A.; Shanker, D.; Singh, H. N.; Kumar, A.; Paudyal, H.

    2011-03-01

    Northeast India and its vicinity is one of the seismically most active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study, the region of northeast India has been considered for an earthquake generation model using earthquake data reported in the National Geophysical Data Centre and National Earthquake Information Centre (United States Geological Survey) catalogues and in the book prepared by Gupta et al. (1986) for the period 1906-2008. The events having a surface wave magnitude of Ms ≥ 5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified by the observation of clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the preceding mainshock magnitude (Mp) and not on the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form log T = cMp + a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The values of the parameters "c" and "a" are estimated to be 0.21 and 0.35, respectively, for northeast India and its adjoining regions. The lower-than-average value of "c" implies that earthquake occurrence in this region differs from that at plate boundaries. The results derived can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
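
    The relation above, log10(T) = c·Mp + a, with the estimated values c = 0.21 and a = 0.35, gives an expected repeat time directly; the time units are those of the underlying catalogue, which the abstract does not state.

```python
# Expected repeat time from the time-predictable relation
# log10(T) = c*Mp + a, using the parameter values estimated in the
# study (c = 0.21, a = 0.35).  Time units follow the catalogue.

def repeat_time(mp, c=0.21, a=0.35):
    """Expected inter-event time from the preceding mainshock magnitude."""
    return 10.0 ** (c * mp + a)

for mp in (5.5, 6.5, 7.5):
    print(mp, round(repeat_time(mp), 1))
```

A larger preceding mainshock thus implies a longer wait before the next one, which is the defining property of the time-predictable model.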

  19. Spectral P-wave magnitudes, magnitude spectra and other source parameters for the 1990 southern Sudan and the 2005 Lake Tanganyika earthquakes

    NASA Astrophysics Data System (ADS)

    Moussa, Hesham Hussein Mohamed

    2008-10-01

    Teleseismic broadband seismograms of P-waves from the May 1990 southern Sudan and the December 2005 Lake Tanganyika earthquakes in the western branch of the East African Rift System, recorded at different azimuths, have been investigated on the basis of magnitude spectra. The two earthquakes are the largest shocks in the East African Rift System and its extension in southern Sudan. Focal mechanism solutions along with geological evidence suggest that the first event represents a complex style of deformation at the intersection of the northern branch of the western branch of the East African Rift and the Aswa Shear Zone, while the second one represents the current tensional stress on the East African Rift. The maximum average spectral magnitude for the first event is determined to be 6.79 at a 4 s period, compared to 6.33 at a 4 s period for the second event. The other source parameters for the two earthquakes were also estimated. The first event had a seismic moment over four times that of the second one. The two events radiated from fault patches having radii of 13.05 and 7.85 km, respectively. The average displacement and stress drop are estimated to be 0.56 m and 1.65 MPa for the first event and 0.43 m and 2.20 MPa for the second one. The source parameters that describe the inhomogeneity of the fault are also determined from the magnitude spectra. These additional parameters are complexity, asperity radius, displacement across the asperity and ambient stress drop. Both events produced moderate rupture complexity. Compared to the second event, the first event is characterized by relatively higher complexity, a low average stress drop and a high ambient stress. A reasonable explanation for the variations in these parameters may be variation in the strength of the seismogenic fault, which provides the relations between the different source parameters. The values of the stress drops and ambient stresses estimated for both events indicate that these earthquakes are of interplate
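
    Source parameters like those reported above are commonly related through circular-crack formulas. The sketch below uses the standard Eshelby stress-drop relation and the moment-slip relation with an assumed rigidity; the paper's exact formulation may differ, and the numbers are illustrative rather than a reproduction of its estimates.

```python
import math

# Hedged sketch of two standard circular-crack source-parameter
# relations (Eshelby/Brune type):
#   stress drop:  dsigma = 7*M0 / (16*r**3)
#   average slip: u = M0 / (mu * pi * r**2)
# The rigidity and the example moment/radius are assumptions.

MU = 3.0e10  # assumed crustal rigidity, Pa

def stress_drop(m0, r):
    """Static stress drop (Pa) for seismic moment m0 (N m), radius r (m)."""
    return 7.0 * m0 / (16.0 * r ** 3)

def average_slip(m0, r):
    """Average slip (m) over a circular fault of radius r (m)."""
    return m0 / (MU * math.pi * r ** 2)

# Illustrative moment of ~1e19 N m on a 10 km radius fault patch:
m0, r = 1.0e19, 10.0e3
print(stress_drop(m0, r) / 1e6, "MPa")
print(round(average_slip(m0, r), 2), "m")
```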

  20. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  1. Giant seismites and megablock uplift in the East African Rift: evidence for Late Pleistocene large magnitude earthquakes.

    PubMed

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M

    2015-01-01

    In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic 'megablock complex' that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions. PMID:26042601

  4. 76 FR 69761 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ... briefings on lessons learned from the 2010 Chile and 2011 Japan subduction earthquakes, monitoring and... U.S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC). AGENCY: U.S. Geological Survey. ACTION: Notice of Meeting. SUMMARY: Pursuant to Public Law 96-472, the National...

  5. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not the common understanding in China when the M 7.9 Wenchuan earthquake struck Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue an imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save lives in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  6. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  7. Geotechnical effects of the 2015 magnitude 7.8 Gorkha, Nepal, earthquake and aftershocks

    USGS Publications Warehouse

    Moss, Robb E S; Thompson, Eric; Kieffer, D Scott; Tiwari, Binod; Hashash, Youssef M A; Acharya, Indra; Adhikari, Basanta; Asimaki, Domniki; Clahan, Kevin B.; Collins, Brian D.; Dahal, Sachindra; Jibson, Randall W.; Khadka, Diwakar; Macdonald, Amy; Madugo, Chris L M; Mason, H Benjamin; Pehlivan, Menzer; Rayamajhi, Deepak; Uprety, Sital

    2015-01-01

    This article summarizes the geotechnical effects of the 25 April 2015 M 7.8 Gorkha, Nepal, earthquake and aftershocks, as documented by a reconnaissance team that undertook a broad engineering and scientific assessment of the damage and collected perishable data for future analysis. Brief descriptions are provided of ground shaking, surface fault rupture, landsliding, soil failure, and infrastructure performance. The goal of this reconnaissance effort, led by Geotechnical Extreme Events Reconnaissance, is to learn from earthquakes and mitigate hazards in future earthquakes.

  8. Earthquake prediction: The interaction of public policy and science

    USGS Publications Warehouse

    Jones, L.M.

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.
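The idea of a predictor outperforming a random distribution can be made concrete as a probability gain: the conditional probability of an earthquake given the precursor, divided by the background (Poisson) probability for the same window. The sketch below is illustrative only; the background rate and the 30% conditional probability are assumed numbers, not values from the paper.

```python
import math

def poisson_prob(rate_per_year: float, window_years: float) -> float:
    """Probability of at least one earthquake in a window under a Poisson model."""
    return 1.0 - math.exp(-rate_per_year * window_years)

def probability_gain(p_conditional: float, p_background: float) -> float:
    """Factor by which a precursor raises the hazard estimate over background."""
    return p_conditional / p_background

# Assumed background: one M>=6 event per 50 years on average, 1-year window.
p_bg = poisson_prob(1.0 / 50.0, 1.0)
# Hypothetical precursor followed by an event within a year 30% of the time.
p_cond = 0.30
gain = probability_gain(p_cond, p_bg)  # gain > 1 means useful information
```

A gain of 1 means the precursor adds nothing beyond the random-distribution baseline, which is exactly the bar the classification above sets for a true predictor.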

  9. Predictability of Great Earthquakes: The 25 April 2015 M7.9 Gorkha (Nepal)

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2015-12-01

    Understanding of the seismic process in terms of non-linear dynamics of a hierarchical system of blocks-and-faults and deterministic chaos has already led to reproducible intermediate-term middle-range prediction of great and significant earthquakes. The technique, based on monitoring characteristics of seismicity in an area proportional to the source size of the incipient earthquake, is confirmed at a confidence level above 99% by the statistics of Global Testing in forward application from 1992 to the present. The semi-annual predictions determined for the next half-year by the algorithm M8, aimed (i) at magnitude 8+ earthquakes in 262 circles of investigation, CI's, each of 667-km radius, and (ii) at magnitude 7.5+ earthquakes in 180 CI's, each of 427-km radius, are communicated each January and July to the Global Test Observers (about 150 today). The pre-fixed locations of the CI's cover all seismic regions where the M8 algorithm can run in its original version, which requires an annual rate of activity of 16 or more main shocks. According to the predictions released in January 2015 for the first half of 2015, the 25 April 2015 Nepal Mw(GCMT) = 7.9 earthquake falls outside the Test area for M7.5+, while its epicenter is within the accuracy limits of the alarm area for M8.0+ that spreads along 1300 km of the Himalayas. We note that (i) the earthquake confirms the identification of areas prone to strong earthquakes in the Himalayas by pattern recognition (Bhatia et al. 1992) and (ii) it would have been predicted by the modified version of the M8 algorithm aimed at M7.5+. The modified version is adjusted to a low level of earthquake detection, about 10 main shocks per year, and was tested successfully by Mojarab et al. (2015) in application to recent earthquakes in Eastern Anatolia (the 23 October 2011 M7.3 Van earthquake) and the Iranian Plateau (the 16 April 2013 M7.7 Saravan and the 24 September 2013 M7.7 Awaran earthquakes).
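Scoring such predictions reduces to a geometric test: does an epicenter fall inside a circle of investigation of the stated radius? A minimal sketch, using the great-circle (haversine) distance; the CI center coordinates below are hypothetical, only the 667-km radius comes from the abstract.

```python
import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on the Earth, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def in_alarm_circle(event, circle_center, radius_km=667.0):
    """True if an epicenter falls inside a circle of investigation (CI)."""
    return great_circle_km(*event, *circle_center) <= radius_km

# Gorkha epicenter (~28.23N, 84.73E) vs. a hypothetical CI in the Himalayan arc:
inside = in_alarm_circle((28.23, 84.73), (28.0, 87.0))
```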

  10. Earthquake ground-motion prediction equations for eastern North America

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    2006-01-01

    New earthquake ground-motion relations for hard-rock and soil sites in eastern North America (ENA), including estimates of their aleatory uncertainty (variability), have been developed based on a stochastic finite-fault model. The model incorporates new information obtained from ENA seismographic data gathered over the past 10 years, including three-component broadband data that provide new information on ENA source and path effects. Our new prediction equations are similar to the previous ground-motion prediction equations of Atkinson and Boore (1995), which were based on a stochastic point-source model. The main difference is that high-frequency amplitudes (f ≥ 5 Hz) are less than previously predicted (by about a factor of 1.6 within 100 km), because of a slightly lower average stress parameter (140 bars versus 180 bars) and a steeper near-source attenuation. At frequencies less than 5 Hz, the predicted ground motions from the new equations are generally within 25% of those predicted by Atkinson and Boore (1995). The prediction equations agree well with available ENA ground-motion data as evidenced by near-zero average residuals (within a factor of 1.2) for all frequencies, and the lack of any significant residual trends with distance. However, there is a tendency to positive residuals for moderate events at high frequencies in the distance range from 30 to 100 km (by as much as a factor of 2). This indicates epistemic uncertainty in the prediction model. The positive residuals for moderate events at < 100 km could be eliminated by an increased stress parameter, at the cost of producing negative residuals in other magnitude-distance ranges; adjustment factors to the equations are provided that may be used to model this effect.
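The "within a factor of 1.2" statement refers to residuals computed in log space: the mean of log10(observed/predicted), re-expressed as a multiplicative factor. A minimal sketch of that bookkeeping, with made-up amplitudes rather than ENA data:

```python
import math

def residual_factor(observed, predicted):
    """Mean log10 residual expressed as a multiplicative factor.

    A factor of 1.0 means predictions match observations on average;
    'within a factor of 1.2' corresponds to a mean log10 residual
    whose magnitude is below log10(1.2) ~= 0.079.
    """
    logs = [math.log10(o / p) for o, p in zip(observed, predicted)]
    mean_log = sum(logs) / len(logs)
    return 10 ** mean_log

# Illustrative ground-motion amplitudes (arbitrary units), not real data:
obs = [10.0, 22.0, 5.5]
pred = [9.0, 25.0, 5.0]
factor = residual_factor(obs, pred)
```

Working in log10 is what makes over- and under-predictions of equal ratio cancel symmetrically, which a plain arithmetic mean of ratios would not do.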

  11. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    NASA Astrophysics Data System (ADS)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work of the past 25 years has established the Stressmeter as an automatic stress measurement system to study the timing of forthcoming major earthquakes in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed. The cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced to the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of maximum and minimum lateral stresses in excellent agreement with the theoretical value, i.e., G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be the same as all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained by the major international stress measurement work in the Hot Dry Rock Projects conducted in the USA, England and Germany are in good agreement with the Stressmeter results obtained in Japan. 
Based on this broad agreement, a solid geomechanical

  12. Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.

    2013-01-01

    Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible origin to generate rare moderate-magnitude VTs at Kīlauea by reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.
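The Coulomb stress modeling invoked above rests on the standard static failure criterion ΔCFS = Δτ + μ′Δσn: faulting is promoted when the change in shear stress (resolved in the slip direction) plus friction-weighted unclamping is positive. A minimal sketch; the stress values and the effective friction of 0.4 are illustrative assumptions, not the study's numbers.

```python
def coulomb_stress_change(delta_shear_mpa: float,
                          delta_normal_mpa: float,
                          effective_friction: float = 0.4) -> float:
    """Static Coulomb failure stress change, in MPa.

    delta_shear_mpa:  shear stress change resolved in the fault's slip direction
    delta_normal_mpa: normal stress change (positive = unclamping the fault)
    effective_friction: mu', the friction coefficient reduced by pore pressure
    """
    return delta_shear_mpa + effective_friction * delta_normal_mpa

# A fault loaded 0.1 MPa in shear and unclamped by 0.05 MPa by summit inflation:
dcfs = coulomb_stress_change(0.1, 0.05)  # positive -> failure is promoted
```

The sign convention matters: magma pressurization can promote failure either by adding shear stress in the slip direction or by reducing the normal stress clamping a suitably oriented pre-existing fault.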

  13. Evaluation of the seismic-window theory for earthquake prediction

    SciTech Connect

    McNutt, M.; Heaton, T.H.

    1981-01-01

    The intent of this study was to determine whether earthquakes in the San Francisco Bay area respond to a fortnightly fluctuation in tidal amplitude. A correlation between seismic events and any tidal period would contribute to the understanding of the earthquake process and the stress regime within which faulting occurs. Correlations between the longer tidal periods and seismicity might provide a useful tool for earthquake prediction. The US Geological Survey (USGS) performed a computer evaluation of this prediction method. This article reports the results of the USGS study.

  14. Aftershocks Prediction In Italy: Estimation of Time-magnitude Distribution Model Parameters and Computation of Probabilities of Occurrence.

    NASA Astrophysics Data System (ADS)

    Lolli, B.; Gasperini, P.

    We analyzed the available instrumental catalogs of Italian earthquakes from 1960 to 1996 to compute the parameters of the time-magnitude distribution model proposed by Reasenberg and Jones (1989, 1994) and currently used to make aftershock predictions in California. We found that empirical corrections ranging from 0.3 (before 1976) to 0.5 magnitude units (between 1976 and 1980) are necessary to make the dataset homogeneous over the entire period. The estimated model parameters are quite stable with respect to mainshock magnitude and sequence detection algorithm, while their spatial variations suggest that regional estimates might predict the behavior of future sequences better than ones computed from the whole Italian dataset. In order to improve the goodness of fit for sequences including multiple mainshocks (like the one that occurred in Central Italy from September 1997 to May 1998) we developed a quasi-epidemic model (QETAS) consisting of the superposition of a small number of Omori processes originated by strong aftershocks. We found that the inclusion in the QETAS model of the shocks with magnitude larger than the mainshock magnitude minus one (which are usually located and sized in near real-time by the observatories) significantly improves the ability of the algorithm to predict sequence behavior.
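The Reasenberg-Jones (1989) model combines the Gutenberg-Richter and modified Omori laws into a rate λ(t, M) = 10^(a + b(Mm − M)) / (t + c)^p, from which aftershock probabilities follow by integrating the rate and assuming Poisson occurrence. A sketch of that calculation; the parameter values below are generic round numbers for illustration, not the Italian estimates from this study.

```python
import math

def rj_expected_count(a, b, c, p, m_main, m_min, t1, t2):
    """Expected number of aftershocks with M >= m_min in [t1, t2] (days)
    under the Reasenberg-Jones (1989) rate model
    lambda(t, M) = 10**(a + b*(m_main - M)) / (t + c)**p."""
    k = 10 ** (a + b * (m_main - m_min))
    if abs(p - 1.0) < 1e-9:  # the Omori integral has a log form when p = 1
        integral = math.log((t2 + c) / (t1 + c))
    else:
        integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return k * integral

def rj_probability(a, b, c, p, m_main, m_min, t1, t2):
    """P(at least one aftershock with M >= m_min in [t1, t2]), Poisson assumption."""
    return 1.0 - math.exp(-rj_expected_count(a, b, c, p, m_main, m_min, t1, t2))

# Illustrative parameters only: chance of an M>=5 aftershock in the first week
# after an M6.0 mainshock.
prob = rj_probability(a=-1.67, b=0.91, c=0.05, p=1.08,
                      m_main=6.0, m_min=5.0, t1=0.0, t2=7.0)
```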

  15. Multiscale approach to the predictability of earthquakes and of synthetic SOC sequences

    NASA Astrophysics Data System (ADS)

    Peresan, A.; Panza, G. F.

    2003-04-01

    The power-law scaling expressed by the Gutenberg-Richter (GR) law is the main argument in favour of the Self-Organised Criticality (SOC) of seismic phenomena. Nevertheless the limits of validity of the GR law and the phenomenology reproduced by the SOC models, as well as their consequences for earthquake predictability, still remain quite undefined. According to the Multiscale Seismicity (MS) model, the GR law describes adequately only the ensemble of earthquakes that are geometrically small with respect to the dimensions of the analysed region. The MS model and its implications for intermediate-term medium-range earthquake predictions are thus examined, considering both the seismicity observed in the Italian territory and the synthetic sequences of events generated by a SOC model. The predictability of the large events is evaluated by means of the algorithms CN and M8, based on a quantitative analysis of the seismic flow within a delimited region, which allow for the prediction of the earthquakes with magnitude greater than a fixed threshold Mo. Considering the application of CN and M8 to the Italian territory, we show that, in agreement with the MS model, these algorithms make use of the information carried by small and moderate earthquakes, following the GR law, to predict the strong earthquakes, which are infrequent and often arbitrarily considered characteristic events inside the regions delimited for prediction purposes. Similarly, the application of the algorithm CN for the prediction of the largest events in the synthetic SOC sequences, indicates that a certain predictability can be attained, when the MS model is taken into account. These results suggest that the similarity between the seismic flow and the SOC sequences goes beyond the average features of scale-invariance. 
    In fact, while the GR law describes an average feature of seismicity, the CN algorithm checks for deviations from that trend, which may characterise the sequence of events before the
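The Gutenberg-Richter law at the center of the MS model, log10 N(≥M) = a − bM, is usually fitted by estimating the b-value with Aki's (1965) maximum-likelihood formula (with Utsu's binning correction), a standard technique not specific to this paper. A minimal sketch with a small synthetic catalog:

```python
import math

def gr_b_value(magnitudes, m_c, dm=0.1):
    """Maximum-likelihood b-value of the Gutenberg-Richter law
    log10 N(>=M) = a - b*M (Aki 1965, with Utsu's binning correction).

    magnitudes: catalog magnitudes; only events with M >= m_c are used
    m_c: completeness magnitude of the catalog
    dm:  magnitude binning width
    """
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Tiny illustrative catalog (values invented for the example):
cat = [2.1, 2.3, 2.0, 2.8, 2.2, 3.1, 2.5, 2.0, 2.4, 2.6]
b = gr_b_value(cat, m_c=2.0)
```

Note the MS model's caveat applies here too: the fit is only meaningful for events that are geometrically small relative to the analysed region, so m_c and the region size must be chosen together.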

  16. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Rizza, M.; Ritz, J.-F.; Braucher, R.; Vassallo, R.; Prentice, C.; Mahan, S.; McGill, S.; Chauvet, A.; Marco, S.; Todbileg, M.; Demberel, S.; Bourles, D.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to east these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans, particularly well preserved in the arid environment of the Gobi region, allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr-1 along the WIB and EIB segments and ~0.5 mm yr-1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78-7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ~2500-5200 yr for past
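Under a characteristic-slip assumption, the recurrence interval is just the slip per event divided by the slip rate (5.2 m at ~1 mm/yr gives the ~5200 yr end of the quoted range), and Mw follows from the seismic moment M0 = μAD via Hanks and Kanamori (1979). A sketch of both; the rupture width and mean slip used for Mw are assumed values for illustration, only the 260-km length and the western offset/slip rate come from the abstract.

```python
import math

def recurrence_interval_yr(slip_per_event_m: float, slip_rate_mm_yr: float) -> float:
    """Mean recurrence interval implied by characteristic slip."""
    return slip_per_event_m * 1000.0 / slip_rate_mm_yr

def moment_magnitude(rupture_length_km, rupture_width_km, mean_slip_m,
                     rigidity_pa=3.0e10):
    """Mw from seismic moment M0 = mu * A * D (Hanks & Kanamori, 1979)."""
    m0 = rigidity_pa * (rupture_length_km * 1e3) * (rupture_width_km * 1e3) * mean_slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

# Western segments: ~5.2 m of 1957 slip at ~1 mm/yr (values from the abstract):
t_west = recurrence_interval_yr(5.2, 1.0)   # -> 5200 yr
# Assumed rupture geometry (hypothetical width and average slip):
mw = moment_magnitude(260.0, 20.0, 3.5)
```

With those assumed geometry values the sketch lands near Mw 7.8, inside the paper's re-estimated 7.78-7.95 range, but the width and mean slip are the free parameters that control that result.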

  17. Incorporating Love- and Rayleigh-wave magnitudes, unequal earthquake and explosion variance assumptions and interstation complexity for improved event screening

    SciTech Connect

    Anderson, Dale N; Bonner, Jessie L; Stroujkova, Anastasia; Shumway, Robert

    2009-01-01

    Our objective is to improve seismic event screening using the properties of surface waves. We are accomplishing this through (1) the development of a Love-wave magnitude formula that is complementary to the Russell (2006) formula for Rayleigh waves and (2) quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. We have applied the Ms (VMAX) analysis (Bonner et al., 2006) using both Love and Rayleigh waves to events in the Middle East and the Korean Peninsula. For the Middle East dataset consisting of approximately 100 events, the Love Ms (VMAX) is greater than the Rayleigh Ms (VMAX) estimated for individual stations for the majority of the events and azimuths, with the exception of the measurements for the smaller events from European stations to the northeast. It is unclear whether these smaller events suffer from magnitude bias for the Love waves or whether the paths, which include the Caspian and Mediterranean, have variable attenuation for Love and Rayleigh waves. For the Korean Peninsula, we have estimated Rayleigh- and Love-wave magnitudes for 31 earthquakes and two nuclear explosions, including the 25 May 2009 event. For 25 of the earthquakes, the network-averaged Love-wave magnitude is larger than the Rayleigh-wave estimate. For the 2009 nuclear explosion, the Love-wave Ms (VMAX) was 3.1 while the Rayleigh-wave magnitude was 3.6. We are also utilizing the potential of observed variances in Ms estimates that differ significantly between earthquake and explosion populations. We have considered two possible methods for incorporating unequal variances into the discrimination problem and compared the performance of various approaches on a population of 73 western United States earthquakes and 131 Nevada Test Site explosions. 
The approach proposes replacing the Ms component by Ms + a*σ, where σ denotes the interstation standard deviation obtained from the

  18. Predicted magnitudes and colors from cool-star model atmospheres

    NASA Astrophysics Data System (ADS)

    Johnson, H. R.; Steiman-Cameron, T. Y.

    1982-02-01

    An intercomparison of model stellar atmospheres and observations of real stars can lead to a better understanding of the relationship between the physical properties of stars and their observed radiative flux. In this spirit we have determined wide-band and narrow-band magnitudes and colors for a subset of models of K and M giant and supergiant stars selected from the grid of 40 models by Johnson, Bernat and Krupp (1980) (hereafter referred to as JBK). The 24 models selected have effective temperatures of 4000, 3800, 3600, 3400, 3200, 3000, 2750 and 2500 K and log g = 0, 1 or 2. Emergent energy fluxes (erg/ sq cm s A) were calculated at 9140 wavelengths for each model. These computed flux curves were folded through the transmission functions of Wing's 8-color system (Wing, 1971; White and Wing, 1978) and through Johnson's (1965) wide-band (BVRIJKLM) system. The calibration of the resultant magnitudes was made by using the absolute calibration of the flux curve of Vega by Schild, et al. (1971).
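The folding operation described above is standard synthetic photometry: multiply the model flux by the filter transmission, integrate over wavelength, and take −2.5 log10 of the ratio to the Vega flux treated the same way. A minimal sketch with invented fluxes and a toy filter (the real work used 9140 wavelength points per model):

```python
import math

def synthetic_magnitude(wavelengths, flux, transmission, vega_flux):
    """Synthetic magnitude of a model atmosphere relative to Vega:
    m = -2.5 * log10( integral(F*T dlambda) / integral(F_Vega*T dlambda) ),
    using simple trapezoidal integration over the sampled wavelengths."""
    def integrate(f):
        total = 0.0
        for i in range(len(wavelengths) - 1):
            dw = wavelengths[i + 1] - wavelengths[i]
            total += 0.5 * (f[i] + f[i + 1]) * dw
        return total

    folded = [f * t for f, t in zip(flux, transmission)]
    vega_folded = [f * t for f, t in zip(vega_flux, transmission)]
    return -2.5 * math.log10(integrate(folded) / integrate(vega_folded))

# Toy check: a star emitting 1% of Vega's in-band flux is 5 magnitudes fainter.
wl = [5000.0, 5500.0, 6000.0]        # wavelengths in Angstroms (illustrative)
t = [0.5, 1.0, 0.5]                  # toy filter transmission curve
vega = [1.0, 1.0, 1.0]               # flat reference flux (illustrative)
star = [0.01, 0.01, 0.01]
mag = synthetic_magnitude(wl, star, t, vega)
```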

  20. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred post-320 cal yr BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely prehistoric, event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic-hazard estimates of the maximum-magnitude background earthquake (an earthquake not associated with a known fault) for normal faults in the Intermountain West.

  1. Seismically active area monitoring by robust TIR satellite techniques: a sensitivity analysis on low magnitude earthquakes in Greece and Turkey

    NASA Astrophysics Data System (ADS)

    Corrado, R.; Caputo, R.; Filizzola, C.; Pergola, N.; Pietrapertosa, C.; Tramutoli, V.

    2005-01-01

    Space-time TIR anomalies, observed from months to weeks before earthquake occurrence, have been suggested by several authors as pre-seismic signals. Up to now, such a claimed connection of TIR emission with seismic activity has been considered with some caution by the scientific community, mainly because of the insufficiency of the validation data-sets and the scarce importance attached by those authors to other causes (e.g. meteorological) that, rather than seismic activity, could be responsible for the observed TIR signal fluctuations. A robust satellite data analysis technique (RAT) has recently been proposed which, thanks to a well-founded definition of TIR anomaly, seems able to identify anomalous space-time TIR signal transients even in very variable observational (satellite view angle, land topography and coverage, etc.) and natural (e.g. meteorological) conditions. Its possible application to satellite TIR surveys in seismically active regions has already been tested in the case of several earthquakes (Irpinia: 23 November 1980, Athens: 7 September 1999, Izmit: 17 August 1999) of magnitude higher than 5.5 by using a validation/confutation approach, devoted to verifying the presence/absence of anomalous space-time TIR transients in the presence/absence of seismic activity. In these cases, a magnitude threshold (generally M<5) was arbitrarily chosen in order to identify seismically unperturbed periods for confutation purposes. In this work, 9 medium-low magnitude (4<M<5) earthquakes which occurred in Greece and Turkey have been analyzed in order to verify if, even in these cases, anomalous TIR transients can be observed. The analysis, which was performed using 8 years of Meteosat TIR observations, demonstrated that anomalous TIR transients can be observed even in the presence of medium-low magnitude earthquakes (4<M<5). As far as earthquake occurrence is concerned, such a result suggests
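The core of the RAT approach is a normalized anomaly index: the current TIR signal at a pixel and time slot is compared against the multi-year mean and standard deviation for that same pixel and slot, so that view-angle, topographic and seasonal effects divide out. The sketch below is a simplified, single-pixel illustration in that spirit; the brightness temperatures and the >2 threshold are assumed example values, not from the study.

```python
import math

def robust_tir_index(current_signal, historical_signals):
    """Normalized TIR anomaly index in the spirit of the RAT approach:
    (current signal - historical mean) / historical standard deviation,
    computed per pixel and per time slot from a multi-year record."""
    n = len(historical_signals)
    mean = sum(historical_signals) / n
    var = sum((s - mean) ** 2 for s in historical_signals) / n
    std = math.sqrt(var)
    return (current_signal - mean) / std

# Illustrative brightness temperatures (K) for one pixel/time slot over 8 years:
history = [291.0, 292.5, 290.8, 291.7, 292.0, 291.2, 291.9, 292.1]
index = robust_tir_index(294.5, history)
flagged = index > 2.0  # assumed example threshold for an anomalous transient
```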

  2. The marine-geological fingerprint of the 2011 Magnitude 9 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Strasser, M.; Ikehara, K.; Usami, K.; Kanamatsu, T.; McHugh, C. M.

    2015-12-01

    The 2011 Tohoku-oki earthquake was the first great subduction zone earthquake, for which the entire activity was recorded by offshore geophysical, seismological and geodetic instruments and for which direct observation for sediment re-suspension and re-deposition was documented across the entire margin. Furthermore, the resulting tsunami and subsequent tragic incident at Fukushima nuclear power station, has induced short-lived radionuclides which can be used for tracer experiments in the natural offshore sedimentary systems. Here we present a summary on the present knowledge on the 2011 event beds in the offshore environment and integrate data from offshore instruments with sedimentological, geochemical and physical property data on core samples to report various types of event deposits resulting from earthquake-triggered submarine landslides, downslope sediment transport by turbidity currents, surficial sediment remobilization from the agitation and resuspension of unconsolidated surface sediments by the earthquake ground motion, as well as tsunami-induced sediment transport from shallow waters to the deep sea. The rapidly growing data set from offshore Tohoku further allows for discussion about (i) what we can learn from this well-documented event for general submarine paleoseismology aspects and (ii) potential of the Japan Trench to use the geological record of the Japan Trench to reconstruct a long-term history of great subduction zone earthquakes.

  3. A Magnitude 7.1 Earthquake in the Tacoma Fault Zone-A Plausible Scenario for the Southern Puget Sound Region, Washington

    USGS Publications Warehouse

    Gomberg, Joan; Sherrod, Brian; Weaver, Craig; Frankel, Art

    2010-01-01

    The U.S. Geological Survey and cooperating scientists have recently assessed the effects of a magnitude 7.1 earthquake on the Tacoma Fault Zone in Pierce County, Washington. A quake of comparable magnitude struck the southern Puget Sound region about 1,100 years ago, and similar earthquakes are almost certain to occur in the future. The region is now home to hundreds of thousands of people, who would be at risk from the shaking, liquefaction, landsliding, and tsunamis caused by such an earthquake. The modeled effects of this scenario earthquake will help emergency planners and residents of the region prepare for future quakes.

  4. Scientific goals of the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Thatcher, W.

    1988-01-01

    Several unique circumstances of the Parkfield experiment provide unprecedented opportunities for significant advances in understanding the mechanics of earthquakes. To our knowledge, there is no other seismic zone anywhere where the time, place, and magnitude of an impending earthquake are specified as precisely. Moreover, the epicentral region is located on continental crust, is readily accessible, and can support a range of dense monitoring networks that are sited either on or very close to the expected rupture surface. As a result, the networks located at Parkfield are several orders of magnitude more sensitive than any previously deployed for monitoring earthquake precursors (pre-earthquake changes in strain, seismicity, and other geophysical parameters). In this respect the design of the Parkfield experiment resembles the rationale for constructing a new, more powerful nuclear particle accelerator: in both cases increased capabilities will test existing theories, reveal new phenomena, and suggest new research directions.

  5. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse

    ,

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rates, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflict in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  6. Strong ground motion prediction for southwestern China from small earthquake records

    NASA Astrophysics Data System (ADS)

    Tao, Z. R.; Tao, X. X.; Cui, A. P.

    2015-09-01

    For regions lacking strong ground motion records, a method is developed to predict strong ground motion from small earthquake records of local broadband digital earthquake networks. The Sichuan and Yunnan regions, located in southwestern China, are selected as the targets. Five regional source and crustal medium parameters are inverted by a micro-Genetic Algorithm. These parameters are adopted to predict strong ground motion for moment magnitudes (Mw) 5.0, 6.0 and 7.0. Strong ground motion data are compared with the results; most of the predictions pass through the cluster of data points well, except the case of Mw 7.0 in the Sichuan region, which shows an obviously slow attenuation. For further application, this result is adopted in probabilistic seismic hazard assessment (PSHA) and near-field strong ground motion synthesis of the Wenchuan Earthquake.
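    A micro-Genetic Algorithm of the kind named above is a small-population GA that typically relies on elitism and periodic restarts instead of mutation. The sketch below is a minimal illustration of that strategy, not the authors' implementation; the population size, restart criterion, crossover scheme and quadratic test misfit are all illustrative assumptions.

```python
import random

def micro_ga(misfit, bounds, pop_size=5, generations=200, seed=42):
    """Toy micro-Genetic Algorithm: tiny population, elitism, uniform
    crossover with the elite, and random restarts when diversity is lost.
    All parameter choices here are illustrative placeholders."""
    rng = random.Random(seed)

    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    pop = [rand_ind() for _ in range(pop_size)]
    best = min(pop, key=misfit)
    for _ in range(generations):
        pop.sort(key=misfit)
        if misfit(pop[0]) < misfit(best):
            best = pop[0][:]
        # Restart around the elite once the population has converged.
        spread = max(abs(a - b) for ind in pop[1:] for a, b in zip(ind, pop[0]))
        if spread < 1e-3:
            pop = [pop[0]] + [rand_ind() for _ in range(pop_size - 1)]
            continue
        # Uniform crossover of the elite with each other individual
        # (micro-GAs commonly use no mutation operator at all).
        children = [pop[0]]
        for ind in pop[1:]:
            children.append([g1 if rng.random() < 0.5 else g2
                             for g1, g2 in zip(pop[0], ind)])
        pop = children
    return best
```

Because of elitism, the best misfit can only improve over the generations; the restarts supply the fresh genetic material that a micro-GA's tiny population would otherwise lack.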

  7. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Prentice, Carol S.; Rizza, M.; Ritz, J.F.; Baucher, R.; Vassallo, R.; Mahan, S.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. West to east, these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans—particularly well preserved in the arid environment of the Gobi region—allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr–1 along the WIB and EIB segments and ~0.5 mm yr–1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78–7.95. Combining our slip rate estimates and the slip distribution per event, we also determined a mean recurrence interval of ~2500

  8. A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.

    2015-12-01

    Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated for observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
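    A trilinear geometrical-spreading function of the kind described above can be sketched as a piecewise-linear model in log10 amplitude versus log10 distance, with a near-flat middle segment representing the Moho-bounce range. The hinge distances and decay rates below are illustrative placeholders, not the calibrated Alberta values.

```python
import math

def trilinear_spreading(r, r1=70.0, r2=140.0, b1=-1.3, b2=0.1, b3=-0.5):
    """Trilinear geometrical-spreading term Z(r) in log10 amplitude units.

    r1 and r2 (km) mark the transition distances (e.g. the onset and end
    of the Moho-bounce range) and b1-b3 are the per-segment decay rates.
    The segments are chained so that Z(r) is continuous at the hinges."""
    if r <= r1:
        return b1 * math.log10(r)
    elif r <= r2:
        return b1 * math.log10(r1) + b2 * math.log10(r / r1)
    else:
        return (b1 * math.log10(r1) + b2 * math.log10(r2 / r1)
                + b3 * math.log10(r / r2))
```

With a slightly positive middle-segment rate (b2 > 0), amplitudes recover mildly across the Moho-bounce range before resuming their decay, which is the behavior the trilinear form is designed to capture.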

  9. Fuzzy Discrimination Analysis Method for Earthquake Energy K-Class Estimation with respect to Local Magnitude Scale

    NASA Astrophysics Data System (ADS)

    Mumladze, T.; Gachechiladze, J.

    2014-12-01

    The purpose of the present study is to establish a relation between earthquake energy K-class (the relative energy characteristic), defined as the logarithm of the seismic wave energy E in joules obtained from analog station data, and the local (Richter) magnitude ML obtained from digital seismograms. Because these data contain uncertainties, the effective tools of fuzzy discrimination analysis are suggested for subjective estimates. Application of fuzzy analysis methods is an innovative approach to solving the complicated problem of constructing a uniform energy scale through the whole earthquake catalogue; it also avoids many of the data collection problems associated with probabilistic approaches, and it can handle incomplete information, partial inconsistency and fuzzy descriptions of data in a natural way. Another important task is to obtain the frequency-magnitude relation based on the K parameter, calculate the Gutenberg-Richter parameters (a, b) and examine seismic activity in Georgia. Earthquake data files are used for the periods 1985-1990 and 2004-2009 for the area φ = 41°-43.5°, λ = 41°-47°.
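    The Gutenberg-Richter parameters (a, b) named above are commonly estimated with the Aki/Utsu maximum-likelihood formula rather than by least squares on binned counts. A minimal sketch follows; the catalogue, completeness magnitude and bin width in the usage example are hypothetical, not the Georgian data.

```python
import math

def gutenberg_richter_b(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes at or above the
    completeness magnitude m_c, with the standard bin-width correction
    dm/2 (pass dm=0 for continuous magnitudes)."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

def gutenberg_richter_a(mags, m_c, b):
    """a-value from log10 N = a - b*M, evaluated with the count N of
    events at or above m_c."""
    n = sum(1 for x in mags if x >= m_c)
    return math.log10(n) + b * m_c
```

For a catalogue whose magnitudes above m_c are exponentially distributed (the Gutenberg-Richter assumption), the estimator recovers the underlying b-value to within sampling error.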

  10. Earthquake prediction rumors can help in building earthquake awareness: the case of May the 11th 2011 in Rome (Italy)

    NASA Astrophysics Data System (ADS)

    Amato, A.; Arcoraci, L.; Casarotti, E.; Cultrera, G.; Di Stefano, R.; Margheriti, L.; Nostro, C.; Selvaggi, G.; May-11 Team

    2012-04-01

    Banner headlines in an Italian newspaper on May 11, 2011 read: "Absence boom in offices: the urban legend in Rome becomes psychosis". This was the effect of a large-magnitude earthquake prediction in Rome for May 11, 2011. This prediction was never officially released, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this increased the credibility of the earthquake prediction. Given the echo of this earthquake prediction, INGV decided to organize on May 11 (the same day the earthquake was predicted to happen) an Open Day in its headquarters in Rome to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded by a press conference two days before, attended by about 40 journalists from newspapers, local and national TV, press agencies and web news magazines. Hundreds of articles appeared in the following two days, advertising the May 11 Open Day. On May 11 the INGV headquarters was peacefully invaded by over 3,000 visitors from 9am to 9pm: families, students, civil protection groups and many journalists. The program included conferences on a wide variety of subjects (from the social impact of rumors to seismic risk reduction) and distribution of books and brochures, in addition to several activities: meetings with INGV researchers to discuss scientific issues, visits to the seismic monitoring room (open 24/7 all year), and guided tours through interactive exhibitions on earthquakes and the Earth's deep structure. During the same day, thirteen new videos were also posted on our youtube/INGVterremoti channel to explain the earthquake process and hazard, and to provide periodic real-time updates on seismicity in Italy. On May 11 no large earthquake happened in Italy. The initiative, built up in a few weeks, had a very large feedback

  11. Foreshock Sequences and Short-Term Earthquake Predictability on East Pacific Rise Transform Faults

    NASA Astrophysics Data System (ADS)

    McGuire, J. J.; Boettcher, M. S.; Jordan, T. H.

    2004-12-01

    A predominant view of continental seismicity postulates that all earthquakes initiate in a similar manner regardless of their eventual size and that earthquake triggering can be described by an Epidemic-Type Aftershock Sequence (ETAS) model [e.g. Ogata, 1988; Helmstetter and Sornette, 2002]. These null hypotheses cannot be rejected as an explanation for the relative abundances of foreshocks and aftershocks to large earthquakes in California [Helmstetter et al., 2003]. An alternative location for testing this hypothesis is mid-ocean ridge transform faults (RTFs), which have many properties that are distinct from continental transform faults: most plate motion is accommodated aseismically, many large earthquakes are slow events enriched in low-frequency radiation, and the seismicity shows depleted aftershock sequences and high foreshock activity. Here we use the 1996-2001 NOAA-PMEL hydroacoustic seismicity catalog for equatorial East Pacific Rise transform faults to show that the foreshock/aftershock ratio is two orders of magnitude greater than the ETAS prediction based on global RTF aftershock abundances. We can thus reject the null hypothesis that there is no fundamental distinction between foreshocks, mainshocks, and aftershocks on RTFs. We further demonstrate (retrospectively) that foreshock sequences on East Pacific Rise transform faults can be used to achieve statistically significant short-term prediction of large earthquakes (magnitude ≥ 5.4) with good spatial (15-km) and temporal (1-hr) resolution using the NOAA-PMEL catalogs. Our very simplistic approach produces a large number of false alarms, but it successfully predicts the majority (70%) of M≥5.4 earthquakes while covering only a tiny fraction (0.15%) of the total potential space-time volume with alarms. Therefore, it achieves a large probability gain (about a factor of 500) over random guessing, despite not using any near-field data. The predictability of large EPR transform earthquakes suggests
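    The probability gain quoted above follows directly from the hit rate and the alarm coverage; a one-line check using the abstract's own numbers:

```python
def probability_gain(hit_rate, alarm_fraction):
    """Probability gain of an alarm strategy over random guessing: the
    fraction of target events that fall inside alarms divided by the
    fraction of the space-time volume covered by alarms."""
    return hit_rate / alarm_fraction

# The abstract's numbers: 70% of M >= 5.4 events predicted while alarms
# cover only 0.15% of the space-time volume.
gain = probability_gain(0.70, 0.0015)  # -> about 467, i.e. "a factor of ~500"
```

The large gain despite many false alarms is exactly the trade-off the abstract describes: alarms occupy a tiny space-time fraction, so even a modest hit rate yields a gain of hundreds.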

  12. Discrimination of DPRK M5.1 February 12th, 2013 Earthquake as Nuclear Test Using Analysis of Magnitude, Rupture Duration and Ratio of Seismic Energy and Moment

    NASA Astrophysics Data System (ADS)

    Salomo Sianipar, Dimas; Subakti, Hendri; Pribadi, Sugeng

    2015-04-01

    On the morning of February 12th, 2013, at 02:57 UTC, an earthquake occurred with its epicenter in the region of North Korea, precisely around the Sungjibaegam Mountains. Monitoring stations of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) and some other seismic networks detected this shallow seismic event. Analyzing seismograms recorded after this event can discriminate between a natural earthquake and an explosion. Zhao et al. (2014) successfully discriminated this seismic event of the 2013 North Korea nuclear test from ordinary earthquakes based on network P/S spectral ratios using broadband regional seismic data recorded in China, South Korea and Japan. The P/S-type spectral ratios were powerful discriminants to separate explosions from earthquakes (Zhao et al., 2014). Pribadi et al. (2014) characterized 27 earthquake-generated tsunamis (tsunamigenic earthquakes or tsunami earthquakes) from 1991 to 2012 in Indonesia using W-phase inversion analysis, the ratio between the seismic energy (E) and the seismic moment (Mo), the moment magnitude (Mw), the rupture duration (To) and the distance of the hypocenter to the trench. Some of these methods were also used by us to characterize the nuclear test event. We discriminate this DPRK M5.1 February 12th, 2013 earthquake from a natural earthquake using analysis of the magnitudes mb, Ms and Mw, the ratio of seismic energy to moment, and the rupture duration. We used the waveform data of the seismicity within a radius of 5 degrees from the DPRK M5.1 February 12th, 2013 epicenter at 41.29, 129.07 (Zhang and Wen, 2013) from 2006 to 2014 with magnitude M ≥ 4.0. We conclude that this earthquake was a shallow seismic event with explosion characteristics and can be discriminated from a natural or tectonic earthquake. Keywords: North Korean nuclear test, magnitudes mb, Ms, Mw, ratio between seismic energy and moment, rupture duration

  13. Imaging of the Rupture Zone of the Magnitude 6.2 Karonga Earthquake of 2009 using Electrical Resistivity Surveys

    NASA Astrophysics Data System (ADS)

    Clappe, B.; Hull, C. D.; Dawson, S.; Johnson, T.; Laó-Dávila, D. A.; Abdelsalam, M. G.; Chindandali, P. R. N.; Nyalugwe, V.; Atekwana, E. A.; Salima, J.

    2015-12-01

    The 2009 Karonga earthquakes occurred in an area where active faults had not previously been known to exist. Over 5000 buildings were destroyed in the area and at least 4 people lost their lives as a direct result of the magnitude 6.2 earthquake of December 19th. The earthquake swarms occurred in the hanging wall of the main Livingstone border fault along segmented, west-dipping faults that are synthetic to the Livingstone fault. The faults have a general trend of 290-350 degrees. Electrical resistivity surveys were conducted to investigate the nature of the known rupture and seismogenic zones that resulted from the 2009 earthquakes in the Karonga, Malawi area. The goal of this study was to produce high-resolution images below the epicenter and nearby areas of liquefaction to determine changes in conductivity/resistivity signatures in the subsurface. An Iris Syscal Pro was utilized to conduct dipole-dipole resistivity measurements below the soil surface at farmlands at 6 locations. Each transect was 710 meters long and had an electrode spacing of 10 meters. RES2DINV software was used to create 2-D inversion images of the rupture and seismogenic zones. We were able to observe three distinct geoelectrical layers to the north of the rupture zone and two to the south, with the discontinuity between the two marked by the location of the surface rupture. The rupture zone is characterized by an ~80-meter-wide, 5-m-thick area of enhanced conductivity underlain by a more resistive layer dipping west. We interpret this to be the result of fine-grained sands and silts brought up from depth to near the surface by shearing along the fault rupture or by liquefaction. Electrical resistivity surveys are valuable, yet under-utilized, tools for imaging near-surface effects of earthquakes.

  14. The 7.2 magnitude earthquake, November 1975, Island of Hawaii

    USGS Publications Warehouse

    1976-01-01

    It was centered about 5 km beneath the Kalapana area on the southeastern coast of Hawaii, the largest island of the Hawaiian chain (Fig. 1), and was preceded by numerous foreshocks. The event was accompanied, or followed shortly, by a tsunami, large-scale ground movements, hundreds of aftershocks, and an eruption in the summit caldera of Kilauea Volcano. The earthquake and the tsunami it generated produced about 4.1 million dollars in property damage, and the tsunami caused two deaths. Although we have some preliminary findings about the cause and effects of the earthquake, detailed scientific investigations will take many more months to complete. This article is condensed from a recent preliminary report (Tilling and others, 1976).

  15. Coseismic and postseismic velocity changes detected by Passive Image Interferometry: Comparison of five strong earthquakes (magnitudes 6.6 - 6.9) and one great earthquake (magnitude 9.0) in Japan

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Wegler, Ulrich; Shiomi, Katsuhiko; Nakahara, Hisashi

    2015-04-01

    We analyzed ambient seismic noise near five strong onshore crustal earthquakes in Japan as well as for the great Tohoku offshore earthquake. Green's functions were computed for station pairs (cross-correlations) as well as for different components of a single station (single-station cross-correlations) using a filter bank of five different bandpass filters between 0.125 Hz and 4 Hz. Noise correlations for different time periods were treated as repeated measurements, and coda wave interferometry was applied to estimate coseismic as well as postseismic velocity changes. We used all possible component combinations and analyzed periods from a minimum of 3.5 years (Iwate region) up to 8.25 years (Niigata region). Generally, the single-station cross-correlations and station pair cross-correlations show similar results, but the single-station method is more reliable for higher frequencies (f > 0.5 Hz), whereas the station pair method is more reliable for lower frequencies (f < 0.5 Hz). For all six earthquakes we found a similar behavior of the velocity change curve as a function of time. We observe coseismic velocity drops at the times of the respective earthquakes followed by postseismic recovery for all earthquakes. Additionally, most stations show a seasonal velocity variation. This seasonal variation was removed by curve fitting, and only velocity changes of tectonic origin were analyzed in our study. The postseismic velocity changes can be described by an exponential recovery model, where for all areas about half of the coseismic velocity drop recovers on a time scale of the order of half a year. The other half of the coseismic velocity drop remains as a permanent change. The coseismic velocity drops are stronger at higher frequencies for all earthquakes. We assume that these changes are concentrated in the superficial layers but for some stations can also reach a few kilometers of depth. The coseismic velocity drops for the strong earthquakes (magnitudes 6.6 - 6

  16. Predicting earthquakes by analyzing accelerating precursory seismic activity

    USGS Publications Warehouse

    Varnes, D.J.

    1989-01-01

    During 11 sequences of earthquakes that in retrospect can be classed as foreshocks, the accelerating rate at which seismic moment is released follows, at least in part, a simple equation. This equation (1) is dΣ/dt = C/(t_f − t)^n, where Σ(t) is the cumulative sum until time t of the square roots of the seismic moments of individual foreshocks computed from reported magnitudes; C and n are constants; and t_f is a limiting time at which the rate of seismic moment accumulation becomes infinite. The possible time of a major foreshock or main shock, t_f, is found by the best fit of equation (1), or its integral, to step-like plots of Σ versus time, using successive estimates of t_f in linearized regressions until the maximum coefficient of determination, r², is obtained. Analyzed examples include sequences preceding earthquakes at Cremasta, Greece, 2/5/66; Haicheng, China, 2/4/75; Oaxaca, Mexico, 11/29/78; Petatlan, Mexico, 3/14/79; and Central Chile, 3/3/85. In 29 estimates of main-shock time, made as the sequences developed, the errors in 20 were less than one-half, and in 9 less than one-tenth, the time remaining between the time of the last data used and the main shock. Some precursory sequences, or parts of them, yield no solution. Two sequences appear to include in their first parts the aftershocks of a previous event; plots using the integral of equation (1) show that the sequences are easily separable into aftershock and foreshock segments. Synthetic seismic sequences of shocks at equal time intervals were constructed to follow equation (1), using four values of n. In each series the resulting distributions of magnitudes closely follow the linear Gutenberg-Richter relation log N = a − bM, and the product of n and b for each series is the same constant. In various forms and for decades, equation (1) has been used successfully to predict failure times of stressed metals and ceramics, landslides in soil and rock slopes, and volcanic
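    The fitting procedure described above — successive trial values of t_f in linearized regressions, keeping the fit with the maximum coefficient of determination r² — can be sketched as follows, assuming the rate form of equation (1), which is log-linear for a fixed t_f. The grid of trial failure times and the synthetic data in the test are illustrative.

```python
import math

def fit_time_to_failure(times, rates, tf_grid):
    """Grid-search fit of dS/dt = C/(t_f - t)^n.  For each trial failure
    time t_f the model is linearized as
        log10(rate) = log10(C) - n * log10(t_f - t),
    and the t_f giving the highest r^2 is kept.
    Returns (t_f, C, n, r2) for the best trial, or None."""
    best = None
    for tf in tf_grid:
        if tf <= max(times):       # t_f must lie beyond the data
            continue
        x = [math.log10(tf - t) for t in times]
        y = [math.log10(r) for r in rates]
        m = len(x)
        mx, my = sum(x) / m, sum(y) / m
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        syy = sum((yi - my) ** 2 for yi in y)
        slope = sxy / sxx
        intercept = my - slope * mx
        r2 = sxy * sxy / (sxx * syy)   # coefficient of determination
        if best is None or r2 > best[3]:
            best = (tf, 10 ** intercept, -slope, r2)
    return best
```

With synthetic rates generated from equation (1) itself, the grid search recovers the true t_f exactly whenever it lies on the grid, since the log-log regression is then a perfect line (r² = 1).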

  17. Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults.

    PubMed

    McGuire, Jeffrey J; Boettcher, Margaret S; Jordan, Thomas H

    2005-03-24

    East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion. PMID:15791246

  18. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed Central

    Aki, K

    1996-01-01

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q-1 and conductivity at the lower crust support the hypothesis that coda Q-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  19. Application of decision trees to the analysis of soil radon data for earthquake prediction.

    PubMed

    Zmazek, B; Todorovski, L; Dzeroski, S; Vaupotic, J; Kobal, I

    2003-06-01

    Different regression methods have been used to predict radon concentration in soil gas on the basis of environmental data, i.e. barometric pressure, soil temperature, air temperature and rainfall. Analyses of the radon data from three stations in the Krsko basin, Slovenia, have shown that model trees outperform other regression methods. A model has been built which predicts radon concentration with a correlation of 0.8, provided it is influenced only by the environmental parameters. In periods with seismic activity this correlation is much lower. This decrease in predictive accuracy appears 1-7 days before earthquakes with local magnitude 0.8-3.3.
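    The anomaly criterion described above — a drop in the model's predictive accuracy before earthquakes — can be sketched as a sliding-window correlation between observed and model-predicted radon. The window length and threshold below are placeholders, and plain Python stands in for the model trees used in the paper.

```python
import math

def pearson_r(a, b):
    """Pearson correlation coefficient (assumes non-constant windows)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def flag_anomalies(observed, predicted, window=7, threshold=0.5):
    """Return end-indices of sliding windows where the observed-vs-predicted
    correlation drops below `threshold` -- the idea that a loss of
    predictive accuracy may precede local earthquakes.  The window length
    and threshold here are illustrative, not the paper's values."""
    flags = []
    for i in range(window, len(observed) + 1):
        r = pearson_r(observed[i - window:i], predicted[i - window:i])
        if r < threshold:
            flags.append(i - 1)
    return flags
```

While the model tracks the observations (environmentally driven periods), the correlation stays high and nothing is flagged; once the series decorrelate, the windows covering that span are flagged.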

  20. Prediction of modified Mercalli intensity from PGA, PGV, moment magnitude, and epicentral distance using several nonlinear statistical algorithms

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Hurtado, Jorge E.; Bedoya-Ruíz, Daniel Alveiro

    2012-07-01

    Despite technological advances in seismic instrumentation, the assessment of the intensity of an earthquake using an observational scale as given, for example, by the modified Mercalli intensity scale is highly useful for practical purposes. In order to link instrumental data extracted from the acceleration record of an earthquake, such as peak ground velocity, epicentral distance, and moment magnitude, on the one hand, and the qualitative modified Mercalli intensity scale on the other, simple statistical regression has generally been employed. In this paper, we employ three methods of nonlinear regression, namely support vector regression, multilayer perceptrons, and genetic programming, in order to find a functional dependence between the instrumental records and the modified Mercalli intensity scale. The proposed methods predict the intensity of an earthquake while dealing with the nonlinearity and the noise inherent in the data. The nonlinear regressions, with good estimation results, have been performed using the "Did You Feel It?" database of the US Geological Survey and the database of the Center for Engineering Strong Motion Data for the California region.
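    As a baseline for the nonlinear methods discussed above, a simple one-variable least-squares fit of MMI against log10(PGA) can be sketched as follows. The functional form and any coefficients it produces are illustrative, not the published regressions or the paper's multivariate models.

```python
import math

def fit_mmi_vs_log_pga(pga, mmi):
    """Ordinary least squares for MMI = c0 + c1 * log10(PGA): the simple
    linear baseline that nonlinear regressors (SVR, MLPs, genetic
    programming) aim to improve on.  Units and coefficient values are
    whatever the supplied data dictate."""
    x = [math.log10(p) for p in pga]
    n = len(x)
    mx, my = sum(x) / n, sum(mmi) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, mmi))
    c1 = sxy / sxx
    c0 = my - c1 * mx
    return c0, c1
```

On data that are exactly linear in log10(PGA) the fit recovers the coefficients exactly; the paper's point is that real intensity data are noisy and nonlinear, which is where the three nonlinear methods earn their keep.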

  1. Sun-earth environment study to understand earthquake prediction

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.

    2007-05-01

    Earthquake prediction may be possible by looking at the location of active sunspots before they direct energy towards the Earth. Earth is a restless planet, and the restlessness occasionally turns deadly. Of all natural hazards, earthquakes are the most feared. For centuries scientists working in seismically active regions have noted premonitory signals. Changes in the thermosphere, ionosphere, atmosphere and hydrosphere are noted before changes in the geosphere. The historical records talk of changes of the water level in wells, of strange weather, of ground-hugging fog, and of unusual behaviour of animals (due to changes in the magnetic field of the earth) that seem to feel the approach of a major earthquake. With the advent of modern science and technology, the understanding of these pre-earthquake signals has become strong enough to develop a methodology of earthquake prediction. A correlation of earth-directed coronal mass ejections (CMEs) from active sunspots has been developed as a precursor of earthquakes. Occasional changes in the local magnetic field and planetary indices (Kp values) in the lower atmosphere are accompanied by the formation of haze and a reduction of moisture in the air. Large patches, often tens to hundreds of thousands of square kilometres in size, are seen in night-time infrared satellite images where the land surface temperature seems to fluctuate rapidly. Perturbations in the ionosphere at 90-120 km altitude have been observed before the occurrence of earthquakes. These changes affect the transmission of radio waves, and radio blackouts have been observed due to CMEs. Another heliophysical parameter, electron flux (Eflux), has been monitored before the occurrence of earthquakes. More than a hundred case studies show that before the occurrence of earthquakes the atmospheric temperature increases and then suddenly drops. These changes are being monitored by using the Solar and Heliospheric Observatory

  2. Paleomagnetic Definition of Crustal Segmentation, Quaternary Block Rotations and Limits on Earthquake Magnitudes in Northwestern Metropolitan Los Angeles

    NASA Astrophysics Data System (ADS)

    Levi, S.; Yeats, R. S.; Nabelek, J.

    2004-12-01

    Paleomagnetic studies of the Pliocene-Quaternary Saugus Formation, in the San Fernando Valley and east Ventura Basin, show that the crust is segmented into small domains, 10-20 km in linear dimension, identified by rotation of reverse-fault blocks. Two domains, southwest of and adjacent to the San Gabriel fault, are rotated clockwise: 1) the Magic Mountain domain, 30 +/- 5 degrees, and 2) the Merrick syncline domain, 34 +/- 6 degrees. The Magic Mountain domain has rotated since 1 Ma. Both rotated sections occur in the hanging walls of active reverse faults: the Santa Susana and San Fernando faults, respectively. Two additional domains are unrotated: 1) the Van Norman Lake domain, directly south of the Santa Susana fault, and 2) the Soledad Canyon domain in the San Gabriel block immediately across the San Gabriel fault from Magic Mountain, suggesting that the San Gabriel fault might be a domain boundary. Plio-Pleistocene fragmentation and clockwise rotations continue at present, based on geodetic data, and represent crustal response to diffuse, oblique dextral shearing within the San Andreas fault system. The horizontal dimensions of the blocks are similar to the thickness of the seismogenic layer. The maximum magnitude of an earthquake based on this size of blocks is Mw = 6.7, comparable to the 1971 San Fernando and 1994 Northridge earthquakes and consistent with paleoseismic trenching and surface ruptures of the 1971 earthquake. The paleomagnetic results suggest that the blocks have retained their configuration for the past ~0.8 million years. It is unlikely that multiple blocks in the study area combined to trigger much larger shocks during this period, in contrast to adjacent regions where events with magnitudes greater than 7 have been postulated based on paleoseismic excavations.
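    A magnitude cap tied to block size follows from the standard Hanks-Kanamori moment-magnitude relation applied to a rupture bounded by the block dimensions. The sketch below is illustrative only: the 15 km x 15 km rupture area, ~2 m average slip, and 30 GPa rigidity are assumed round numbers, not values taken from this study.

    ```python
    import math

    def moment_magnitude(length_m, width_m, slip_m, rigidity_pa=3.0e10):
        """Moment magnitude Mw from rupture dimensions (Hanks & Kanamori, 1979).

        Seismic moment: M0 = mu * A * D  (in N*m)
        Magnitude:      Mw = (2/3) * (log10(M0) - 9.1)
        """
        m0 = rigidity_pa * length_m * width_m * slip_m
        return (2.0 / 3.0) * (math.log10(m0) - 9.1)

    # hypothetical block rupturing its full extent: 15 km x 15 km, ~2 m average slip
    mw = moment_magnitude(15e3, 15e3, 2.0)
    print(round(mw, 1))
    ```

    With these assumptions the result lands near Mw 6.7; because Mw depends only logarithmically on the assumed slip and area, a block of this size places a fairly robust ceiling on magnitude.
    
    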

  3. The effect of earth rheology and ice-sheet size on fault slip and magnitude of postglacial earthquakes

    NASA Astrophysics Data System (ADS)

    Steffen, Rebekka; Wu, Patrick; Steffen, Holger; Eaton, David W.

    2014-02-01

    Moderate past and present-day seismicity is observed in formerly deglaciated regions of North America and Europe. An understanding of the occurrence of these earthquakes is important for estimating the seismic risk within these areas as well as areas affected by recent ice-sheet melting, in particular Greenland and Antarctica. We have developed a new finite-element approach that allows us to estimate the fault throw for areas once covered by a continental ice sheet. The simulation is initialized by loading a two-dimensional earth model with an ice sheet. The model incorporates a stress field consisting of rebound, horizontal background and vertical stresses, as well as a fault that can accommodate slip. The sensitivity of fault throw and activation time is tested with respect to lithospheric and crustal thickness, viscosity structure of upper and lower mantle, and ice-sheet thickness and width, as well as fault location and angle. Single-event seismic displacements of up to 18.5 m are obtained, approximately equivalent to an earthquake with a moment magnitude of 8.5. The thicknesses of the crust and lithosphere are major parameters affecting the total magnitude of fault slip, whereas the size of the ice sheet primarily affects the activation time. Most faults start to move close to the end of deglaciation, and movement in our simulations typically stops after one thrust/reverse earthquake. However, in our simulations faults with a dip of 60° also show several fault movements before and after the end of deglaciation.

  4. Shaky grounds of earthquake hazard assessment, forecasting, and prediction

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2012-12-01

    The quality of the fit of a trivial or, conversely, delicately designed model to observed natural phenomena is the fundamental pillar of any forecasting, including seismic hazard assessment, earthquake forecasting, and prediction. Using precise mathematical and logical systems outside their range of applicability can lead to scientifically groundless conclusions, whose unwise application can be extremely dangerous in assessing expected risk and losses. Are the relationships commonly used to assess seismic hazard valid enough to qualify as useful laws describing earthquake sequences? Seismic evidence accumulated to date demonstrates clearly that most of the empirical statistical relations accepted in the early history of instrumental seismology can be proved erroneous when tests of statistical significance are applied. The time span of physically reliable seismic history is still a small portion of a rupture recurrence cycle at an earthquake-prone site. Seismic events, including mega-earthquakes, are clustered, displaying behaviour that is far from independent. Their distribution in space is possibly fractal and is definitely far from uniform, even within a single fault zone. Evidently, such a situation complicates the design of reliable methodologies for earthquake hazard assessment, as well as the search for and definition of precursory behaviours to be used for forecast/prediction purposes. The situation is not hopeless, thanks to available geological evidence and deterministic pattern-recognition approaches, specifically when intending to predict the predictable, rather than the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized block-and-fault systems has already led to methodologies of neo-deterministic seismic hazard analysis and to intermediate-term, middle- to narrow-range earthquake prediction algorithms tested in real-time applications over recent decades.

  5. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes

    NASA Astrophysics Data System (ADS)

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

    We propose a version of the pure temporal epidemic-type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. Instead, the magnitude of the triggered shocks is assumed to be probabilistically dependent on that of the respective mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis made on some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016), 10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for the real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani and used there for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events, we must obtain again the Gutenberg-Richter law. This ensures the validity of this law at any event generation when ignoring past seismicity. The ETAS model with correlated magnitudes is then theoretically analyzed here. In particular, we use the probability generating function and Palm theory in order to derive an approximation of the probability of zero events in a small time interval and to interpret the results in terms of the interevent time between consecutive shocks, the latter being a very useful random variable in the assessment of seismic hazard.
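    For readers unfamiliar with the baseline the authors extend, a toy pure-temporal ETAS branching simulation can be sketched as follows. Note this is the standard model with independent Gutenberg-Richter magnitudes, not the correlated-magnitude variant proposed in the record above, and all parameter values are illustrative assumptions chosen to keep the process subcritical.

    ```python
    import math
    import random

    random.seed(42)

    B, M_MIN = 1.0, 3.0     # Gutenberg-Richter b-value and magnitude cutoff
    MU, T = 0.5, 100.0      # background rate (events/day), catalog length (days)
    K, ALPHA = 0.2, 0.5     # productivity (ALPHA < B keeps the process subcritical)
    C, P = 0.01, 1.2        # modified-Omori-law parameters

    def gr_magnitude():
        """Inverse-transform sample from the G-R density b*ln(10)*10^(-b(m - M_MIN))."""
        return M_MIN + random.expovariate(B * math.log(10.0))

    def omori_delay():
        """Delay drawn from the normalized modified Omori kernel ((p-1)/c)*(1 + t/c)^-p."""
        return C * ((1.0 - random.random()) ** (-1.0 / (P - 1.0)) - 1.0)

    def poisson(lam):
        """Knuth's algorithm for a Poisson draw (the stdlib has no Poisson sampler)."""
        limit, k, prod = math.exp(-lam), 0, 1.0
        while prod > limit:
            k += 1
            prod *= random.random()
        return k - 1

    def simulate_etas():
        # background (immigrant) events: homogeneous Poisson process on [0, T]
        todo, t = [], random.expovariate(MU)
        while t < T:
            todo.append((t, gr_magnitude()))
            t += random.expovariate(MU)
        # branching: each event triggers Poisson(K * 10^(ALPHA*(m - M_MIN))) offspring
        catalog = []
        while todo:
            t0, m0 = todo.pop()
            catalog.append((t0, m0))
            for _ in range(poisson(K * 10.0 ** (ALPHA * (m0 - M_MIN)))):
                tc = t0 + omori_delay()
                if tc < T:
                    todo.append((tc, gr_magnitude()))
        catalog.sort()
        return catalog

    catalog = simulate_etas()
    print(len(catalog))
    ```

    The correlated-magnitude model of the paper would replace `gr_magnitude()` for triggered events with a density conditioned on the mother magnitude, constrained so that averaging it over G-R-distributed mother magnitudes recovers the G-R law.
    
    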

  6. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes.

    PubMed

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

    We propose a version of the pure temporal epidemic-type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. Instead, the magnitude of the triggered shocks is assumed to be probabilistically dependent on that of the respective mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis made on some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016), 10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for the real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani and used there for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events, we must obtain again the Gutenberg-Richter law. This ensures the validity of this law at any event generation when ignoring past seismicity. The ETAS model with correlated magnitudes is then theoretically analyzed here. In particular, we use the probability generating function and Palm theory in order to derive an approximation of the probability of zero events in a small time interval and to interpret the results in terms of the interevent time between consecutive shocks, the latter being a very useful random variable in the assessment of seismic hazard. PMID:27176281

  8. Earthquake prediction in Japan and natural time analysis of seismicity

    NASA Astrophysics Data System (ADS)

    Uyeda, S.; Varotsos, P.

    2011-12-01

    The M9 super-giant earthquake, with its huge tsunami, devastated East Japan on 11 March 2011, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was predicted neither short-term nor long-term. Seismologists were shocked because it was not even considered possible at the East Japan subduction zone. However, it was not the only unpredicted earthquake. In fact, throughout several decades of the National Earthquake Prediction Project, not a single earthquake was predicted. In reality, practically no effective research has been conducted on the most important problem, short-term prediction. This happened because the Japanese national project was devoted to the construction of elaborate seismic networks, which was not the best way to pursue short-term prediction. After the Kobe disaster, in order to parry the mounting criticism of their history of no success, they defiantly changed their policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which meant obtaining "more funding for no-prediction research". The public were not, and are not, informed about this change. Obviously, earthquake prediction will be possible only when reliable precursory phenomena are caught, and we have insisted that this would most likely be done through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake has an adverse effect on our case, although its epicenter was far offshore, out of the range of the operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, from ordinary seismological catalogs. In the framework of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced. This is the variance of natural time χ weighted by the normalised energy release at χ. In the case that Seismic Electric Signals

  9. [Comment on “A misuse of public funds: U.N. support for geomagnetic forecasting of earthquakes and meteorological disasters”] Comment: Earthquake prediction is worthy of study

    NASA Astrophysics Data System (ADS)

    Freund, Friedmann

    Imagine a densely populated region in the contiguous United States haunted over the past 25 years by nine big earthquakes of magnitudes 5.5 to 7.8, killing hundreds of thousands of people. Imagine further that in a singularly glorious instance a daring prediction effort, based on some scientifically poorly understood natural phenomena, led to the evacuation of a major city just 13 hours before an M = 7.8 earthquake hit. None of the inhabitants of the evacuated city died, while in the surrounding, nonevacuated communities 240,000 were killed and about 600,000 seriously injured. Imagine at last that, tragically, the prediction of the next earthquake of a similar magnitude failed, as well as the following one, at great loss of life. If this were an American scenario, the scientific community and the public at large would buzz with the glory of that one successful, life-saving earthquake prediction effort and with praise for American ingenuity. The fact that the next predictions failed would likely have energized the public, the political bodies, the scientists, and the funding agencies alike to go after a recalcitrant Earth, to poke into her deep secrets with all means at the scientists' disposal, and to retrieve even the faintest signals that our restless planet may send out prior to unleashing her deadly punches.

  10. 78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... proposed earthquake predictions, on the completeness and scientific validity of the available data related... Council will receive several briefings on the history and current state of scientific investigations of..., and will be asked to advise the USGS on priorities for instrumentation and scientific...

  11. Impact of channel-like erosion patterns on the frequency-magnitude distribution of earthquakes

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Aochi, H.

    2015-07-01

    Reactive flow at depth (either related to underground activities, like enhancement of hydrocarbon recovery and CO2 storage, or to natural flow like in hydrothermal zones) can alter fractures' topography, which might in turn change their seismic responses. Depending on the flow and reaction rates, instability of the dissolution front can lead to a wormhole-like pronounced erosion pattern. In a fractal structure of rupture process, we question how the perturbation related to well-spaced long channels alters rupture propagation initiated on a weak plane and eventually the statistical feature of rupture appearance in frequency-magnitude distribution (FMD). Contrary to intuition, a spatially uniform dissolution is not the most remarkable case, since it affects all the events proportionally to their sizes leading to a downward translation of FMD: the slope of FMD (b-value) remains unchanged. The parameter-space study shows that the increase of b-value (of 0.08) is statistically significant for optimum characteristics of the erosion pattern with spacing to length ratio of the order of ˜1/40: large-magnitude events are more significantly affected leading to an imbalanced distribution in the magnitude bins of the FMD. The larger the spacing, the lower the channel's influence. Besides, a spatial analysis shows that the local seismicity anomaly concentrates in a limited zone around the channels: this opens perspective for detecting these eroded regions through high-resolution imaging surveys.
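    The b-value shift of 0.08 reported above is the slope of the Gutenberg-Richter frequency-magnitude distribution, usually estimated with the Aki maximum-likelihood formula. The sketch below illustrates that estimator on a synthetic catalog, not the authors' rupture simulations; the completeness magnitude and true b-value are assumed.

    ```python
    import math
    import random

    def b_value_mle(mags, m_c, dm=0.0):
        """Aki (1965) maximum-likelihood b-value above completeness m_c;
        dm is Utsu's correction for magnitudes binned to width dm
        (0 for continuous magnitudes, as in the synthetic catalog below)."""
        mean_m = sum(mags) / len(mags)
        return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

    random.seed(0)
    # synthetic G-R catalog with true b = 1.0 above completeness m_c = 2.0:
    # exceedance 10^(-b(m - m_c)) means m - m_c is exponential with rate b*ln(10)
    m_c, b_true = 2.0, 1.0
    mags = [m_c + random.expovariate(b_true * math.log(10.0)) for _ in range(20000)]
    b_hat = b_value_mle(mags, m_c)
    print(round(b_hat, 2))
    ```

    With 20,000 events the estimator recovers b to within roughly 0.01, which is why a shift of 0.08 can be statistically significant for a sufficiently large simulated catalog.
    
    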

  12. Assessing the magnitude of the 869 Jogan tsunami using sedimentary deposits: Prediction and consequence of the 2011 Tohoku-oki tsunami

    NASA Astrophysics Data System (ADS)

    Sugawara, Daisuke; Goto, Kazuhisa; Imamura, Fumihiko; Matsumoto, Hideaki; Minoura, Koji

    2012-12-01

    In this paper, the spatial distribution and sedimentological features of the 869 Jogan tsunami deposit along the Pacific coast of Japan are reviewed to evaluate deposit-based estimates of the magnitude of the Jogan tsunami and the use of tsunami deposits in the prediction of the 2011 Tohoku-oki earthquake and tsunami. Inundation of the Sendai Plain and the offshore wave sources of both tsunamis are compared. The Jogan tsunami deposit is ubiquitous on the coastal plains of Sendai Bay, whereas, to date, it is only identified in a few locations along the Sanriku and Joban Coasts. This resulted in an underprediction of the size of the wave source of the Tohoku-oki tsunami. The inland boundary of the inundation area of the Tohoku-oki tsunami on the Sendai Plain is approximately equivalent to that of the Jogan tsunami, although many sedimentological and geomorphologic factors make a direct comparison of the tsunamis complicated and difficult. The magnitude of the Jogan earthquake (Mw = 8.4), which was derived from the tsunami deposit inland extent and numerical inundation modeling, was too small to predict the magnitude of the Tohoku-oki earthquake (Mw = 9.0-9.1) and tsunami. Additional research is needed to improve deposit-based estimates of the magnitudes of past tsunamis and to increase the ability to use tsunami deposits, in conjunction with inundation modeling, to assess future tsunami hazards.

  13. Scientific investigation of macroscopic phenomena before the 2008 Wenchuan earthquake and its implication to prediction and tectonics

    NASA Astrophysics Data System (ADS)

    Huang, F.; Yang, Y.; Pan, B.

    2013-12-01

    tectonic/faults near the epicentral area. According to the statistical relationship, an intensity of VI-VII in the meizoseismal area is equivalent to magnitude 5. That implies that macroscopic anomalies generally occur readily before earthquakes of magnitude greater than 5 in the near-epicentral area. This information can serve as supporting clues to earthquake occurrence in a tectonic area. Based on the above scientific investigation and statistical research, we reviewed other historical earthquakes that occurred from 1937 to 1996 in mainland China and obtained similar results (Compilation of Macroscopic Anomalies before Earthquakes, published by Seismological Press, 2009). This can serve as an important source of basic data for earthquake prediction. This work was supported by NSFC project No. 41274061.

  14. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Small Business Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of small businesses in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each business establishment size category to each Instrumental Intensity level. The analysis concerns the direct effect of the earthquake on small businesses. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by business establishment size.

  15. Impact of Channel-like Erosion Patterns on the Frequency-Magnitude Distribution of Earthquakes

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Aochi, H.

    2015-12-01

    Reactive flow at depth (either related to underground activities like enhancement of hydrocarbon recovery, CO2 storage or to natural flow like in hydrothermal zones) can alter fractures' topography, which might in turn change their seismic responses. Depending on the flow and reaction rates, instability of the dissolution front can lead to a wormhole-like pronounced erosion pattern (Szymczak & Ladd, JGR, 2009). In a fractal structure of rupture process (Ide & Aochi, JGR, 2005), we question how the perturbation related to well-spaced long channels alters rupture propagation initiated on a weak plane and eventually the statistical feature of rupture appearance in Frequency-Magnitude Distribution FMD (Rohmer & Aochi, GJI, 2015). Contrary to intuition, a spatially uniform dissolution is not the most remarkable case, since it affects all the events proportionally to their sizes leading to a downwards translation of FMD: the slope of FMD (b-value) remains unchanged. An in-depth parametric study was carried out by considering different pattern characteristics: spacing S varying from 0 to 100 and length L from 50 to 800 and fixing the width w=1. The figure shows that there is a region of optimum channels' characteristics for which the b-value of the Gutenberg Richter law is significantly modified with p-value ~10% (corresponding to area with red-coloured boundaries) given spacing to length ratio of the order of ~1/40: large magnitude events are more significantly affected leading to an imbalanced distribution in the magnitude bins of the FMD. The larger the spacing, the lower the channel's influence. The decrease of the b-value between intact and altered fractures can reach values down to -0.08. Besides, a spatial analysis shows that the local seismicity anomaly concentrates in a limited zone around the channels: this opens perspective for detecting these eroded regions through high-resolution imaging surveys.

  16. Long-term predictability of regions and dates of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M>5.5 can be determined several months in advance of the event. The magnitude and the region of an approaching earthquake can be specified within a month before the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity over a century time frame; this analysis can be performed 15-20 years in advance. The result is verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods having different prediction horizons. The days of potential earthquakes with M5.5+ are determined using astronomical data. Earthquakes occur on days of oppositions of Solar System planets (arranged in a single line). Moreover, the strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed by practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of the minimum daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (the RAMES method). The difference between the predicted and actual date is no more than one day. This indicator is registered 104 days before the earthquake, so it has been called Harmonic 104, or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical observation. Also, 104 days is a quarter of the Chandler period, which gives insight into the correlation between the anomalies of Earth orientation

  17. Detection of Subtle Hydromechanical Medium Changes Caused By a Small-Magnitude Earthquake Swarm in NE Brazil

    NASA Astrophysics Data System (ADS)

    D'Hour, V.; Schimmel, M.; Do Nascimento, A. F.; Ferreira, J. M.; Lima Neto, H. C.

    2016-04-01

    Ambient-noise correlation analyses are widely used in seismology to map heterogeneities and to monitor the temporal evolution of seismic velocity changes associated mostly with stress-field variations and/or fluid movements. Here we analyse a small earthquake swarm related to a main mR 3.7 intraplate earthquake in northeastern Brazil to study the corresponding post-seismic effects on the medium. So far, post-seismic effects have been observed mainly for large-magnitude events. In our study, we show that we were able to detect localized structural changes even for a small earthquake swarm in an intraplate setting. Different correlation strategies are presented and their performances are shown. We compare the classical auto-correlation with and without pre-processing, including 1-bit normalization and spectral whitening, and the phase auto-correlation. The worst results were obtained for the pre-processed data, due to the loss of waveform details. The best results were achieved with the phase cross-correlation, which is amplitude-unbiased and sensitive to small amplitude changes as long as the waveform coherence is superior to that of other unrelated signals and noise. The analysis of 6 months of data using phase auto-correlation and cross-correlation resulted in the observation of a progressive medium change after the major recorded event. The progressive medium change is likely related to the swarm activity opening new pathways for pore-fluid diffusion. We further observed, for the auto-correlations, a frequency-dependent change in lag time, which likely indicates that the medium change is localized in depth. As expected, the main change is observed along the fault.
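    The pre-processing the record above argues against can be illustrated with a toy example: 1-bit normalization followed by classical normalized cross-correlation (the phase cross-correlation itself requires analytic-signal phases and is not reproduced here). The synthetic traces and the 5-sample shift are assumptions for illustration.

    ```python
    import numpy as np

    def one_bit(trace):
        """1-bit normalization: keep only the sign of the waveform,
        discarding all amplitude information."""
        return np.sign(trace)

    def xcorr(a, b):
        """Classical normalized cross-correlation of two equal-length traces."""
        a = a - a.mean()
        b = b - b.mean()
        c = np.correlate(a, b, mode="full")
        return c / (np.linalg.norm(a) * np.linalg.norm(b))

    rng = np.random.default_rng(1)
    noise = rng.standard_normal(2048)
    # second trace: same wavefield delayed by 5 samples, plus weak incoherent noise
    delayed = np.roll(noise, 5) + 0.1 * rng.standard_normal(2048)

    cc = xcorr(one_bit(noise), one_bit(delayed))
    # under numpy's correlate convention the peak appears at lag -5,
    # i.e. `delayed` is `noise` shifted 5 samples later in time
    lag = cc.argmax() - (len(noise) - 1)
    print(lag)
    ```

    The 1-bit traces still recover the correct lag because the waveform coherence survives, but any subtle amplitude changes of the kind the authors monitor are discarded, which is their argument for the amplitude-unbiased phase correlation instead.
    
    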

  18. The Komandor seismic gap: Earthquake prediction and tsunami computation

    NASA Astrophysics Data System (ADS)

    Lobkovsky, L. I.; Baranov, B. V.; Dozorova, K. A.; Mazova, R. Kh.; Kisel'man, B. A.; Baranova, N. A.

    2014-07-01

    The "seismic silence" period in the seismic gap in the region of the Komandor Islands (hereinafter, the Komandor seismic gap) is close to the duration of the maximal recurrence interval for the strongest earthquakes of the Aleutian Islands. This indicates the possibility of a strong earthquake occurring here in the near future. In the present work, the results of a simulation of the tsunami from such an earthquake are presented. The scheme successfully used by the authors for the nearest analog, the 2004 Sumatra-Andaman earthquake, is applied. The magnitude of the supposed earthquake is assumed to be 9.0; the tsunamigenic source is about 650 km long and consists of 9 blocks. The parameters of the tsunami propagation in the Pacific Ocean and the characteristics of the waves on the coasts are computed for several possible scenarios of block motion. A spectral analysis of the obtained wave characteristics is made, and effects of wave-front interference are found. The simulation has shown that the wave heights at some coastal sites can reach 9 m and, thus, may cause considerable destruction and deaths.

  19. Preliminary determination of the interdependence among strong-motion amplitude, earthquake magnitude and hypocentral distance for the Himalayan region

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Gusev, Alexander A.; Panza, Giuliano F.; Petukhin, Anatoly G.

    2001-03-01

    Since the installation of three limited-aperture strong-motion networks in the Himalayan region in 1986, six earthquakes with Mw=5.2-7.2 have been recorded up to 1991. The data set of horizontal peak accelerations and velocities consists of 182-component data for the hypocentral distance range 10-400 km. This data set is limited in volume and coverage and, worst of all, it is highly inhomogeneous. Thus, we could not determine regional trends for amplitudes by means of the traditional approach of empirical multiple regression. Instead, we perform the reduction of the observations to a fixed distance and magnitude using independently defined distance and magnitude trends. To determine an appropriate magnitude-dependent distance attenuation law, we use the spectral energy propagation/random function approach of Gusev (1983) and adjust its parameters based on the residual variance. In doing so we confirm the known, rather gradual mode of decay of amplitudes with distance in the Himalayas; this seems to be caused by the combination of high Qs and crustal waveguide effects for high frequencies. The data are then reduced with respect to magnitude. The trend of peak acceleration versus magnitude cannot be determined from observations, and we assume that it coincides with that of abundant Japanese data. For the resulting set of reduced log10(peak acceleration) data, the residual variance is 0.37², much above commonly found values. However, dividing the data into two geographical groups, western with two events and eastern with four events, reduces the residual variance to a more usual level of 0.27² (a station/site component of 0.22² and an event component of 0.16²). This kind of data description is considered acceptable. A similar analysis is performed with velocity data, and again we have to split the data into two subregional groups. With our theoretically grounded attenuation laws we attempt a tentative extrapolation of our results to small distances and large

  20. By How Much Can Physics-Based Earthquake Simulations Reduce the Uncertainties in Ground Motion Predictions?

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Wang, F.

    2014-12-01

    Probabilistic seismic hazard analysis (PSHA) is the scientific basis for many engineering and social applications: performance-based design, seismic retrofitting, resilience engineering, insurance-rate setting, disaster preparation, emergency response, and public education. The uncertainties in PSHA predictions can be expressed as an aleatory variability that describes the randomness of the earthquake system, conditional on a system representation, and an epistemic uncertainty that characterizes errors in the system representation. Standard PSHA models use empirical ground motion prediction equations (GMPEs) that have a high aleatory variability, primarily because they do not account for the effects of crustal heterogeneities, which scatter seismic wavefields and cause local amplifications in strong ground motions that can exceed an order of magnitude. We show how much this variance can be lowered by simulating seismic wave propagation through 3D crustal models derived from waveform tomography. Our basic analysis tool is the new technique of averaging-based factorization (ABF), which uses a well-specified seismological hierarchy to decompose exactly and uniquely the logarithmic excitation functional into a series of uncorrelated terms that include unbiased averages of the site, path, hypocenter, and source-complexity effects (Feng & Jordan, Bull. Seismol. Soc. Am., 2014, doi:10.1785/0120130263). We apply ABF to characterize the differences in ground motion predictions between the standard GMPEs employed by the National Seismic Hazard Maps and the simulation-based CyberShake hazard model of the Southern California Earthquake Center. The ABF analysis indicates that, at low seismic frequencies (< 1 Hz), CyberShake site and path effects unexplained by the GMPEs account for 40-50% of total residual variance. Therefore, accurate earthquake simulations have the potential for reducing the aleatory variance of the strong-motion predictions by about a factor of two, which would
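
    The headline claim, that explaining the simulation-resolved share of the residual variance roughly halves the aleatory variance, follows from simple variance accounting. A minimal sketch (the sigma value below is an assumed, typical GMPE ln-amplitude dispersion, not a number from this abstract):

```python
import math

def reduced_sigma(sigma_total, explained_fraction):
    """Aleatory standard deviation left after a simulation-based model
    explains a given fraction of the residual variance."""
    return sigma_total * math.sqrt(1.0 - explained_fraction)

# If CyberShake-resolved site and path effects account for ~50% of the
# residual variance, the variance halves and sigma drops by sqrt(2):
sigma = 0.6  # assumed typical ln-PGA dispersion of an empirical GMPE
print(round(reduced_sigma(sigma, 0.5), 3))  # -> 0.424
```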

  1. Single-station estimates of the seismic moment of the 1960 Chilean and 1964 Alaskan earthquakes, using the mantle magnitude Mm

    NASA Astrophysics Data System (ADS)

    Okal, Emile A.; Talandier, Jacques

    1991-05-01

    Measurements are taken of the mantle magnitude Mm, developed and introduced in previous papers, in the case of the 1960 Chilean and 1964 Alaskan earthquakes, by far the largest events ever recorded instrumentally. We show that the Mm algorithm recovers the seismic moment of these gigantic earthquakes with an accuracy (typically 0.2 to 0.3 units of magnitude, or a factor of 1.5 to 2 on the seismic moment) comparable to that achieved on modern digital datasets. In particular, this study proves that the mantle magnitude Mm does not saturate for large events, as do standard magnitude scales, but rather keeps growing with seismic moment, even for the very largest earthquakes. We further prove that the algorithm can be applied in unfavorable experimental conditions, such as instruments with poor response at mantle periods, seismograms clipped due to limited recording dynamics, or even microbarograph records of air-coupled Rayleigh waves. In addition, we show that it is feasible to use acoustic-gravity air waves generated by those very largest earthquakes to obtain an estimate of the seismic moment of the event along the general philosophy of the magnitude concept: a single-station measurement ignoring the details of the earthquake's focal mechanism and exact depth.

  2. Predictability of population displacement after the 2010 Haiti earthquake.

    PubMed

    Lu, Xin; Bengtsson, Linus; Holme, Petter

    2012-07-17

    Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 d before, to 341 d after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and size of people's movement trajectories grew after the earthquake. These findings, in combination with the disorder that was present after the disaster, suggest that people's movements would have become less predictable. Instead, the predictability of people's trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake were highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city, as well as the time for their return, both followed a skewed, fat-tailed distribution. The findings suggest that population movements during disasters may be significantly more predictable than previously thought.

  3. Non-extensive statistical physics applied to heat flow and the earthquake frequency-magnitude distribution in Greece

    NASA Astrophysics Data System (ADS)

    Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter

    2016-08-01

    This study investigates seismicity in Greece and its relation to heat flow, based on the science of complex systems. Greece is characterised by a complex tectonic setting, which is represented mainly by active subduction, lithospheric extension and volcanism. The non-extensive statistical physics formalism is a generalisation of Boltzmann-Gibbs statistical physics and has been successfully used for the analysis of a variety of complex systems, where fractality and long-range interactions are important. Consequently, in this study, the frequency-magnitude distribution analysis was performed in a non-extensive statistical physics context, and the non-extensive parameter, qM, which is related to the frequency-magnitude distribution, was used as an index of the physical state of the studied area. Examination of the spatial distribution of qM revealed its relation to the spatial distribution of seismicity during the period 1976-2009. For focal depths ≤40 km, we observe that strong earthquakes coincide with high qM values. In addition, heat flow anomalies in Greece are known to be strongly related to crustal thickness; a thin crust and significant heat flow anomalies characterise the central Aegean region. Moreover, the data studied indicate that high heat flow is consistent with the absence of strong events and consequently with low qM values (high b-values) in the central Aegean region and around the volcanic arc. However, the eastern part of the volcanic arc exhibits strong earthquakes and high qM values whereas low qM values are found along the North Aegean Trough and southwest of Crete, despite the fact that strong events are present during the period 1976-2009 in both areas.
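
    For readers unfamiliar with the qM index, it can be translated into the familiar Gutenberg-Richter b-value. The sketch below uses the relation b = 2(2 - q)/(q - 1) derived for the non-extensive frequency-magnitude model (Telesca, 2012); both the relation and the sample q values are assumptions from outside this abstract and should be checked against the paper's own formulation:

```python
def b_from_q(q):
    """b-value implied by the non-extensive parameter q via
    b = 2(2 - q)/(q - 1); valid for 1 < q < 2."""
    if not 1.0 < q < 2.0:
        raise ValueError("relation holds for 1 < q < 2")
    return 2.0 * (2.0 - q) / (q - 1.0)

# Higher q maps to lower b, matching the abstract's pairing of
# high-qM regions with strong events and low-qM regions with high b:
for q in (1.4, 1.5, 1.6):
    print(q, round(b_from_q(q), 2))
```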

  4. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere

    USGS Publications Warehouse

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-01-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  5. Why is earthquake prediction research not progressing faster?

    NASA Astrophysics Data System (ADS)

    Wyss, Max

    2001-08-01

    As a physical phenomenon, earthquakes must be predictable to a certain degree. However, the problem is difficult, because the source volume inside the earth is inaccessible to direct observation and because the most important parameter, the stress level, cannot be measured directly. Also, seismology is such a young science that the cause of earthquakes was discovered only in the 1960s. Advanced seismograph networks as well as modern techniques to measure crustal deformations, such as the Global Positioning System (GPS) and Synthetic Aperture Radar Interferometry (InSAR), have come on line only recently, and only in Japan are they deployed with the densities necessary for significant advances in the understanding of the rupture initiation process. In addition, no real program for earthquake prediction research exists in the United States, largely because funding agencies and peer reviewers shy away from a field in which unprofessional, but motivated individuals are active. Although claims of successful predictions are often not justified, a few correct predictions have been made. Most of these had time-windows of years, but some were accurate to days and allowed preparatory actions. To make significant progress, we must learn how to conduct rigorous science in a field where amateurs cannot be discouraged from venturing. Leadership is necessary to raise the funding to an adequate level and to involve the best minds in this promising, potentially extremely rewarding, but controversial research topic.

  6. Magnitudes and locations of the 1811-1812 New Madrid, Missouri, and the 1886 Charleston, South Carolina, earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Hopper, M.G.

    2004-01-01

    We estimate locations and moment magnitudes M and their uncertainties for the three largest events in the 1811-1812 sequence near New Madrid, Missouri, and for the 1 September 1886 event near Charleston, South Carolina. The intensity magnitude M1, our preferred estimate of M, is 7.6 for the 16 December 1811 event that occurred in the New Madrid seismic zone (NMSZ) on the Bootheel lineament or on the Blytheville seismic zone. M1 is 7.5 for the 23 January 1812 event for a location on the New Madrid north zone of the NMSZ and 7.8 for the 7 February 1812 event that occurred on the Reelfoot blind thrust of the NMSZ. Our preferred locations for these events lie on the NMSZ segments preferred by Johnston and Schweig (1996). Our estimates of M are 0.1-0.4 M units less than those of Johnston (1996b) and 0.3-0.5 M units greater than those of Hough et al. (2000). M1 is 6.9 for the 1 September 1886 event for a location at the Summerville-Middleton Place cluster of recent small earthquakes located about 30 km northwest of Charleston.

  7. Subionospheric VLF/LF Probing of Ionospheric Perturbations Associated with Earthquakes: A Possibility of Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Hayakawa, Masashi; Horie, Takumi; Muto, Fumiya; Kasahara, Yasushi; Ohta, Kenji; Liu, Jann-Yenq; Hobara, Yasuhide

    The VLF (Very Low Frequency) / LF (Low Frequency) receiving network has been established in Japan, composed of seven observing stations (Moshiri (Hokkaido), Chofu (Tokyo, UEC, University of Electro-Communications), Tateyama (Chiba), Shimizu (Shizuoka), Kasugai (Aichi), Maizuru (Kyoto) and Kochi (Kochi)), and three additional foreign stations have been established in Kamchatka, Taiwan and Indonesia. At each station we observe simultaneously several VLF/LF transmitter signals: two Japanese transmitters with call signs JJY (Fukushima) and JJI (Miyazaki), and foreign VLF transmitters (NWC (Western Australia, Australia), NPM (Hawaii, USA), NLK (Washington, USA)). This Japanese VLF/LF network is used to study the ionospheric perturbations associated with earthquakes, and we present two recent results: (1) a statistical result on the correlation between VLF/LF propagation anomalies and earthquakes, and (2) the latest results during the last six months on two particular propagation paths, JJY-Moshiri and JJY-Taiwan. We then discuss the correlation of ionospheric perturbations with earthquakes in terms of the possibility of earthquake prediction by means of VLF propagation anomalies.

  8. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Labor Market Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of economic Super Sectors in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each Super Sector to each Instrumental Intensity level. The analysis concerns the direct effect of the scenario earthquake on economic sectors and provides a baseline for the indirect and interactive analysis of an input-output model of the regional economy. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by the North American Industry Classification System. According to the analysis results, nearly 225,000 business

  9. Evidence for the recurrence of large-magnitude earthquakes along the Makran coast of Iran and Pakistan

    USGS Publications Warehouse

    Page, W.D.; Alt, J.N.; Cluff, L.S.; Plafker, G.

    1979-01-01

    The presence of raised beaches and marine terraces along the Makran coast indicates episodic uplift of the continental margin resulting from large-magnitude earthquakes. The uplift occurs as incremental steps similar in height to the 1-3 m of measured uplift resulting from the November 28, 1945 (M 8.3) earthquake at Pasni and Ormara, Pakistan. The data support an E-W-trending, active subduction zone off the Makran coast. The raised beaches and wave-cut terraces along the Makran coast are extensive with some terraces 1-2 km wide, 10-15 m long and up to 500 m in elevation. The terraces are generally capped with shelly sandstones 0.5-5 m thick. Wave-cut cliffs, notches, and associated boulder breccia and swash troughs are locally preserved. Raised Holocene accretion beaches, lagoonal deposits, and tombolos are found up to 10 m in elevation. The number and elevation of raised wave-cut terraces along the Makran coast increase eastward from one at Jask, the entrance to the Persian Gulf, at a few meters elevation, to nine at Konarak, 250 km to the east. Multiple terraces are found on the prominent headlands as far east as Karachi. The wave-cut terraces are locally tilted and cut by faults with a few meters of displacement. Long-term, average rates of uplift were calculated from present elevation, estimated elevation at time of deposition, and 14C and U-Th dates obtained on shells. Uplift rates in centimeters per year at various locations from west to east are as follows: Jask, 0 (post-Sangamon); Konarak, 0.031-0.2 (Holocene), 0.01 (post-Sangamon); Ormara 0.2 (Holocene). © 1979.

  10. 75 FR 63854 - National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... completeness and scientific validity of the available data related to earthquake predictions, and on related... focus on: (1) Methods for rapidly estimating the probability of a large earthquake following a...

  11. Magnitude correlations in global seismicity

    SciTech Connect

    Sarlis, N. V.

    2011-08-15

    By employing natural time analysis, we analyze the worldwide seismicity and study the existence of correlations between earthquake magnitudes. We find that global seismicity exhibits nontrivial magnitude correlations for earthquake magnitudes greater than Mw 6.5.
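
    The kind of magnitude correlation at issue can be probed with a simple surrogate test: compare the mean difference between successive magnitudes against its distribution over shuffled catalogs. This is only a sketch of the idea on synthetic data; the cited work uses natural time analysis and more refined conditional statistics:

```python
import random
import statistics

def successive_diff(mags):
    """Mean absolute difference between successive magnitudes."""
    return statistics.mean(abs(b - a) for a, b in zip(mags, mags[1:]))

def shuffle_test(mags, n_surrogates=1000, seed=0):
    """Fraction of shuffled catalogs whose successive-difference statistic
    is smaller than the observed one. A small fraction suggests magnitude
    clustering (successive events more alike than chance)."""
    rng = random.Random(seed)
    observed = successive_diff(mags)
    count = 0
    for _ in range(n_surrogates):
        surrogate = mags[:]
        rng.shuffle(surrogate)
        if successive_diff(surrogate) < observed:
            count += 1
    return count / n_surrogates

# Synthetic clustered catalog: magnitudes drift slowly, so successive
# events are more alike than in any shuffled version.
rng = random.Random(1)
mags, m = [], 4.0
for _ in range(500):
    m = min(8.0, max(3.0, m + rng.gauss(0.0, 0.1)))
    mags.append(round(m, 2))
print(shuffle_test(mags) < 0.05)  # clustered catalog -> small p-value
```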

  12. Spatial variations in the frequency-magnitude distribution of earthquakes at Soufriere Hills Volcano, Montserrat, West Indies

    USGS Publications Warehouse

    Power, J.A.; Wyss, M.; Latchman, J.L.

    1998-01-01

    The frequency-magnitude distribution of earthquakes measured by the b-value is determined as a function of space beneath Soufriere Hills Volcano, Montserrat, from data recorded between August 1, 1995 and March 31, 1996. A volume of anomalously high b-values (b > 3.0) with a 1.5 km radius is imaged at depths of 0 and 1.5 km beneath English's Crater and Chance's Peak. This high b-value anomaly extends southwest to Gage's Soufriere. At depths greater than 2.5 km volumes of comparatively low b-values (b ~ 1) are found beneath St. George's Hill, Windy Hill, and below 2.5 km depth and to the south of English's Crater. We speculate the depth of high b-value anomalies under volcanoes may be a function of silica content, modified by some additional factors, with the most siliceous having these volumes that are highly fractured or contain high pore pressure at the shallowest depths. Copyright 1998 by the American Geophysical Union.
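
    The b-values mapped in studies like this one are conventionally estimated by maximum likelihood. A minimal sketch of the standard Aki (1965) estimator with Utsu's binning correction (the estimator is general seismological practice, not specific to this study, and the synthetic catalog is purely illustrative):

```python
import math
import random

def b_value_mle(mags, m_c, delta_m=0.0):
    """Maximum-likelihood b-value (Aki, 1965):
    b = log10(e) / (mean(M) - (Mc - delta_m/2)),
    where Mc is the completeness magnitude, delta_m the catalog's
    magnitude bin width, and only events with M >= Mc are used."""
    sample = [m for m in mags if m >= m_c]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - (m_c - delta_m / 2.0))

# A Gutenberg-Richter catalog with b = 1 has magnitudes above Mc that
# are exponentially distributed with rate b*ln(10):
rng = random.Random(42)
mags = [2.0 + rng.expovariate(math.log(10)) for _ in range(20000)]
print(round(b_value_mle(mags, m_c=2.0), 2))  # close to 1.0
```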

  13. Systematic correlations of the earthquake frequency-magnitude distribution with the deformation and mechanical regimes in the Taiwan orogen

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Ling; Hung, Shu-Huei; Jiang, Juen-Shi; Chiao, Ling-Yun

    2016-05-01

    We investigate the correlation of the earthquake frequency-magnitude distribution with the style of faulting and stress in Taiwan. The b values estimated for three types of focal mechanisms show significant differences, with the lowest values for thrust, intermediate for strike-slip, and the highest for normal events, consistent with those found in global and other regional seismicity. The lateral distribution of the b values correlates well with the predominant faulting mechanism, crustal deformation, and stress patterns. The two N-S-striking thrust zones in western and eastern Taiwan, which accommodate the larger E-W shortening and differential stress, yield lower b values than the intervening mountain ranges, which are subject to smaller extensional stress and are dominated by strike-slip and normal faults. The termination of the monotonic decrease of the b value with depth at ~15-20 km corroborates its inverse relationship with stress and the existence of the brittle-plastic transition in the weak middle crust beneath the Taiwan orogen.

  14. A theoretical study of correlation between scaled energy and earthquake magnitude based on two source displacement models

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2013-12-01

    The correlation of the scaled energy, ê = Es/M0, versus earthquake magnitude, Ms, is studied based on two models: (1) Model 1, based on the time function of the average displacements across a fault plane with an ω^-2 source spectrum; and (2) Model 2, based on the time function of the average displacements across a fault plane with an ω^-3 source spectrum. For the second model, there are two cases: (a) as τ ≈ T, where τ is the rise time and T the rupture time, lg(ê) ~ -Ms; and (b) as τ ≪ T, lg(ê) ~ -(1/2)Ms. The second model leads to a negative value of ê. This means that Model 2 cannot work for studying the present problem. The results obtained from Model 1 suggest that the source model is a factor, yet not a unique one, in controlling the correlation of ê versus Ms.

  15. Possibility of Earthquake-prediction by analyzing VLF signals

    NASA Astrophysics Data System (ADS)

    Ray, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta

    2016-07-01

    Prediction of seismic events is one of the most challenging tasks for the scientific community. The conventional way to predict earthquakes is to monitor crustal structure movements, though this method has not yet yielded satisfactory results and fails to give any short-term prediction. Recently, it has been noticed that prior to a seismic event a huge amount of energy is released, which may create disturbances in the lower part of the D-layer/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the waveguide formed by the lower ionosphere and the Earth's surface, these signals may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find correlations, if any, between VLF signal anomalies and seismic activities. We have performed both case-by-case studies and a statistical analysis using a full year of data. In both methods we found that the night-time amplitude of VLF signals fluctuated anomalously three days before seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards night-time a few days before major seismic events. We calculate the D-layer preparation time and D-layer disappearance time from the VLF signals and observe that both become anomalously high 1-2 days before seismic events. We also found strong evidence indicating that it may be possible in the future to predict the location of earthquake epicenters by analyzing VLF signals for multiple propagation paths.

  16. Ground motion prediction and earthquake scenarios in the volcanic region of Mt. Etna (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Langer, Horst; Tusa, Giuseppina; Scarfi, Luciano; Azzaro, Raffaela

    2013-04-01

    One of the principal issues in the assessment of seismic hazard is the prediction of relevant ground motion parameters, e.g., peak ground acceleration, radiated seismic energy, or response spectra, at some distance from the source. Here we first present ground motion prediction equations (GMPEs) for horizontal components for the area of Mt. Etna and adjacent zones. Our analysis is based on 4878 three-component seismograms related to 129 seismic events with local magnitudes ranging from 3.0 to 4.8, hypocentral distances up to 200 km, and focal depths shallower than 30 km. Accounting for the specific seismotectonic and geological conditions of the considered area, we have divided our data set into three sub-groups: (i) Shallow Mt. Etna Events (SEE), i.e., typically volcano-tectonic events in the area of Mt. Etna with a focal depth of less than 5 km; (ii) Deep Mt. Etna Events (DEE), i.e., events in the volcanic region but with a depth greater than 5 km; (iii) Extra Mt. Etna Events (EEE), i.e., purely tectonic events falling outside the area of Mt. Etna. The predicted PGAs for the SEE are lower than those predicted for the DEE and the EEE, reflecting their lower high-frequency energy content, which we attribute to lower stress drops. The attenuation relationships are compared to the ones most commonly used, such as those of Sabetta and Pugliese (1987) for Italy or Ambraseys et al. (1996) for Europe. Whereas our GMPEs are based on small earthquakes, the two above-mentioned attenuation relationships cover moderate to large magnitudes (up to 6.8 and 7.9, respectively). We show that extrapolating our GMPEs to magnitudes beyond the range covered by the data is misleading; at the same time, the aforementioned relationships fail to predict ground motion parameters for our data set. Despite these discrepancies, we can exploit our data to set up scenarios for strong earthquakes for which no instrumental recordings are
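
    GMPEs of this kind are fitted by regression on observed amplitudes. The sketch below fits a deliberately reduced functional form, log10(PGA) = a + b·M + c·log10(R), by ordinary least squares on synthetic noise-free data; the paper's actual GMPEs use a richer form and real records, and all coefficients below are made up for illustration:

```python
import math

def fit_gmpe(records):
    """Least-squares fit of log10(PGA) = a + b*M + c*log10(R).
    records: list of (magnitude, hypocentral_distance_km, pga) tuples."""
    rows = [(1.0, m, math.log10(r)) for m, r, _ in records]
    y = [math.log10(p) for _, _, p in records]
    n = 3
    # Normal equations A^T A x = A^T y, solved by Gaussian elimination.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * v for r, v in zip(rows, y)) for i in range(n)]
    for i in range(n):                      # forward elimination
        piv = ata[i][i]
        for j in range(i + 1, n):
            f = ata[j][i] / piv
            ata[j] = [x - f * z for x, z in zip(ata[j], ata[i])]
            aty[j] -= f * aty[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (aty[i] - sum(ata[i][j] * x[j] for j in range(i + 1, n))) / ata[i][i]
    return tuple(x)  # (a, b, c)

# Synthetic data generated from known coefficients is recovered exactly:
true_a, true_b, true_c = -1.5, 0.4, -1.2
data = [(m, r, 10 ** (true_a + true_b * m + true_c * math.log10(r)))
        for m in (3.0, 3.5, 4.0, 4.5) for r in (10.0, 50.0, 150.0)]
a, b, c = fit_gmpe(data)
print(round(a, 3), round(b, 3), round(c, 3))  # -> -1.5 0.4 -1.2
```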

  17. Variability of sporadic E-layer semi-transparency (foEs-fbEs) with magnitude and distance from earthquake epicenters to vertical sounding stations

    NASA Astrophysics Data System (ADS)

    Liperovskaya, E. V.; Pokhotelov, O. A.; Hobara, Y.; Parrot, M.

    Variations of the Es-layer semi-transparency coefficient were analyzed for more than 100 earthquakes with magnitudes M > 4 and depths h < 100 km. Data from the mid-latitude vertical sounding stations (Kokubunji, Akita, and Yamagawa) covering several decades before and after earthquake occurrences have been used. The semi-transparency coefficient of the Es-layer, X = (foEs - fbEs)/fbEs, can characterize, for thin layers, the presence of small-scale plasma turbulence. It is shown that the turbulence level decreases by ~10% during the three days before earthquakes, probably due to heating of the atmosphere. On the contrary, the turbulence level increases by the same amount from one to three days after the shocks. For earthquakes with magnitudes M > 5 the effect exists at distances up to 300 km from the epicenters. The effect could also exist for weak (M ~ 4) and shallow (depth < 50 km) earthquakes at distances smaller than 200 km from the epicenters.
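
    The semi-transparency coefficient defined in this abstract is a one-line computation; the sample foEs/fbEs values below are invented for illustration only:

```python
def semi_transparency(foEs, fbEs):
    """Es-layer semi-transparency coefficient X = (foEs - fbEs)/fbEs.
    foEs: critical frequency of the sporadic E layer (MHz);
    fbEs: blanketing frequency of the sporadic E layer (MHz)."""
    return (foEs - fbEs) / fbEs

print(round(semi_transparency(4.4, 4.0), 2))  # -> 0.1
```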

  18. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  19. Calibration of the landsliding numerical model SLIPOS and prediction of the seismically induced erosion for several large earthquake scenarios

    NASA Astrophysics Data System (ADS)

    Jeandet, Louise; Lague, Dimitri; Steer, Philippe; Davy, Philippe; Quigley, Mark

    2016-04-01

    Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. While the scaling between earthquake magnitude and the volume of sediment eroded is well known, the geomorphic consequences, such as divide migration or valley infilling, remain poorly understood, and predicting the location of landslide sources and deposits is a challenging issue. To progress on this topic, algorithms that correctly resolve the interaction between landsliding and ground shaking are needed. Peak Ground Acceleration (PGA) has been shown to control the landslide density at first order, but it can trigger landslides by two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. The relative importance of both effects on slope stability is not well understood. We use SLIPOS, an algorithm of bedrock landsliding based on a simple stability analysis applied at the local scale. The model is capable of reproducing the area/volume scaling and area distribution of natural landslides. We aim to include the effects of earthquakes in SLIPOS by simulating the PGA effect via a spatially variable cohesion decrease. We run the model (i) on the Mw 7.6 Chi-Chi earthquake (1999) to quantitatively test the accuracy of the predictions and (ii) on earthquake scenarios (Mw 6.5 to 8) on the New Zealand Alpine Fault to infer the volume of landslides associated with large events. For the Chi-Chi earthquake, we predict the observed total landslide area within a factor of 2. Moreover, we show with the New Zealand fault case that simulating ground acceleration by a cohesion decrease leads to a realistic scaling between the volume of sediments and the earthquake magnitude.

  20. Large-magnitude, late Holocene earthquakes on the Genoa fault, West-Central Nevada and Eastern California

    USGS Publications Warehouse

    Ramelli, A.R.; Bell, J.W.; DePolo, C.M.; Yount, J.C.

    1999-01-01

    The Genoa fault, a principal normal fault of the transition zone between the Basin and Range Province and the northern Sierra Nevada, displays a large and conspicuous prehistoric scarp. Three trenches excavated across this scarp exposed two large-displacement, late Holocene events. Two of the trenches contained multiple layers of stratified charcoal, yielding radiocarbon ages suggesting the most recent and penultimate events on the main part of the fault occurred 500-600 cal B.P., and 2000-2200 cal B.P., respectively. Normal-slip offsets of 3-5.5 m per event along much of the rupture length are comparable to the largest historical Basin and Range Province earthquakes, suggesting these paleoearthquakes were on the order of magnitude 7.2-7.5. The apparent late Holocene slip rate (2-3 mm/yr) is one of the highest in the Basin and Range Province. Based on structural and behavioral differences, the Genoa fault is here divided into four principal sections (the Sierra, Diamond Valley, Carson Valley, and Jacks Valley sections) and is distinguished from three northeast-striking faults in the Carson City area (the Kings Canyon, Carson City, and Indian Hill faults). The conspicuous scarp extends for nearly 25 km, the combined length of the Carson Valley and Jacks Valley sections. The Diamond Valley section lacks the conspicuous scarp, and older alluvial fans and bedrock outcrops on the downthrown side of the fault indicate a lower activity rate. Activity further decreases to the south along the Sierra section, which consists of numerous distributed faults. All three northeast-striking faults in the Carson City area ruptured within the past few thousand years, and one or more may have ruptured during recent events on the Genoa fault.

  1. Network of seismo-geochemical monitoring observatories for earthquake prediction research in India

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Hirok; Barman, Chiranjib; Iyengar, A.; Ghose, Debasis; Sen, Prasanta; Sinha, Bikash

    2013-08-01

    This paper briefly reviews research carried out to develop multi-parametric gas-geochemical monitoring facilities dedicated to earthquake prediction research in India, through a network of seismo-geochemical monitoring observatories installed in different regions of the country. In an attempt to detect earthquake precursors, the concentrations of helium, argon, nitrogen, methane, radon-222 (222Rn), polonium-218 (218Po), and polonium-214 (214Po) emanating from hydrothermal systems are monitored continuously at these observatories. In this paper, we cross-correlate a number of geochemical anomalies recorded at these observatories. Using the data received from each observatory, we perform a time series analysis that relates the anomalies to earthquake magnitude and epicentral distance through statistical methods and empirical formulations linking the area of influence to earthquake magnitude. Application of linear and nonlinear statistical techniques to the recorded geochemical data sets reveals a clear signature of long-range correlation in the data.
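The lagged cross-correlation analysis this abstract describes can be sketched in a few lines of Python. This is an illustration of the technique only, not the observatories' actual pipeline; the series names and the 3-sample lag are hypothetical.

```python
import math

def xcorr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Synthetic stand-ins for two monitored series (e.g. radon and helium);
# the second lags the first by 3 samples.
radon = [math.sin(0.3 * t) for t in range(200)]
helium = [0.0, 0.0, 0.0] + radon[:-3]        # helium[t] = radon[t - 3]

# Scanning a window of lags recovers the delay between the anomalies.
peak_lag = max(range(-10, 11), key=lambda k: xcorr(radon, helium, k))
```

Scanning the correlation over a window of lags, as in the last line, is the usual way to estimate the delay between precursory anomalies at different stations.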

  2. Earthquake forecasting and warning

    SciTech Connect

    Rikitake, T.

    1983-01-01

    This review opens by briefly describing two other books on the same subject written or co-written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, together with scientific findings on precursory phenomena. A summary is given of the circumstances surrounding the 1975 Haicheng, 1976 Tangshan, and 1976 Songpan-Pingwu earthquakes (all of magnitude 7.0 or greater) in China and the 1978 Izu-Oshima earthquake in Japan. The book, however, fails to comprehensively summarize recent advances in earthquake prediction research.

  3. Diking-induced moderate-magnitude earthquakes on a youthful rift border fault: The 2002 Nyiragongo-Kalehe sequence, D.R. Congo

    NASA Astrophysics Data System (ADS)

    Wauthier, C.; Smets, B.; Keir, D.

    2015-12-01

    On 24 October 2002, a Mw 6.2 earthquake occurred in the central part of the Lake Kivu basin, in the Western Branch of the East African Rift. This is the largest event recorded in the Lake Kivu area since 1900. An integrated analysis of radar interferometry (InSAR), seismic, and geological data demonstrates that the earthquake resulted from normal-slip motion on a major preexisting east-dipping rift border fault. A Coulomb stress analysis suggests that diking events, such as the January 2002 dike intrusion, could promote faulting on the western border faults of the rift in the central part of Lake Kivu. We thus infer that dike-induced stress changes can cause moderate- to large-magnitude earthquakes on major border faults during continental rifting. Continental extension in the Lake Kivu basin appears complex, requiring a hybrid model of strain accommodation and partitioning in the East African Rift.

  4. Prediction model of earthquake with the identification of earthquake source polarity mechanism through the focal classification using ANFIS and PCA technique

    NASA Astrophysics Data System (ADS)

    Setyonegoro, W.

    2016-05-01

    Earthquake disasters have caused considerable casualties and material losses. This research aims to predict the return period of earthquakes by identifying the earthquake source mechanism, with Sumatra as the case study area. Earthquakes are predicted by training an ANFIS model on historical earthquake data. In this technique, the historical data set is compiled into intervals of the daily average number of earthquake occurrences per year. The output is a model of the return period of earthquake events as a daily average per year. After the return period model has been learned by ANFIS, the polarity of the source mechanism is recognized from the focal sphere through image recognition using the principal component analysis (PCA) method. The resulting model, predicting the return period of earthquake events as a monthly average, showed a correlation coefficient of 0.014562.

  5. The 1170 and 1202 CE Dead Sea Rift earthquakes and long-term magnitude distribution of the Dead Sea Fault zone

    USGS Publications Warehouse

    Hough, S.E.; Avni, R.

    2009-01-01

    In combination with the historical record, paleoseismic investigations have provided a record of large earthquakes in the Dead Sea Rift that extends back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of the relationships developed in other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision. The large uncertainties illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historic record and show that it is consistent with a Gutenberg-Richter distribution with a b-value of 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results reveal that an earthquake of M7.8 is expected within the zone on average every 1000 years. © 2011 Science From Israel/LPP Ltd.
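The moment-rate bookkeeping this abstract describes can be illustrated with the standard Hanks-Kanamori magnitude-moment relation. The catalog values and the 1500-year window below are purely illustrative, not the paper's data.

```python
def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori):
    M0 = 10 ** (1.5 * Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

# Hypothetical catalog of large historical events over a 1500-year window
catalog_mw = [7.6, 6.6, 7.0, 7.2]
total_moment = sum(moment_from_mw(m) for m in catalog_mw)
moment_rate = total_moment / 1500.0          # N*m per year

# Years needed for this rate to accumulate the moment of one M7.8 event
recurrence_yr = moment_from_mw(7.8) / moment_rate
```

Comparing such a catalog-derived moment rate against the moment accumulation implied by the geodetic slip rate is the essence of the consistency check the authors perform.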

  6. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  7. Improved instrumental magnitude prediction expected from version 2 of the NASA SKY2000 master star catalog

    NASA Technical Reports Server (NTRS)

    Sande, C. B.; Brasoveanu, D.; Miller, A. C.; Home, A. T.; Tracewell, D. A.; Warren, W. H., Jr.

    1998-01-01

    The SKY2000 Master Star Catalog (MC), Version 2 and its predecessors have been designed to provide the basic astronomical input data needed for satellite acquisition and attitude determination on NASA spacecraft. Stellar positions and proper motions are the primary MC data required for operations support, followed closely by stellar brightness observed in various standard astronomical passbands. The instrumental red-magnitude prediction subsystem (REDMAG) in the MMSCAT software package computes the expected instrumental color index (CI) [sensor color correction] from an observed astronomical stellar magnitude in the MC and the characteristics of the stellar spectrum, astronomical passband, and sensor sensitivity curve. The computation is more error prone the greater the mismatch between the sensor sensitivity curve characteristics and those of the observed astronomical passbands. This paper presents a preliminary performance analysis of a typical red-sensitive CCD star tracker (CCDST) during acquisition of sensor data from the two Ball CT-601 star trackers (STs) onboard the Rossi X-Ray Timing Explorer (RXTE). A comparison is made of relative star positions measured in the ST FOV coordinate system with the expected results computed from the recently released Tycho Catalogue. The comparison is repeated for a group of observed stars with nearby, bright neighbors in order to determine the tracker behavior in the presence of an interfering near neighbor (NN). The results of this analysis will be used to help define a new photoelectric photometric instrumental sensor magnitude system (S) based on several thousand bright star magnitudes observed with the RXTE STs. This new system will be implemented in Version 2 of the SKY2000 MC to provide improved predicted magnitudes in the mission run catalogs.

  8. Comparison of strong-motion spectra with teleseismic spectra for three magnitude 8 subduction-zone earthquakes

    NASA Astrophysics Data System (ADS)

    Houston, Heidi; Kanamori, Hiroo

    1990-08-01

    A comparison of strong-motion spectra and teleseismic spectra was made for three Mw 7.8 to 8.0 earthquakes: the 1985 Michoacan (Mexico) earthquake, the 1985 Valparaiso (Chile) earthquake, and the 1983 Akita-Oki (Japan) earthquake. The decay of spectral amplitude with the distance from the station was determined, considering different measures of distance from a finite fault, and it was found to be different for these three events. The results can be used to establish empirical relations between the observed spectra and the half-space responses depending on the distance and the site condition, making it possible to estimate strong motions from source spectra determined from teleseismic records.

  9. Update of the Graizer-Kalkan ground-motion prediction equations for shallow crustal continental earthquakes

    USGS Publications Warehouse

    Graizer, Vladimir; Kalkan, Erol

    2015-01-01

    A ground-motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration and 5-percent damped pseudo spectral acceleration response ordinates of maximum horizontal component of randomly oriented ground motions was developed by Graizer and Kalkan (2007, 2009) to be used for seismic hazard analyses and engineering applications. This GMPE was derived from the greatly expanded Next Generation of Attenuation (NGA)-West1 database. In this study, Graizer and Kalkan’s GMPE is revised to include (1) an anelastic attenuation term as a function of quality factor (Q0) in order to capture regional differences in large-distance attenuation and (2) a new frequency-dependent sedimentary-basin scaling term as a function of depth to the 1.5-km/s shear-wave velocity isosurface to improve ground-motion predictions for sites on deep sedimentary basins. The new model (GK15), developed to be simple, is applicable to the western United States and other regions with shallow continental crust in active tectonic environments and may be used for earthquakes with moment magnitudes 5.0–8.0, distances 0–250 km, average shear-wave velocities 200–1,300 m/s, and spectral periods 0.01–5 s. Directivity effects are not explicitly modeled but are included through the variability of the data. Our aleatory variability model captures inter-event variability, which decreases with magnitude and increases with distance. The mixed-effects residuals analysis shows that the GK15 reveals no trend with respect to the independent parameters. The GK15 is a significant improvement over Graizer and Kalkan (2007, 2009), and provides a demonstrable, reliable description of ground-motion amplitudes recorded from shallow crustal earthquakes in active tectonic regions over a wide range of magnitudes, distances, and site conditions.

  10. The spatiotemporal analysis of the minimum magnitude of completeness Mc and the Gutenberg-Richter law b-value parameter using the earthquake catalog of Greece

    NASA Astrophysics Data System (ADS)

    Popandopoulos, G. A.; Baskoutas, I.; Chatziioannou, E.

    2016-03-01

    Spatiotemporal mapping of the minimum magnitude of completeness Mc and the b-value of the Gutenberg-Richter law is conducted for earthquake catalog data from Greece. The data were recorded by the seismic network of the Institute of Geodynamics of the National Observatory of Athens (GINOA) in 1970-2010 and by the Hellenic Unified Seismic Network (HUSN) in 2011-2014. With the start of HUSN measurements, the number of recorded events more than quintupled. The magnitude of completeness Mc of the earthquake catalog for 1970-2010 varies between 2.7 and 3.5, whereas from April 2011 onward it decreases to 1.5-1.8 in the central part of the region and fluctuates around an average of 2.0 over the study region as a whole. The magnitude of completeness Mc and b-value for the catalogs recorded by the old (GINOA) and new (HUSN) seismic networks are compared, and it is hypothesized that the magnitude of completeness Mc may affect the b-value estimates. The spatial distribution of the b-value determined from the HUSN catalog generally agrees with the main geotectonic features of the studied territory: the b-value is below 1 in zones of compression and larger than or equal to 1 in zones dominated by extension. The observed depth dependence of the b-value is largely consistent with the hypothesis of a brittle-ductile transition zone in the Earth's crust. It is suggested that the source depth of a strong earthquake can be estimated from the depth distribution of the b-value, which can be used for seismic hazard assessment.
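The b-value mapping described in this abstract is commonly done with the Aki/Utsu maximum-likelihood estimator. A minimal Python sketch on a synthetic Gutenberg-Richter sample (not the authors' code or data) looks like this:

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= mc.
    dm is the catalog's magnitude binning width (Utsu correction);
    pass dm=0 for a continuous, unbinned sample."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2))

# Synthetic Gutenberg-Richter catalog with true b = 1.0 and Mc = 2.0,
# built from the inverse CDF on a deterministic quantile grid.
n, b_true, mc = 10000, 1.0, 2.0
mags = [mc - math.log10(1 - (i + 0.5) / n) / b_true for i in range(n)]
b_hat = b_value_mle(mags, mc, dm=0.0)  # recovers b close to 1.0
```

Mapping Mc first, as the authors emphasize, matters because including events below the completeness magnitude biases the mean magnitude downward and inflates the estimated b-value.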

  11. Brief Communication: On the source characteristics and impacts of the magnitude 7.2 Bohol earthquake, Philippines

    NASA Astrophysics Data System (ADS)

    Lagmay, A. M. F.; Eco, R.

    2014-10-01

    A devastating earthquake struck Bohol, Philippines, on 15 October 2013. The earthquake originated at 12 km depth on an unmapped reverse fault, which manifested at the surface for several kilometers with a maximum vertical displacement of 3 m. The earthquake resulted in 222 fatalities, with damage to infrastructure estimated at USD 52.06 million. Widespread landslides and sinkholes formed in the predominantly limestone region during the earthquake. These remain a significant threat to communities, as destabilized hillside slopes, landslide-dammed rivers, and incipient sinkholes are still vulnerable to collapse, possibly triggered by aftershocks and heavy rains in the upcoming months of November and December. The most recent fatal temblor originated from a previously unmapped fault, herein referred to as the Inabanga Fault. As with the hidden or previously unmapped faults responsible for the 2012 Negros and 2013 Bohol earthquakes, there may be more unidentified faults that need to be mapped through field and geophysical methods. This is necessary to mitigate the possible damaging effects of future earthquakes in the Philippines.

  12. Earthquake prediction research at the Seismological Laboratory, California Institute of Technology

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Nevertheless, basic earthquake-related information has always been of consuming interest to the public and the media in this part of California (fig. 2). So it is not surprising that earthquake prediction continues to be a significant research program at the laboratory. Several of the current spectrum of prediction-related projects are discussed below.

  13. Historical precipitation predictably alters the shape and magnitude of microbial functional response to soil moisture.

    PubMed

    Averill, Colin; Waring, Bonnie G; Hawkes, Christine V

    2016-05-01

    Soil moisture constrains the activity of decomposer soil microorganisms, and in turn the rate at which soil carbon returns to the atmosphere. While increases in soil moisture are generally associated with increased microbial activity, historical climate may constrain current microbial responses to moisture. However, it is not known if variation in the shape and magnitude of microbial functional responses to soil moisture can be predicted from historical climate at regional scales. To address this problem, we measured soil enzyme activity at 12 sites across a broad climate gradient spanning 442-887 mm mean annual precipitation. Measurements were made eight times over 21 months to maximize sampling during different moisture conditions. We then fit saturating functions of enzyme activity to soil moisture and extracted half saturation and maximum activity parameter values from model fits. We found that 50% of the variation in maximum activity parameters across sites could be predicted by 30-year mean annual precipitation, an indicator of historical climate, and that the effect is independent of variation in temperature, soil texture, or soil carbon concentration. Based on this finding, we suggest that variation in the shape and magnitude of soil microbial response to soil moisture due to historical climate may be remarkably predictable at regional scales, and this approach may extend to other systems. If historical contingencies on microbial activities prove to be persistent in the face of environmental change, this approach also provides a framework for incorporating historical climate effects into biogeochemical models simulating future global change scenarios.
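The parameter-extraction step this abstract describes, fitting a saturating function of activity to moisture and reading off the half-saturation and maximum-activity parameters, can be sketched as a Michaelis-Menten-style least-squares fit. All numbers below are illustrative, not the study's measurements, and the grid search stands in for whatever optimizer the authors used.

```python
def fit_saturating(moisture, activity, v_grid, k_grid):
    """Return (Vmax, K) minimizing the squared error of the saturating
    model a = Vmax * m / (K + m) over a grid of candidate parameters."""
    best = None
    for v in v_grid:
        for k in k_grid:
            sse = sum((a - v * m / (k + m)) ** 2
                      for m, a in zip(moisture, activity))
            if best is None or sse < best[0]:
                best = (sse, v, k)
    return best[1], best[2]

# Noise-free synthetic data with true Vmax = 10, K = 0.2 (illustrative)
moisture = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]
activity = [10 * m / (0.2 + m) for m in moisture]
v_grid = [v / 10 for v in range(50, 151)]   # 5.0 .. 15.0
k_grid = [k / 100 for k in range(5, 51)]    # 0.05 .. 0.50
vmax, khalf = fit_saturating(moisture, activity, v_grid, k_grid)
```

The study's regional claim then amounts to regressing the fitted Vmax values from each site against 30-year mean annual precipitation.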

  14. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    SciTech Connect

    Takeda, Fumihide; Takeo, Makoto

    2004-12-09

    We have developed a short-term deterministic earthquake (EQ) forecasting system similar to those used for typhoons and hurricanes, which has been under test operation at the website http://www.tec21.jp/ since June 2003. We use the focus and crustal displacement data recently opened to the public by the Japanese seismograph and global positioning system (GPS) networks, respectively. Our system divides the forecasting area into five regional areas of Japan, each of which is about 5 deg. by 5 deg. We have found that it can forecast the focus, date of occurrence, and magnitude (M) of an impending EQ (whose M is larger than about 6), all within narrow limits. Two examples illustrate the system: one is the 2003/09/26 EQ of M 8 in the Hokkaido area, a hindcast; the other is a successful rollout of the most recent forecast, of the 2004/05/30 EQ of M 6.7 off the coast of the southern Kanto (Tokyo) area.

  15. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with the surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record for understanding temporal trends in earthquake activity. It is therefore important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on lessons from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large-volume turbidites. The frequency distribution of turbidite recurrence indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large-earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to the proximity and availability of sediment. While mean turbidite recurrence is similar to that of the seismogenic El Asnam fault in Algeria, geochemical analysis reveals that not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 years ago. Given its similar appearance across a widespread area and its correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment

  16. Validation of a ground motion synthesis and prediction methodology for the 1988, M=6.0, Saguenay Earthquake

    SciTech Connect

    Hutchings, L.; Jarpe, S.; Kasameyer, P.; Foxall, W.

    1998-01-01

    We model the 1988, M=6.0, Saguenay earthquake using an approach developed to predict strong ground motion. This approach involves developing a set of rupture scenarios based upon bounds on rupture parameters, which include rupture geometry, hypocenter, rupture roughness, rupture velocity, healing velocity (rise times), slip distribution, asperity size and location, and slip vector. A scenario here refers to specific values of these parameters for a hypothesized earthquake. Synthetic strong ground motions are then generated for each rupture scenario, and a sufficient number of scenarios are run to span the variability in strong ground motion due to the source uncertainties. By having a suite of rupture scenarios of hazardous earthquakes for a fixed magnitude and identifying the hazard to the site from the one-standard-deviation value of engineering parameters, we introduce a probabilistic component into the deterministic hazard calculation. For this study we developed bounds on rupture scenarios from previous research on this earthquake. The time history closest to the observed ground motion was selected as a model for the Saguenay earthquake.

  17. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula when forecasting aftershocks, which gives a probability p0 that at least one event occurs in a given space-time-magnitude window. The forecaster, like a gambler who starts with a certain number of reputation points, bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes an NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, his reputation is reduced by 1 point; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return from the bet is 0. This rule also applies to probability forecasts: if p is the occurrence probability given by the forecaster, we can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No", so that his expected pay-off under the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. 
We also calculate the upper bound of the gambling score when
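The reputation-point bookkeeping described in this abstract can be sketched as follows. This is one illustrative reading of the stated rules, not the paper's implementation; the pay-off ratio for "No" bets is inferred by symmetry from the (1-p0)/p0 ratio given for "Yes".

```python
def gambling_score(bets):
    """Score a sequence of binary forecasts against a reference model.

    bets: list of (p0, forecast_yes, event_occurred), where p0 is the
    reference-model probability of at least one event in the window.
    NA-predictions are simply omitted. A miss costs the 1-point bet;
    a hit on "Yes" pays (1 - p0) / p0 and a hit on "No" pays
    p0 / (1 - p0), so the expected gain per bet is 0 whenever the
    reference model is correct.
    """
    score = 0.0
    for p0, yes, occurred in bets:
        if yes == occurred:  # successful forecast: pay the return ratio
            score += (1 - p0) / p0 if yes else p0 / (1 - p0)
        else:                # failed forecast: lose the 1-point bet
            score -= 1.0
    return score

# Betting "Yes" with p0 = 0.2: a hit pays 4 points, a miss loses 1,
# so under the reference model the expected return is 0.2*4 - 0.8 = 0.
expected = 0.2 * (1 - 0.2) / 0.2 - 0.8 * 1.0
```

A forecaster accumulates points only by beating the reference model consistently, which is exactly the fairness property the abstract argues for.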

  18. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction: Research and Risk Mitigation by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on this approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes vary significantly in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions on long and short time scales. The preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs those processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach to earthquake prediction research. Such extrapolations contain many uncertainties; however, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. 
The approach described is different from the usual

  19. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    NASA Astrophysics Data System (ADS)

    Böse, Maren; Graves, Robert W.; Gill, David; Callaghan, Scott; Maechling, Philip J.

    2014-09-01

    Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) and broad-band (0-10 Hz) data sets. CyberShake encompasses 3-D wave-propagation simulations of >415,000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a 'proof of concept', being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (~20 per cent) of CyberShake simulations, but can explain MMI values of all >400,000 rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) that earthquakes on various active faults in southern California need to exceed to cause at least 'moderate', 'strong' or 'very strong' shaking in the Los Angeles (LA) basin

  20. Prospectively Evaluating the Collaboratory for the Study of Earthquake Predictability: An Evaluation of the UCERF2 and Updated Five-Year RELM Forecasts

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria

    2016-04-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying consistent log-likelihoods (L-test) and magnitude distributions (M-test) with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W- tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. 
Though there is no significant difference between the UCERF2 and NSHMP
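The number-consistency check behind the overprediction result this abstract reports (the "N-test" style comparison of observed and forecast earthquake counts) can be illustrated with a toy Poisson calculation. This sketch is not CSEP's implementation, and the rates below are invented for illustration.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def n_test(n_obs, forecast_rate):
    """Return (delta1, delta2): the probabilities of observing at least,
    respectively at most, n_obs events under the forecast rate. A very
    small delta1 flags underprediction; a very small delta2 flags
    overprediction of the number of earthquakes."""
    delta1 = 1.0 - (poisson_cdf(n_obs - 1, forecast_rate)
                    if n_obs > 0 else 0.0)
    delta2 = poisson_cdf(n_obs, forecast_rate)
    return delta1, delta2

# A forecast of 20 events when only 8 were observed: delta2 falls well
# below a 0.05 significance threshold, i.e. the forecast overpredicts.
d1, d2 = n_test(8, 20.0)
```

Forecasts failing the M-test through count overprediction, as described above, would show exactly this signature when the rate is summed over the target magnitude bins.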

  1. Fluid-faulting evolution in high definition: Connecting fault structure and frequency-magnitude variations during the 2014 Long Valley Caldera, California, earthquake swarm

    NASA Astrophysics Data System (ADS)

    Shelly, David R.; Ellsworth, William L.; Hill, David P.

    2016-03-01

    An extended earthquake swarm occurred beneath southeastern Long Valley Caldera between May and November 2014, culminating in three magnitude 3.5 earthquakes and 1145 cataloged events on 26 September alone. The swarm produced the most prolific seismicity in the caldera since a major unrest episode in 1997-1998. To gain insight into the physics controlling swarm evolution, we used large-scale cross correlation between waveforms of cataloged earthquakes and continuous data, producing precise locations for 8494 events, more than 2.5 times the routine catalog. We also estimated magnitudes for 18,634 events (~5.5 times the routine catalog), using a principal component fit to measure waveform amplitudes relative to cataloged events. This expanded and relocated catalog reveals multiple episodes of pronounced hypocenter expansion and migration on a collection of neighboring faults. Given the rapid migration and alignment of hypocenters on narrow faults, we infer that activity was initiated and sustained by an evolving fluid pressure transient with a low-viscosity fluid, likely composed primarily of water and CO2 exsolved from underlying magma. Although both updip and downdip migration were observed within the swarm, downdip activity ceased shortly after activation, while updip activity persisted for weeks at moderate levels. Strongly migrating, single-fault episodes within the larger swarm exhibited a higher proportion of larger earthquakes (lower Gutenberg-Richter b value), which may have been facilitated by fluid pressure confined in two dimensions within the fault zone. In contrast, the later swarm activity occurred on an increasingly diffuse collection of smaller faults, with a much higher b value.

  2. Determination of focal mechanisms of intermediate-magnitude earthquakes in Mexico, based on Greens functions calculated for a 3D Earth model

    NASA Astrophysics Data System (ADS)

    Rodrigo Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala

    2015-04-01

One important ingredient in the study of the complex active tectonics of Mexico is the analysis of earthquake focal mechanisms, or the seismic moment tensor. They can be determined through the calculation of Green's functions and a subsequent inversion for the moment-tensor parameters. However, this calculation becomes progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these earthquakes, 1D velocity models work well for computing the Green's functions. The opposite holds for smaller and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle and require more specific regional 3D models. In this study, we calculate Green's functions for earthquakes in Mexico using a laterally heterogeneous seismic wave speed model, comprised of the mantle model S362ANI (Kustowski et al 2008) and the crustal model CRUST 2.0 (Bassin et al 1990). Subsequently, we invert the observed seismograms for the seismic moment tensor using a method developed by Liu et al (2004) and implemented by Óscar de La Vega (2014) for earthquakes in Mexico. Following a brute-force approach, in which we include all observed Rayleigh and Love waves of the Mexican National Seismic Network (Servicio Sismológico Nacional, SSN), we obtain reliable focal mechanisms for events that excite a considerable amount of low-frequency waves (Mw > 4.8). However, we are not able to consistently estimate focal mechanisms for smaller events using this method, due to high noise levels in many of the records. Excluding the noisy records, or the noisy parts of records, manually requires interactive editing of the data with an efficient tool. Therefore, we developed a graphical user interface (GUI), based on Python and the Python library ObsPy, that allows the editing of observed and

  3. Repeated large-magnitude earthquakes in a tectonically active, low-strain continental interior: The northern Tien Shan, Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Landgraf, A.; Dzhumabaeva, A.; Abdrakhmatov, K. E.; Strecker, M. R.; Macaulay, E. A.; Arrowsmith, Jr.; Sudhaus, H.; Preusser, F.; Rugel, G.; Merchel, S.

    2016-05-01

The northern Tien Shan of Kyrgyzstan and Kazakhstan has been affected by a series of major earthquakes in the late 19th and early 20th centuries. To assess the significance of such a pulse of strain release in a continental interior, it is important to analyze and quantify strain release over multiple time scales. We have undertaken paleoseismological investigations at two geomorphically distinct sites (Panfilovkoe and Rot Front) near the Kyrgyz capital Bishkek. Although located near the historic epicenters, neither site was affected by these earthquakes. Trenching was accompanied by dating of stratigraphy and offset surfaces using luminescence, radiocarbon, and 10Be terrestrial cosmogenic nuclide methods. At Rot Front, trenching of a small scarp did not reveal evidence for surface rupture during the last 5000 years; the scarp rather resembles an extensive debris-flow lobe. At Panfilovkoe, we estimate a Late Pleistocene minimum slip rate of 0.2 ± 0.1 mm/a, averaged over at least two, probably three earthquake cycles. Dip-slip reverse motion along segmented, moderately steep faults resulted in hanging-wall collapse scarps during different events. The most recent earthquake occurred around 3.6 ± 1.3 kyr ago (1σ), with dip-slip offsets between 1.2 and 1.4 m. We calculate a probabilistic paleomagnitude between 6.7 and 7.2, in agreement with regional data from the Kyrgyz range. The morphotectonic signals in the northern Tien Shan are a prime example of deformation in a tectonically active intracontinental mountain belt and as such can help in understanding the longer-term coevolution of topography and seismogenic processes in similar structural settings worldwide.

  4. A Test of a Strong Ground Motion Prediction Methodology for the 7 September 1999, Mw=6.0 Athens Earthquake

    SciTech Connect

    Hutchings, L; Ioannidou, E; Voulgaris, N; Kalogeras, I; Savy, J; Foxall, W; Stavrakakis, G

    2004-08-06

We test a methodology to predict the range of ground-motion hazard for a fixed magnitude earthquake along a specific fault or within a specific source volume, and we demonstrate how to incorporate this into probabilistic seismic hazard analyses (PSHA). We modeled ground motion with empirical Green's functions. We tested our methodology with the 7 September 1999, Mw=6.0 Athens earthquake; we: (1) developed constraints on rupture parameters based on prior knowledge of earthquake rupture processes and sources in the region; (2) generated impulsive point shear source empirical Green's functions by deconvolving out the source contribution of M < 4.0 aftershocks; (3) used aftershocks that occurred throughout the area and not necessarily along the fault to be modeled; (4) ran a sufficient number of scenario earthquakes to span the full variability of ground motion possible; (5) found that our distribution of synthesized ground motions spans what actually occurred and that their distribution is realistically narrow; (6) determined that one of our source models generates records that match observed time histories well; (7) found that certain combinations of rupture parameters produced "extreme" ground motions at some stations; (8) identified that the "best fitting" rupture models occurred in the vicinity of 38.05° N, 23.60° E, with center of rupture near 12 km depth and near-unilateral rupture towards the areas of high damage, consistent with independent investigations; and (9) synthesized strong motion records in high damage areas for which records from the earthquake were not recorded. We then developed a demonstration PSHA for a source region near Athens utilizing synthesized ground motion rather than traditional attenuation relations. We synthesized 500 earthquakes distributed throughout the source zone likely to have Mw=6.0 earthquakes near Athens. We assumed an average return period of 1000 years for this magnitude earthquake in the particular source zone

  5. Earthquake mechanism and predictability shown by a laboratory fault

    USGS Publications Warehouse

    King, C.-Y.

    1994-01-01

Slip events generated in a laboratory fault model, consisting of a circular chain of eight spring-connected blocks of approximately equal weight elastically driven to slide on a frictional surface, are studied. It is found that most of the input strain energy is released by a relatively few large events, which are approximately time predictable. A large event tends to roughen the stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.
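The chain-of-blocks behavior described here can be caricatured numerically. The sketch below is an Olami-Feder-Christensen-style slider-block cellular automaton, not King's laboratory apparatus; all parameter values are illustrative:

```python
import numpy as np

def ofc_1d(n=64, alpha=0.2, steps=10000, seed=1):
    """Minimal 1D Olami-Feder-Christensen slider-block automaton.

    Blocks hold stress; uniform loading drives the most stressed
    block to its failure threshold (1.0), and a failing block passes
    a fraction alpha of its stress to each neighbor, possibly
    triggering a cascade. Each cascade is one 'event'; its size is
    the number of block failures. Returns the list of event sizes.
    """
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, 1.0, n)
    sizes = []
    for _ in range(steps):
        # load all blocks until the most stressed one reaches threshold
        stress += 1.0 - stress.max()
        failing = [int(np.argmax(stress))]
        size = 0
        while failing:
            i = failing.pop()
            if stress[i] < 1.0:       # already relaxed earlier in cascade
                continue
            delta = stress[i]
            stress[i] = 0.0
            size += 1
            for j in (i - 1, i + 1):  # open chain; neighbors receive stress
                if 0 <= j < n:
                    stress[j] += alpha * delta
                    if stress[j] >= 1.0:
                        failing.append(j)
        sizes.append(size)
    return sizes

sizes = ofc_1d()
```

With dissipative stress transfer (alpha < 0.25 here), the model produces many small events and occasional multi-block cascades, qualitatively echoing the frequency-size behavior described in the abstract.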

  6. Turning the rumor of May 11, 2011 earthquake prediction In Rome, Italy, into an information day on earthquake hazard

    NASA Astrophysics Data System (ADS)

    Amato, A.; Cultrera, G.; Margheriti, L.; Nostro, C.; Selvaggi, G.; INGVterremoti Team

    2011-12-01

A devastating earthquake had been predicted for May 11, 2011 in Rome. This prediction was never released officially by anyone, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions. Indeed, around May 11, 2011, a planetary alignment was in fact expected, and this contributed to giving the earthquake prediction credibility among the public. During the previous months, INGV was overwhelmed with requests for information about this supposed prediction from Roman inhabitants and tourists. Given the considerable media impact of this expected earthquake, INGV decided to organize an Open Day at its headquarters in Rome for people who wanted to learn more about Italian seismicity and the earthquake as a natural phenomenon. The Open Day was preceded by a press conference two days before, in which we talked about this prediction, presented the Open Day, and had a scientific discussion with journalists about earthquake prediction and, more generally, about the real problem of seismic risk in Italy. About 40 journalists from newspapers, local and national TV stations, press agencies and web news outlets attended the press conference, and hundreds of articles appeared in the following days, advertising the 11 May Open Day. The INGV opened to the public all day long (9am - 9pm) with the following program: i) meetings with INGV researchers to discuss scientific issues; ii) visits to the seismic monitoring room, open 24/7 all year; iii) guided tours through interactive exhibitions on earthquakes and the Earth's deep structure; iv) lectures on general topics, from the social impact of rumors to seismic risk reduction; v) 13 new videos on the channel YouTube.com/INGVterremoti to explain the earthquake process and give updates on various aspects of seismic monitoring in Italy; vi) distribution of books and brochures. Surprisingly, more than 3000 visitors came to visit INGV

  7. Comparing predicted and observed ground motions from subduction earthquakes in the Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Douglas, John; Mohais, Rosemarie

    2009-10-01

This brief article presents a quantitative analysis of the ability of eight published empirical ground-motion prediction equations (GMPEs) for subduction earthquakes (interface and intraslab) to estimate observed earthquake ground motions on the islands of the Lesser Antilles (specifically Guadeloupe, Martinique, Trinidad, and Dominica). In total, over 300 records from 22 earthquakes from various seismic networks are used within the analysis. It is found that most of the GMPEs tested perform poorly, mainly because of a larger variability in the observed ground motions than predicted by the GMPEs, although two recent GMPEs derived using Japanese strong-motion data provide reasonably good predictions. Analyzing the interface and intraslab events separately does not significantly modify the results. Therefore, it is concluded that seismic hazard assessments for this region should use a variety of GMPEs in order to capture this large epistemic uncertainty in earthquake ground-motion prediction for the Lesser Antilles.
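The kind of mismatch reported here (observed variability larger than a GMPE's sigma) is commonly quantified with normalized residuals. A sketch on synthetic data, with a placeholder GMPE standing in for the eight tested equations (the functional form and all coefficients are invented for illustration, not taken from the study):

```python
import numpy as np

def gmpe_ln_pga(mag, rhyp):
    """Illustrative placeholder GMPE (NOT one of the eight tested):
    ln PGA = c0 + c1*M - c2*ln(R) - c3*R, with invented coefficients."""
    return -4.5 + 1.0 * mag - 1.0 * np.log(rhyp) - 0.003 * rhyp

def normalized_residuals(obs_ln_pga, mag, rhyp, sigma=0.7):
    """z = (ln obs - ln pred) / sigma. If the GMPE matches the data,
    z has mean ~0 and standard deviation ~1; a standard deviation
    well above 1 signals that observed variability exceeds the
    model's sigma, as found for most GMPEs in the Lesser Antilles."""
    z = (obs_ln_pga - gmpe_ln_pga(mag, rhyp)) / sigma
    return z.mean(), z.std(ddof=1)

rng = np.random.default_rng(42)
mag = rng.uniform(4.5, 6.5, 300)
rhyp = rng.uniform(30.0, 200.0, 300)   # hypocentral distance, km
# synthetic observations with larger true scatter (1.0) than the
# GMPE's nominal sigma (0.7), mimicking the reported situation
obs = gmpe_ln_pga(mag, rhyp) + rng.normal(0.0, 1.0, 300)
mean_z, std_z = normalized_residuals(obs, mag, rhyp)
```

Here std_z comes out near 1.0/0.7 ≈ 1.4, the signature of under-predicted variability.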

  8. Predicted Liquefaction in the Greater Oakland and Northern Santa Clara Valley Areas for a Repeat of the 1868 Hayward Earthquake

    NASA Astrophysics Data System (ADS)

    Holzer, T. L.; Noce, T. E.; Bennett, M. J.

    2008-12-01

    Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by latest Holocene alluvial fan levee deposits where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906. The liquefaction scenario maps were created with ArcGIS ModelBuilder. Peak ground accelerations first were computed with the new Boore and Atkinson NGA attenuation relation (2008, Earthquake Spectra, 24:1, p. 99-138), using VS30 to account for local site response. 
Spatial liquefaction probabilities were then estimated using the predicted ground motions
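Scenario maps of this kind rest on a curve mapping the predicted shaking level to a liquefaction probability. The sketch below uses a generic logistic fragility form, not the study's calibrated CPT-based relations; the pl50 and slope parameters are hypothetical:

```python
import math

def liquefaction_probability(pga_g, pl50=0.3, slope=6.0):
    """Illustrative logistic fragility curve (NOT the study's
    calibrated CPT-based curves): probability of surface
    manifestation of liquefaction as a function of PGA in g.

    pl50 : PGA at which the probability is 0.5 (hypothetical value).
    slope: steepness of the curve in log-PGA (hypothetical value).
    """
    return 1.0 / (1.0 + math.exp(-slope * (math.log(pga_g) - math.log(pl50))))

# probability rises with shaking level, as on the scenario maps,
# e.g. weaker (M6.7-like) vs stronger (M7.0-like) local shaking
p_weak = liquefaction_probability(0.1)
p_strong = liquefaction_probability(0.5)
```

Chaining a ground-motion model (for PGA) with such a curve, cell by cell, is the essence of the ModelBuilder workflow described above.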

  9. Long-Term Prediction of Large Earthquakes: When Does Quasi-Periodic Behavior Occur?

    NASA Astrophysics Data System (ADS)

    Sykes, L. R.

    2003-12-01

    every great earthquake. The 2002 Working Group on large earthquakes in the San Francisco Bay region followed Ellsworth et al. (1999) in adopting much larger values of CV for several critical fault segments and underestimating their likelihood of rupture in the next 30 years. The Working Group also gives considerable weight to a Poisson model, which is in conflict with both renewal processes involving slow stress accumulation and with values of CV near 0.2. The failure of the Parkfield prediction has greatly influenced views in the U.S. about long-term forecasts. The model of the repeated breaking of a single asperity is incorrect since past Parkfield shocks of about magnitude 6 likely did not rupture the same part of the San Andreas fault.

  10. Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Pisarenko, V. F.; Sornette, A.; Sornette, D.; Rodkin, M. V.

    2014-08-01

The present work is a continuation and improvement of the method suggested in Pisarenko et al. (Pure Appl Geophys 165:1-42, 2008) for the statistical estimation of the tail of the distribution of earthquake sizes. The chief innovation is to combine the two main limit theorems of Extreme Value Theory (EVT), which allow us to derive the distribution of T-maxima (the maximum magnitude occurring in sequential time intervals of duration T) for arbitrary T. This distribution enables one to derive any desired statistical characteristic of the future T-maximum. We propose a method for the estimation of the unknown parameters involved in the two limit theorems, corresponding to the Generalized Extreme Value distribution (GEV) and to the Generalized Pareto Distribution (GPD). We establish the direct relations between the parameters of these distributions, which permit evaluation of the distribution of the T-maxima for arbitrary T. The duality between the GEV and GPD provides a new way to check the consistency of the estimation of the tail characteristics of the distribution of earthquake magnitudes for earthquakes occurring over an arbitrary time interval. We develop several procedures and check points to decrease the scatter of the estimates and to verify their consistency. We test our full procedure on the global Harvard catalog (1977-2006) and on the Fennoscandia catalog (1900-2005). For the global catalog, we obtain the following estimates: = 9.53 ± 0.52 and = 9.21 ± 0.20. For Fennoscandia, we obtain = 5.76 ± 0.165 and = 5.44 ± 0.073. The estimates of all related parameters for the GEV and GPD, including the most important form parameter, are also provided. We demonstrate again the absence of robustness of the generally accepted parameter characterizing the tail of the magnitude-frequency law, the maximum possible magnitude M_max, and study the more stable parameter Q_T(q), defined as the q-quantile of the distribution of T-maxima on a future interval of duration T.
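The GPD half of such a procedure can be sketched with a standard peaks-over-threshold fit. This is only the basic ingredient, not the authors' full GEV/GPD machinery; it is shown on a synthetic exponential tail (a pure Gutenberg-Richter law, i.e. GPD shape parameter near zero):

```python
import numpy as np
from scipy.stats import genpareto

def fit_gpd_tail(mags, threshold):
    """Fit a GPD to magnitude exceedances above a threshold
    (peaks-over-threshold). Returns the shape and scale estimates."""
    exc = np.asarray(mags, dtype=float)
    exc = exc[exc > threshold] - threshold
    shape, _, scale = genpareto.fit(exc, floc=0.0)
    return shape, scale

def tail_quantile(q, threshold, shape, scale):
    """q-quantile of the fitted exceedance distribution, shifted back
    to the magnitude scale (analogous in spirit to Q_T(q))."""
    return threshold + genpareto.ppf(q, shape, loc=0.0, scale=scale)

rng = np.random.default_rng(7)
# synthetic catalog: exponential magnitudes above threshold 5.0,
# i.e. G-R with b = 1/(0.4*ln 10), so the true GPD shape is ~0
mags = 5.0 + rng.exponential(scale=0.4, size=5000)
shape, scale = fit_gpd_tail(mags, threshold=5.0)
q95 = tail_quantile(0.95, 5.0, shape, scale)
```

A fitted shape near zero recovers the exponential (unbounded) tail; a negative shape would imply a finite upper endpoint, which is the fragile M_max-type quantity the abstract warns about, whereas quantiles like q95 remain stable.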

  11. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  12. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  13. Aftershock activity of a M2 earthquake in a deep South African gold mine - spatial distribution and magnitude-frequency relation

    NASA Astrophysics Data System (ADS)

    Naoi, M. M.; Nakatani, M.; Kwiatek, G.; Plenkers, K.; Yabe, Y.

    2009-12-01

An earthquake of M 2.1 occurred on December 27, 2007 in a deep South African gold mine (Yabe et al., 2008). It occurred within a sensitive high-frequency seismic network consisting of eight high-frequency AE sensors (up to 200 kHz) and a tri-axial accelerometer (up to 25 kHz). Within 150 hours following the earthquake, our AE network detected more than 20,000 events within 250 m of the center of the network. We have located the aftershocks assuming a homogeneous medium (Fig. a), based on manually picked arrival times of P and S waves. This aftershock seismicity can be clearly separated into five clusters. Each sequence obeyed Omori's law with a similar p value (p ~ 1.3). Cluster A in Fig. a is very planar: more than 90% of its aftershocks lie within a 3 m thickness, while the cluster has a lateral dimension of ~100 m x 100 m. The density of aftershocks normal to the planar cluster follows an exponential distribution with a characteristic length of about 0.6 m. The distribution of cluster A coincides with one of the nodal planes of the mainshock estimated by waveform inversion; hence, cluster A is thought to delineate the main rupture. Clusters B to E coincide with the edge of the mining cavity or with background seismicity recognized before the mainshock. Remarkable off-fault aftershock activity occurred only in these four areas. We have determined the moment magnitude (Mw) of 17,350 earthquakes using AE waveforms (Mw > -5.4). As the AE sensors have complex frequency characteristics, we use the amplitude in a narrow frequency band (2-4 kHz). Directivity of the AE sensors (~20 dB) is corrected by comparison with the accelerometer record. Absolute magnitude is given by an empirical relationship between AE amplitude and Mw determined from the spectral level of the accelerometer record. Mw determination from the accelerometer record was done for ~0.5% of the aftershocks detected by the AE sensors. Moment magnitudes of these selected earthquakes resulted in values
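The p ~ 1.3 decay quoted above comes from fitting the Omori law to aftershock rates. For times well beyond the c constant of the modified form, the fit reduces to a log-log slope; a minimal sketch on noise-free synthetic rates (the function name and numbers are illustrative, not the authors' catalog-specific fit):

```python
import numpy as np

def fit_omori_p(t, rate):
    """Estimate the Omori p value from binned aftershock rates by a
    log-log least-squares fit, n(t) ~ K * t**(-p). Valid for t >> c;
    a sketch, not a maximum-likelihood catalog fit."""
    slope, _ = np.polyfit(np.log(t), np.log(rate), 1)
    return -slope

# synthetic rates following n(t) = 100 * t^(-1.3), as for the clusters
t = np.linspace(1.0, 150.0, 60)        # hours after the mainshock
rate = 100.0 * t ** (-1.3)
p = fit_omori_p(t, rate)
```

On these exact power-law rates the fit recovers p = 1.3; with real binned counts one would weight bins by their Poisson uncertainty or use a maximum-likelihood estimator instead.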

  14. Positive feedback, memory, and the predictability of earthquakes.

    PubMed

    Sammis, C G; Sornette, D

    2002-02-19

    We review the "critical point" concept for large earthquakes and enlarge it in the framework of so-called "finite-time singularities." The singular behavior associated with accelerated seismic release is shown to result from a positive feedback of the seismic activity on its release rate. The most important mechanisms for such positive feedback are presented. We solve analytically a simple model of geometrical positive feedback in which the stress shadow cast by the last large earthquake is progressively fragmented by the increasing tectonic stress.

  15. Positive feedback, memory, and the predictability of earthquakes

    PubMed Central

    Sammis, C. G.; Sornette, D.

    2002-01-01

    We review the “critical point” concept for large earthquakes and enlarge it in the framework of so-called “finite-time singularities.” The singular behavior associated with accelerated seismic release is shown to result from a positive feedback of the seismic activity on its release rate. The most important mechanisms for such positive feedback are presented. We solve analytically a simple model of geometrical positive feedback in which the stress shadow cast by the last large earthquake is progressively fragmented by the increasing tectonic stress. PMID:11875202
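Accelerated seismic release of the kind reviewed here is conventionally fit with a power-law time-to-failure form, where the critical time tc plays the role of the finite-time singularity. A sketch on synthetic data (all parameter values are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law_ttf(t, A, B, tc, m):
    """Time-to-failure form for accelerated seismic release:
    cumulative Benioff strain ~ A - B * (tc - t)**m.
    A standard fitting form in this literature, sketched on
    synthetic (noise-free) data rather than a real catalog."""
    return A - B * (tc - t) ** m

# synthetic accelerating release with tc = 10.0 and exponent m = 0.3
t = np.linspace(0.0, 9.5, 200)
eps = power_law_ttf(t, 5.0, 1.0, 10.0, 0.3)

# bounds keep tc beyond the last datum so (tc - t)**m stays real
popt, _ = curve_fit(power_law_ttf, t, eps,
                    p0=[5.5, 1.2, 10.5, 0.4],
                    bounds=([-np.inf, 0.0, 9.6, 0.01],
                            [np.inf, np.inf, 20.0, 1.0]))
A_hat, B_hat, tc_hat, m_hat = popt
```

On clean data the fit recovers tc and m; on real seismicity the strong trade-off between tc, m, and B is a well-known source of forecast uncertainty.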

  16. The Parkfield earthquake prediction of October 1992; the emergency services response

    USGS Publications Warehouse

    Andrews, R.

    1992-01-01

    The science of earthquake prediction is interesting and worthy of support. In many respects the ultimate payoff of earthquake prediction or earthquake forecasting is how the information can be used to enhance public safety and public preparedness. This is a particularly important issue here in California where we have such a high level of seismic risk historically, and currently, as a consequence of activity in 1989 in the San Francisco Bay Area, in Humboldt County in April of this year (1992), and in southern California in the Landers-Big Bear area in late June of this year (1992). We are currently very concerned about the possibility of a major earthquake, one or more, happening close to one of our metropolitan areas. Within that context, the Parkfield experiment becomes very important. 

  17. Tsunami hazard warning and risk prediction based on inaccurate earthquake source parameters

    NASA Astrophysics Data System (ADS)

    Goda, Katsuichiro; Abilova, Kamilla

    2016-02-01

    This study investigates the issues related to underestimation of the earthquake source parameters in the context of tsunami early warning and tsunami risk assessment. The magnitude of a very large event may be underestimated significantly during the early stage of the disaster, resulting in the issuance of incorrect tsunami warnings. Tsunamigenic events in the Tohoku region of Japan, where the 2011 tsunami occurred, are focused on as a case study to illustrate the significance of the problems. The effects of biases in the estimated earthquake magnitude on tsunami loss are investigated using a rigorous probabilistic tsunami loss calculation tool that can be applied to a range of earthquake magnitudes by accounting for uncertainties of earthquake source parameters (e.g., geometry, mean slip, and spatial slip distribution). The quantitative tsunami loss results provide valuable insights regarding the importance of deriving accurate seismic information as well as the potential biases of the anticipated tsunami consequences. Finally, the usefulness of rigorous tsunami risk assessment is discussed in defining critical hazard scenarios based on the potential consequences due to tsunami disasters.

  18. Tsunami hazard warning and risk prediction based on inaccurate earthquake source parameters

    NASA Astrophysics Data System (ADS)

    Goda, K.; Abilova, K.

    2015-12-01

This study investigates the issues related to underestimation of the earthquake source parameters in the context of tsunami early warning and tsunami risk assessment. The magnitude of a very large event may be underestimated significantly during the early stage of the disaster, resulting in the issuance of incorrect tsunami warnings. Tsunamigenic events in the Tohoku region of Japan, where the 2011 tsunami occurred, are focused on as a case study to illustrate the significance of the problems. The effects of biases in the estimated earthquake magnitude on tsunami loss are investigated using a rigorous probabilistic tsunami loss calculation tool that can be applied to a range of earthquake magnitudes by accounting for uncertainties of earthquake source parameters (e.g. geometry, mean slip, and spatial slip distribution). The quantitative tsunami loss results provide valuable insights regarding the importance of deriving accurate seismic information as well as the potential biases of the anticipated tsunami consequences. Finally, the usefulness of rigorous tsunami risk assessment is discussed in defining critical hazard scenarios based on the potential consequences due to tsunami disasters.

  19. Coseismic Subsidence in the 1700 Great Cascadia Earthquake: Coastal Geological Estimates Versus the Predictions of Elastic Dislocation Models

    NASA Astrophysics Data System (ADS)

    Leonard, L. J.; Hyndman, R. D.; Mazzotti, S.

    2002-12-01

    Coastal estuaries from N. California to central Vancouver Island preserve evidence of the subsidence that has occurred in Holocene megathrust earthquakes at the Cascadia subduction zone (CSZ). Seismic hazard assessments in Cascadia are primarily based on the rupture area of 3-D dislocation models constrained by geodetic data. It is important to test the model by comparing predicted coseismic subsidence with that estimated in coastal marsh studies. Coseismic subsidence causes the burial of soils that are preserved as peat layers in the tidal-marsh stratigraphy. The most recent (1700) event is commonly marked by a peat layer overlain by intertidal mud, often with an intervening sand layer inferred as a tsunami deposit. Estimates of the amount of coseismic subsidence are made using two methods. (1) Contrasts in lithology, macrofossil content, and microfossil assemblages allow elevation changes to be deduced via modern marsh calibrations. (2) Measurements of the subsurface depth of the buried soil, corrected for eustatic sea level rise and interseismic uplift (assessed using a geodetically-constrained elastic dislocation model), provide independent estimates. Further corrections may include postglacial rebound and local tectonics. An elastic dislocation model is used to predict the expected coseismic subsidence, for a magnitude 9 earthquake (assuming 16 m uniform rupture), at the locations of geological subsidence estimates for the 1700 event. From preliminary comparisons, the correlation is remarkably good, corroborating the dislocation model rupture. The model produces a similar N-S trend of coastal subsidence, and for parts of the margin, e.g. N. Oregon and S. Washington, subsidence of similar magnitude (+/- ~ 0.25 m). A significant discrepancy (up to ~ 1.0 m) exists elsewhere, e.g. N. California, S. Oregon, and central Vancouver Island. The discrepancy may arise from measurement uncertainty, uncertainty in the elastic model, the assumption of elastic rather than

  20. Simulation of broadband ground motion including nonlinear soil effects for a magnitude 6.5 earthquake on the Seattle fault, Seattle, Washington

    USGS Publications Warehouse

    Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.

    2002-01-01

The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid low-frequency/high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. High frequencies (> 1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω^-2 displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles, together with other soil parameters, are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the overconsolidation ratio.

  1. Is Earthquake Prediction Possible from Short-Term Foreshocks?

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Gerassimos; Avlonitis, Markos; Di Fiore, Boris; Minadakis, George

    2015-04-01

Foreshocks preceding mainshocks in the short term, ranging from minutes to a few months before the mainshock, have been known for several decades. Understanding the generation mechanisms of foreshocks has been supported by seismicity observations and statistics, laboratory experiments, theoretical considerations and simulation results. However, important issues remain open. For example: (1) How are foreshocks defined? (2) Why are only some mainshocks preceded by foreshocks while others are not? (3) Does the mainshock size depend on some attributes of the foreshock sequence? (4) Is it possible to discriminate foreshocks from other seismicity styles (e.g. swarms, aftershocks)? To approach possible answers to these issues we reviewed about 400 papers, reports, books and other documents referring to foreshocks as well as to relevant laboratory experiments. We found that different foreshock definitions are used by different authors. We also found that the ratio of mainshocks preceded by foreshocks increases as monitoring capabilities improve, and that foreshock activity depends on source mechanical properties and is favoured by material heterogeneity. Also, the mainshock size does not depend on the largest foreshock size but rather on the foreshock area. Seismicity statistics may provide an effective discrimination of foreshocks from other seismicity styles, since during foreshock activity the seismicity rate increases with the inverse of time and, at the same time, the b-value of the G-R relationship as a rule drops significantly. Our literature survey showed that only in recent years have the seismicity catalogs of some well-monitored areas become adequately complete to search for foreshock activity. Therefore, we investigated a set of "good foreshock examples" covering a wide range of mainshock magnitudes, from 4.5 to 9, in Japan (Tohoku 2011), S. California, Italy (including L'Aquila 2009) and Greece.
The good examples used indicate that foreshocks

  2. Geometrical Scaling of the Magnitude Frequency Statistics of Fluid Injection Induced Earthquakes and Implications for Assessment and Mitigation of Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Dinske, C.; Shapiro, S. A.

    2015-12-01

    To study the influence of the size and geometry of hydraulically perturbed rock volumes on the magnitude statistics of induced events, we compare b value and seismogenic index estimates derived from different algorithms. First, we use standard Gutenberg-Richter approaches, namely the least-squares fit and the maximum likelihood technique. Second, we apply the lower bound probability fit (Shapiro et al., 2013, JGR, doi:10.1002/jgrb.50264), which takes the finiteness of the perturbed volume into account. The different estimates systematically deviate from each other, and the deviations are larger for smaller perturbed rock volumes. This means that the frequency-magnitude distribution is most affected for small injection volumes and short injection times, resulting in a high apparent b value. In contrast, the specific magnitude, the quotient of seismogenic index and b value (Shapiro et al., 2013, JGR, doi:10.1002/jgrb.50264), appears to be a unique seismotectonic parameter of a reservoir location. Our results confirm that it is independent of the size of the perturbed rock volume. The specific magnitude is hence an indicator of the magnitudes that one can expect for a given injection. Several performance tests to forecast the magnitude frequencies of induced events show that the seismogenic index model provides reliable predictions, which confirms its applicability as a forecast tool, particularly if applied in real-time monitoring. The specific magnitude model can be used to predict an asymptotic upper limit of probable frequency-magnitude distributions of induced events.
We also conclude from our analysis that the physical process of pore pressure diffusion as the triggering mechanism, together with the scaling of the frequency-magnitude distribution by the size of the perturbed rock volume, well reproduces the reported relation between the upper bound of maximum seismic moment and injected fluid volume (McGarr, 2014, JGR, doi:10.1002/2013JB010597), particularly if nonlinear effects in the diffusion process
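    The systematic deviation between least-squares and maximum-likelihood b-value estimates noted above can be reproduced on a synthetic catalog. A minimal sketch, assuming a pure Gutenberg-Richter law with continuous magnitudes; all function names and parameter values are ours, not from the paper:

```python
import math
import random

def simulate_gr(n, b, m_min, seed=0):
    """Draw n magnitudes from a Gutenberg-Richter law via inverse-CDF
    sampling: P(M >= m) = 10**(-b * (m - m_min))."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], so log10 is always defined
    return [m_min - math.log10(1.0 - rng.random()) / b for _ in range(n)]

def b_mle(mags, m_min):
    """Aki's maximum-likelihood estimate for continuous magnitudes."""
    return math.log10(math.e) / (sum(mags) / len(mags) - m_min)

def b_lsq(mags, m_min, dm=0.1):
    """Ordinary least-squares fit of log10 N(>=M) = a - b*M on a grid."""
    pts = []
    m = m_min
    while m <= max(mags):
        count = sum(1 for mm in mags if mm >= m)
        if count > 0:
            pts.append((m, math.log10(count)))
        m += dm
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return -slope

mags = simulate_gr(5000, b=1.0, m_min=2.0)
# b_mle(mags, 2.0) and b_lsq(mags, 2.0) should both land near 1.0 here,
# but the two estimators drift apart as the catalog gets smaller.
```

    Shrinking `n` in this sketch mimics the paper's small-injection-volume regime, where the finiteness of the sample (or of the perturbed volume) inflates the apparent b value.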

  3. Comparison of ground motions estimated from prediction equations and from observed damage during the M = 4.6 1983 Liège earthquake (Belgium)

    NASA Astrophysics Data System (ADS)

    García Moreno, D.; Camelbeeck, T.

    2013-08-01

    On 8 November 1983 an earthquake of magnitude 4.6 damaged more than 16 000 buildings in the region of Liège (Belgium). The extraordinary damage produced by this earthquake, considering its moderate magnitude, is extremely well documented, giving the opportunity to compare the consequences of a recent moderate earthquake in a typical old city of Western Europe with scenarios obtained by combining strong ground motions and vulnerability modelling. The present study compares 0.3 s spectral accelerations estimated from ground motion prediction equations typically used in Western Europe with those obtained locally by applying the statistical distribution of damaged masonry buildings to two fragility curves, one derived from the HAZUS programme of FEMA (FEMA, 1999) and another developed for high-vulnerability buildings by Lang and Bachmann (2004), and to a method proposed by Faccioli et al. (1999) relating the seismic vulnerability of buildings to the damage and ground motions. The results of this comparison reveal good agreement between the maximum spectral accelerations calculated from these vulnerability and fragility curves and those predicted from the ground motion prediction equations, suggesting peak ground accelerations for the epicentral area of the 1983 earthquake of 0.13-0.20 g (g: gravitational acceleration).

  4. An Update on the Activities of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Liukis, M.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Werner, M. J.; Jordan, T. H.

    2013-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecast experiments. There are now CSEP testing centers in California, New Zealand, Japan, and Europe, and 364 models are under evaluation. In this presentation, we describe how the testing center hosted by the Southern California Earthquake Center (SCEC) has evolved to meet CSEP objectives and share our experiences in operating the center. The SCEC testing center has been operational since September 1, 2007, and currently hosts 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the western Pacific, and a global testing region. We are currently working to reduce testing latency and to develop procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a Department of Homeland Security project to register and test external forecast procedures from experts outside seismology. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss how we apply CSEP infrastructure to geodetic transient detection and the evaluation of the ShakeAlert system for earthquake early warning (EEW), and how CSEP procedures are being adopted for intensity prediction and ground motion prediction experiments (cseptesting.org).

  5. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  6. Resting EEG in Alpha and Beta Bands Predicts Individual Differences in Attentional Blink Magnitude

    ERIC Educational Resources Information Center

    MacLean, Mary H.; Arnell, Karen M.; Cote, Kimberly A.

    2012-01-01

    Accuracy for a second target (T2) is reduced when it is presented within 500 ms of a first target (T1) in a rapid serial visual presentation (RSVP)--an attentional blink (AB). There are reliable individual differences in the magnitude of the AB. Recent evidence has shown that the attentional approach that an individual typically adopts during a…

  7. Can an earthquake prediction and warning system be developed?

    USGS Publications Warehouse

    N.N, Ambraseys

    1990-01-01

    Over the last 20 years, natural disasters have killed nearly 3 million people and disrupted the lives of over 800 million others. In 2 years there were more than 50 serious natural disasters, including landslides in Italy, France, and Colombia; a typhoon in Korea; wildfires in China and the United States; a windstorm in England; grasshopper plagues in the Horn of Africa and the Sahel; tornadoes in Canada; devastating earthquakes in Soviet Armenia and Tadzhikistan; infestations in Africa; landslides in Brazil; and tornadoes in the United States.

  8. Estimating Earthquake Magnitude from the Kentucky Bend Scarp in the New Madrid Seismic Zone Using Field Geomorphic Mapping and High-Resolution LiDAR Topography

    NASA Astrophysics Data System (ADS)

    Kelson, K. I.; Kirkendall, W. G.

    2014-12-01

    Recent suggestions that the 1811-1812 earthquakes in the New Madrid Seismic Zone (NMSZ) ranged from M6.8-7.0 versus M8.0 have implications for seismic hazard estimation in the central US. We more accurately identify the location of the NW-striking, NE-facing Kentucky Bend scarp along the northern Reelfoot fault, which is spatially associated with the Lake County uplift, contemporary seismicity, and changes in the Mississippi River from the February 1812 earthquake. We use 1m-resolution LiDAR hillshades and slope surfaces, aerial photography, soil surveys, and field geomorphic mapping to estimate the location, pattern, and amount of late Holocene coseismic surface deformation. We define eight late Holocene to historic fluvial deposits, and delineate younger alluvia that are progressively inset into older deposits on the upthrown, western side of the fault. Some younger, clayey deposits indicate past ponding against the scarp, perhaps following surface deformational events. The Reelfoot fault is represented by sinuous breaks-in-slope cutting across these fluvial deposits, locally coinciding with shallow faults identified via seismic reflection data (Woolery et al., 1999). The deformation pattern is consistent with NE-directed reverse faulting along single or multiple SW-dipping fault planes, and the complex pattern of fluvial deposition appears partially controlled by intermittent uplift. Six localities contain scarps across correlative deposits and allow evaluation of cumulative surface deformation from LiDAR-derived topographic profiles. Displacements range from 3.4±0.2 m, to 2.2±0.2 m, 1.4±0.3 m, and 0.6±0.1 m across four progressively younger surfaces. The spatial distribution of the profiles argues against the differences being a result of along-strike uplift variability. We attribute the lesser displacements of progressively younger deposits to recurrent surface deformation, but do not yet interpret these initial data with respect to possible earthquake

  9. Recent Developments within the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2014-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecast experiments. There are now CSEP testing centers in California, New Zealand, Japan, and Europe, with 430 models under evaluation. In this presentation, we describe how the Southern California Earthquake Center (SCEC) testing center has evolved to meet CSEP objectives and we share our experiences in operating the center. The SCEC testing center has been operational since September 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and a global testing region. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a Department of Homeland Security project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010 Darfield earthquake sequence formed an important addition to the CSEP activities, in which the predictive skills of physics-based and statistical forecasting models were compared. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and the evaluation of the ShakeAlert system for earthquake early warning (EEW), and how CSEP procedures are being adopted for intensity prediction and ground motion prediction experiments.

  10. Magnitudes of selected stellar occultation candidates for Pluto and other planets, with new predictions for Mars and Jupiter

    NASA Technical Reports Server (NTRS)

    Sybert, C. B.; Bosh, A. S.; Sauter, L. M.; Elliot, J. L.; Wasserman, L. H.

    1992-01-01

    Occultation predictions for the planets Mars and Jupiter are presented along with BVRI magnitudes of 45 occultation candidates for Mars, Jupiter, Saturn, Uranus, and Pluto. Observers can use these magnitudes to plan observations of occultation events. The optical depth of the Jovian ring can be probed by a nearly central occultation on 1992 July 8. Mars occults an unusually red star in early 1993, and the occultations for Pluto involving the brightest candidates would possibly occur in the spring of 1992 and the fall of 1993.

  11. Confirmation of linear system theory prediction: Changes in Herrnstein's k as a function of changes in reinforcer magnitude.

    PubMed

    McDowell, J J; Wood, H M

    1984-03-01

    Eight human subjects pressed a lever on a range of variable-interval schedules for 0.25 cent to 35.0 cent per reinforcement. Herrnstein's hyperbola described seven of the eight subjects' response-rate data well. For all subjects, the y-asymptote of the hyperbola increased with increasing reinforcer magnitude and its reciprocal was a linear function of the reciprocal of reinforcer magnitude. These results confirm predictions made by linear system theory; they contradict formal properties of Herrnstein's account and of six other mathematical accounts of single-alternative responding.

  12. Confirmation of linear system theory prediction: Changes in Herrnstein's k as a function of changes in reinforcer magnitude

    PubMed Central

    McDowell, J. J; Wood, Helena M.

    1984-01-01

    Eight human subjects pressed a lever on a range of variable-interval schedules for 0.25¢ to 35.0¢ per reinforcement. Herrnstein's hyperbola described seven of the eight subjects' response-rate data well. For all subjects, the y-asymptote of the hyperbola increased with increasing reinforcer magnitude and its reciprocal was a linear function of the reciprocal of reinforcer magnitude. These results confirm predictions made by linear system theory; they contradict formal properties of Herrnstein's account and of six other mathematical accounts of single-alternative responding. PMID:16812366
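    The reciprocal linearity reported in the two records above suggests a standard way to fit Herrnstein's hyperbola R = k·r/(r + r_e): regress 1/R on 1/r, since 1/R = (r_e/k)(1/r) + 1/k. A sketch with noise-free made-up numbers (not the subjects' data; function name is ours):

```python
def fit_herrnstein(reinforcement_rates, response_rates):
    """Recover k (the y-asymptote) and r_e from the double-reciprocal
    form 1/R = (r_e/k) * (1/r) + 1/k via ordinary least squares."""
    x = [1.0 / r for r in reinforcement_rates]
    y = [1.0 / R for R in response_rates]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    k = 1.0 / intercept          # y-asymptote
    r_e = slope * k              # half-saturation reinforcement rate
    return k, r_e

# Noise-free data generated from k = 100 responses/min, r_e = 50/h:
rates = [10.0, 20.0, 40.0, 80.0, 160.0]
responses = [100.0 * r / (r + 50.0) for r in rates]
k, r_e = fit_herrnstein(rates, responses)  # recovers (100.0, 50.0)
```

    The finding above, that 1/k is itself linear in the reciprocal of reinforcer magnitude, means this fit would be repeated at each magnitude and the resulting 1/k values regressed once more.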

  13. The Earthquake Prediction Experiment on the Basis of the Jet Stream's Precursor

    NASA Astrophysics Data System (ADS)

    Wu, H. C.; Tikhonov, I. N.

    2014-12-01

    Simultaneous analysis of jet stream maps and EQ data for M > 6.0 has been made for 58 EQs that occurred in 2006-2010. It has been found that an interruption of the jet stream, or a crossing of its velocity flow lines, above an epicenter takes place 1-70 days prior to the event; the duration was 6-12 hours. The assumption is that the jet stream goes up or down near an epicenter. In 45 cases the distance between the epicenter and the jet stream precursor does not exceed 90 km. The success rate of forecasts within 30 days before the EQ was 66.1 % (Wu and Tikhonov, 2014). This technique has been used to predict strong EQs, with predictions pre-registered on a website (for example, the 23 October 2011, M 7.2 EQ (Turkey); the 20 May 2012, M 6.1 EQ (Italy); the 16 April 2013, M 7.8 EQ (Iran); the 12 November 2013, M 6.6 EQ (Russia); the 03 March 2014, M 6.7 Ryukyu EQ (Japan); the 21 July 2014, M 6.2 Kuril EQ). We obtain satisfactory accuracy of the epicenter location, and the alarm period is short; these are the positive aspects of the forecast. However, the magnitude estimates contain a large uncertainty. Reference: Wu, H.C., Tikhonov, I.N., 2014. Jet streams anomalies as possible short-term precursors of earthquakes with M > 6.0. Research in Geophysics, Special Issue on Earthquake Precursors. Vol. 4. No 1. doi:10.4081/rg.2014.4939. The precursor of the M9.0 Japan EQ of 2011/03/11 (fig1). A. M6.1 Italy EQ (2012/05/20, 44.80 N, 11.19 E, H = 5.1 km) Prediction: 2012/03/20~2012/04/20 (45.6 N, 10.5 E), M > 5.5 (fig2) http://ireport.cnn.com/docs/DOC-764800 B. M7.8 Iran EQ (2013/04/16, 28.11 N, 62.05 E, H = 82.0 km) Prediction: 2013/01/14~2013/02/04 (28.0 N, 61.3 E) M > 6.0 (fig3) http://ireport.cnn.com/docs/DOC-910919 C. M6.6 Russia EQ (2013/11/12, 54.68 N, 162.29 E, H = 47.2 km). Prediction: 2013/10/27~2013/11/13 (56.0 N, 162.9 E) M > 5.5 http://ireport.cnn.com/docs/DOC-1053599 D. M6.7 Japan EQ (2014/03/03, 27.41 N, 127.34 E, H = 111.2 km). Prediction: 2013/12/02 ~2014/01/15 (26.7 N, 128.1 E) M > 6.5 (fig4) http

  14. How to predict Italy L'Aquila M6.3 earthquake

    NASA Astrophysics Data System (ADS)

    Guo, Guangmeng

    2016-04-01

    Based on a satellite cloud anomaly that appeared over eastern Italy on 21-23 April 2012, we successfully predicted the M6.0 quake that occurred in northern Italy. Here we checked the satellite images of Italy for 2011-2013, and 21 cloud anomalies were found. Their possible correlation with earthquakes larger than M4.7 located on Italy's main fault systems was statistically examined by assuming various lead times. The result shows that when the lead-time interval is set to 23≤ΔT≤45 days, 8 of the 10 quakes were preceded by cloud anomalies. A Poisson random test shows that the AAR (anomaly appearance rate) and EOR (EQ occurrence rate) are much higher than the values expected by chance. This study demonstrates the relation between cloud anomalies and earthquakes in Italy. With this method, we found that the L'Aquila earthquake could also have been predicted from a cloud anomaly.

  15. Predicted liquefaction of East Bay fills during a repeat of the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Blair, J.L.; Noce, T.E.; Bennett, M.J.

    2006-01-01

    Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings using median (50th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M ≥ 7.8 earthquake on the northern San Andreas Fault. © 2006, Earthquake Engineering Research Institute.
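    The liquefaction potential index used above is commonly computed, following Iwasaki et al. (1978), as a depth-weighted integral of a layer severity factor over the top 20 m of soil. A discrete sketch with a hypothetical soil profile; the layer values are assumptions for illustration, not the paper's CPT data:

```python
def liquefaction_potential_index(layers):
    """Discrete LPI: sum of F * w(z) * dz over layers above 20 m depth,
    with the linear depth weight w(z) = 10 - 0.5*z (Iwasaki et al.).

    layers : list of (mid_depth_m, thickness_m, F) tuples, where F in
             [0, 1] is the liquefaction severity factor of the layer.
    """
    total = 0.0
    for z, dz, f in layers:
        if z <= 20.0:
            total += f * (10.0 - 0.5 * z) * dz
    return total

# Hypothetical profile: one fully liquefiable layer, 4-6 m deep.
lpi = liquefaction_potential_index([(5.0, 2.0, 1.0)])  # 1.0 * 7.5 * 2.0 = 15.0
```

    An LPI above about 15 is the threshold often cited for severe surface manifestations, which is the regime the high East Bay probabilities above describe.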

  16. The Color-Magnitude Relation of Cluster Galaxies: Observations and Model Predictions

    NASA Astrophysics Data System (ADS)

    Jiménez, N.; Smith Castelli, A. V.; Cora, S. A.; Bassino, L. P.

    We investigate the origin of the color-magnitude relation (CMR) observed in cluster galaxies by using a combination of cosmological N-body/SPH simulations of galaxy clusters and a semi-analytic model of galaxy formation (Lagos, Cora & Padilla 2008). Simulated results are compared with the photometric properties of early-type galaxies in the Antlia cluster (Smith Castelli et al. 2008). The good agreement obtained between observations and simulations allows us to use the information provided by the model to unveil the physical processes that yield the tight observed CMR.

  17. New vertical geodesy. [VLBI measurements for earthquake prediction

    NASA Technical Reports Server (NTRS)

    Whitcomb, J. H.

    1976-01-01

    The paper contains a review of the theoretical difference between orthometric heights and heights labeled geometric which are determined through use of an extraterrestrial frame of reference. The theory is supplemented with examples which portray very long baseline interferometry as a measuring system that will provide estimates of vertical crustal motion which are radically improved in comparison with those obtained from analysis of repeated geodetic levelings. The example of the San Fernando earthquake of 1971 is used to show how much estimates of orthometric and geometric height change might differ. A comment by another author is appended which takes issue with some of the conclusions of this paper. In particular, an attempt is made in the comment to rebut the conclusion that geodetic leveling is less reliable than VLBI measurements for determining relative elevation change of points separated by more than 56 km.

  18. Ground Motion Prediction of Subduction Earthquakes using the Onshore-Offshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2014-12-01

    Seismic waves produced by earthquakes have caused great damage around the world and remain a real threat to human beings. To reduce the seismic risk associated with future earthquakes, accurate ground motion predictions are required, especially for cities located atop sedimentary basins that can trap and amplify these seismic waves. We focus this study on long-period ground motions produced by subduction earthquakes in Japan, which have the potential to damage large-scale structures such as high-rise buildings, bridges, and oil storage tanks. We extracted the impulse response functions from the ambient seismic field recorded by two stations, using one as a virtual source, without any preprocessing. This method recovers reliable phases and relative, rather than absolute, amplitudes. To retrieve the corresponding Green's functions, the impulse response amplitudes need to be calibrated using observational records of an earthquake that happened close to the virtual source. We show that Green's functions can be extracted between offshore submarine cable-based sea-bottom seismographic observation systems deployed by JMA atop subduction zones and on-land NIED/Hi-net stations. In contrast with physics-based simulations, this approach has the great advantage of predicting ground motions of moderate earthquakes (Mw ~5) at long periods in highly populated sedimentary basins without the need for any external information about the velocity structure.

  19. Spectral models for ground motion prediction in the L'Aquila region (central Italy): evidence for stress-drop dependence on magnitude and depth

    NASA Astrophysics Data System (ADS)

    Pacor, F.; Spallarossa, D.; Oth, A.; Luzi, L.; Puglia, R.; Cantore, L.; Mercuri, A.; D'Amico, M.; Bindi, D.

    2016-02-01

    between seismic moment and local magnitude that improves the existing ones and extends the validity range to 3.0-5.8. We find a significant stress drop increase with seismic moment for events with Mw larger than 3.75, with so-called scaling parameter ε close to 1.5. We also observe that the overall offset of the stress-drop scaling is controlled by earthquake depth. We evaluate the performance of the proposed parametric models through the residual analysis of the Fourier spectra in the frequency range 0.5-25 Hz. The results show that the considered stress-drop scaling with magnitude and depth reduces, on average, the standard deviation by 18 per cent with respect to a constant stress-drop model. The overall quality of fit (standard deviation between 0.20 and 0.27, in the frequency range 1-20 Hz) indicates that the spectral model calibrated in this study can be used to predict ground motion in the L'Aquila region.

  20. Earthquake prediction in the Soviet Union; an interview with I. L. Nersesov

    USGS Publications Warehouse

    Spall, H.

    1980-01-01

    Dr. I. L. Nersesov is a seismologist with the Institute of Physics of the Earth, Academy of Sciences of the U.S.S.R., Moscow. He is one of the leaders in the Soviet national program of earthquake prediction

  1. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  2. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated, using field methods and a database of coseismic effects created as a GIS MapInfo application with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effect from the epicenter and from the causative fault (Lunina et al., 2014). The estimated limit distance to the fault for the largest event (Ms = 8.1) is 130 km, 3.5 times shorter than the corresponding distance to the epicenter, 450 km. In addition, the farther from the causative fault, the fewer liquefaction cases occur: 93% of them lie within 40 km of it. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows the distances to be within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for the locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested relating the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height and intensity index of clastic dikes) to Ms and to local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The obtained results form a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author would like to express her gratitude to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing a laboratory in which to carry out this research, and to the Russian Science Foundation for financial support (Grant 14-17-00007).

  3. Individual preparedness and mitigation actions for a predicted earthquake in Istanbul.

    PubMed

    Tekeli-Yeşil, Sıdıka; Dedeoğlu, Necati; Tanner, Marcel; Braun-Fahrlaender, Charlotte; Obrist, Birgit

    2010-10-01

    This study investigated the process of taking action to mitigate damage and prepare for an earthquake at the individual level. Its specific aim was to identify the factors that promote or inhibit individuals in this process. The study was conducted in Istanbul, Turkey--where an earthquake is expected soon--in May and June 2006 using qualitative methods. Within our conceptual framework, three different patterns emerged among the study subjects. Outcome expectancy, helplessness, a low socioeconomic level, a culture of negligence, a lack of trust, onset time/poor predictability, and normalisation bias inhibit individuals in this process, while location, direct personal experience, a higher education level, and social interaction promote them. Drawing on these findings, the paper details key points for better disaster communication, including whom to mobilise to reach target populations, such as individuals with direct earthquake experience and women.

  4. Change in failure stress on the southern San Andreas fault system caused by the 1992 magnitude = 7.4 Landers earthquake

    USGS Publications Warehouse

    Stein, R.S.; King, G.C.P.; Lin, J.

    1992-01-01

    The 28 June Landers earthquake brought the San Andreas fault significantly closer to failure near San Bernardino, a site that has not sustained a large shock since 1812. Stress also increased on the San Jacinto fault near San Bernardino and on the San Andreas fault southeast of Palm Springs. Unless creep or moderate earthquakes relieve these stress changes, the next great earthquake on the southern San Andreas fault is likely to be advanced by one to two decades. In contrast, stress on the San Andreas north of Los Angeles dropped, potentially delaying the next great earthquake there by 2 to 10 years.
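    The stress transfer described above is conventionally quantified as a change in Coulomb failure stress, ΔCFS = Δτ + μ′Δσn, on a receiver fault. A one-function sketch; the sign conventions, the numbers, and the effective friction coefficient μ′ = 0.4 are our illustrative assumptions, not values from the paper:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault (MPa).

    d_tau     : shear stress change, positive in the fault's slip direction
    d_sigma_n : normal stress change, positive for unclamping
    mu_eff    : assumed effective friction coefficient
    """
    return d_tau + mu_eff * d_sigma_n

# 0.1 MPa of slip-direction shear loading plus 0.05 MPa of unclamping:
dcfs = coulomb_stress_change(0.1, 0.05)  # about 0.12 MPa toward failure
```

    A positive ΔCFS, as computed near San Bernardino in the study above, moves a fault toward failure; a negative value, as north of Los Angeles, delays it.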

  5. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  6. Predicted Attenuation Relation and Observed Ground Motion of Gorkha Nepal Earthquake of 25 April 2015

    NASA Astrophysics Data System (ADS)

    Singh, R. P.; Ahmad, R.

    2015-12-01

    A comparison of the observed ground motion parameters of the recent Gorkha, Nepal earthquake of 25 April 2015 (Mw 7.8) with those predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8000 lives, destroyed thousands of poorly constructed buildings, and was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground motion parameters is very important in developing seismic codes for seismically prone regions like the Himalaya for better design of buildings. The ground motion parameters recorded in the recent earthquake and its aftershocks are compared with attenuation relations for the Himalayan region; the predicted parameters show good correlation with those observed. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions and also for the evaluation of seismic hazards. The results clearly show that only attenuation relations developed for the Himalayan region should be used; relations based on other regions fail to provide good estimates of the observed ground motion parameters.

  7. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2015-12-01

    Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being

  8. Holocene behavior of the Brigham City segment: implications for forecasting the next large-magnitude earthquake on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    Personius, Stephen F.; DuRoss, Christopher B.; Crone, Anthony J.

    2012-01-01

    The Brigham City segment (BCS), the northernmost Holocene‐active segment of the Wasatch fault zone (WFZ), is considered a likely location for the next big earthquake in northern Utah. We refine the timing of the last four surface‐rupturing (~Mw 7) earthquakes at several sites near Brigham City (BE1, 2430±250; BE2, 3490±180; BE3, 4510±530; and BE4, 5610±650 cal yr B.P.) and calculate mean recurrence intervals (1060–1500 yr) that are greatly exceeded by the elapsed time (~2500 yr) since the most recent surface‐rupturing earthquake (MRE). An additional rupture observed at the Pearsons Canyon site (PC1, 1240±50 cal yr B.P.) near the southern segment boundary is probably spillover rupture from a large earthquake on the adjacent Weber segment. Our seismic moment calculations show that the PC1 rupture reduced accumulated moment on the BCS by about 22%, a value that may have been enough to postpone the next large earthquake. However, our calculations suggest that the segment has currently accumulated more than twice the moment accumulated in the three previous earthquake cycles, so we suspect that additional interactions with the adjacent Weber segment contributed to the long elapsed time since the MRE on the BCS. Our moment calculations indicate that the next earthquake is not only overdue but could be larger than the previous four earthquakes. Displacement data show higher rates of latest Quaternary slip (~1.3 mm/yr) along the southern two‐thirds of the segment. The northern third has likely experienced fewer or smaller ruptures, which suggests to us that most earthquakes initiate at the southern segment boundary.

  9. Physically-based modelling of the competition between surface uplift and erosion caused by earthquakes and earthquake sequences.

    NASA Astrophysics Data System (ADS)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2016-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that, above a critical magnitude, earthquakes would induce more erosion than uplift, but other parameters such as fault geometry and earthquake depth have not yet been considered. A new, seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. We compared these eroded-volume predictions with co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the most important parameters, compared with fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. Moreover, for landscapes that are insufficiently steep or earthquake sources that are sufficiently deep, earthquakes are predicted to be always constructive, whatever their magnitude. We have also explored the long-term topographic contribution of earthquake sequences, with either a Gutenberg-Richter distribution or a repeating, characteristic earthquake magnitude. In these models, the seismogenic-layer thickness, which sets the depth range over which the series of earthquakes is distributed, replaces the individual earthquake source depth. We found that in the case of Gutenberg-Richter behavior, relevant for the Himalayan collision for example, the mass balance could remain negative up to Mw ~8 for earthquakes with a sub-optimal uplift contribution (e.g., transpressive or gently dipping earthquakes). Our results indicate that earthquakes probably have a more ambivalent role in topographic building than previously anticipated, and suggest that some fault systems may not induce average topographic growth over their locked zone during a

  10. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
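
    The "more precise" combination described above weights per-scenario conditional spectra by their deaggregation probabilities. A minimal numpy sketch at a single period Ti follows; the function name and all inputs are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def conditional_spectrum(weights, mu, sigma, rho, eps):
    """Combine conditional spectra from several causal earthquakes/GMPMs
    at one period Ti, conditioned on Sa(T*).

    weights -- deaggregation weights p_j per scenario (summing to 1)
    mu, sigma -- GMPM log-mean and log-std of Sa(Ti) per scenario
    rho -- correlation between ln Sa(Ti) and ln Sa(T*) per scenario
    eps -- epsilon of the conditioning Sa(T*) per scenario
    """
    w = np.asarray(weights, float)
    mu_c = np.asarray(mu) + np.asarray(rho) * np.asarray(eps) * np.asarray(sigma)
    var_c = np.asarray(sigma) ** 2 * (1.0 - np.asarray(rho) ** 2)
    mean = np.sum(w * mu_c)                          # weighted conditional mean
    var = np.sum(w * (var_c + (mu_c - mean) ** 2))   # within- plus between-scenario variance
    return mean, np.sqrt(var)
```

    Two equally likely scenarios whose conditional means straddle the target contribute extra between-scenario variance, which an approximate single-scenario calculation would miss.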

  11. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.
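
    The link between rupture area and magnitude follows from the standard seismic-moment relations; a small sketch (the rigidity value and example numbers are assumptions, not figures from the study):

```python
import math

RIGIDITY_PA = 3.0e10  # assumed typical crustal shear modulus (Pa)

def moment_magnitude(area_m2, avg_slip_m, mu=RIGIDITY_PA):
    """Mw from rupture area A and average slip D:
    M0 = mu * A * D (seismic moment, N*m); Mw = (2/3)*(log10(M0) - 9.1)."""
    m0 = mu * area_m2 * avg_slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

    Bounding the likely rupture area, e.g. by the region where slow slip was not observed, therefore bounds the magnitude and tsunami potential.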

  13. The Virtual Quake earthquake simulator: a simulation-based forecast of the El Mayor-Cucapah region and evidence of predictability in simulated earthquake sequences

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea

    2015-12-01

    In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation-type triggering, while others are better characterized by quiescent-type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

  14. New fault picture points toward San Francisco Bay area earthquakes

    SciTech Connect

    Kerr, R.A.

    1989-01-01

    Recent earthquakes along the Calaveras fault in California appear to form a pattern of northward movement along the fault, indicating the possibility of earthquake prediction. Three researchers have analyzed historic microearthquake data for this fault. The paper describes their findings and explains their prediction of a 5.5 magnitude or larger earthquake along a 4-kilometer section of the fault near its northern end.

  15. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    SciTech Connect

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-08

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, which is used as a standard in engineering applications. These features may have a significant impact on the structural response, especially in the nonlinear range, that is hard to predict and to cast in a design format, owing to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motion in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path, and near-surface geological or topographical irregularities. For this purpose the software package GeoELSE, based on the spectral element method, is adopted. The numerical benchmark of 3D ground-motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  16. The nonlinear predictability of the electrotelluric field variations data analyzed with support vector machines as an earthquake precursor.

    PubMed

    Ifantis, A; Papadimitriou, S

    2003-10-01

    This work investigates the nonlinear predictability of electrotelluric field (ETF) variation data in order to develop new intelligent tools for the difficult task of earthquake prediction. Support vector machines trained on a signal window have been used to predict the next sample. We observe a significant increase in the short-term unpredictability of the ETF signal roughly two weeks before major earthquakes that took place in regions near the recording devices. The increase in unpredictability can be attributed to a rapid variation of the dynamics that produce the ETF signal, driven by the earthquake generation process. This increase can therefore be exploited to signal an increased probability of a large earthquake within the next few days in the region neighboring the recording station.
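
    The windowed next-sample scheme can be illustrated without an SVM. In the sketch below, a least-squares linear autoregressive predictor (a deliberate simplification, numpy only) stands in for the paper's support vector machine, and the RMS one-step prediction error plays the role of the unpredictability index:

```python
import numpy as np

def one_step_error(signal, window=10):
    """Fit a linear predictor on sliding windows of `window` past samples
    to predict the next sample; return the RMS prediction error as an
    unpredictability index."""
    x = np.asarray(signal, float)
    X = np.array([x[i:i + window] for i in range(x.size - window)])  # past windows
    y = x[window:]                                                   # next samples
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)                     # linear AR fit
    return float(np.sqrt(np.mean((y - X @ coef) ** 2)))
```

    A signal whose dynamics fit the linear model yields near-zero error; a change in the generating dynamics, as hypothesized before major earthquakes, drives the index up.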

  17. Risk communication on earthquake prediction studies: Possible pitfalls of science communication

    NASA Astrophysics Data System (ADS)

    Oki, S.; Koketsu, K.

    2012-04-01

    The ANSA web news item titled "'No L'Aquila quake risk' experts probed in Italy", published in June 2010, came as a shock to the Japanese seismological community. For the six months preceding the L'Aquila earthquake of 6 April 2009, seismicity in the region had been active. After it intensified further, reaching magnitude 4 on 30 March, the government convened the Major Risks Committee, a part of the Civil Protection Department tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. According to the ANSA report, the committee did not stress the risk of a damaging earthquake at the press conference held afterwards. Six days later, however, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3 June of the following year, prosecutors opened an investigation after complaints from victims that far more people would have fled their homes that night had there been no reassurances from the Major Risks Committee the previous week. The lessons of this affair are of great importance. Science communication is now commonplace, and growing efforts are made to reach out to the public and policy makers, but disaster science involves a much larger proportion of risk communication. A similar incident occurred with the outbreak of BSE in the late 1980s. Many of the measures taken on the advice of the Southwood Committee were laudable, but for one: science at the time could not show whether or not the disease was contagious to humans, and the committee minutes state that "it is unlikely to infect humans". Read thoroughly, the minutes do refer to the risk, but since it was not stressed, the government started a campaign saying that "UK beef is safe". In the presentation, we review the L'Aquila affair, referring to our interviews with some of the committee members and the Civil Protection Department, and also introduce

  18. Risk Communication on Earthquake Prediction Studies -"No L'Aquila quake risk" experts probed in Italy in June 2010

    NASA Astrophysics Data System (ADS)

    Oki, S.; Koketsu, K.; Kuwabara, E.; Tomari, J.

    2010-12-01

    For the six months preceding the L'Aquila earthquake of 6 April 2009, seismicity in the region had been active. After it intensified further, reaching magnitude 4 on 30 March, the government convened the Major Risks Committee, a part of the Civil Protection Department tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. At the press conference immediately after the meeting, they reported that "The scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favorable." Six days later, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3 June of the following year, prosecutors opened an investigation after complaints from victims that far more people would have fled their homes that night had there been no reassurances from the Major Risks Committee the previous week. The issue became widely known in the seismological community especially after an email titled "Letter of Support for Italian Earthquake Scientists", from seismologists at the National Geophysics and Volcanology Institute (INGV), was circulated worldwide. It says that the L'Aquila prosecutor's office indicted the members of the Major Risks Committee for manslaughter, and that the charges are for failing to provide a short-term alarm to the population before the earthquake struck. It is true that there is no generalized method to predict earthquakes, but failing to issue a short-term alarm is not the reason for the investigation of the scientists. The chief prosecutor stated that "the committee could have provided the people with better advice" and that "it wasn't the case that they did not receive any warnings, because there had been tremors". The email also requests sign-on support for an open letter to the president of Italy from Earth-science colleagues around the world, and it has collected more than 5,000 signatures.

  19. Southern San Andreas Fault seismicity is consistent with the Gutenberg-Richter magnitude-frequency distribution

    USGS Publications Warehouse

    Page, Morgan T.; Felzer, Karen

    2015-01-01

    The magnitudes of any collection of earthquakes nucleating in a region are generally observed to follow the Gutenberg-Richter (G-R) distribution. On some major faults, however, paleoseismic rates are higher than a G-R extrapolation from the modern rate of small earthquakes would predict. This, along with other observations, led to formulation of the characteristic earthquake hypothesis, which holds that the rate of small to moderate earthquakes is permanently low on large faults relative to the large-earthquake rate (Wesnousky et al., 1983; Schwartz and Coppersmith, 1984). We examine the rate difference between recent small to moderate earthquakes on the southern San Andreas fault (SSAF) and the paleoseismic record, hypothesizing that the discrepancy can be explained as a rate change in time rather than a deviation from G-R statistics. We find that with reasonable assumptions, the rate changes necessary to bring the small and large earthquake rates into alignment agree with the size of rate changes seen in epidemic-type aftershock sequence (ETAS) modeling, where aftershock triggering of large earthquakes drives strong fluctuations in the seismicity rates for earthquakes of all magnitudes. The necessary rate changes are also comparable to rate changes observed for other faults worldwide. These results are consistent with paleoseismic observations of temporally clustered bursts of large earthquakes on the SSAF and the absence of M greater than or equal to 7 earthquakes on the SSAF since 1857.
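
    The G-R extrapolation at issue can be sketched in a few lines. The maximum-likelihood b-value estimator below (Aki's formula) is standard textbook material, not code from the study, and the numbers are illustrative:

```python
import numpy as np

def gr_b_value(mags, m_min):
    """Aki's maximum-likelihood estimate of the G-R b-value:
    b = log10(e) / (mean(M) - m_min), for magnitudes M >= m_min."""
    m = np.asarray(mags, float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

def gr_rate(a, b, m):
    """Gutenberg-Richter law: expected number of events with magnitude >= m,
    log10 N = a - b*m."""
    return 10.0 ** (a - b * m)
```

    Comparing gr_rate(a, b, 7.0), extrapolated from the modern rate of small earthquakes, against the paleoseismic rate of large ones is exactly the kind of discrepancy the paper examines.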

  20. Underground Sounds. An Approach To Earthquake Prediction By Auditory Seismology

    NASA Astrophysics Data System (ADS)

    Dombois, F.

    Assuming that earthquakes can be predicted, there are possibly two reasons why a reliable prediction theory has not yet been achieved: either we have not collected the right kind of data, or we possess the relevant data but still do not recognize the precursory patterns within it. In the latter case, an alternative way of representing the data is clearly needed, one that reveals and displays precursory patterns so that they can be recognized. In our talk we plead for audification as a data-display technique of special interest for earthquake prediction research. The acoustic approach, which we call "Auditory Seismology", is easy to accomplish: one compresses the time axis of seismological recordings by a factor of about 2,200 and then replays the data through a speaker, so that the seismograms can be listened to. First tried in the 1960s, this technique was already used by several seismologists to trace low-amplitude signals in noisy records. In our preceding investigations, however, we discovered that audification is capable of displaying many more aspects of seismic data with clarity: listening to earthquake sounds, the ear recognizes the broadening of the signal due to the distance between source and station; different source mechanisms, such as those at mid-ocean ridges and subduction zones, show clearly different sound characteristics; the influence of site-response phenomena expresses itself in changes of timbre, etc. (cf. sound samples at http://www.gmd.de/auditory-seismology). Introducing Auditory Seismology to earthquake prediction research could be justified by the uncommon mode of representation alone, which allows a new perspective on the data. But there are further arguments: the use of waveform data instead of calculated event catalogues preserves much information, e.g. the arrival time and relative amplitude of incoming waves even from distant quakes which
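
    The core audification step the author describes amounts to reinterpreting the sample rate; a minimal sketch (the 20 Hz seismic recording rate is an assumed example, not from the talk):

```python
import numpy as np

def audify(samples, seismic_rate_hz=20.0, audio_rate_hz=44100.0):
    """Audification: replay seismogram samples at an audio rate.
    The time axis is compressed by audio_rate/seismic_rate (2205 here),
    so e.g. a 0.01 Hz seismic oscillation is heard at ~22 Hz."""
    compression = audio_rate_hz / seismic_rate_hz
    x = np.asarray(samples, float)
    peak = np.max(np.abs(x))
    if peak > 0:
        x = x / peak                        # normalize to [-1, 1] for playback
    duration_s = x.size / audio_rate_hz     # playback duration in seconds
    return x, compression, duration_s
```

    With these rates the compression factor is 2205, close to the factor of about 2,200 described in the talk; a full day of 20 Hz data (1,728,000 samples) plays back in about 39 seconds.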

  1. Thermal infrared anomalies of several strong earthquakes.

    PubMed

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we study the characteristics of thermal radiation observed before and after 8 great earthquakes of magnitude up to Ms 7.0, using satellite infrared remote-sensing information together with new types of data and a new method to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall behavior of the anomalies comprises two main stages: expanding first and narrowing later. Such seismic anomalies are readily extracted and identified by the "time-frequency relative power spectrum" method. (2) Each case exhibits evident and distinct characteristic periods and magnitudes of anomalous thermal radiation. (3) The thermal radiation anomalies are closely related to geological structure. (4) The thermal radiation has distinctive characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting. PMID:24222728

  3. Accounts of damage from historical earthquakes in the northeastern Caribbean to aid in the determination of their location and intensity magnitudes

    USGS Publications Warehouse

    Flores, Claudia H.; ten Brink, Uri S.; Bakun, William H.

    2012-01-01

    Documentation of a past event depended on the population and political trends of the island, and the availability of historical documents is limited by each archive's digitization schedule and copyright laws. Examples of documents accessed are governors' letters, newspapers, and other circulars published within the Caribbean, North America, and Western Europe. Keywords were used to search for publications containing eyewitness accounts of various large earthquakes. Finally, this catalog provides descriptions of damage to buildings, used in previous studies for the estimation of intensity magnitude (MI) and location of significantly damaging or widely felt earthquakes in Hispaniola and the northeastern Caribbean, all of which have been described in other studies.

  4. Focal mechanisms and moment magnitudes of micro-earthquakes in central Brazil by waveform inversion with quality assessment and inference of the local stress field

    NASA Astrophysics Data System (ADS)

    Carvalho, Juraci; Barros, Lucas Vieira; Zahradník, Jiří

    2016-11-01

    This paper documents an investigation into the use of full-waveform inversion to retrieve focal mechanisms of 11 micro-earthquakes (Mw 0.8 to 1.4). The events are aftershocks of an mb 5.0 earthquake that occurred on October 8, 2010, close to the city of Mara Rosa in the state of Goiás, Brazil. The main contribution of the work lies in demonstrating the feasibility of waveform inversion for such weak events. The inversion was made possible by recordings available at 8 temporary seismic stations at epicentral distances of less than 8 km, at which waveforms can be successfully modeled at relatively high frequencies (1.5-2.0 Hz). On average, the fault-plane solutions obtained agree with a composite focal mechanism previously calculated from first-motion polarities. They also agree with the fault geometry inferred from precise relocation of the Mara Rosa aftershock sequence. The focal mechanisms provide an estimate of the local stress field. This paper serves as a pilot study for similar investigations in intraplate regions where stress-field investigations are difficult due to rare earthquake occurrence, and where weak events must be studied with a detailed quality assessment.

  5. The blink reflex magnitude is continuously adjusted according to both current and predicted stimulus position with respect to the face.

    PubMed

    Wallwork, Sarah B; Talbot, Kerwin; Camfferman, Danny; Moseley, G L; Iannetti, G D

    2016-08-01

    The magnitude of the hand-blink reflex (HBR), a subcortical defensive reflex elicited by the electrical stimulation of the median nerve, is increased when the stimulated hand is close to the face ('far-near effect'). This enhancement occurs through a cortico-bulbar facilitation of the polysynaptic medullary pathways subserving the reflex. Here, in two experiments, we investigated the temporal characteristics of this facilitation, and its adjustment during voluntary movement of the stimulated hand. Given that individuals navigate in a fast changing environment, one would expect the cortico-bulbar modulation of this response to adjust rapidly, and as a function of the predicted spatial position of external threats. We observed two main results. First, the HBR modulation occurs without a temporal delay between when the hand has reached the stimulation position and when the stimulus happens (Experiments 1 and 2). Second, the voluntary movement of the hand interacts with the 'far-near effect': stimuli delivered when the hand is far from the face elicit an enhanced HBR if the hand is being moved towards the face, whereas stimuli delivered when the hand is near the face elicit an enhanced HBR regardless of the direction of the hand movement (Experiment 2). These results indicate that the top-down modulation of this subcortical defensive reflex occurs continuously, and takes into account both the current and the predicted position of potential threats with respect to the body. The continuous control of the excitability of subcortical reflex circuits ensures appropriate adjustment of defensive responses in a rapidly-changing sensory environment. PMID:27236372

  6. Earthquake Scaling, Simulation and Forecasting

    NASA Astrophysics Data System (ADS)

    Sachs, Michael Karl

    Earthquakes are among the most devastating natural events faced by society. In 2011, just two events, the magnitude 6.3 earthquake in Christchurch, New Zealand, on February 22 and the magnitude 9.0 Tohoku earthquake off the coast of Japan on March 11, caused a combined total of $226 billion in economic losses. Over the last decade, 791,721 deaths were caused by earthquakes. Yet, despite their impact, our ability to accurately predict when earthquakes will occur is limited. This is due, in large part, to the fact that the fault systems that produce earthquakes are non-linear: very small differences in the systems now produce very large differences in the future, making forecasting difficult. In spite of this, there are patterns in earthquake data, often in the form of frequency-magnitude scaling relations that relate the number of smaller events observed to the number of larger events observed. In many cases these scaling relations show consistent behavior over a wide range of scales, and this consistency forms the basis of most forecasting techniques. However, the utility of these scaling relations is limited by the size of earthquake catalogs, which, especially in the case of large events, are fairly small and span only a few hundred years of events. In this dissertation I discuss three areas of earthquake science. The first is an overview of scaling behavior in a variety of complex systems, both models and natural systems, with a focus on understanding how this scaling behavior breaks down. The second is a description of the development and testing of an earthquake simulator called Virtual California, designed to extend the observed catalog of earthquakes in California. This simulator uses novel techniques borrowed from statistical physics to enable the modeling of large fault systems over long periods of time. The third is an evaluation of existing earthquake forecasts, which focuses on the Regional
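    The frequency-magnitude scaling relations referred to here are typically of the Gutenberg-Richter form, log10 N(>=M) = a - bM. As a minimal sketch (an illustration, not code from the dissertation), the b value can be estimated from a catalog with the standard Aki maximum-likelihood formula; the synthetic catalog below is an assumption for demonstration:

```python
import math
import random

def b_value_aki(magnitudes, m_min, dm=0.0):
    """Aki (1965) maximum-likelihood b-value estimate; dm is the
    magnitude bin width (0 for continuous magnitudes)."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with true b = 1.0 above M 2.0:
# exceedances above the completeness magnitude are exponentially distributed.
random.seed(0)
beta = 1.0 * math.log(10.0)  # b * ln(10)
catalog = [2.0 + random.expovariate(beta) for _ in range(20000)]

print(round(b_value_aki(catalog, m_min=2.0), 2))  # close to 1.0
```

The estimator degrades quickly if the catalog is incomplete below `m_min`, which is one reason scaling-based forecasts depend so heavily on catalog quality.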

  7. [Comment on "Exaggerated claims about earthquake predictions: Analysis of NASA's method"] Pattern informatics and cellular seismology: A comparison of methods

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Tiampo, Kristy F.; Klein, William

    2007-06-01

    The recent article in Eos by Kafka and Ebel [2007] is a criticism of a NASA press release issued on 4 October 2004 describing an earthquake forecast (http://quakesim.jpl.nasa.gov/scorecard.html) based on a pattern informatics (PI) method [Rundle et al., 2002]. This 2002 forecast was a map indicating the probable locations of earthquakes having magnitude m>5.0 that would occur over the period of 1 January 2000 to 31 December 2009. Kafka and Ebel [2007] compare the Rundle et al. [2002] forecast to a retrospective analysis using a cellular seismology (CS) method. Here we analyze the performance of the Rundle et al. [2002] forecast using the first 15 of the m>5.0 earthquakes that occurred in the area covered by the forecasts.

  8. Paleoseismic investigations in the Santa Cruz mountains, California: Implications for recurrence of large-magnitude earthquakes on the San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Pantosti, D.; Okumura, K.; Powers, T. J.; Hamilton, J. C.

    1998-08-01

    Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that

  9. Generalized Free-Surface Effect and Random Vibration Theory: a new tool for computing moment magnitudes of small earthquakes using borehole data

    NASA Astrophysics Data System (ADS)

    Malagnini, Luca; Dreger, Douglas S.

    2016-07-01

    Although optimal, computing the moment tensor solution is not always a viable option for calculating the size of an earthquake, especially for small events (say, below Mw 2.0). Here we show an alternative approach to the calculation of the moment-rate spectra of small earthquakes, and thus of their scalar moments, that uses a network-based calibration of crustal wave propagation. The method works best when applied to a relatively small crustal volume containing both the seismic sources and the recording sites. In this study we present the calibration of the crustal volume monitored by the High-Resolution Seismic Network (HRSN) along the San Andreas Fault (SAF) at Parkfield. After quantifying the attenuation parameters within the crustal volume under investigation, we proceed to the spectral correction of the observed Fourier amplitude spectra for the 100 largest events in our data set. Multiple estimates of seismic moment for all events (1811 events total) are obtained by calculating the ratio of rms-averaged spectral quantities based on the peak values of the ground velocity in the time domain, as observed in narrowband-filtered time series. The mathematical operations allowing the described spectral ratios are obtained from Random Vibration Theory (RVT). Due to the optimal signal-to-noise conditions of the HRSN, our network-based calibration allows the accurate calculation of seismic moments down to Mw < 0. However, because the HRSN is equipped only with borehole instruments, we define a frequency-dependent Generalized Free-Surface Effect (GFSE), to be used instead of the usual free-surface constant F = 2. Our spectral corrections at Parkfield need a different GFSE for each side of the SAF, which can be quantified by means of the analysis of synthetic seismograms. The importance of the GFSE of borehole instruments increases with decreasing earthquake size, because for smaller earthquakes the bandwidth available

  10. Comment on "The directionality of acoustic T-phase signals from small magnitude submarine earthquakes" [J. Acoust. Soc. Am. 119, 3669-3675 (2006)].

    PubMed

    Bohnenstiehl, Delwayne R

    2007-03-01

    In a recent paper, Chapman and Marrett [J. Acoust. Soc. Am. 119, 3669-3675 (2006)] examined the tertiary (T-) waves associated with three subduction-related earthquakes within the South Fiji Basin. In that paper it is argued that acoustic energy is radiated into the sound channel by downslope propagation along abyssal seamounts and ridges that lie distant to the epicenter. A reexamination of the travel-time constraints indicates that this interpretation is not well supported. Rather, the propagation model that is described would require the high-amplitude T-wave components to be sourced well to the east of the region identified, along a relatively flat-lying seafloor.

  11. Paleoseismic investigations in the Santa Cruz mountains, California: Implications for recurrence of large-magnitude earthquakes on the San Andreas fault

    USGS Publications Warehouse

    Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.

    1998-01-01

    Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that

  12. Earthquakes; July-August, 1978

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    Earthquake activity during this period was about normal. Deaths from earthquakes were reported from Greece and Guatemala. Three major earthquakes (magnitude 7.0-7.9) occurred in Taiwan, Chile, and Costa Rica. In the United States, the most significant earthquake was a magnitude 5.6 on August 13 in southern California. 

  13. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes

    PubMed Central

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities’ preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model’s algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to an occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. PMID:26959647
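    One common way to fold meta-analytic effect measures into a baseline probability, in the spirit of the logistic-regression integration described, is to combine odds ratios on the logit scale. This is a hedged sketch of the general technique, not the study's actual algorithm, and the odds-ratio values are hypothetical:

```python
import math

def adjusted_probability(p_base, odds_ratios):
    """Adjust a baseline probability by multiplying in odds ratios
    on the logit (log-odds) scale, then mapping back to a probability."""
    logit = math.log(p_base / (1.0 - p_base))
    for ratio in odds_ratios:
        logit += math.log(ratio)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical example: 1% baseline casualty probability, two risk
# factors with assumed odds ratios 1.5 (age) and 1.3 (socioeconomic status):
p_adj = adjusted_probability(0.01, [1.5, 1.3])
```

Because the adjustment acts on log-odds rather than on the probability directly, the result always stays in (0, 1) even when several risk factors stack.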

  14. Short-term foreshock activity and its value for the earthquake prediction

    NASA Astrophysics Data System (ADS)

    Orfanogiannaki, Katerina; Daskalaki, Elena; Minadakis, George; Papadopoulos, Gerasimos

    2014-05-01

    Seismicity often occurs in space-time clusters: swarms, short-term foreshocks, aftershocks. Swarms are space-time clusters that do not conclude with a mainshock. Earthquake statistics show that, in areas with good seismicity monitoring, foreshocks precede sizeable (M5.5 or larger) mainshocks at a rate of about one half. Therefore, discrimination between foreshocks and swarms is of crucial importance if foreshocks are to be used as a diagnostic of a forthcoming strong mainshock in real-time conditions. We analyzed seismic sequences in Greece and Italy with our algorithm FORMA (Foreshocks-Mainshock-Aftershocks) and discriminated between foreshocks and swarms based on significant seismicity changes in the space-time-magnitude domains. We argue that different statistical properties are diagnostic of foreshocks (e.g. a b-value drop) as opposed to swarms (a b-value increase). A complementary approach is based on the development of Poisson Hidden Markov Models (PHMMs), which are introduced to model significant temporal seismicity changes. In a PHMM the unobserved sequence of states is a finite-state Markov chain, and the distribution of the observation at any time is Poissonian with a rate depending only on the current state of the chain. Thus, a PHMM allows a region to have a varying seismicity rate. The PHMM is a promising diagnostic since the transition from one state to another depends not only on the total number of events involved but also on the current state of the system. A third methodological experiment was based on complex network theory. We found that the earthquake networks examined form a scale-free degree distribution. By computing their basic statistical measures, such as the average clustering coefficient, mean path length, and entropy, we found that they underline the strong space-time clustering of swarms, foreshocks, and aftershocks but also their important differences. Therefore, network theory is an additional, promising tool to
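    The two-state structure of a Poisson Hidden Markov Model can be sketched as follows; the rates and transition probabilities are made-up values for illustration, not parameters estimated in this work:

```python
import math
import random

# Assumed two-state PHMM: state 0 = background rate, state 1 = cluster rate.
rates = [2.0, 20.0]            # Poisson mean counts per time bin, by state
trans = [[0.95, 0.05],         # row = current state, column = next state
         [0.20, 0.80]]

def poisson_sample(lam, rng):
    """Knuth's multiplicative method for drawing a Poisson variate."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

# Simulate 200 time bins: the hidden chain sets the observed count rate.
rng = random.Random(1)
state, states, counts = 0, [], []
for _ in range(200):
    states.append(state)
    counts.append(poisson_sample(rates[state], rng))
    state = 0 if rng.random() < trans[state][0] else 1
```

In practice the hidden states are not observed; decoding them from the counts (e.g. with the forward-backward or Viterbi algorithm) is what turns the model into a seismicity-rate diagnostic.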

  15. Forecasting magnitude, time, and location of aftershocks for aftershock hazard

    NASA Astrophysics Data System (ADS)

    Chen, K.; Tsai, Y.; Huang, M.; Chang, W.

    2011-12-01

    In this study we investigate the spatial and temporal seismicity parameters of the aftershock sequence accompanying the 17:47 20 September 1999 (UTC) magnitude 7.45 Chi-Chi, Taiwan, earthquake. Dividing the epicentral zone into sections north of the epicenter, at the epicenter, and south of the epicenter, we find that immediately after the earthquake the area close to the epicenter had a lower b value than both the northern and southern sections. This pattern suggests that at the time of the Chi-Chi earthquake, the area close to the epicenter remained prone to large-magnitude aftershocks and strong shaking. However, with time the b value increases, and an increasing b value indicates a reduced likelihood of large-magnitude aftershocks. The study also shows that the b value is higher in the southern section of the epicentral zone, indicating a faster rate of decay there. The primary purpose of this paper is to design a predictive model for forecasting the magnitude, time, and location of aftershocks of large earthquakes. The developed model is presented and applied to the Chi-Chi earthquake and to the 09:32 5 November 2009 (UTC) magnitude 6.19 Nantou and 00:18 4 March 2010 (UTC) magnitude 6.49 Jiashian earthquake sequences. In addition, peak ground acceleration trends for the Nantou and Jiashian aftershock sequences are predicted and compared to actual trends. The estimated peak ground accelerations are remarkably similar to calculations from recorded magnitudes in both trend and level. To improve the predictive skill of the model for occurrence time, we use an empirical relation to forecast the time of aftershocks; the empirical relation improves time prediction over that of random processes. The results will be of interest to seismic mitigation specialists and rescue crews. We also apply the parameters and empirical relation from the Chi-Chi aftershocks of Taiwan to forecast aftershocks with magnitude M > 6.0 of the 05:46 11 March 2011 (UTC) Tohoku 9
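    Aftershock occurrence-time forecasts of this kind typically rest on the modified Omori law, n(t) = K/(t + c)^p. A minimal sketch follows; the parameter values are illustrative assumptions, not those fitted for the Chi-Chi sequence:

```python
def omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events/day) t days after the mainshock."""
    return K / (t + c) ** p

def expected_events(t1, t2, K=100.0, c=0.05, p=1.1):
    """Expected number of aftershocks in the window [t1, t2] days,
    i.e. the integral of the Omori rate (closed form valid for p != 1)."""
    antiderivative = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)
    return K * (antiderivative(t2) - antiderivative(t1))

# The expected count falls off rapidly: most aftershocks occur early.
day1 = expected_events(0.0, 1.0)
week1_rest = expected_events(1.0, 7.0)
```

Combining such a rate model with a Gutenberg-Richter magnitude distribution is the usual route to joint magnitude-and-time aftershock forecasts.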

  16. Heart rate and heart rate variability assessment identifies individual differences in fear response magnitudes to earthquake, free fall, and air puff in mice.

    PubMed

    Liu, Jun; Wei, Wei; Kuang, Hui; Tsien, Joe Z; Zhao, Fang

    2014-01-01

    Fear behaviors and fear memories in rodents have traditionally been assessed by the amount of freezing upon the presentation of conditioned cues or unconditioned stimuli. However, many experiences, such as encountering earthquakes or accidentally falling from tree branches, may produce long-lasting fear memories but are behaviorally difficult to measure using freezing parameters. Here, we examined changes in heartbeat interval dynamics as a physiological readout for assessing fearful reactions as mice were subjected to sudden air puff, free-fall drop inside a small elevator, and a laboratory-version earthquake. We showed that these fearful events rapidly increased heart rate (HR) with a simultaneous reduction of heart rate variability (HRV). Cardiac changes can be further analyzed in detail by measuring three distinct phases: the rapid rising phase in HR, the maximum plateau phase during which HRV is greatly decreased, and the recovery phase during which HR gradually returns to baseline values. We showed that the duration of the maximum plateau phase and the HR recovery speed were quite sensitive to habituation over repeated trials. Moreover, we developed a fear resistance index based on specific cardiac response features. We demonstrated that the fear resistance index remained largely consistent across distinct fearful events in a given animal, enabling us to compare and rank individual mice's fear responsiveness within the group. Therefore, the fear resistance index described here can represent a useful parameter for measuring personality traits or individual differences in stress-susceptibility in both wild-type mice and post-traumatic stress disorder (PTSD) models.
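    The HR and HRV readouts used here can be computed from a series of R-R (interbeat) intervals; RMSSD is one common HRV statistic, though the abstract does not specify the paper's exact processing pipeline. The interval values below are made-up for illustration (roughly mouse-like, in milliseconds):

```python
def hr_and_hrv(rr_ms):
    """Mean heart rate (beats/min) and RMSSD heart-rate variability (ms)
    from successive R-R intervals in milliseconds."""
    mean_rr = sum(rr_ms) / len(rr_ms)
    hr = 60000.0 / mean_rr
    sq_diffs = [(b - a) ** 2 for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = (sum(sq_diffs) / len(sq_diffs)) ** 0.5
    return hr, rmssd

# Illustrative interval series: a startle raises HR and suppresses HRV.
resting  = [105, 98, 110, 95, 102, 108, 97]   # slower, more variable
startled = [80, 79, 81, 80, 79, 80, 81]       # faster, less variable
hr_rest, hrv_rest = hr_and_hrv(resting)
hr_fear, hrv_fear = hr_and_hrv(startled)
```

The simultaneous HR increase and HRV decrease described in the abstract corresponds to `hr_fear > hr_rest` together with `hrv_fear < hrv_rest` in this toy example.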

  17. Heart Rate and Heart Rate Variability Assessment Identifies Individual Differences in Fear Response Magnitudes to Earthquake, Free Fall, and Air Puff in Mice

    PubMed Central

    Kuang, Hui; Tsien, Joe Z.; Zhao, Fang

    2014-01-01

    Fear behaviors and fear memories in rodents have traditionally been assessed by the amount of freezing upon the presentation of conditioned cues or unconditioned stimuli. However, many experiences, such as encountering earthquakes or accidentally falling from tree branches, may produce long-lasting fear memories but are behaviorally difficult to measure using freezing parameters. Here, we examined changes in heartbeat interval dynamics as a physiological readout for assessing fearful reactions as mice were subjected to sudden air puff, free-fall drop inside a small elevator, and a laboratory-version earthquake. We showed that these fearful events rapidly increased heart rate (HR) with a simultaneous reduction of heart rate variability (HRV). Cardiac changes can be further analyzed in detail by measuring three distinct phases: the rapid rising phase in HR, the maximum plateau phase during which HRV is greatly decreased, and the recovery phase during which HR gradually returns to baseline values. We showed that the duration of the maximum plateau phase and the HR recovery speed were quite sensitive to habituation over repeated trials. Moreover, we developed a fear resistance index based on specific cardiac response features. We demonstrated that the fear resistance index remained largely consistent across distinct fearful events in a given animal, enabling us to compare and rank individual mice’s fear responsiveness within the group. Therefore, the fear resistance index described here can represent a useful parameter for measuring personality traits or individual differences in stress-susceptibility in both wild-type mice and post-traumatic stress disorder (PTSD) models. PMID:24667366

  18. Predicting earthquakes along the major plate tectonic boundaries in the Pacific

    USGS Publications Warehouse

    Spall, H.

    1978-01-01

    In an article in the last issue of the Earthquake Information Bulletin ("Earthquakes and Plate Tectonics," by Henry Spall), we saw how 90 percent of the world's earthquakes occur at the margins of the Earth's major crustal plates. However, when we look at the distribution of earthquakes in detail, we see that a number of nearly aseismic regions, or seismic gaps, can be found along the present-day plate boundaries. Why is this? And can we regard these areas as more likely to be the sites of future larger earthquakes than those segments of the plate boundaries that have ruptured recently?

  19. The 26 January 2001 M 7.6 Bhuj, India, earthquake: Observed and predicted ground motions

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.; Bilham, R.; Atkinson, G.M.

    2002-01-01

    Although local and regional instrumental recordings of the devastating 26 January 2001 Bhuj earthquake are sparse, the distribution of macroseismic effects can provide important constraints on the mainshock ground motions. We compiled available news accounts describing damage and other effects and interpreted them to obtain modified Mercalli intensities (MMIs) at >200 locations throughout the Indian subcontinent. These values were then used to map the intensity distribution throughout the subcontinent using a simple mathematical interpolation method. Although preliminary, the maps reveal several interesting features. Within the Kachchh region, the most heavily damaged villages are concentrated toward the western edge of the inferred fault, consistent with western directivity. Significant sediment-induced amplification is also suggested at a number of locations around the Gulf of Kachchh to the south of the epicenter. Away from the Kachchh region, intensities were clearly amplified significantly in areas that are along rivers, within deltas, or on coastal alluvium such as mudflats and salt pans. In addition, we use fault-rupture parameters inferred from teleseismic data to predict shaking intensity at distances of 0-1000 km. We then convert the predicted hard-rock ground-motion parameters to MMI using a relationship (derived from Internet-based intensity surveys) that assigns MMI based on the average effects in a region. The predicted MMIs are typically lower by 1-3 units than those estimated from news accounts, although they do predict near-field ground motions of approximately 80% g and potentially damaging ground motions on hard-rock sites to distances of approximately 300 km. For the most part, this discrepancy is consistent with the expected effect of sediment response, but it could also reflect other factors, such as unusually high building vulnerability in the Bhuj region and a tendency for media accounts to focus on the most dramatic damage, rather than

  20. The USGS plan for short-term prediction of the anticipated Parkfield earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    1988-01-01

    Aside from the goal of better understanding the Parkfield earthquake cycle, it is the intention of the U.S. Geological Survey to attempt to issue a warning shortly before the anticipated earthquake. Although short-term earthquake warnings are not yet generally feasible, the wealth of information available for previous significant Parkfield earthquakes suggests that if the next earthquake follows the pattern of "characteristic" Parkfield shocks, such a warning might be possible. Focusing on earthquake precursors reported for the previous "characteristic" shocks, particularly the 1934 and 1966 events, the USGS developed a plan in late 1985 on which to base earthquake warnings for Parkfield and has assisted state, county, and local officials in the Parkfield area in preparing a coordinated, reasonable response to a warning, should one be issued.

  1. Predictability of catastrophic events: Material rupture, earthquakes, turbulence, financial crashes, and human birth

    PubMed Central

    Sornette, Didier

    2002-01-01

    We propose that catastrophic events are “outliers” with statistically different properties than the rest of the population and result from mechanisms involving amplifying critical cascades. We describe a unifying approach for modeling and predicting these catastrophic events or “ruptures,” that is, sudden transitions from a quiescent state to a crisis. Such ruptures involve interactions between structures at many different scales. Applications and the potential for prediction are discussed in relation to the rupture of composite materials, great earthquakes, turbulence, and abrupt changes of weather regimes, financial crashes, and human parturition (birth). Future improvements will involve combining ideas and tools from statistical physics and artificial/computational intelligence, to identify and classify possible universal structures that occur at different scales, and to develop application-specific methodologies to use these structures for prediction of the “crises” known to arise in each application of interest. We live on a planet and in a society with intermittent dynamics rather than a state of equilibrium, and so there is a growing and urgent need to sensitize students and citizens to the importance and impacts of ruptures in their multiple forms. PMID:11875205

  2. High-Magnitude (>Mw8.0) Megathrust Earthquakes and the Subduction of Thick Sediment, Tectonic Debris, and Smooth Sea Floor

    NASA Astrophysics Data System (ADS)

    Scholl, D. W.; Kirby, S. H.; von Huene, R.; Ryan, H. F.; Wells, R. E.

    2014-12-01

    INTRODUCTION: Ruff (1989, Pure and Applied Geophysics, v. 129) proposed that thick or excess sediment entering the subduction zone (SZ) smooths and strengthens the trench-parallel distribution of interplate coupling strength. This circumstance was conjectured to favor rupture continuation and the generation of interplate thrusts (IPTs) of magnitude >Mw8.2, but, statistically, the correlation of excess sediment and high-magnitude IPTs was deemed "less than compelling". NEW OBSERVATIONS: Using a larger and better-vetted catalog of instrumental-era (1899 through Jan. 2013) IPTs of magnitude Mw7.5 to 9.5 (n=176), and a far more accurate compilation of trench sediment thickness, we tested whether a compelling correlation in fact exists between the occurrence of great IPTs and where thick (>1.0-1.5 km) vs thin (<1.0-0.5 km) sedimentary sections enter the SZ. Based on the new compilations, a statistically supported statement can be made that great megathrusts are most prone to nucleate at well-sedimented SZs. Despite the shorter (by 7500 km) global length of thick-sediment (vs thin-sediment) trenches, ~53% of all instrumental events of magnitude >Mw8.0, ~75% of events >Mw8.5, and 100% of IPTs >Mw9.0 occurred at thick-sediment trenches. No event >Mw9.0 ruptured at thin-sediment trenches; the three super-giant IPTs (1960 Chile Mw9.5, 1964 Alaska Mw9.2, and 2004 Sumatra Mw9.2) all occurred at thick-sediment trenches. Significantly, however, large Mw8.0-9.0 events also commonly (n=23) nucleated at thin-sediment trenches. These IPTs are associated with the subduction of low-relief oceanic crust and with places where the debris of subduction erosion thickens the subduction channel separating the two plates. INFERENCES: Our new, larger, and corrected data compilations support the conjecture by Ruff (1989) that subduction of a thick section of sediment favors rupture continuation and nucleation of high-magnitude Mw8.0 to 9.5 IPTs. This observation can be linked to a causative mechanism of sediment

  3. Earthquake Hazards.

    ERIC Educational Resources Information Center

    Donovan, Neville

    1979-01-01

    Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)

  4. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: the Poisson, geometric, logarithmic and negative binomial (NBD) distributions. The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD; most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss the advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of its application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters; the second parameter can be used to characterize the clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and for their subdivisions into various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict the future earthquake number distribution in regions where very large earthquakes have not yet occurred.
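    As a hedged illustration of the NBD's two-parameter character, window counts can be fitted by the method of moments: p = mean/variance and r = mean²/(variance − mean), which is only defined when the counts are overdispersed (variance > mean). The yearly counts below are made-up numbers, not data from the paper:

```python
def nbd_moments(counts):
    """Method-of-moments negative binomial fit to event counts.
    Returns (r, p) for the NBD with mean r*(1-p)/p and variance r*(1-p)/p**2."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    if var <= mean:
        raise ValueError("counts not overdispersed; a Poisson model may suffice")
    r = mean * mean / (var - mean)   # clustering/overdispersion parameter
    p = mean / var
    return r, p

# Made-up yearly earthquake counts for a strongly clustered region:
counts = [3, 0, 1, 12, 2, 0, 25, 1, 4, 0, 2, 18, 1, 0, 3]
r, p = nbd_moments(counts)
```

A Poisson fit to the same data would force variance = mean; the extra NBD parameter is precisely what absorbs the clustering the abstract describes.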

  5. Late Holocene slip rate of the San Andreas fault and its accommodation by creep and moderate-magnitude earthquakes at Parkfield, California

    USGS Publications Warehouse

    Toke, N.A.; Arrowsmith, J.R.; Rymer, M.J.; Landgraf, A.; Haddad, D.E.; Busch, M.; Coyan, J.; Hannah, A.

    2011-01-01

    Investigation of a right-laterally offset channel at the Miller's Field paleoseismic site yields a late Holocene slip rate of 26.2 +6.4/-4.3 mm/yr (1σ) for the main trace of the San Andreas fault at Parkfield, California. This is the first well-documented geologic slip rate between the Carrizo and creeping sections of the San Andreas fault. This rate is lower than Holocene measurements along the Carrizo Plain and rates implied by far-field geodetic measurements (~35 mm/yr). However, the rate is consistent with historical slip rates, measured to the northwest, along the creeping section of the San Andreas fault (<30 mm/yr). The paleoseismic exposures at the Miller's Field site reveal a pervasive fabric of clay shear bands, oriented clockwise oblique to the San Andreas fault strike and extending into the uppermost stratigraphy. This fabric is consistent with dextral aseismic creep and observations of surface slip from the 28 September 2004 M6 Parkfield earthquake. Together, this slip rate and deformation fabric suggest that the historically observed San Andreas fault slip behavior along the Parkfield section has persisted for at least a millennium, and that significant slip is accommodated by structures in a zone beyond the main San Andreas fault trace. © 2011 Geological Society of America.

  6. Detection of hydrothermal precursors to large northern california earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-01

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes. PMID:17738277

  8. In-situ fluid-pressure measurements for earthquake prediction: An example from a deep well at Hi Vista, California

    USGS Publications Warehouse

    Healy, J.H.; Urban, T.C.

    1985-01-01

    Short-term earthquake prediction requires sensitive instruments for measuring the small anomalous changes in stress and strain that precede earthquakes. Instruments installed at or near the surface have proven too noisy for measuring anomalies of the size expected to occur, and it is now recognized that even to have the possibility of a reliable earthquake-prediction system will require instruments installed in drill holes at depths sufficient to reduce the background noise to a level below that of the expected premonitory signals. We are conducting experiments to determine the maximum signal-to-noise improvement that can be obtained in drill holes. In a 592 m well in the Mojave Desert near Hi Vista, California, we measured water-level changes with amplitudes greater than 10 cm, induced by earth tides. By removing the effects of barometric pressure and the stress related to earth tides, we have achieved a sensitivity to volumetric strain rates of 10⁻⁹ to 10⁻¹⁰ per day. Further improvement may be possible, and it appears that a successful earthquake-prediction capability may be achieved with an array of instruments installed in drill holes at depths of about 1 km, assuming that the premonitory strain signals are, in fact, present. © 1985 Birkhäuser Verlag.
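    As a rough consistency check on the quoted figures, the tidal water-level response can be turned into a strain calibration. The peak tidal volumetric strain of 2×10⁻⁸ used below is an assumed textbook-scale value, not a number from the paper:

```python
# Back-of-envelope strain calibration for a tidally responding well.
tidal_strain = 2e-8     # assumed peak volumetric strain from earth tides
water_level_cm = 10.0   # observed tidal water-level amplitude (from the abstract)

strain_per_cm = tidal_strain / water_level_cm
print(f"{strain_per_cm:.1e} strain per cm of water level")  # 2.0e-09
```

    At this calibration, resolving millimetre-level water-level changes corresponds to strains of order 10⁻¹⁰, the same order as the sensitivity quoted in the abstract.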

  9. Anomalous phenomena in Schumann resonance band observed in China before the 2011 magnitude 9.0 Tohoku-Oki earthquake in Japan

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjuan; Zhou, Zhiquan; Qiao, Xiaolin; Yu, Haiyan

    2013-12-01

    Anomalous phenomena in the Schumann resonance (SR) band, possibly associated with the Tohoku-Oki earthquake (EQ), are studied based on the ELF observations at two stations in China. The anomaly appeared on 8 March, 3 days prior to the main shock, and was characterized by an increase in the intensity at frequencies from the first mode to the fourth mode in both magnetic field components, different from the observations in Japan before large EQs in Taiwan. The abnormal behaviors of the north-south and east-west magnetic field components primarily appeared at 0000-0900 UT and 0200-0900 UT on 8 March, respectively. The finite difference time domain numerical method is applied to model the impact of the seismic process on ELF radio propagation. A partially uniform knee model of the vertical conductivity profile suggested by V. C. Mushtak is used to model the day-night asymmetric Earth-ionosphere cavity, and a locally EQ-induced disturbance model of the atmospheric conductivity is introduced. The atmospheric conductivity is assumed to increase around the epicenter according to the localized enhancement of total electron content in the ionosphere. It is concluded that the SR anomalous phenomena before the Tohoku-Oki EQ are closely related both to the excited sources located in South America and Asia and to the localized distribution of the disturbed conductivity. This work further confirms the relationship of SR anomalies with large EQs and concludes, on the basis of the numerical modeling, that the distortions in the SR band before large EQs may be caused by irregularities located over the shock epicenter in the Earth-ionosphere cavity.

  10. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    NASA Astrophysics Data System (ADS)

    Morley, S. K.; Freeman, M. P.; Tanskanen, E. I.

    2007-11-01

    We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model which scale the contributions to AL from the directly-driven DP2 electrojet and loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different to the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level) agreement are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.

  11. De-confounding of Relations Between Land-Level and Sea-Level Change, Humboldt Bay, Northern California: Uncertain Predictions of Magnitude and Timing of Tectonic and Eustatic Processes

    NASA Astrophysics Data System (ADS)

    Gilkerson, W.; Leroy, T. H.; Patton, J. R.; Williams, T. B.

    2010-12-01

    Humboldt Bay in Northern California provides a unique opportunity to investigate the effects of relative sea level change on both native flora and maritime aquiculture as influenced by both tectonic and eustatic sea-level changes. This combination of superposed influences makes quantitatively predicting relative sea-level more uncertain and consumption of the results for public planning purposes exceedingly difficult. Public digestion for practical purposes is confounded by the fact that the uncertainty for eustatic sea-level changes is a magnitude issue while the uncertainty associated with the tectonic land level changes is both a magnitude and timing problem. Secondly, the public is less well informed regarding how crustal deformation contributes to relative sea-level change. We model the superposed effects of eustatic sea-level rise and tectonically driven land-level changes on the spatial distribution of habitats suitable to native eelgrass (Zostera marina) and oyster mariculture operations in Humboldt Bay. While these intertidal organisms were chosen primarily because they have vertically restricted spatial distributions that can be successfully modeled, the public awareness of their ecologic and economic importance is also well developed. We employ easy to understand graphics depicting conceptual ideas along with maps generated from the modeling results to develop locally relevant estimates of future sea level rise over the next 100 years, a time frame consistent with local planning. We bracket these estimates based on the range of possible vertical deformation changes. These graphic displays can be used as a starting point to propose local outcomes from global and regional relative sea-level changes with respect to changes in the distribution of suitable habitat for ecologically and economically valuable species. 
Currently the largest sources of uncertainty for changes in relative sea-level in the Humboldt Bay area are 1) the rate and magnitude of tectonic

  12. Magnitude of daily energy deficit predicts frequency but not severity of menstrual disturbances associated with exercise and caloric restriction

    PubMed Central

    Leidy, Heather J.; Hill, Brenna R.; Lieberman, Jay L.; Legro, Richard S.; Souza, Mary Jane De

    2014-01-01

    We assessed the impact of energy deficiency on menstrual function using controlled feeding and supervised exercise over four menstrual cycles (1 baseline and 3 intervention cycles) in untrained, eumenorrheic women aged 18–30 yr. Subjects were randomized to either an exercising control (EXCON) or one of three exercising energy deficit (ED) groups, i.e., mild (ED1; −8 ± 2%), moderate (ED2; −22 ± 3%), or severe (ED3; −42 ± 3%). Menstrual cycle length and changes in urinary concentrations of estrone-1-glucuronide, pregnanediol glucuronide, and midcycle luteinizing hormone were assessed. Thirty-four subjects completed the study. Weight loss occurred in ED1 (−3.8 ± 0.2 kg), ED2 (−2.8 ± 0.6 kg), and ED3 (−2.6 ± 1.1 kg) but was minimal in EXCON (−0.9 ± 0.7 kg). The overall sum of disturbances (luteal phase defects, anovulation, and oligomenorrhea) was greater in ED2 compared with EXCON and greater in ED3 compared with EXCON AND ED1. The average percent energy deficit was the main predictor of the frequency of menstrual disturbances (f = 10.1, β = −0.48, r2 = 0.23, P = 0.003) even when weight loss was included in the model. The estimates of the magnitude of energy deficiency associated with menstrual disturbances ranged from −22 (ED2) to −42% (ED3), reflecting an energy deficit of −470 to −810 kcal/day, respectively. This is the first study to demonstrate a dose-response relationship between the magnitude of energy deficiency and the frequency of exercise-related menstrual disturbances; however, the severity of menstrual disturbances was not dependent on the magnitude of energy deficiency. PMID:25352438
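    The percent deficits and kcal/day figures quoted above can be cross-checked with simple arithmetic; backing out the implied total daily energy requirement is our own illustration, not a computation from the paper:

```python
# Implied daily energy requirement = absolute deficit / fractional deficit.
deficits_pct = {"ED2": 0.22, "ED3": 0.42}     # fractional energy deficits
deficits_kcal = {"ED2": 470.0, "ED3": 810.0}  # corresponding kcal/day deficits

for group in deficits_pct:
    implied = deficits_kcal[group] / deficits_pct[group]
    print(group, round(implied))  # ED2 2136, ED3 1929
```

    Both groups imply a baseline requirement of roughly 2,000 kcal/day, so the two ways of stating the deficit are mutually consistent.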

  13. Liquefaction caused by the 2009 Olancha, California (USA), M5.2 earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Jayko, A.S.; Hauksson, E.; Fletcher, J.P.B.; Noce, T.E.; Bennett, M.J.; Dietel, C.M.; Hudnut, K.W.

    2010-01-01

    The October 3, 2009 (01:16:00 UTC), Olancha M5.2 earthquake caused extensive liquefaction as well as permanent horizontal ground deformation within a 1.2 km² area in Owens Valley in eastern California (USA). Such liquefaction is rarely observed during earthquakes of M ≤ 5.2. We conclude that subsurface conditions, not unusual ground motion, were the primary factors contributing to the liquefaction. The liquefaction occurred in very liquefiable sands at shallow depth (< 2 m) in an area where the water table was near the land surface. Our investigation is relevant to both geotechnical engineering and geology. The standard engineering method for assessing liquefaction potential, the Seed–Idriss simplified procedure, successfully predicted the liquefaction despite the small earthquake magnitude. The field observations of liquefaction effects highlight a need for caution by earthquake geologists when inferring prehistoric earthquake magnitudes from paleoliquefaction features, because small-magnitude events may cause such features.

  14. Kindergartners' fluent processing of symbolic numerical magnitude is predicted by their cardinal knowledge and implicit understanding of arithmetic 2 years earlier.

    PubMed

    Moore, Alex M; vanMarle, Kristy; Geary, David C

    2016-10-01

    Fluency in first graders' processing of the magnitudes associated with Arabic numerals, collections of objects, and mixtures of objects and numerals predicts current and future mathematics achievement. The quantitative competencies that support the development of fluent processing of magnitude, however, are not fully understood. At the beginning and end of preschool (M = 3 years 9 months at first assessment, range = 3 years 3 months to 4 years 3 months), 112 children (51 boys) completed tasks measuring numeral recognition and comparison, acuity of the approximate number system, and knowledge of counting principles, cardinality, and implicit arithmetic, and also completed a magnitude processing task (number sets test) in kindergarten. Use of Bayesian and linear regression techniques revealed that two measures of preschoolers' cardinal knowledge and their competence at implicit arithmetic predicted later fluency of magnitude processing, controlling for domain-general factors, preliteracy skills, and parental education. The results help to narrow the search for the early foundation of children's emerging competence with symbolic mathematics and provide direction for early interventions. PMID:27236038

  15. The 2011 Japanese 9.0 magnitude earthquake: Test of a kinetic energy wave model using coastal configuration and offshore gradient of Earth and beyond

    NASA Astrophysics Data System (ADS)

    Mahaney, William C.; Dohm, James M.

    2011-07-01

    gradients would produce higher-energy waves of greater magnitude, as attested by greater penetration inland with considerable loss of life and property damage, and resulting highly deformed grain surfaces. The model could be applicable for assessing modern and ancient coastal environmental conditions on Earth and postulated ancient marine conditions on the Red Planet.

  16. Earthquakes, November-December 1975

    USGS Publications Warehouse

    Person, W.J.

    1976-01-01

    Hawaii experienced its strongest earthquake in more than a century. The magnitude 7.2 earthquake on November 29 killed at least 2 people and injured about 35. These were the first deaths from an earthquake in the United States since the San Fernando earthquake of February 1971.

  17. Telescopic limiting magnitudes

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.

    1990-01-01

    The prediction of the magnitude of the faintest star visible through a telescope by a visual observer is a difficult problem in physiology. Many prediction formulas have been advanced over the years, but most do not even consider the magnification used. Here, the prediction algorithm problem is attacked with two complementary approaches: (1) a theoretical algorithm was developed based on physiological data for the sensitivity of the eye; this algorithm also accounts for the transmission of the atmosphere and the telescope, the brightness of the sky, the color of the star, the age of the observer, the aperture, and the magnification. (2) 314 observed values for the limiting magnitude were collected as a test of the formula. It is found that the formula accurately predicts the average observed limiting magnitudes under all conditions.
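    The aperture contribution in such algorithms can be illustrated with the classical light-grasp relation, which ignores magnification, sky brightness, observer age, and the other factors Schaefer's full algorithm includes; the naked-eye limit of 6.0 and pupil diameter of 7 mm are assumed round numbers:

```python
import math

def limiting_magnitude(aperture_mm: float, naked_eye_limit: float = 6.0,
                       pupil_mm: float = 7.0) -> float:
    """Classical light-grasp estimate: the telescope gains
    5*log10(aperture/pupil) magnitudes over the naked eye."""
    return naked_eye_limit + 5.0 * math.log10(aperture_mm / pupil_mm)

print(round(limiting_magnitude(200.0), 2))  # 13.28 for a 200 mm aperture
```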

  18. Predicted Surface Displacements for Scenario Earthquakes in the San Francisco Bay Region

    USGS Publications Warehouse

    Murray-Moraleda, Jessica R.

    2008-01-01

    In the immediate aftermath of a major earthquake, the U.S. Geological Survey (USGS) will be called upon to provide information on the characteristics of the event to emergency responders and the media. One such piece of information is the expected surface displacement due to the earthquake. In conducting probabilistic hazard analyses for the San Francisco Bay Region, the Working Group on California Earthquake Probabilities (WGCEP) identified a series of scenario earthquakes involving the major faults of the region, and these were used in their 2003 report (hereafter referred to as WG03) and the recently released 2008 Uniform California Earthquake Rupture Forecast (UCERF). Here I present a collection of maps depicting the expected surface displacement resulting from those scenario earthquakes. The USGS has conducted frequent Global Positioning System (GPS) surveys throughout northern California for nearly two decades, generating a solid baseline of interseismic measurements. Following an earthquake, temporary GPS deployments at these sites will be important to augment the spatial coverage provided by continuous GPS sites for recording postseismic deformation, as will the acquisition of Interferometric Synthetic Aperture Radar (InSAR) scenes. The information provided in this report allows one to anticipate, for a given event, where the largest displacements are likely to occur. This information is valuable both for assessing the need for further spatial densification of GPS coverage before an event and prioritizing sites to resurvey and InSAR data to acquire in the immediate aftermath of the earthquake. In addition, these maps are envisioned to be a resource for scientists in communicating with emergency responders and members of the press, particularly during the time immediately after a major earthquake before displacements recorded by continuous GPS stations are available.

  19. Toward Reconciling Magnitude Discrepancies Estimated from Paleoearthquake Data

    SciTech Connect

    N. Seth Carpenter; Suzette J. Payne; Annette L. Schafer

    2012-06-01

    We recognize a discrepancy in magnitudes estimated for several Basin and Range, U.S.A. faults. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths (L_seg), where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements (D) along these same segments. For self-similarity, empirical relationships (e.g., Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e., M ~ L_seg should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data, and parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption, grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984), is equating L_seg with surface rupture length (SRL). Many large historical events yielded secondary and/or sympathetic faulting (e.g., the 1983 Borah Peak, Idaho earthquake), which is included in the measurement of SRL and used to derive empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using L_seg as SRL leads to an underestimation of magnitude and the M ~ L_seg versus M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude (Mw) and length, where length is L_seg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw ~ L_seg results are strikingly consistent
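    The empirical scaling at issue has the generic form M = a + b·log10(length). As a sketch, the default coefficients below are those commonly quoted for the all-slip-type surface-rupture-length regression of Wells and Coppersmith (1994); treat them as illustrative and verify against the published table before any real use:

```python
import math

# Generic magnitude-length regression, M = a + b * log10(L_km).
# Defaults follow the widely quoted Wells & Coppersmith (1994)
# all-slip-type SRL regression (quoted from memory; illustrative only).
def mw_from_length(length_km: float, a: float = 5.08, b: float = 1.16) -> float:
    return a + b * math.log10(length_km)

print(round(mw_from_length(30.0), 2))  # ≈ 6.79 for a 30 km rupture
```

    The discrepancy described above arises when a segment length L_seg, shorter than the full SRL of a historical rupture, is fed into an SRL-calibrated regression of this kind: the output magnitude is biased low.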

  20. Ionospheric precursors for crustal earthquakes in Italy

    NASA Astrophysics Data System (ADS)

    Perrone, L.; Korsunova, L. P.; Mikhailov, A. V.

    2010-04-01

    Crustal earthquakes with magnitude 6.0 > M ≥ 5.5 observed in Italy for the period 1979-2009, including the most recent at L'Aquila on 6 April 2009, were considered to check whether the relationships obtained earlier for ionospheric precursors of strong Japanese earthquakes are valid for moderate Italian earthquakes. The ionospheric precursors are based on the observed variations of the sporadic E-layer parameters (h'Es, fbEs) and foF2 at the ionospheric station Rome. Empirical dependencies for the seismo-ionospheric disturbances relating the earthquake magnitude and the epicenter distance are obtained, and they have been shown to be similar to those obtained earlier for Japanese earthquakes. The dependencies indicate the process of spreading of the disturbance from the epicenter towards the periphery during earthquake preparation. Large lead times for precursor occurrence (up to 34 days for M = 5.8-5.9) indicate a prolonged preparation period. A possibility of using the obtained relationships for earthquake prediction is discussed.

  1. Earthquake occurrence and effects.

    PubMed

    Adams, R D

    1990-01-01

    Although earthquakes are mainly concentrated in zones close to boundaries of tectonic plates of the Earth's lithosphere, infrequent events away from the main seismic regions can cause major disasters. The major cause of damage and injury following earthquakes is elastic vibration, rather than fault displacement. This vibration at a particular site will depend not only on the size and distance of the earthquake but also on the local soil conditions. Earthquake prediction is not yet generally fruitful in avoiding earthquake disasters, but much useful planning to reduce earthquake effects can be done by studying the general earthquake hazard in an area, and taking some simple precautions. PMID:2347628

  2. Beating the Shakes: Predicting and Controlling the Effects of Earthquakes. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1992

    1992-01-01

    This learning module gives background information on earthquakes, their measurement, and sociocultural impact. A design brief contains context, objectives, challenge to students, evaluation method, student quiz, outcomes, glossary, and eight references. (SK)

  3. Prediction of central California earthquakes from soil-gas helium fluctuations

    USGS Publications Warehouse

    Reimer, G.M.

    1985-01-01

    The observations of short-term decreases in helium soil-gas concentrations along the San Andreas Fault in central California have been correlated with subsequent earthquake activity. The area of study is elliptical in shape with radii approximately 160×80 km, centered near San Benito, and with the major axis parallel to the Fault. For 83 percent of the M>4 earthquakes in this area a helium decrease preceded seismic activity by 1.5 to 6.5 weeks. There were several earthquakes without a decrease and several decreases without a corresponding earthquake. Owing to complex and unresolved interaction of many geophysical and geochemical parameters, no suitable model is yet developed to explain the observations. © 1985 Birkhäuser Verlag.

  4. Prediction of central California earthquakes from soil-gas helium fluctuations

    NASA Astrophysics Data System (ADS)

    Reimer, G. M.

    1984-03-01

    The observations of short-term decreases in helium soil-gas concentrations along the San Andreas Fault in central California have been correlated with subsequent earthquake activity. The area of study is elliptical in shape with radii approximately 160×80 km, centered near San Benito, and with the major axis parallel to the Fault. For 83 percent of the M>4 earthquakes in this area a helium decrease preceded seismic activity by 1.5 to 6.5 weeks. There were several earthquakes without a decrease and several decreases without a corresponding earthquake. Owing to complex and unresolved interaction of many geophysical and geochemical parameters, no suitable model is yet developed to explain the observations.

  5. On a report that the 2012 M 6.0 earthquake in Italy was predicted after seeing an unusual cloud formation

    USGS Publications Warehouse

    Thomas, J.N.; Masci, F; Love, Jeffrey J.

    2015-01-01

    Several recently published reports have suggested that semi-stationary linear-cloud formations might be causally precursory to earthquakes. We examine the report of Guangmeng and Jie (2013), who claim to have predicted the 2012 M 6.0 earthquake in the Po Valley of northern Italy after seeing a satellite photograph (a digital image) showing a linear-cloud formation over the eastern Apennine Mountains of central Italy. From inspection of 4 years of satellite images we find numerous examples of linear-cloud formations over Italy. A simple test shows no obvious statistical relationship between the occurrence of these cloud formations and earthquakes that occurred in and around Italy. All of the linear-cloud formations we have identified in satellite images, including that which Guangmeng and Jie (2013) claim to have used to predict the 2012 earthquake, appear to be orographic – formed by the interaction of moisture-laden wind flowing over mountains. Guangmeng and Jie (2013) have not clearly stated how linear-cloud formations can be used to predict the size, location, and time of an earthquake, and they have not published an account of all of their predictions (including any unsuccessful predictions). We are skeptical of the validity of the claim by Guangmeng and Jie (2013) that they have managed to predict any earthquakes.

  6. Predicting the type, location and magnitude of geomorphic responses to dam removal: Role of hydrologic and geomorphic constraints

    NASA Astrophysics Data System (ADS)

    Gartner, John D.; Magilligan, Francis J.; Renshaw, Carl E.

    2015-12-01

    Using a dam removal on the Ashuelot River in southern New Hampshire, we test how a sudden, spatially non-uniform increase in river slope alters sediment transport dynamics and riparian sediment connectivity. Site conditions were characterized by detailed pre- and post-removal field surveys and high-resolution aerial lidar data, and locations of erosion and deposition were predicted through one-dimensional hydrodynamic modeling. The Homestead Dam was a ~ 200 year old, 4 m high, 50 m wide crib dam that created a 9.5 km long, relatively narrow reservoir. Following removal, an exhumed resistant bed feature of glaciofluvial boulders located 400 m upstream and ~ 2.5 m lower than the crest of the dam imposed a new boundary condition in the drained reservoir, acting as a grade control that maintained a backwater effect upstream. During the 15 months following removal, non-uniform erosion in the former reservoir totaled ~ 60,000 m3 (equivalent to ~ 9.3 cm when averaged across the reservoir). Net deposition of ~ 10,700 m3 was measured downstream of the dam, indicating most sediment from the reservoir was carried more than 8 km downstream beyond the study area. The most pronounced bed erosion occurred where modeled sediment transport increased in the downstream direction, and deposition occurred both within and downstream of the former reservoir where modeled sediment transport decreased in the downstream direction. We thus demonstrate that spatial gradients in sediment transport can be used to predict locations of erosion and deposition on the stream bed. We further observed that bed incision was not a necessary condition for bank erosion in the former reservoir. In this characteristically narrow and shallow reservoir lacking abundant dam-induced sedimentation, the variable resistance of the bed and banks acted as geomorphic constraints. Overall, the response deviated from the common conceptual model of knickpoint erosion and channel widening due to dam removal. With
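    The volume figures in the abstract can be cross-checked with simple arithmetic; the implied average reservoir width below is our own back-of-envelope illustration, not a measurement from the study:

```python
# Implied reservoir surface area and average width from the erosion figures.
eroded_m3 = 60_000.0     # net erosion in the former reservoir
avg_erosion_m = 0.093    # ~9.3 cm averaged across the reservoir
length_m = 9_500.0       # reservoir length

area_m2 = eroded_m3 / avg_erosion_m
width_m = area_m2 / length_m
print(round(area_m2), round(width_m, 1))  # ~645161 m^2, ~67.9 m average width
```

    An average width of roughly 68 m is consistent with the abstract's description of a "relatively narrow" reservoir.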

  7. Changes in rat urinary porphyrin profiles predict the magnitude of the neurotoxic effects induced by a mixture of lead, arsenic and manganese.

    PubMed

    Andrade, Vanda; Mateus, M Luísa; Batoréu, M Camila; Aschner, Michael; Marreilha dos Santos, A P

    2014-12-01

    The neurotoxic metals lead (Pb), arsenic (As) and manganese (Mn) are ubiquitous contaminants occurring as mixtures in environmental settings. The three metals may interfere with enzymes of the heme biosynthetic pathway, leading to excessive porphyrin accumulation, which per se may trigger neurotoxicity. Given the multiple mechanisms associated with metal toxicity, we posited that a single biomarker is unlikely to predict neurotoxicity induced by a mixture of metals. Our objective was to evaluate the ability of a combination of urinary porphyrins to predict the magnitude of motor activity impairment induced by a mixture of Pb/As/Mn. Five groups of Wistar rats were treated for 8 days with Pb (5 mg/kg), As (60 mg/L) or Mn (10 mg/kg), and the 3-metal mixture (same doses as the single metals), along with a control group. Motor activity was evaluated after the administration of the last dose, and 24-hour (h) urine was also collected after the treatments. Porphyrin profiles were determined in both the urine and brain. Rats treated with the metal mixture showed a significant decrease in motor parameters compared with controls and the single-metal-treated groups. Both brain and urinary porphyrin levels, when combined and analyzed by multiple linear regression, were predictive of motor activity (p<0.05). The magnitude of change in urinary porphyrin profiles was consistent with the greatest impairments in motor activity as determined by receiver operating characteristic (ROC) curves, with a sensitivity of 88% and a specificity of 96%. Our work strongly suggests that the use of a linear combination of urinary porphyrin levels accurately predicts the magnitude of motor impairments induced in rats by a mixture of Pb, As and Mn.
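    Sensitivity and specificity of the kind reported here come from a 2×2 confusion table; the sketch below uses toy labels for illustration, not the study's data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 1 = impaired motor activity, predictions from a biomarker.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 and ~0.83
```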

  8. Earthquakes: hydrogeochemical precursors

    USGS Publications Warehouse

    Ingebritsen, Steven E.; Manga, Michael

    2014-01-01

    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  9. Earthquakes, July-August 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    There was one major earthquake during this reporting period: a magnitude 7.1 shock off the coast of Northern California on August 17. Earthquake-related deaths were reported from Indonesia, Romania, Peru, and Iraq.

  10. Summary of the GK15 ground‐motion prediction equation for horizontal PGA and 5% damped PSA from shallow crustal continental earthquakes

    USGS Publications Warehouse

    Graizer, Vladimir; Kalkan, Erol

    2016-01-01

    We present a revised ground-motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration (PGA) and 5% damped pseudospectral acceleration (PSA) response ordinates of the horizontal component of randomly oriented ground motions to be used for seismic-hazard analyses and engineering applications. This GMPE is derived from the expanded Next Generation Attenuation (NGA)-West 1 database (see Data and Resources; Chiou et al., 2008). The revised model includes an anelastic attenuation term as a function of quality factor (Q0) to capture regional differences in far-source (beyond 150 km) attenuation, and a new frequency-dependent sedimentary-basin scaling term as a function of depth to the 1.5 km/s shear-wave velocity isosurface to improve ground-motion predictions at sites located on deep sedimentary basins. The new Graizer–Kalkan 2015 (GK15) model, developed to be simple, is applicable to the western United States and other similar shallow crustal continental regions in active tectonic environments for earthquakes with moment magnitudes (M) 5.0–8.0, distances 0–250 km, average shear-wave velocities in the upper 30 m (VS30) of 200–1300 m/s, and spectral periods (T) of 0.01–5 s. Our aleatory variability model captures interevent (between-event) variability, which decreases with magnitude and increases with distance. The mixed-effects residuals analysis reveals that the GK15 has no trend with respect to the independent predictor parameters. Compared to our 2007–2009 GMPE, the PGA values are very similar, whereas the predicted spectral ordinates are larger at T<0.2 s and smaller at longer periods.

  11. Earthquake watch

    USGS Publications Warehouse

    Hill, M.

    1976-01-01

    When the time comes that earthquakes can be predicted accurately, what shall we do with the knowledge? This was the theme of a November 1975 conference on earthquake warning and response, convened in San Francisco by Assistant Secretary of the Interior Jack W. Carlson. Invited were officials of State and local governments from Alaska, California, Hawaii, Idaho, Montana, Nevada, Utah, Washington, and Wyoming, and representatives of the news media.

  12. Estimating density and vertical stress magnitudes using hydrocarbon exploration data in the onshore Northern Niger Delta Basin, Nigeria: Implication for overpressure prediction

    NASA Astrophysics Data System (ADS)

    Adewole, E. O.; Macdonald, D. I. M.; Healy, D.

    2016-11-01

    Different techniques for predicting density have been driven by the generally poor resolution of density logs with depth and the lack of good-quality density data for quantitative formation evaluation studies. The accuracy of most empirical methods is sometimes affected by methodological uncertainty, but is mainly influenced by the reliability of the input data. The benefits of using different density prediction methods were demonstrated on the basis of a pore pressure prediction study from the Northern Niger Delta Basin (NNDB). The assessment of three density prediction methods (Wyllie, Gardner, and Bellotti & Giacca) shows that the Wyllie method is the preferred choice for density prediction in the study area because it has the lowest Least Squares Misfit (LSM) error of 0.0019. The vertical stress gradients (constrained from densities) vary vertically with depth from 19 MPa/km (near the surface) to 25.7 MPa/km (at 4 km depth), and laterally between wells, particularly at the top of high-magnitude overpressures (at 3.5 km), from 23.6 MPa/km to 25.0 MPa/km. The differences in vertical stress gradients are consistent with the density variations observed in the area and have implications for the predicted pore pressures. We found an increase in pressure gradient from the top to the bottom of wells, consistent with data from 87 wells in the study area. We were therefore able to identify three main pressure regimes in the study area: (i) low pressure (close to hydrostatic pressure), (ii) abnormal pressure (a little above hydrostatic) and (iii) high pressure (far above hydrostatic).
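
    Vertical stress gradients like those quoted are obtained by integrating a density log over depth; a minimal sketch using trapezoidal integration (function and argument names are ours, not from the paper):

```python
import numpy as np

def vertical_stress_mpa(depth_m, density_kg_m3, g=9.81):
    """Vertical (overburden) stress profile in MPa from a density log:
    sigma_v(z) = integral of rho * g dz, evaluated by the trapezoid rule."""
    z = np.asarray(depth_m, dtype=float)
    rho_g = np.asarray(density_kg_m3, dtype=float) * g   # Pa per metre
    increments = 0.5 * (rho_g[1:] + rho_g[:-1]) * np.diff(z)
    return np.concatenate([[0.0], np.cumsum(increments)]) / 1e6
```

    A uniform 2,500 kg/m³ column, for instance, accumulates about 24.5 MPa by 1 km depth, i.e. a gradient within the 19–25.7 MPa/km range reported above.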

  13. Regression problems for magnitudes

    NASA Astrophysics Data System (ADS)

    Castellaro, S.; Mulargia, F.; Kagan, Y. Y.

    2006-06-01

    Least-squares linear regression is so popular that it is sometimes applied without checking whether its basic requirements are satisfied. In particular, in studying earthquake phenomena, the conditions (a) that the uncertainty on the independent variable is at least one order of magnitude smaller than that on the dependent variable, (b) that both data and uncertainties are normally distributed and (c) that the residual variance is constant are at times disregarded. This can easily lead to wrong results. As an alternative to least squares, when the ratio between the errors on the independent and dependent variables can be estimated, orthogonal regression can be applied. We test the performance of orthogonal regression in its general form against Gaussian and non-Gaussian data and error distributions, and compare it with standard least-squares regression. General orthogonal regression is found to be superior or equal to standard least squares in all the cases investigated, and its use is recommended. We also compare the performance of orthogonal regression versus standard regression when, as often happens in the literature, the ratio between the errors on the independent and dependent variables cannot be estimated and is arbitrarily set to 1. We apply these results to magnitude scale conversion, a common problem in seismology with important implications for seismic hazard evaluation, and analyse it through specific tests. Our analysis concludes that the commonly used standard regression may induce systematic errors in magnitude conversion as high as 0.3-0.4 and, even more importantly, can introduce apparent catalogue incompleteness, as well as a heavy bias in estimates of the slope of the frequency-magnitude distributions. All this can be avoided by using general orthogonal regression in magnitude conversions.
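
    When the error-variance ratio can be estimated, general orthogonal (Deming) regression has a closed-form slope; a minimal sketch for magnitude-conversion-style fits (the function name and the `delta` convention are ours: `delta` is the assumed ratio of the error variance on y to that on x, with `delta = 1` giving classical orthogonal regression):

```python
import numpy as np

def deming_regression(x, y, delta=1.0):
    """Orthogonal (Deming) regression of y on x.

    delta = Var(error in y) / Var(error in x). delta = 1 is classical
    orthogonal regression; delta -> infinity recovers ordinary least
    squares, since the x-errors then become negligible."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
             ) / (2.0 * sxy)
    intercept = y.mean() - slope * x.mean()
    return intercept, slope
```

    On noise-free collinear data the estimator returns the exact line for any `delta`; the methods differ only when both variables carry error.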

  14. Naïve CD4+ T cell frequency varies for different epitopes and predicts repertoire diversity and response magnitude

    PubMed Central

    Moon, James J.; Chu, H. Hamlet; Pepper, Marion; McSorley, Stephen J.; Jameson, Stephen C.; Kedl, Ross M.; Jenkins, Marc K.

    2007-01-01

    Summary Cell-mediated immunity stems from the proliferation of naïve T lymphocytes expressing T cell antigen receptors (TCR) specific for foreign peptides bound to host Major Histocompatibility Complex (MHC) molecules. Due to the tremendous diversity of the T cell repertoire, naïve T cells specific for any one peptide:MHC complex (pMHC) are extremely rare. Thus, it is not known how many naïve T cells of any given pMHC specificity exist in the body or how that number influences the immune response. Using soluble pMHCII tetramers and magnetic bead enrichment, we found that three different pMHCII-specific naïve CD4+ T cell populations vary in frequency from 20 to 200 cells per mouse. Moreover, naïve population size predicted the size and TCR diversity of the primary CD4+ T cell response after immunization with relevant peptide. Thus, variation in naive T cell frequencies can explain why some peptides are stronger immunogens than others. PMID:17707129

  15. Naive CD4(+) T cell frequency varies for different epitopes and predicts repertoire diversity and response magnitude.

    PubMed

    Moon, James J; Chu, H Hamlet; Pepper, Marion; McSorley, Stephen J; Jameson, Stephen C; Kedl, Ross M; Jenkins, Marc K

    2007-08-01

    Cell-mediated immunity stems from the proliferation of naive T lymphocytes expressing T cell antigen receptors (TCRs) specific for foreign peptides bound to host major histocompatibility complex (MHC) molecules. Because of the tremendous diversity of the T cell repertoire, naive T cells specific for any one peptide:MHC complex (pMHC) are extremely rare. Thus, it is not known how many naive T cells of any given pMHC specificity exist in the body or how that number influences the immune response. By using soluble pMHC class II (pMHCII) tetramers and magnetic bead enrichment, we found that three different pMHCII-specific naive CD4(+) T cell populations vary in frequency from 20 to 200 cells per mouse. Moreover, naive population size predicted the size and TCR diversity of the primary CD4(+) T cell response after immunization with relevant peptide. Thus, variation in naive T cell frequencies can explain why some peptides are stronger immunogens than others. PMID:17707129

  16. The predictability and magnitude of life-history divergence to ecological agents of selection: a meta-analysis in livebearing fishes.

    PubMed

    Moore, Michael P; Riesch, Rüdiger; Martin, Ryan A

    2016-04-01

    Environments causing variation in age-specific mortality - ecological agents of selection - mediate the evolution of reproductive life-history traits. However, the relative magnitude of life-history divergence across selective agents, whether divergence in response to specific selective agents is consistent across taxa, and whether it occurs as predicted by theory remain largely unexplored. We evaluated divergence in offspring size, offspring number, and the trade-off between these traits using a meta-analysis in livebearing fishes (Poeciliidae). Life-history divergence was consistent and predictable for some (predation, hydrogen sulphide) but not all (density, food limitation, salinity) selective agents. In contrast, the magnitudes of divergence among selective agents were similar. Finally, there was a negative, asymmetric relationship between offspring-number and offspring-size divergence, suggesting greater costs of increasing offspring size than number. Ultimately, these results provide strong evidence for predictable and consistent patterns of reproductive life-history divergence and highlight the importance of comparing phenotypic divergence across species and ecological selective agents. PMID:26879778

  17. Risk factors for 30‐day mortality after resection of lung cancer and prediction of their magnitude

    PubMed Central

    Strand, Trond‐Eirik; Rostad, Hans; Damhuis, Ronald A M; Norstein, Jarle

    2007-01-01

    Background There is considerable variability in reported postoperative mortality and risk factors for mortality after surgery for lung cancer. Population‐based data provide unbiased estimates and may aid in treatment selection. Methods All patients diagnosed with lung cancer in Norway from 1993 to the end of 2005 were reported to the Cancer Registry of Norway (n = 26 665). A total of 4395 patients underwent surgical resection and were included in the analysis. Data on demographics, tumour characteristics and treatment were registered. A subset of 1844 patients was scored according to the Charlson co‐morbidity index. Potential factors influencing 30‐day mortality were analysed by logistic regression. Results The overall postoperative mortality rate was 4.4% within 30 days with a declining trend in the period. Male sex (OR 1.76), older age (OR 3.38 for age band 70–79 years), right‐sided tumours (OR 1.73) and extensive procedures (OR 4.54 for pneumonectomy) were identified as risk factors for postoperative mortality in multivariate analysis. Postoperative mortality at high‐volume hospitals (⩾20 procedures/year) was lower (OR 0.76, p = 0.076). Adjusted ORs for postoperative mortality at individual hospitals ranged from 0.32 to 2.28. The Charlson co‐morbidity index was identified as an independent risk factor for postoperative mortality (p = 0.017). A prediction model for postoperative mortality is presented. Conclusions Even though improvements in postoperative mortality have been observed in recent years, these findings indicate a further potential to optimise the surgical treatment of lung cancer. Hospital treatment results varied but a significant volume effect was not observed. Prognostic models may identify patients requiring intensive postoperative care. PMID:17573442

  18. Modeling earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Charpentier, Arthur; Durand, Marilou

    2015-07-01

    In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where the parameters are functions of the magnitude of the previous earthquake. We use these two models, alternately, to generate the dynamics of earthquake occurrence and to estimate the probability of occurrence of several earthquakes within a year or a decade.
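
    The alternating scheme can be sketched as a toy generator: draw a waiting time conditional on the last magnitude, then a magnitude whose Pareto tail index depends on that waiting time. The link functions and constants below are illustrative assumptions, not the paper's fitted forms:

```python
import numpy as np

def simulate_catalog(n, m_min=4.0, seed=0):
    """Toy alternating generator: Gamma waiting times conditioned on the
    previous magnitude, Pareto magnitude excesses whose tail index is a
    function of the previous waiting time. Purely illustrative."""
    rng = np.random.default_rng(seed)
    mags = [m_min + rng.pareto(2.0)]
    waits = []
    for _ in range(n - 1):
        # Gamma-distributed waiting time (days); shorter after larger events
        w = rng.gamma(shape=1.0, scale=30.0 * np.exp(-(mags[-1] - m_min)))
        waits.append(w)
        # Pareto tail index grows (magnitudes shrink) with the waiting time
        alpha = 1.5 + 0.01 * w
        mags.append(m_min + rng.pareto(alpha))
    return np.array(mags), np.array(waits)
```

    Running many such synthetic catalogs and counting events above a magnitude threshold per year gives the kind of occurrence-probability estimate the abstract describes.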

  19. Magnitude correlations and dynamical scaling for seismicity

    SciTech Connect

    Godano, Cataldo; Lippiello, Eugenio; De Arcangelis, Lucilla

    2007-12-06

    We analyze the experimental seismic catalog of Southern California and show the existence of correlations between earthquake magnitudes. We propose a dynamical scaling hypothesis relating time and magnitude as the physical mechanism responsible for the observed magnitude correlations. We show that the experimental distributions in size and time originate naturally from this scaling hypothesis alone. Furthermore, we generate a synthetic catalog reproducing the organization in time and magnitude of the experimental data.

  20. Optimal observation time window for forecasting the next earthquake

    SciTech Connect

    Omi, Takahiro; Shinomoto, Shigeru; Kanter, Ido

    2011-02-15

    We report that the accuracy of predicting the occurrence time of the next earthquake is significantly enhanced by observing the latest rate of earthquake occurrences. The observation period that minimizes the temporal uncertainty of the next occurrence is on the order of 10 hours. This result is independent of the threshold magnitude and is consistent across different geographic areas. This time scale is much shorter than the months or years that have previously been considered characteristic of seismic activities.

  1. Satellite relay telemetry of seismic data in earthquake prediction and control

    USGS Publications Warehouse

    Jackson, Wayne H.; Eaton, Jerry P.

    1971-01-01

    The Satellite Telemetry Earthquake Monitoring Program was started in FY 1968 to evaluate the applicability of satellite relay telemetry to the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research (NCER) in 3 regions along the San Andreas fault in central California; the experience of installing and operating the clusters, and of reducing and analyzing the seismic data from them, was to provide the raw material for evaluation in the satellite relay telemetry project.

  2. The size of earthquakes

    USGS Publications Warehouse

    Kanamori, H.

    1980-01-01

    How we should measure the size of an earthquake has been historically a very important, as well as a very difficult, seismological problem. For example, figure 1 shows the loss of life caused by earthquakes in recent times and clearly demonstrates that 1976 was the worst year for earthquake casualties in the 20th century. However, the damage caused by an earthquake is due not only to its physical size but also to other factors such as where and when it occurs; thus, figure 1 is not necessarily an accurate measure of the "size" of earthquakes in 1976. The point is that the physical process underlying an earthquake is highly complex; we therefore cannot express every detail of an earthquake by a simple straightforward parameter. Indeed, it would be very convenient if we could find a single number that represents the overall physical size of an earthquake. This was in fact the concept behind the Richter magnitude scale introduced in 1935.
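
    The "single number" sought here was later formalized as the moment magnitude scale of Hanks and Kanamori (1979), which ties the magnitude directly to the seismic moment; a one-line helper for the standard relation (M0 in dyne·cm):

```python
import math

def moment_magnitude(m0_dyne_cm):
    """Moment magnitude, Mw = (2/3) * log10(M0) - 10.7, with the seismic
    moment M0 given in dyne-cm (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7
```

    For example, M0 = 1e27 dyne·cm gives Mw ≈ 7.3; because the scale is logarithmic, a tenfold increase in moment adds 2/3 of a magnitude unit.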

  3. Earthquakes and plate tectonics.

    USGS Publications Warehouse

    Spall, H.

    1982-01-01

    Earthquakes occur at the following three kinds of plate boundary: ocean ridges where the plates are pulled apart, margins where the plates scrape past one another, and margins where one plate is thrust under the other. Thus, we can predict the general regions on the earth's surface where we can expect large earthquakes in the future. We know that each year about 140 earthquakes of magnitude 6 or greater will occur within this area, which is 10% of the earth's surface. But on a worldwide basis we cannot say with much accuracy when these events will occur. The reason is that the processes in plate tectonics have been going on for millions of years. Averaged over this interval, plate motions amount to several mm per year. But at any instant in geologic time, for example the year 1982, we do not know exactly where we are in the worldwide cycle of strain build-up and strain release. Only by monitoring the stress and strain in small areas, for instance the San Andreas fault, in great detail can we hope to predict when renewed activity in that part of the plate tectonics arena is likely to take place. -from Author

  4. A landslide susceptibility prediction on a sample slope in Kathmandu Nepal associated with the 2015's Gorkha Earthquake

    NASA Astrophysics Data System (ADS)

    Kubota, Tetsuya; Prasad Paudel, Prem

    2016-04-01

    In 2013, several landslides induced by heavy rainfall occurred in the southern suburbs of Kathmandu, the capital of Nepal. These landslide slopes were hit by the strong Gorkha Earthquake in April 2015 and appeared to destabilize again. To clarify their landslide susceptibility under the earthquake, the slope stability of one of these landslide slopes was analyzed with CSSDP (Critical Slip Surface analysis by Dynamic Programming, based on the limit equilibrium method, especially the Janbu method) against slope failure, for the various seismic accelerations observed around Kathmandu during the Gorkha Earthquake. The CSSDP automatically detects the landslide slip surface with the minimum Fs (factor of safety) using dynamic programming theory. The geology in this area mainly consists of fragile schist and is prone to landslides. A field survey was conducted to obtain topographic data such as ground surface and slip surface cross sections. Soil parameters obtained by geotechnical tests on field samples were applied. Consequently, the slope shows the following distinctive stability characteristics: (1) With heavy rainfall, it collapsed and had a factor of safety Fs < 1.0 (0.654 or more). (2) With a seismic acceleration of 0.15G (147 gal) observed around Kathmandu, it has Fs = 1.34. (3) With a possible local seismic acceleration of 0.35G (343 gal) estimated at Kathmandu, it has Fs = 0.989. If it were a very shallow landslide covered with cedars, it could have Fs = 1.055 due to the root reinforcement effect on soil strength. (4) Without seismic acceleration and under no-rainfall conditions, it has Fs = 1.75. These results can explain the actual landslide occurrence in this area, with the maximum seismic acceleration estimated at 0.15G in the vicinity of Kathmandu during the Gorkha Earthquake. They therefore indicate the landslide susceptibility of the slopes in this area under strong earthquakes. In this situation, it is possible to predict
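
    A full Janbu-type limit-equilibrium search is beyond a snippet, but the way a horizontal seismic coefficient lowers the factor of safety can be illustrated with the much simpler pseudo-static infinite-slope model (a sketch with illustrative parameters, not the CSSDP analysis or the paper's soil values):

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma_kn_m3, depth_m, slope_deg, k_h=0.0):
    """Pseudo-static factor of safety of an infinite slope.

    k_h is the horizontal seismic coefficient (e.g. 0.15 for 0.15 g).
    Forces are taken per unit plan area of a slab of vertical depth
    depth_m; the slip surface is parallel to the ground surface."""
    beta, phi = math.radians(slope_deg), math.radians(phi_deg)
    w = gamma_kn_m3 * depth_m                               # slab weight
    shear = w * (math.sin(beta) + k_h * math.cos(beta))     # driving force
    normal = w * (math.cos(beta) - k_h * math.sin(beta))    # normal force
    base_area = 1.0 / math.cos(beta)   # base length per unit plan width
    return (c_kpa * base_area + normal * math.tan(phi)) / shear
```

    With zero cohesion, a dry slope at its friction angle sits exactly at Fs = 1; any positive k_h pushes it below unity, mirroring how the 0.35G case above drops Fs under 1.0.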

  5. Crowdsourced earthquake early warning.

    PubMed

    Minson, Sarah E; Brooks, Benjamin A; Glennie, Craig L; Murray, Jessica R; Langbein, John O; Owen, Susan E; Heaton, Thomas H; Iannucci, Robert A; Hauser, Darren L

    2015-04-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an M w (moment magnitude) 7 earthquake on California's Hayward fault, and real data from the M w 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing. PMID:26601167

  6. Crowdsourced earthquake early warning

    PubMed Central

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing. PMID:26601167

  8. Earthquake ground motion prediction for real sedimentary basins: which numerical schemes are applicable?

    NASA Astrophysics Data System (ADS)

    Moczo, P.; Kristek, J.; Galis, M.; Pazak, P.

    2009-12-01

    Numerical prediction of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (Vp/Vs) as large as 5 and even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10 in unconsolidated sediments (e.g. in Ciudad de México). In the process of developing 3D optimally accurate finite-difference schemes we encountered a serious problem with accuracy in media with a large Vp/Vs ratio. This led us to investigate the fundamental reasons for the inaccuracy. In order to identify the basic inherent aspects of the numerical schemes responsible for their behavior with varying Vp/Vs ratio, we restricted ourselves to the most basic 2nd-order 2D numerical schemes on a uniform grid in a homogeneous medium. Although basic in the specified sense, the schemes comprise the decisive features for the accuracy of a wide class of numerical schemes. We investigated six numerical schemes: finite-difference displacement conventional grid (FD_D_CG); finite-element Lobatto integration (FE_L); finite-element Gauss integration (FE_G); finite-difference displacement-stress partly-staggered grid (FD_DS_PSG); finite-difference displacement-stress staggered grid (FD_DS_SG); and finite-difference velocity-stress staggered grid (FD_VS_SG). We defined and calculated local errors of the schemes in amplitude and polarization. Because different schemes use different time steps, they need different numbers of time levels to calculate the solution for a desired time window. We therefore normalized errors for a unit time, which allowed a direct comparison of errors across schemes. Extensive numerical calculations for wide ranges of values of the Vp/Vs ratio, spatial sampling ratio and stability ratio, and for the entire range of directions of propagation with respect to the spatial grid, led to interesting and surprising findings. The accuracy of FD_D_CG, FE_L and FE_G strongly depends on the Vp/Vs ratio. The schemes are not

  9. Earthquakes; July-August 1977

    USGS Publications Warehouse

    Person, W.J.

    1978-01-01

    July and August were somewhat active seismically speaking, compared to previous months of this year. There were seven earthquakes having magnitudes of 6.5 or greater. The largest was a magnitude 8.0 earthquake south of Sumbawa Island on August 19 that killed at least 111 people. The United States experienced a number of earthquakes during this period, but only one, in California, caused some minor damage.

  10. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Among the many kinds of natural and man-made disasters, earthquakes dominate with regard to their social and economic impact on the urban environment. Global seismic risk is increasing steadily as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. Additionally, the uncontrolled growth of megacities in highly seismic areas around the world is often associated with the construction of seismically unsafe buildings and infrastructure, undertaken with insufficient knowledge of the regional seismicity and seismic hazard. The assessment of seismic hazard and the generation of earthquake scenarios is the first link in the prevention chain and the first step in the evaluation of seismic risk. Earthquake scenarios are intended as a basic input for developing detailed earthquake damage scenarios for cities and can be used in earthquake-safe town and infrastructure planning. The city of Sofia is the capital of Bulgaria. It is situated in the centre of the Sofia area, the most populated (more than 1.2 million inhabitants), industrial and cultural region of Bulgaria, which faces considerable earthquake risk. The available historical documents prove the occurrence of destructive earthquakes during the 15th-18th centuries in the Sofia zone. In the 19th century the city of Sofia experienced two strong earthquakes: the 1818 earthquake with epicentral intensity I0=8-9 MSK and the 1858 earthquake with I0=9-10 MSK. During the 20th century the strongest event to occur in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK). Almost a century (95 years) later, an earthquake of moment magnitude 5.6 (I0=7-8 MSK) hit the city of Sofia on May 22nd, 2012. In the present study, the deterministic scenario event considered is a damaging earthquake with a higher probability of occurrence that could affect the city with intensity less than or equal to VIII

  11. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  12. An Effective Hybrid Support Vector Regression with Chaos-Embedded Biogeography-Based Optimization Strategy for Prediction of Earthquake-Triggered Slope Deformations

    NASA Astrophysics Data System (ADS)

    Heidari, A. A.; Mirvahabi, S. S.; Homayouni, S.

    2015-12-01

    Earthquakes can pose severe hazards to natural slopes and land infrastructure. One of their chief consequences can be landsliding, triggered by sustained shaking. In this research, an efficient procedure is proposed to assist the prediction of earthquake-induced slope displacements (EIDS). A new hybrid SVM-CBBO strategy is implemented to predict the EIDS. For this purpose, first, a chaos paradigm is combined with the initialization of biogeography-based optimization (BBO) to enhance the diversification and intensification capacity of the conventional BBO optimizer. Then, chaotic BBO is used as the search scheme to find the best values of the SVR parameters. We confirm the effectiveness of the new computing approach for the prediction of EIDS. The outcomes affirm that the chaos-embedded SVR-BBO strategy can be employed effectively as a predictive tool for evaluating the EIDS.

  13. Fault failure with moderate earthquakes

    USGS Publications Warehouse

    Johnston, M.J.S.; Linde, A.T.; Gladwin, M.T.; Borcherdt, R.D.

    1987-01-01

    High-resolution strain and tilt recordings were made in the near field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10⁻⁸), with borehole dilatometers (resolution 10⁻¹⁰) and a 3-component borehole strainmeter (resolution 10⁻⁹). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure. © 1987.

  14. Landslide seismic magnitude

    NASA Astrophysics Data System (ADS)

    Lin, C. H.; Jan, J. C.; Pu, H. C.; Tu, Y.; Chen, C. C.; Wu, Y. M.

    2015-11-01

    Landslides have become one of the most deadly natural disasters on earth, not only due to a significant increase in extreme climate events caused by global warming, but also due to rapid economic development in areas of topographic relief. How to detect landslides using a real-time system has become an important question for reducing possible landslide impacts on human society. However, traditional detection of landslides, either through direct surveys in the field or remote sensing images obtained via aircraft or satellites, is highly time consuming. Here we analyze very long period seismic signals (20-50 s) generated by large landslides, such as those triggered by Typhoon Morakot, which passed through Taiwan in August 2009. In addition to successfully locating 109 large landslides, we define landslide seismic magnitude based on an empirical formula: Lm = log(A) + 0.55 log(Δ) + 2.44, where A is the maximum displacement (μm) recorded at one seismic station and Δ is its distance (km) from the landslide. We conclude that both the location and seismic magnitude of large landslides can be rapidly estimated from broadband seismic networks for both academic and applied purposes, similar to earthquake monitoring. We suggest a real-time algorithm be set up for routine monitoring of landslides in places where they pose a frequent threat.
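    The empirical formula quoted in the abstract is straightforward to apply; a minimal sketch (the displacement and distance values in the example are hypothetical, not from the paper):

```python
import math

def landslide_magnitude(a_um: float, delta_km: float) -> float:
    """Landslide seismic magnitude from the empirical formula of Lin et al.
    (2015): Lm = log10(A) + 0.55*log10(delta) + 2.44, where A is the maximum
    displacement (micrometers) at one station and delta is the
    station-to-landslide distance (km)."""
    return math.log10(a_um) + 0.55 * math.log10(delta_km) + 2.44

# Hypothetical reading: 10 um peak displacement recorded 100 km away.
print(round(landslide_magnitude(10.0, 100.0), 2))  # -> 4.54
```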

  15. Application of Dynamic Strains to Earthquake Source Characterization and Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Barbour, A. J.; Crowell, B. W.

    2015-12-01

    Borehole strainmeters can provide data that are useful for rapid earthquake source characterization, which is necessary for earthquake early warning. We analyze high-frequency (1-Hz) strains from 180 earthquakes that occurred between 2004 and 2012, recorded by 68 Plate Boundary Observatory (PBO) borehole strainmeter (BSM) stations, with moment magnitudes M ranging from 4.6 to 7.2, depths ranging from 12 km to 33 km, and hypocentral distances ranging from 13 km to 500 km. The analysis reveals that peak dynamic strains can be predicted, with high statistical confidence, from the magnitude of the earthquake and its hypocentral distance. Our regression model also holds for high-rate GPS-derived strains during the 2011 M9 Tohoku-oki earthquake, from GEONET subnetworks located as close as 140 km, indicating that the model does not saturate for large earthquakes. Moreover, using linear mixed-effects regression, we show that the largest source of bias in the residual mean squared error of the magnitude-distance regression arises from effects associated with the source and/or propagation path, rather than with the station. For instance, earthquakes on the Blanco fracture zone produce dynamic strains with lower amplitudes on average than earthquakes around the Sierra microplate, indicating that dynamic strains are also affected by earthquake source parameters other than the magnitude; these source and path effects are not, however, large enough to degrade the relationship between dynamic strain, magnitude, and distance. We also show that including PBO stations in current source characterization efforts would significantly enhance station density in critical regions: 32 (41%) of the BSMs are located within 280 km of the trench axis of the Cascadia subduction zone where, for example, the probability of a M9 earthquake within 50 years might be as high as 15% and the probability of a smaller but still damaging M8 event might be as high as 40% (Goldfinger et al., 2012).
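    A magnitude-distance scaling relation of the general form used in such studies, log10(strain) = a + b·M + c·log10(R), can be sketched as follows; the coefficients below are hypothetical placeholders, not the published Barbour & Crowell regression values:

```python
import math

# Hypothetical placeholder coefficients for a log-linear scaling relation
# log10(strain) = a + b*M + c*log10(R_km); NOT the published regression.
A_COEF, B_COEF, C_COEF = -9.0, 1.0, -1.3

def predicted_peak_strain(magnitude: float, r_km: float) -> float:
    """Predicted peak dynamic strain (dimensionless) at hypocentral
    distance r_km for an earthquake of the given moment magnitude."""
    log10_strain = A_COEF + B_COEF * magnitude + C_COEF * math.log10(r_km)
    return 10.0 ** log10_strain

# Strain grows with magnitude and attenuates with distance:
for m, r in [(5.0, 100.0), (7.0, 100.0), (7.0, 400.0)]:
    print(m, r, predicted_peak_strain(m, r))
```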

  16. Lightning Activities and Earthquakes

    NASA Astrophysics Data System (ADS)

    Liu, Jann-Yenq

    2016-04-01

    Lightning activity is one of the key parameters for understanding the atmospheric electric fields and/or currents near the Earth's surface, as well as the lithosphere-atmosphere coupling during the earthquake preparation period. In this study, to see whether or not lightning activities are related to earthquakes, we statistically examine lightning activities 30 days before and after 78 land and 230 sea M>5.0 earthquakes in Taiwan during the 12-year period 1993-2004. Lightning activities versus the location, depth, and magnitude of earthquakes are investigated. Results show that lightning activities tend to appear around the forthcoming epicenter and are significantly enhanced a few days, especially 17-19 days, before M>6.0 shallow (depth D < 20 km) land earthquakes. Moreover, the size of the area around the epicenter with statistically significant enhancement of lightning activity is proportional to the earthquake magnitude.

  17. Application Prospect of Environmental Isotopes and Tracing Techniques for Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    LIU, Yaowei; REN, Hongwei; WANG, Bo

    In recent years, with the progress of new observation techniques and related theories, environmental-isotope tracing has been widely applied to investigate groundwater dynamics. It is expected that environmental-isotope tracing will help identify the dynamic information of groundwater associated with earthquake gestation and occurrence. In this study, the tracing techniques are investigated using the characteristics of stable isotopes, including 2H and 18O, as well as radioactive isotopes such as 3H and 14C. In particular, the behavior of groundwater, including groundwater recharge, the circulation depth of geothermal water, water-rock interaction, and groundwater age, is discussed in detail in terms of environmental isotopes. The study analyzes the response of groundwater to stress-strain changes and to tectonic activity. It is worth mentioning that environmental-isotope tracing can be applied to identify information on seismic subsurface fluids, determine the intensity of seismicity, discuss the role of fluids in earthquake gestation, and estimate the earthquake-reflecting effectiveness of a borehole.

  18. Satellite Relay Telemetry of Seismic Data in Earthquake Prediction and Control

    NASA Technical Reports Server (NTRS)

    Jackson, W. H.; Eaton, J. P.

    1971-01-01

    The Satellite Telemetry Earthquake Monitoring Program was started to evaluate the applicability of satellite relay telemetry in the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research in 3 regions along the San Andreas fault in central California; and the experience of installing and operating the clusters and in reducing and analyzing the seismic data from them was to provide the raw materials for evaluation in the satellite relay telemetry project. The principal advantages of the satellite relay system over commercial telephone or microwave systems were: (1) it could be made less prone to massive failure during a major earthquake; (2) it could be extended readily into undeveloped regions; and (3) it could provide flexible, uniform communications over large sections of major global tectonic zones. Fundamental characteristics of a communications system to cope with the large volume of raw data collected by a short-period seismograph network are discussed.

  19. Earthquake at 40 feet

    USGS Publications Warehouse

    Miller, G. J.

    1976-01-01

    The earthquake that struck the island of Guam on November 1, 1975, at 11:17 a.m. had many unique aspects, not the least of which was the experience of an earthquake of Richter magnitude 6.25 while at a depth of 40 feet. My wife Bonnie, fellow diver Greg Guzman, and I were diving at Gabgab Beach in the outer harbor of Apra Harbor, engaged in underwater photography, when the earthquake struck. 

  20. The mass balance of earthquakes and earthquake sequences

    NASA Astrophysics Data System (ADS)

    Marc, O.; Hovius, N.; Meunier, P.

    2016-04-01

    Large, compressional earthquakes cause surface uplift as well as widespread mass wasting. Knowledge of their trade-off is fragmentary. Combining a seismologically consistent model of earthquake-triggered landsliding and an analytical solution of coseismic surface displacement, we assess how the mass balance of single earthquakes and earthquake sequences depends on fault size and other geophysical parameters. We find that intermediate-size earthquakes (Mw 6-7.3) may cause more erosion than uplift, controlled primarily by seismic source depth and landscape steepness, and less so by fault dip and rake. Such earthquakes can limit topographic growth, but our model indicates that both smaller and larger earthquakes (Mw < 6, Mw > 7.3) systematically cause mountain building. Earthquake sequences with a Gutenberg-Richter distribution have a greater tendency to lead to predominant erosion than repeating earthquakes of the same magnitude, unless a fault can produce earthquakes with Mw > 8.

  1. An overview about the pre-earthquake signals observed in relation to Tohoku M9 EQ and the current status of EQ prediction study in Japan

    NASA Astrophysics Data System (ADS)

    Nagao, T.

    2012-12-01

    On 11 March 2011, the M9.0 Tohoku EQ, with a huge tsunami, occurred, devastating the Pacific side of entire northeastern Japan. The EQ disaster was compounded by the nuclear hazard arising from the Fukushima #1 nuclear power plant; "Fukushima" has become as widely known a Japanese word as "Hiroshima". In the presentation, the author summarizes seismic, geodetic, and electromagnetic pre-earthquake changes, excepting ionospheric phenomena such as OLR and GPS-TEC anomalies. Seismic and geodetic anomalies: Concerning seismicity, many authors report seismic quiescence. Katsumata (2011, EPS, 63, 709-712) claimed more than 20 years of seismic quiescence using the JMA seismic catalog since 1965. The Institute of Statistical Mathematics also claimed notable seismic quiescence over the whole Japanese islands 15 years ago. Furthermore, the author's group applied the weighted coefficient method in time, space, and magnitude, called the RTM method (Nagao et al., 2011, EPS, 63, 315-324). The RTM method also showed a clear seismicity change almost two years before the EQ. Concerning the b-value of the GR law, many researchers stated that the b-value of the foreshock activity was very small (0.4). Tanaka (2012, GRL, 39, L00G26, doi:10.1029/2012GL051179) reported strong long-term statistical correlations between tidally induced stresses and earthquake occurrence times. The author considers this phenomenon one of the most notable pre-earthquake changes before the EQ. Kato (2012, Science, 335, 705-708, doi:10.1126/science.1215141) identified two distinct sequences of foreshocks migrating at rates of 2 to 10 kilometers per day along the trench axis toward the epicenter. According to the GPS baseline observations, GSI reported that non-steady-state changes started around 2003 in the Tohoku region. Furthermore, large back-slip has also been recognized around the epicentral area since 2007. Electromagnetic anomalies: According to Hase (pers. 
comm.), after correction of

  2. Short-term forecasting of Taiwanese earthquakes using a universal model of fusion-fission processes.

    PubMed

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M A; Johnson, Neil F

    2014-01-10

    Predicting how large an earthquake will be, and where and when it will strike, remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow.

  3. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake will be, and where and when it will strike, remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
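    The Gutenberg-Richter law mentioned in both records is usually characterized by its b-value. A minimal sketch of the standard Aki (1965) maximum-likelihood estimator, checked against a synthetic catalog (this is generic seismology practice, not the paper's fusion-fission model):

```python
import math
import random

def gr_b_value(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for the Gutenberg-Richter law
    log10 N(>=M) = a - b*M, using events at or above the catalog
    completeness magnitude m_c."""
    above = [m for m in mags if m >= m_c]
    mean_excess = sum(above) / len(above) - m_c
    return math.log10(math.e) / mean_excess

# Synthetic catalog drawn from a GR distribution with b = 1 above M 3.0:
random.seed(42)
beta = 1.0 * math.log(10)  # beta = b * ln(10)
catalog = [3.0 + random.expovariate(beta) for _ in range(20000)]
print(round(gr_b_value(catalog, 3.0), 2))
```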

  4. Researches on the Nankai trough mega thrust earthquake seismogenic zones using real time observing systems for advanced early warning systems and predictions

    NASA Astrophysics Data System (ADS)

    Kaneda, Yoshiyuki

    2015-04-01

    We recognized the importance of real-time monitoring of earthquakes and tsunamis based on lessons learned from the 2004 Sumatra earthquake/tsunami and the 2011 East Japan earthquake. We deployed DONET1 and are developing DONET2 as real-time monitoring systems; these are dense ocean-floor networks around the Nankai trough seismogenic zone, southwestern Japan. DONET1 and DONET2 together comprise 51 observatories, each equipped with multiple kinds of sensors such as an accelerometer, broadband seismometer, pressure gauge, differential pressure gauge, hydrophone, and thermometer. These systems are indispensable not only for early warning of earthquakes/tsunamis, but also for research on broadband crustal activity around the Nankai trough seismogenic zone for predictions. DONET1 detected offshore tsunamis 15 minutes earlier than onshore stations during the 2011 East Japan earthquake/tsunami. Furthermore, DONET1/DONET2 are expected to monitor slow events such as low-frequency tremors and slow earthquakes for prediction research. Finally, the integration of observations and simulation research will contribute to estimating seismic stage changes from the inter-seismic to the pre-seismic stage. I will introduce applications of DONET1/DONET2 data and advanced simulation research.

  5. How Much Can the Total Aleatory Variability of Empirical Ground Motion Prediction Equations Be Reduced Using Physics-Based Earthquake Simulations?

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Wang, F.; Graves, R. W.; Callaghan, S.; Olsen, K. B.; Cui, Y.; Milner, K. R.; Juve, G.; Vahi, K.; Yu, J.; Deelman, E.; Gill, D.; Maechling, P. J.

    2015-12-01

    Ground motion prediction equations (GMPEs) in common use predict the logarithmic intensity of ground shaking, lnY, as a deterministic value, lnYpred(x), conditioned on a set of explanatory variables x plus a normally distributed random variable with a standard deviation σT. The latter accounts for the unexplained variability in the ground motion data used to calibrate the GMPE and is typically 0.5-0.7 in natural log units. Reducing this residual or "aleatory" variability is a high priority for seismic hazard analysis, because the probabilities of exceedance at high Y values go up rapidly with σT, adding costs to the seismic design of critical facilities to account for the prediction uncertainty. However, attempts to decrease σT by incorporating more explanatory variables into the GMPEs have been largely unsuccessful (e.g., Strasser et al., SRL, 2009). An alternative is to employ physics-based earthquake simulations that properly account for source directivity, basin effects, directivity-basin coupling, and other 3D complexities. We have explored the theoretical limits of this approach through an analysis of large (>10⁸) ensembles of 3D synthetic seismograms generated for the Los Angeles region by SCEC's CyberShake project, using the new tool of averaging-based factorization (ABF; Wang & Jordan, BSSA, 2014). The residual variance obtained by applying GMPEs to the CyberShake dataset matches the frequency dependence of σT obtained for the GMPE calibration dataset. The ABF analysis allows us to partition this variance into uncorrelated components representing source, path, and site effects. We show that simulations can potentially reduce σT by about one-third, which could lower the exceedance probabilities for high hazard levels at fixed x by orders of magnitude. Realizing this gain in forecasting probability would have a broad impact on risk-reduction strategies, especially for critical facilities such as large dams, nuclear power plants, and energy transportation
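    The claim that a one-third reduction in σT lowers high-level exceedance probabilities by orders of magnitude follows directly from the normal residual model; a minimal sketch (the threshold of 2 natural-log units above the median is illustrative):

```python
import math

def exceedance_prob(ln_y_pred: float, ln_y_threshold: float, sigma: float) -> float:
    """P(lnY > threshold) when lnY ~ N(ln_y_pred, sigma^2), i.e. a GMPE
    with normally distributed residuals of standard deviation sigma."""
    z = (ln_y_threshold - ln_y_pred) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative threshold: 2 natural-log units above the median prediction.
p_full = exceedance_prob(0.0, 2.0, 0.6)      # typical total sigma_T
p_reduced = exceedance_prob(0.0, 2.0, 0.4)   # after a ~one-third reduction
print(p_full, p_reduced, p_full / p_reduced)
```

For this threshold the probability drops by roughly three orders of magnitude, illustrating why even a modest reduction in σT matters for critical-facility design.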

  6. Earthquakes, March-April, 1993

    USGS Publications Warehouse

    Person, Waverly J.

    1993-01-01

    Worldwide, only one major earthquake (7.0≤M<8.0) occurred during this reporting period. The earthquake, a magnitude 7.2 shock, struck the Santa Cruz Islands region in the South Pacific on March 6. Earthquake-related deaths occurred in the Fiji Islands, China, and Peru.

  7. Earthquakes, March-April 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    Two major earthquakes (7.0-7.9) occurred during this reporting period: a magnitude 7.6 in Costa Rica on April 22 and a magnitude 7.0 in the USSR on April 29. Destructive earthquakes hit northern Peru on April 4 and 5. There were no destructive earthquakes in the United States during this period. 

  8. Earthquakes, May-June 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    In the United States, a magnitude 5.8 earthquake in southern California on June 28 killed two people and caused considerable damage. Strong earthquakes hit Alaska on May 1 and May 30; the May 1 earthquake caused some minor damage. 

  9. Earthquakes, September-October 1978

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    The months of September and October were somewhat quiet seismically speaking. One major earthquake, magnitude (M) 7.7, occurred in Iran on September 16. In Germany, a magnitude 5.0 earthquake caused damage and considerable alarm to many people in parts of that country. In the United States, the largest earthquake occurred along the California-Nevada border region. 

  10. Earthquakes, September-October 1993

    USGS Publications Warehouse

    Person, W.J.

    1993-01-01

    The fatalities in the United States were caused by two earthquakes in southern Oregon on September 21. These earthquakes, both with magnitude 6.0 and separated in time by about 2 hours, led to the deaths of two people. One of these deaths was apparently due to a heart attack induced by the earthquake.

  11. Earthquakes, November-December 1992

    USGS Publications Warehouse

    Person, W.J.

    1993-01-01

    There were two major earthquakes (7.0≤M<8.0) during the last two months of the year, a magnitude 7.5 earthquake on December 12 in the Flores region, Indonesia, and a magnitude 7.0 earthquake on December 20 in the Banda Sea. Earthquakes caused fatalities in China and Indonesia. The greatest number of deaths (2,500) for the year occurred in Indonesia. In Switzerland, six people were killed by an accidental explosion recorded by seismographs. In the United States, a magnitude 5.3 earthquake caused slight damage at Big Bear in southern California. 

  12. Earthquakes; March-April 1975

    USGS Publications Warehouse

    Person, W.J.

    1975-01-01

    There were no major earthquakes (magnitude 7.0-7.9) in March or April; however, there were earthquake fatalities in Chile, Iran, and Venezuela, and approximately 35 earthquake-related injuries were reported around the world. In the United States, a magnitude 6.0 earthquake struck the Idaho-Utah border region. Damage was estimated at about a million dollars. The shock was felt over a wide area and was the largest to hit the continental United States since the San Fernando earthquake of February 1971. 

  13. Real-time forecasts of tomorrow's earthquakes in California

    USGS Publications Warehouse

    Gerstenberger, M.C.; Wiemer, S.; Jones, L.M.; Reasenberg, P.A.

    2005-01-01

    Despite a lack of reliable deterministic earthquake precursors, seismologists have significant predictive information about earthquake activity from an increasingly accurate understanding of the clustering properties of earthquakes. In the past 15 years, time-dependent earthquake probabilities based on a generic short-term clustering model have been made publicly available in near-real time during major earthquake sequences. These forecasts describe the probability and number of events that are, on average, likely to occur following a mainshock of a given magnitude, but are not tailored to the particular sequence at hand and contain no information about the likely locations of the aftershocks. Our model builds upon the basic principles of this generic forecast model in two ways: it recasts the forecast in terms of the probability of strong ground shaking, and it combines an existing time-independent earthquake occurrence model based on fault data and historical earthquakes with increasingly complex models describing the local time-dependent earthquake clustering. The result is a time-dependent map showing the probability of strong shaking anywhere in California within the next 24 hours. The seismic hazard modelling approach we describe provides a better understanding of time-dependent earthquake hazard, and increases its usefulness for the public, emergency planners and the media.
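    Generic short-term clustering forecasts of the kind described above are commonly built on a Reasenberg-Jones (1989)-type aftershock rate law. A hedged sketch follows; the default parameter values are the often-quoted generic California values and are used here purely as illustration:

```python
def aftershock_rate(t_days: float, m: float, mainshock_m: float,
                    a: float = -1.67, b: float = 0.91,
                    c: float = 0.05, p: float = 1.08) -> float:
    """Reasenberg-Jones-style rate: expected number of aftershocks per day
    with magnitude >= m, t days after a mainshock of magnitude mainshock_m:
        rate = 10**(a + b*(mainshock_m - m)) * (t + c)**(-p)
    Default a, b, c, p are often-quoted generic California values;
    treat them as illustrative, not as this paper's calibrated model."""
    return 10 ** (a + b * (mainshock_m - m)) * (t_days + c) ** (-p)

# Rate of M>=5 aftershocks one day vs. ten days after a hypothetical M7 mainshock:
print(aftershock_rate(1.0, 5.0, 7.0), aftershock_rate(10.0, 5.0, 7.0))
```

The Omori-law decay in time and the Gutenberg-Richter scaling in magnitude are exactly the two clustering properties the abstract says such generic forecasts exploit.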

  14. Maximum Magnitude in Relation to Mapped Fault Length and Fault Rupture

    NASA Astrophysics Data System (ADS)

    Black, N.; Jackson, D.; Rockwell, T.

    2004-12-01

    Earthquake hazard zones are highlighted using known fault locations and an estimate of the fault's maximum magnitude earthquake. Magnitude limits are commonly determined from fault geometry, which is dependent on fault length. Over the past 30 years it has become apparent that fault length is often poorly constrained and that a single event can rupture across several individual fault segments. In this study, fault geometries are analyzed before and after several moderate to large magnitude earthquakes to determine how accurately fault length can be used to assess seismic hazard. Estimates of future earthquake magnitudes are often inferred from prior determinations of fault length, but use magnitude regressions based on rupture length. However, rupture length is not always limited to the previously estimated fault length or contained on a single fault. Therefore, the maximum magnitude for a fault may be underestimated, unless the geometry and segmentation of faulting is completely understood. This study examines whether rupture/fault length can be used to accurately predict the maximum magnitude for a given fault. We examine earthquakes of magnitude greater than 6.0 that occurred after 1970 in Southern California. Geologic maps, fault evaluation reports, and aerial photos that existed prior to these earthquakes are used to obtain the pre-earthquake fault lengths. Pre-earthquake fault lengths are compared with rupture lengths to determine: 1) whether fault lengths are the same before and after the ruptures, and 2) the geology and geometry of ruptures that propagated beyond the originally recognized endpoints of a mapped fault. The ruptures examined in this study typically follow one of the following models. The ruptures either: 1) are contained within the dimensions of the original fault trace, 2) break through one or both endpoints of the originally mapped fault trace, or 3) break through multiple faults, connecting segments into one large fault line. 
No rupture simply broke a

  15. Estimating earthquake potential

    USGS Publications Warehouse

    Page, R.A.

    1980-01-01

    The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential. 

  16. Prediction of post-earthquake depressive and anxiety symptoms: a longitudinal resting-state fMRI study.

    PubMed

    Long, Jinyi; Huang, Xiaoqi; Liao, Yi; Hu, Xinyu; Hu, Junmei; Lui, Su; Zhang, Rui; Li, Yuanqing; Gong, Qiyong

    2014-01-01

    Neurobiological markers of stress symptom progression for healthy survivors of a disaster (e.g., an earthquake) would greatly help with early intervention to prevent the development of stress-related disorders. However, the relationship between neurobiological alterations and symptom progression over time is unclear. Here, we examined 44 healthy survivors of the Wenchuan earthquake in China in a longitudinal resting-state fMRI study to observe the alterations of brain functions related to depressive or anxiety symptom progression. Applying multivariate pattern analysis to the fMRI data, we successfully predicted the depressive or anxiety symptom severity for these survivors over the short (25 days) and long term (2 years), as well as the changes in symptom severity over time. Several brain areas (e.g., the frontolimbic and striatal areas) and the functional connectivities located within the fronto-striato-thalamic and default-mode networks were found to be correlated with symptom progression and might play important roles in the adaptation to trauma. PMID:25236674

  17. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    USGS Publications Warehouse

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two week long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Both the main shock and the Mw=7.7 aftershock did not rupture to the trench and left most of the seismic gap unbroken, leaving the possibility of a future large earthquake in the region.

  18. Earthquakes, July-August 1992

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    There were two major earthquakes (7.0≤M<8.0) during this reporting period. A magnitude 7.5 earthquake occurred in Kyrgyzstan on August 19 and a magnitude 7.0 quake struck the Ascension Island region on August 28. In southern California, aftershocks of the magnitude 7.6 earthquake on June 28, 1992, continued. One of these aftershocks caused damage and injuries, and at least one other aftershock caused additional damage. Earthquake-related fatalities were reported in Kyrgyzstan and Pakistan. 

  19. Protein charge ladders reveal that the net charge of ALS-linked superoxide dismutase can be different in sign and magnitude from predicted values.

    PubMed

    Shi, Yunhua; Abdolvahabi, Alireza; Shaw, Bryan F

    2014-10-01

    This article utilized "protein charge ladders" (chemical derivatives of proteins with similar structure but systematically altered net charge) to quantify how missense mutations that cause amyotrophic lateral sclerosis (ALS) affect the net negative charge (Z) of superoxide dismutase-1 (SOD1) as a function of subcellular pH and Zn(2+) stoichiometry. Capillary electrophoresis revealed that the net charge of ALS-variant SOD1 can be different in sign and in magnitude (by up to 7.4 units per dimer at lysosomal pH) from values predicted from standard pKa values of amino acids and formal oxidation states of metal ions. At pH 7.4, the G85R, D90A, and G93R substitutions diminished the net negative charge of dimeric SOD1 by up to +2.29 units more than predicted; E100K lowered net charge by less than predicted. The binding of a single Zn(2+) to mutant SOD1 lowered its net charge by an additional +2.33 ± 0.01 to +3.18 ± 0.02 units; however, each protein regulated net charge when binding a second, third, or fourth Zn(2+) (ΔZ < 0.44 ± 0.07 per additional Zn(2+)). Both metalated and apo-SOD1 regulated net charge across subcellular pH, without inverting from negative to positive at the theoretical pI. Differential scanning calorimetry, hydrogen-deuterium exchange, and inductively coupled plasma mass spectrometry confirmed that the structure, stability, and metal content of mutant proteins were not significantly affected by lysine acetylation. Measured values of net charge should be used when correlating the biophysical properties of a specific ALS-variant SOD1 protein with its observed aggregation propensity or clinical phenotype.

  20. The intensities and magnitudes of volcanic eruptions

    USGS Publications Warehouse

    Sigurdsson, H.

    1991-01-01

    Ever since 1935, when C.F. Richter devised the earthquake magnitude scale that bears his name, seismologists have been able to view energy release from earthquakes in a systematic and quantitative manner. The benefits have been obvious in terms of assessing seismic gaps and the spatial and temporal trends of earthquake energy release. A similar quantitative treatment of volcanic activity is of course equally desirable, both for gaining a further understanding of the physical principles of volcanic eruptions and for volcanic-hazard assessment. A systematic volcanologic database would be of great value in evaluating such features as volcanic gaps, and regional and temporal trends in energy release.

  1. Calculation of the Rate of M>6.5 Earthquakes for California and Adjacent Portions of Nevada and Mexico

    USGS Publications Warehouse

    Frankel, Arthur; Mueller, Charles

    2008-01-01

    One of the key issues in the development of an earthquake recurrence model for California and adjacent portions of Nevada and Mexico is the comparison of the predicted rates of earthquakes with the observed rates. Therefore, it is important to make an accurate determination of the observed rate of M>6.5 earthquakes in California and the adjacent region. We have developed a procedure to calculate observed earthquake rates from an earthquake catalog, accounting for magnitude uncertainty and magnitude rounding. We present a Bayesian method that corrects for the effect of the magnitude uncertainty in calculating the observed rates. Our recommended determination of the observed rate of M>6.5 in this region is 0.246 ± 0.085 (for two sigma) per year, although this rate is likely to be underestimated because of catalog incompleteness and this uncertainty estimate does not include all sources of uncertainty.
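    A simplified version of the magnitude-uncertainty correction can be sketched in closed form. For a Gutenberg-Richter magnitude distribution with Gaussian magnitude errors, the apparent rate above a threshold is inflated by exp(β²σ²/2) with β = b·ln(10) (the Tinti-Mulargia factor); this is a simplification of the paper's full Bayesian treatment, shown here only to illustrate the direction and rough size of the effect:

```python
import math

def corrected_rate(observed_rate: float, b: float = 1.0,
                   sigma_m: float = 0.1) -> float:
    """Deflate an observed rate of M >= threshold events for Gaussian
    magnitude uncertainty of standard deviation sigma_m, assuming a
    Gutenberg-Richter distribution with b-value b. The apparent rate is
    inflated by exp((b*ln10)^2 * sigma_m^2 / 2), so we divide it out.
    This closed-form factor is an illustrative simplification, not the
    paper's Bayesian procedure."""
    beta = b * math.log(10)
    return observed_rate / math.exp(0.5 * (beta * sigma_m) ** 2)

# Applying the factor to the quoted observed rate, with an assumed
# (hypothetical) magnitude uncertainty of 0.1 units:
print(corrected_rate(0.246))
```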

  2. Earthquakes; January-February, 1979

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    The first major earthquake (magnitude 7.0 to 7.9) of the year struck in southeastern Alaska in a sparsely populated area on February 28. On January 16, Iran experienced the first destructive earthquake of the year, causing a number of casualties and considerable damage. Peru was hit by a destructive earthquake on February 16 that caused casualties and damage. A number of earthquakes were experienced in parts of the United States, but only minor damage was reported.

  3. Earthquakes, September-October 1980

    USGS Publications Warehouse

    Person, W.J.

    1981-01-01

    There were two major (magnitudes 7.0-7.9) earthquakes during this reporting period; a magnitude (M) 7.3 in Algeria where many people were killed or injured and extensive damage occurred, and an M=7.2 in the Loyalty Islands region of the South Pacific. Japan was struck by a damaging earthquake on September 24, killing two people and causing injuries. There were no damaging earthquakes in the United States. 

  4. Earthquakes, November-December 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    There were three major earthquakes (magnitude 7.0-7.9) during the last two months of the year: a magnitude 7.0 on November 19 in Colombia, a magnitude 7.4 in the Kuril Islands on December 22, and a magnitude 7.1 in the South Sandwich Islands on December 27. Earthquake-related deaths were reported in Colombia, Yemen, and Iran. There were no significant earthquakes in the United States during this reporting period.

  5. Monitoring of the stress state variations of the Southern California for the purpose of earthquake prediction

    NASA Astrophysics Data System (ADS)

    Gokhberg, M.; Garagash, I.; Bondur, V.; Steblov, G. M.

    2014-12-01

    A three-dimensional geomechanical model of Southern California was developed, including mountain relief, fault tectonics, and characteristic internal features such as the roof of the consolidated crust and the Moho surface. The initial stress state of the model is governed by gravitational forces and by horizontal tectonic motions estimated from GPS observations. The analysis shows that three-dimensional geomechanical models allow monitoring of changes in the stress state during the seismic process in order to constrain the locations of future increases in seismic activity. This investigation demonstrates one possible approach to monitoring upcoming seismicity over periods of days to weeks to months. Continuous analysis of the stress state was carried out during 2009-2014. Each new earthquake of M~1 and above from the USGS catalog was treated as a new defect in the Earth's crust that has a definite size and causes a redistribution of the stress state. The overall calculation technique was based on a single damage function of the Earth's crust, recalculated every half month. As a result, every half month we identified, in the upper crustal layers and partially in the middle layers, the locations of the maximal values of the stress-state parameters: elastic energy density, shear stress, and the proximity of the crustal layers to their strength limit. All these parameters exhibit similar spatial and temporal distributions. The observations show that all four of the strongest events (M ~ 5.5-7.2) that occurred in Southern California during the analyzed period were preceded by parameter anomalies, with lead times of weeks to months, within 10-50 km of the upcoming earthquake. After each event, the stress-state source disappeared. The figure shows the migration of the maxima of the gradients of the stress-state variations (parameter D) in the vicinity of the epicenter of the M=7.2 earthquake of 04.04.2010 over the period 01.01.2010-01.05.2010. Grey lines
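    The three stress-state parameters tracked in the abstract are standard continuum-mechanics quantities. An illustrative sketch (not the authors' code; the elastic constants, cohesion, and friction coefficient are assumed placeholder values) of how they could be computed from principal stresses in a linear-elastic crust:

    ```python
    # Illustrative textbook formulas for the abstract's three monitored
    # parameters, evaluated from principal stresses s1 >= s2 >= s3 (Pa).

    def max_shear_stress(s1, s3):
        """Maximum shear stress from the largest and smallest principal stresses."""
        return (s1 - s3) / 2.0

    def elastic_energy_density(s1, s2, s3, E=50e9, nu=0.25):
        """Elastic strain energy density (J/m^3) for an isotropic material
        with Young's modulus E and Poisson's ratio nu."""
        return (s1**2 + s2**2 + s3**2
                - 2.0 * nu * (s1 * s2 + s2 * s3 + s3 * s1)) / (2.0 * E)

    def coulomb_proximity(s1, s3, cohesion=10e6, friction=0.6):
        """Ratio of driving shear stress to Mohr-Coulomb strength; values
        approaching 1 indicate a layer near its strength limit. Uses the
        45-degree maximum-shear plane as a simplification rather than the
        optimally oriented failure plane."""
        tau = max_shear_stress(s1, s3)
        sn = (s1 + s3) / 2.0  # normal stress on the 45-degree plane
        return tau / (cohesion + friction * sn)
    ```

    In the monitoring scheme described, quantities like these would be re-evaluated over the whole model grid each half month after the damage function is updated, and their spatial maxima tracked through time.
    
    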

  6. One research from turkey on groundwater- level changes related earthquake

    NASA Astrophysics Data System (ADS)

    Kirmizitas, H.; Göktepe, G.

    2003-04-01

    Groundwater levels are recorded by limnigraphs in drilled wells in order to determine groundwater potential accurately and reliably in hydrogeological studies in Turkey; the State Hydraulic Works (DSI) installed the limnigraphs mainly to estimate groundwater potential. To date, no well has been drilled specifically to obtain data on earthquake-related water-level changes. The main purpose of these studies is to assess groundwater potential and to expose the hydrodynamic structure of an aquifer. In this study, abnormal oscillations, water-level rises, and water-level drops were observed on the recorded graphs of groundwater levels. These observations showed that some earthquakes affected water levels at epicentral distances of up to 2,000 km from the wells. Water-level changes occur in water-bearing layers consisting of grained materials such as alluvium, or of consolidated rocks such as limestones. The largest water-level change on the diagrams was about 1.48 m, recorded as an oscillation. Earthquake-related water-level changes were observed in this research as the following types of movement: (1) rise-drop oscillations about the same point; (2) water-level drops, temporary or permanent, after earthquakes; (3) water-level rises, temporary or permanent, after earthquakes. (For example, during the Gölcük earthquake of magnitude 7.8 on August 17, 1999, artesian flow occurred in DSI well 49160 in Dernekkiri Village, Adapazari.) Groundwater levels can also easily change because of atmospheric pressure (the first-order influence), precipitation, irrigation, or water pumping. To relate groundwater-level changes to an earthquake, such changes must be observed accurately, carefully, and at the right time. Thus, first of all, the real reason for the water-level changes