Science.gov

Sample records for earthquake magnitude prediction

  1. A probabilistic neural network for earthquake magnitude prediction.

    PubMed

    Adeli, Hojjat; Panakkat, Ashif

    2009-09-01

    A probabilistic neural network (PNN) is presented for predicting the magnitude of the largest earthquake in a pre-defined future time period in a seismic region using eight mathematically computed parameters known as seismicity indicators. The indicators considered are the time elapsed during a particular number (n) of significant seismic events before the month in question, the slope of the Gutenberg-Richter inverse power law curve for the n events, the mean square deviation about the regression line based on the Gutenberg-Richter inverse power law for the n events, the average magnitude of the last n events, the difference between the observed maximum magnitude among the last n events and that expected through the Gutenberg-Richter relationship known as the magnitude deficit, the rate of square root of seismic energy released during the n events, the mean time or period between characteristic events, and the coefficient of variation of the mean time. Prediction accuracies of the model are evaluated using three different statistical measures: the probability of detection, the false alarm ratio, and the true skill score or R score. The PNN model is trained and tested using data for the Southern California region. The model yields good prediction accuracies for earthquakes of magnitude between 4.5 and 6.0. The PNN model presented in this paper complements the recurrent neural network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0.
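
    Two of the eight seismicity indicators, the Gutenberg-Richter slope (b-value) and the magnitude deficit, can be sketched directly from a magnitude list. This is a minimal illustration, not the authors' implementation; the Aki maximum-likelihood b-value estimator is assumed here in place of the regression fit described in the abstract.

```python
import math

def gr_b_value(mags, m_min):
    """Gutenberg-Richter b-value via the Aki (1965) maximum-likelihood
    estimator, using events at or above the completeness magnitude m_min."""
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_min)

def magnitude_deficit(mags, m_min):
    """Observed maximum magnitude minus the maximum expected from the
    Gutenberg-Richter relation for a sample of the same size (the
    magnitude exceeded on average once in n events)."""
    sample = [m for m in mags if m >= m_min]
    b = gr_b_value(mags, m_min)
    m_expected = m_min + math.log10(len(sample)) / b
    return max(sample) - m_expected
```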

  2. Are Earthquake Magnitudes Clustered?

    SciTech Connect

    Davidsen, Joern; Green, Adam

    2011-03-11

    The question of earthquake predictability is a long-standing and important challenge. Recent results [Phys. Rev. Lett. 98, 098501 (2007); ibid. 100, 038501 (2008)] have suggested that earthquake magnitudes are clustered, thus indicating that they are not independent, in contrast to what is typically assumed. Here, we present evidence that the observed magnitude correlations are to a large extent, if not entirely, an artifact due to the incompleteness of earthquake catalogs and the well-known modified Omori law. The latter leads to variations in the frequency-magnitude distribution if the distribution is constrained to those earthquakes that are close in space and time to the directly following event.

  3. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which then allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results for earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed with the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide, for the first time, a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.

  4. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries

    PubMed Central

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006–2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year. PMID:26812351

  5. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries.

    PubMed

    Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
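
    The binary prediction target defined above (does a year's maximum magnitude exceed the median of the region's yearly maxima?) reduces to a simple labeling rule; a sketch of that definition:

```python
import statistics

def label_years(yearly_max_magnitudes):
    """1 if a year's maximum magnitude exceeds the median of the region's
    yearly maxima, else 0 (the target variable used in the paper)."""
    median = statistics.median(yearly_max_magnitudes)
    return [1 if m > median else 0 for m in yearly_max_magnitudes]
```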

  6. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. in the Bulletin of the Seismological Society of America 103 (2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only one year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for the average shear-wave velocity in the uppermost 30 m (Vs30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates based on PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using

  7. Neural network models for earthquake magnitude prediction using multiple seismicity indicators.

    PubMed

    Panakkat, Ashif; Adeli, Hojjat

    2007-02-01

    Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall, the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While at present earthquake prediction cannot be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.

  8. Near-Source Recordings of Small and Large Earthquakes: Magnitude Predictability only for Medium and Small Events

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2015-12-01

    The feasibility of Earthquake Early Warning (EEW) applications has revived the discussion on whether earthquake rupture development follows deterministic principles or not. If it does, it may be possible to predict final earthquake magnitudes while the rupture is still developing. EEW magnitude estimation schemes, most of which are based on 3-4 seconds of near-source P-wave data, have been shown to work well for small to moderate size earthquakes. In this magnitude range, the used time window is larger than the source durations of the events. Whether the magnitude estimation schemes also work for events in which the source duration exceeds the estimation time window, however, remains debated. In our study, we have compiled an extensive high-quality data set of near-source seismic recordings. We search for waveform features that could be diagnostic of final event magnitudes in a predictive sense. We find that the onsets of large (M7+) events are statistically indistinguishable from those of medium sized events (M5.5-M7). Significant differences arise only once the medium size events terminate. This observation suggests that EEW relevant magnitude estimates are largely observational, rather than predictive, and that whether a medium size event becomes a large one is not determined at the rupture onset. As a consequence, early magnitude estimates for large events are minimum estimates, a fact that has to be taken into account in EEW alert messaging and response design.

  9. Seismic Network Performance Estimation: Comparing Predictions of Magnitude of Completeness and Location Accuracy to Observations from an Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Greig, D. W.; Ackerley, N. J.

    2014-12-01

    The design of seismic networks for the monitoring of induced seismicity is of critical importance. The recent introduction of regulations in various locations around the world (with more upcoming) has created a need for a priori confirmation that certain performance standards are met. We develop a tool to assess two key measures of network performance without an earthquake catalogue: magnitude of completeness and location accuracy. Site noise measurements are taken at existing seismic stations or as part of a noise survey. We then interpolate between measured values to determine a noise map for the entire region. The site noise is then summed with the instrument noise to determine the effective station noise at each of the proposed station locations. Location accuracy is evaluated by generating a covariance matrix that represents the error ellipsoid from the travel time derivatives (Peters and Crosson, 1972). To determine the magnitude of completeness we assume isotropic radiation and mandate a minimum signal-to-noise ratio for detection. For every grid point, we compute the Brune spectra for synthetic events and iterate to determine the smallest magnitude event that can be detected by at least four stations. We apply this methodology to an example network. We predict the magnitude of completeness and the location accuracy and compare the predicted values to observed values generated from the existing earthquake catalogue for the network. We discuss the effects of hypothetical station additions and removals on network performance to simulate network expansions and station failures. The ability to predict hypothetical station performance allows for the optimization of seismic network design and enables prediction of network performance even for a purely hypothetical seismic network. This allows the operators of networks for induced seismicity monitoring to be confident that performance criteria are met from day one of operations.
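
    The magnitude-of-completeness iteration can be sketched with a much simplified detection model. The amplitude law below (geometric spreading of a point source, amplitude proportional to 10**M / r) is a placeholder assumption for illustration; the study itself uses full Brune source spectra and effective station noise.

```python
import math

def completeness_magnitude(event_xy, stations, snr_min=3.0, n_required=4):
    """Smallest magnitude (scanned in 0.1 steps) detectable with a
    signal-to-noise ratio >= snr_min at >= n_required stations, under a
    toy amplitude model A = 10**M / r. `stations` holds (x, y, noise)."""
    for tenth in range(-20, 81):                 # scan M = -2.0 .. 8.0
        m = tenth / 10.0
        detections = 0
        for sx, sy, noise in stations:
            r = math.hypot(sx - event_xy[0], sy - event_xy[1]) or 1e-3
            if 10.0 ** m / r >= snr_min * noise:
                detections += 1
        if detections >= n_required:
            return m
    return None                                  # not detectable in range
```

Evaluated over a grid of event locations, such a function yields a completeness map; removing a station tuple from the list simulates a station failure.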

  10. Earthquake prediction

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1991-01-01

    The state of the art in earthquake prediction is discussed. Short-term prediction based on seismic precursors, changes in the ratio of compressional velocity to shear velocity, tilt and strain precursors, electromagnetic precursors, hydrologic phenomena, chemical monitors, and animal behavior is examined. Seismic hazard assessment is addressed, and the applications of dynamical systems to earthquake prediction are discussed.

  11. Nucleation process of magnitude 2 repeating earthquakes on the San Andreas Fault predicted by rate-and-state fault models with SAFOD drill core data

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihiro; Carpenter, Brett M.; Nielsen, Stefan B.

    2017-01-01

    Recent laboratory shear-slip experiments conducted on a nominally flat frictional interface reported the intriguing details of a two-phase nucleation of stick-slip motion that precedes the dynamic rupture propagation. This behavior was subsequently reproduced by a physics-based model incorporating laboratory-derived rate-and-state friction laws. However, applying the laboratory and theoretical results to the nucleation of crustal earthquakes remains challenging due to poorly constrained physical and friction properties of fault zone rocks at seismogenic depths. Here we apply the same physics-based model to simulate the nucleation process of crustal earthquakes using unique data acquired during the San Andreas Fault Observatory at Depth (SAFOD) experiment and new and existing measurements of friction properties of SAFOD drill core samples. Using this well-constrained model, we predict what the nucleation phase will look like for magnitude ~2 repeating earthquakes on segments of the San Andreas Fault at a 2.8 km depth. We find that despite up to 3 orders of magnitude difference in the physical and friction parameters and stress conditions, the behavior of the modeled nucleation is qualitatively similar to that of laboratory earthquakes, with the nucleation consisting of two distinct phases. Our results further suggest that precursory slow slip associated with the earthquake nucleation phase may be observable in the hours before the occurrence of the magnitude ~2 earthquakes by strain measurements close (a few hundred meters) to the hypocenter, in a position reached by the existing borehole.

  12. Earthquake prediction, societal implications

    NASA Astrophysics Data System (ADS)

    Aki, Keiiti

    1995-07-01

    "If I were a brilliant scientist, I would be working on earthquake prediction." This is a statement from a Los Angeles radio talk show I heard just after the Northridge earthquake of January 17, 1994. Five weeks later, at a monthly meeting of the Southern California Earthquake Center (SCEC), where more than two hundred scientists and engineers gathered to exchange notes on the earthquake, a distinguished French geologist who works on earthquake faults in China envied me for working now in southern California. This place is like northeastern China 20 years ago, when high seismicity and research activities led to the successful prediction of the Haicheng earthquake of February 4, 1975 with magnitude 7.3. A difficult question still haunting us [Aki, 1989] is whether the Haicheng prediction was founded on the physical reality of precursory phenomena or on the wishful thinking of observers subjected to the political pressure which encouraged precursor reporting. It is, however, true that a successful life-saving prediction like the Haicheng prediction can only be carried out by the coordinated efforts of decision makers and physical scientists.

  13. Maximum magnitude earthquakes induced by fluid injection

    NASA Astrophysics Data System (ADS)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
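
    The moment bound stated above translates directly into a maximum moment magnitude via the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1). A minimal sketch; the 3e10 Pa modulus of rigidity is an assumed typical crustal value, not a number taken from the abstract.

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper bound on induced-earthquake magnitude from the relation
    max seismic moment = injected volume x modulus of rigidity."""
    m0_max = shear_modulus_pa * injected_volume_m3   # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)  # moment magnitude
```

Under these assumptions, injecting 1e6 cubic meters bounds the magnitude near M 4.9, broadly consistent with the maximum magnitudes around 5 noted above for wastewater disposal.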

  14. Strong motion duration and earthquake magnitude relationships

    SciTech Connect

    Salmon, M.W.; Short, S.A.; Kennedy, R.P.

    1992-06-01

    Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report will concentrate on energy-based strong motion duration definitions.

  15. Earthquake rate and magnitude distributions of great earthquakes for use in global forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2016-07-01

    We have obtained new results in the statistical analysis of global earthquake catalogues, with special attention to the largest earthquakes, and we examined the statistical behaviour of earthquake rate variations. These results can serve as an input for updating our recent earthquake forecast, known as the 'Global Earthquake Activity Rate 1' model (GEAR1), which is based on past earthquakes and geodetic strain rates. The GEAR1 forecast is expressed as the rate density of all earthquakes above magnitude 5.8 within 70 km of sea level everywhere on Earth at 0.1 × 0.1 degree resolution, and it is currently being tested by the Collaboratory for the Study of Earthquake Predictability. The seismic component of the present model is based on a smoothed version of the Global Centroid Moment Tensor (GCMT) catalogue from 1977 through 2013. The tectonic component is based on the Global Strain Rate Map, a 'General Earthquake Model' (GEM) product. The forecast was optimized to fit the GCMT data from 2005 through 2012, but it also agrees well with the earthquake locations from 1918 to 1976 reported in the International Seismological Centre-Global Earthquake Model (ISC-GEM) global catalogue of instrumental and pre-instrumental magnitude determinations. We have improved the recent forecast by optimizing the treatment of larger magnitudes and including a longer duration (1918-2011) ISC-GEM catalogue of large earthquakes to estimate smoothed seismicity. We revised our estimates of upper magnitude limits, described as corner magnitudes, based on the massive earthquakes since 2004 and the seismic moment conservation principle. The new corner magnitude estimates are somewhat larger than but consistent with our previous estimates. For major subduction zones we find the best estimates of corner magnitude to be in the range 8.9 to 9.6 and consistent with a uniform average of 9.35. Statistical estimates tend to grow with time as larger earthquakes occur. However, by using the moment conservation

  16. Precise Relative Earthquake Magnitudes from Cross Correlation

    SciTech Connect

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
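
    In the special case where every event in the group is cross-correlated with every other event, the least-squares solution for relative magnitudes from pairwise log10 amplitude ratios has a closed form: the row mean of the ratio matrix. A sketch of that complete-graph case only, not the authors' full inversion:

```python
def relative_magnitudes(log_amp_ratio):
    """Zero-mean relative magnitudes from a complete matrix of pairwise
    log10 amplitude ratios, where log_amp_ratio[i][j] ~ m_i - m_j.
    With complete pair coverage, the least-squares answer is the row mean."""
    n = len(log_amp_ratio)
    return [sum(row) / n for row in log_amp_ratio]
```

Anchoring these relative values to an absolute scale still requires an external baseline, such as the GCMT catalog used in the study.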

  17. An empirical evolutionary magnitude estimation for early warning of earthquakes

    NASA Astrophysics Data System (ADS)

    Chen, Da-Yi; Wu, Yih-Min; Chin, Tai-Lin

    2017-03-01

    It is difficult for an earthquake early warning (EEW) system to provide consistent magnitude estimates in the early stage of an earthquake occurrence, because only a few stations have been triggered and only short seismic signals have been recorded. One feasible method to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the recorded waveforms after P-wave arrival. However, for a large-magnitude earthquake (Mw > 7.0), the time needed to complete the entire rupture of the corresponding fault may be very long, and the magnitude may not be correctly predicted from the initial portion of the seismograms. To estimate the magnitude of a large earthquake in real time, the amplitude parameters should be updated with the ongoing waveforms instead of adopting amplitude contents in a predefined fixed-length time window, since the latter may underestimate the magnitude of large events. In this paper, we propose a fast, robust, and less-saturated approach to estimate earthquake magnitudes. The EEW system initially gives a lower bound on the magnitude in a time window of a few seconds and then updates the magnitude, with less saturation, by extending the time window. Here we compared two kinds of time windows for measuring amplitudes: the P-wave time window (PTW) after P-wave arrival, and the whole-wave time window after P-wave arrival (WTW), which may include both P and S waves. One- to ten-second time windows for both PTW and WTW are considered to measure the peak ground displacement from the vertical component of the waveforms. Linear regression analyses are run at each time step (1- to 10-s time intervals) to find the empirical relationships among peak ground displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 in Taiwan with magnitude greater than 5.5 and focal depth less than 30 km. The result shows that using the WTW to estimate magnitudes yields a smaller standard deviation than the PTW. The
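
    The regression described above fits a relation of the form log10(Pd) = a*M + b*log10(R) + c, which is then inverted for magnitude from a measured peak displacement and hypocentral distance. The coefficients below are hypothetical placeholders for illustration, not the values fitted from the Taiwan catalog.

```python
import math

def magnitude_from_pd(pd, dist_km, a=0.7, b=-1.4, c=-4.0):
    """Invert log10(Pd) = a*M + b*log10(R) + c for the magnitude M, given
    a peak ground displacement pd and hypocentral distance dist_km.
    The coefficients a, b, c are assumed stand-ins for fitted values."""
    return (math.log10(pd) - b * math.log10(dist_km) - c) / a
```

Re-running this inversion as the measurement window grows from 1 to 10 s is what lets the estimate climb toward the final magnitude for large events.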

  18. The nature of earthquake prediction

    USGS Publications Warehouse

    Lindh, A.G.

    1991-01-01

    Earthquake prediction is inherently statistical. Although some people continue to think of earthquake prediction as the specification of the time, place, and magnitude of a future earthquake, it has been clear for at least a decade that this is an unrealistic and unreasonable definition. The reality is that earthquake prediction starts from the long-term forecasts of place and magnitude, with very approximate time constraints, and progresses, at least in principle, to a gradual narrowing of the time window as data and understanding permit. Primitive long-term forecasts are clearly possible at this time on a few well-characterized fault systems. Tightly focused monitoring experiments aimed at short-term prediction are already underway in Parkfield, California, and in the Tokai region in Japan; only time will tell how much progress will be possible.

  19. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale suffers from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation made is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
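
    The recommended vertical-channel measurement amounts to adding log10(1.8), roughly 0.255 magnitude units, to the amplitude term. A sketch only: the distance and station terms below are schematic placeholders, not the calibrated Turkish scale.

```python
import math

def ml_from_vertical(log10_amp_vertical, distance_term, station_term=0.0):
    """Ml from a vertical-component log10 amplitude, adding log10(1.8) to
    emulate the maximum horizontal amplitude, as the paper recommends.
    distance_term and station_term stand in for the calibrated scale."""
    return log10_amp_vertical + math.log10(1.8) + distance_term + station_term
```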

  20. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    values for different β-values. For magnitudes larger than 8.5, the turbidite data are consistent with all three TGR models. For smaller magnitudes, the TGR models predict a higher rate than the paleoseismic data show. The discrepancy can be attributed to uncertainties in the paleoseismic magnitudes, the potential incompleteness of the paleoseismic record for smaller events, or temporal variations of the seismicity. Nevertheless, our results show that for this zone, earthquakes of M 8.8 ± 0.2 are expected over a 500-year period, M 9.0 ± 0.2 over a 1000-year period, and M 9.3 ± 0.2 over a 10,000-year period.
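
    The tapered Gutenberg-Richter (TGR) models referred to above combine power-law decay in seismic moment with an exponential taper at a corner magnitude. A sketch of the cumulative rate under one common parameterization; the b-value and corner magnitude defaults are assumptions for illustration, not the study's fitted values.

```python
import math

def tgr_rate(m, m_min, rate_at_m_min, b=1.0, m_corner=9.0):
    """Cumulative rate of earthquakes with magnitude >= m under a tapered
    Gutenberg-Richter law: pure GR decay times an exponential taper in
    seismic moment around the corner magnitude m_corner."""
    def moment(mag):                # standard moment-magnitude relation, N*m
        return 10.0 ** (1.5 * mag + 9.1)
    gr = rate_at_m_min * 10.0 ** (-b * (m - m_min))
    taper = math.exp((moment(m_min) - moment(m)) / moment(m_corner))
    return gr * taper
```

Well below the corner magnitude the taper is negligible and the curve follows pure Gutenberg-Richter; near and above it, rates fall off sharply.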

  21. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, S.E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: a moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  2. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
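
    Test (1) above, the largest observed event scaling with the log of the number induced, follows from the order statistics of the Gutenberg-Richter law: the maximum of n exponentially distributed magnitudes concentrates near m_min + log10(n)/b. A toy simulation, with all parameter values hypothetical:

    ```python
    import math
    import random

    def sample_gr(n, m_min, b=1.0, seed=0):
        """Draw n magnitudes above m_min from an unbounded Gutenberg-Richter
        distribution (exponential in magnitude) by inverse-transform
        sampling: P(M >= m) = 10**(-b*(m - m_min))."""
        rng = random.Random(seed)
        return [m_min - math.log10(1.0 - rng.random()) / b for _ in range(n)]

    # The maximum of n such samples concentrates near m_min + log10(n)/b,
    # so tenfold more induced events raises the expected maximum by 1/b units.
    ```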

  3. The parkfield, california, earthquake prediction experiment.

    PubMed

    Bakun, W H; Lindh, A G

    1985-08-16

    Five moderate (magnitude 6) earthquakes with similar features have occurred on the Parkfield section of the San Andreas fault in central California since 1857. The next moderate Parkfield earthquake is expected to occur before 1993. The Parkfield prediction experiment is designed to monitor the details of the final stages of the earthquake preparation process; observations and reports of seismicity and aseismic slip associated with the last moderate Parkfield earthquake in 1966 constitute much of the basis of the design of the experiment.

  4. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  5. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
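
    The two competing bounds discussed here, the deterministic GΔV cap and the statistical Gutenberg-Richter expectation, can each be computed in a couple of lines. A sketch assuming the Hanks-Kanamori Mw-moment relation and a generic shear modulus of 3×10^10 Pa; the function names and example values are illustrative only:

    ```python
    import math

    def mcgarr_max_mw(delta_v_m3, shear_modulus_pa=3.0e10):
        """Deterministic cap: maximum moment = G * injected volume (the
        G*dV bound discussed above), converted to Mw via Hanks-Kanamori."""
        m0_max = shear_modulus_pa * delta_v_m3
        return (math.log10(m0_max) - 9.1) / 1.5

    def gr_expected_max(n_events, m_min, b=1.0):
        """Modal largest magnitude among n_events drawn from an unbounded
        Gutenberg-Richter distribution: scales with log10 of the count."""
        return m_min + math.log10(n_events) / b
    ```

    For example, an injected volume of 10^6 m^3 caps the moment at 3×10^16 N·m, about Mw 4.9, while the GR expectation for the same site depends only on how many events were induced above the completeness magnitude.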

  6. Characteristic magnitude of subduction earthquake and upper plate stiffness

    NASA Astrophysics Data System (ADS)

    Sakaguchi, A.; Yamamoto, Y.; Hashimoto, Y.; Harris, R. N.; Vannucchi, P.; Petronotis, K. E.

    2013-12-01

    recurrence interval and event displacement vary with the stiffness of the system. We propose that an important factor influencing the characteristic magnitude of large subduction earthquakes and recurrence intervals is stiffness of the upper plate. This model predicts that event displacement, and thus magnitude, is smaller at Costa Rica than at Nankai because the upper plate is stiffer at Costa Rica. This hypothesis can be tested based on elastic parameters estimated from seismic data and physical properties of core samples obtained from deep drilling.

  7. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
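
    Test (i), comparing the number of actual earthquakes to the number predicted, is commonly implemented as a Poisson consistency check. A minimal sketch of that idea (not Jackson's own formulation):

    ```python
    import math

    def poisson_number_test(n_observed, n_predicted):
        """Two-sided number test: probability of a count at least as extreme
        as n_observed under a Poisson model with mean n_predicted. Small
        values indicate the forecast count is inconsistent with observation."""
        def pmf(k):
            return math.exp(-n_predicted) * n_predicted ** k / math.factorial(k)
        p_le = sum(pmf(k) for k in range(n_observed + 1))
        p_ge = 1.0 - sum(pmf(k) for k in range(n_observed))
        return 2.0 * min(p_le, p_ge, 0.5)
    ```

    Observing zero earthquakes against a forecast of twenty yields a vanishingly small value, rejecting the forecast; observing roughly the predicted number yields a value near one.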

  8. Prototype operational earthquake prediction system

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce, into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

  9. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the estimates of x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
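
    The homoscedastic errors-in-variables fit described above has a classical closed form (often called Deming regression) when the ratio of error variances is known. A sketch of that closed form, not the paper's own derivation:

    ```python
    def deming_fit(x, y, var_ratio=1.0):
        """Errors-in-variables fit of y = a*x + b given the ratio
        var(error_y)/var(error_x); closed form for homoscedastic errors."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x) / n
        syy = sum((yi - my) ** 2 for yi in y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
        d = syy - var_ratio * sxx
        a = (d + (d * d + 4.0 * var_ratio * sxy * sxy) ** 0.5) / (2.0 * sxy)
        return a, my - a * mx
    ```

    Unlike ordinary least squares, this slope is symmetric in its treatment of the two magnitudes when var_ratio = 1, which is the appropriate choice when both scales carry comparable uncertainty.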

  10. Threshold magnitude for Ionospheric TEC response to earthquakes

    NASA Astrophysics Data System (ADS)

    Perevalova, N. P.; Sankov, V. A.; Astafyeva, E. I.; Zhupityaeva, A. S.

    2014-02-01

    We have analyzed ionospheric response to earthquakes with magnitudes of 4.1-8.8 which occurred under quiet geomagnetic conditions in different regions of the world (the Baikal region, Kuril Islands, Japan, Greece, Indonesia, China, New Zealand, Salvador, and Chile). This investigation relied on measurements of total electron content (TEC) variations made by ground-based dual-frequency GPS receivers. To perform the analysis, we selected earthquakes with permanent GPS stations installed close by. Data processing has revealed that after 4.1-6.3-magnitude earthquakes wave disturbances in TEC variations are undetectable. We have thoroughly analyzed publications over the period of 1965-2013 which reported on registration of wave TIDs after earthquakes. This analysis demonstrated that the magnitude of the earthquakes having a wave response in the ionosphere was no less than 6.5. Based on our results and on the data from other researchers, we can conclude that there is a threshold magnitude (near 6.5) below which there are no pronounced earthquake-induced wave TEC disturbances. The probability of detecting post-earthquake TIDs from events with magnitudes close to the threshold depends strongly on geophysical conditions. In addition, reliable identification of the source of such TIDs generally requires many GPS stations in an earthquake zone. At low magnitudes, seismic energy is likely to be insufficient to generate waves in the neutral atmosphere which are able to induce TEC disturbances observable at the level of background fluctuations.

  11. Intermediate-term earthquake prediction.

    PubMed Central

    Keilis-Borok, V I

    1996-01-01

    An earthquake of magnitude M and linear source dimension L(M) is preceded within a few years by certain patterns of seismicity in the magnitude range down to about (M - 3) in an area of linear dimension about 5L-10L. Prediction algorithms based on such patterns may allow one to predict approximately 80% of strong earthquakes with alarms occupying altogether 20-30% of the time-space considered. An area of alarm can be narrowed down to 2L-3L when observations include lower magnitudes, down to about (M - 4). In spite of their limited accuracy, such predictions open a possibility to prevent considerable damage. The following findings may provide for further development of prediction methods: (i) long-range correlations in fault system dynamics and accordingly large size of the areas over which different observed fields could be averaged and analyzed jointly, (ii) specific symptoms of an approaching strong earthquake, (iii) the partial similarity of these symptoms worldwide, (iv) the fact that some of them are not Earth specific: we probably encountered in seismicity the symptoms of instability common for a wide class of nonlinear systems. PMID:11607660

  12. Intermediate-term earthquake prediction.

    PubMed

    Keilis-Borok, V I

    1996-04-30

    An earthquake of magnitude M and linear source dimension L(M) is preceded within a few years by certain patterns of seismicity in the magnitude range down to about (M - 3) in an area of linear dimension about 5L-10L. Prediction algorithms based on such patterns may allow one to predict approximately 80% of strong earthquakes with alarms occupying altogether 20-30% of the time-space considered. An area of alarm can be narrowed down to 2L-3L when observations include lower magnitudes, down to about (M - 4). In spite of their limited accuracy, such predictions open a possibility to prevent considerable damage. The following findings may provide for further development of prediction methods: (i) long-range correlations in fault system dynamics and accordingly large size of the areas over which different observed fields could be averaged and analyzed jointly, (ii) specific symptoms of an approaching strong earthquake, (iii) the partial similarity of these symptoms worldwide, (iv) the fact that some of them are not Earth specific: we probably encountered in seismicity the symptoms of instability common for a wide class of nonlinear systems.

  13. Source time function properties indicate a strain drop independent of earthquake depth and magnitude.

    PubMed

    Vallée, Martin

    2013-01-01

    The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes.

  14. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, it is difficult to estimate earthquake magnitude accurately in the early nucleation stage of an earthquake because only a few stations have triggered and the recorded seismic waveforms are short. One feasible way to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the rupture of the causative fault. Instead of adopting amplitude contents in a fixed-length time window, which may underestimate the magnitude of large events, we propose a fast, robust, and unsaturated approach to estimate earthquake magnitudes. In this new method, the EEW system initially gives a lower-bound magnitude within a time window of a few seconds and then updates the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for measuring amplitudes: a pure P-wave time window (PTW) and a whole-wave time window after the P-wave arrival (WTW). Peak displacement amplitudes in the vertical component were measured in 1- to 10-s PTW and WTW, respectively. Linear regression analyses were performed to find empirical relationships between peak displacement, hypocentral distance, and magnitude using earthquake records from 1993 to 2012 with magnitudes greater than 5.5 and focal depths less than 30 km. The results show that magnitudes estimated using the WTW have a smaller standard deviation. In addition, large uncertainties exist in the 1-s time window. We therefore suggest that the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTW for magnitude estimation.
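
    Empirical relationships of the kind described, peak displacement regressed against hypocentral distance and magnitude, typically take a log-linear form. A sketch with placeholder coefficients (a, b, c are hypothetical, not the authors' fitted values):

    ```python
    import math

    def eew_magnitude(pd_cm, hypo_dist_km, a=1.30, b=1.45, c=5.8):
        """Magnitude from peak vertical displacement Pd measured in a
        post-P time window, via M = a*log10(Pd) + b*log10(R) + c.
        The coefficients here are placeholders for illustration."""
        return a * math.log10(pd_cm) + b * math.log10(hypo_dist_km) + c
    ```

    As the time window extends from 2 to 10 s, Pd can only grow, so re-evaluating this relation yields the non-decreasing, saturation-free magnitude updates the abstract describes.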

  15. On Earthquake Prediction in Japan

    PubMed Central

    UYEDA, Seiya

    2013-01-01

    Japan's National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author's view, are mainly interested in securing funds for seismology, on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision has been further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with private sectors. PMID:24213204

  16. On earthquake prediction in Japan.

    PubMed

    Uyeda, Seiya

    2013-01-01

    Japan's National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening the seismograph networks, which are not generally effective for detecting precursors since many precursors are non-seismic. Precursor research has never been supported appropriately because the project has always been run by a group of seismologists who, in the present author's view, are mainly interested in securing funds for seismology, on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision has been further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They are also uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising even from cooperation with private sectors.

  17. Correlating precursory declines in groundwater radon with earthquake magnitude.

    PubMed

    Kuo, T

    2014-01-01

    Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to the five earthquakes of magnitude (Mw): 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the longitudinal valley fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. The above correlation has proven useful for providing early warning of large local earthquakes. In southern Taiwan, radon anomalous declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, a correlation between the observed radon minima and the earthquake magnitude is not yet possible.

  18. The October 1992 Parkfield, California, earthquake prediction

    USGS Publications Warehouse

    Langbein, J.

    1992-01-01

    A magnitude 4.7 earthquake occurred near Parkfield, California, on October 20, 1992, at 05:28 UTC (October 19 at 10:28 p.m. local, or Pacific Daylight, Time). This moderate shock, interpreted as a potential foreshock of a damaging earthquake on the San Andreas fault, triggered long-standing federal, state, and local government plans to issue a public warning of an imminent magnitude 6 earthquake near Parkfield. Although the predicted earthquake did not take place, sophisticated suites of instruments deployed as part of the Parkfield Earthquake Prediction Experiment recorded valuable data associated with an unusual series of events. This article describes the geological aspects of these events, which occurred near Parkfield in October 1992. The accompanying article, an edited version of a press conference by Richard Andrews, the Director of the California Office of Emergency Services (OES), describes the governmental response to the prediction.

  19. Earthquake Prediction is Coming

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Describes (1) several methods used in earthquake research, including P:S ratio velocity studies, dilatancy models; and (2) techniques for gathering base-line data for prediction using seismographs, tiltmeters, laser beams, magnetic field changes, folklore, animal behavior. The mysterious Palmdale (California) bulge is discussed. (CS)

  20. Predicting Strong Ground-Motion Seismograms for Magnitude 9 Cascadia Earthquakes Using 3D Simulations with High Stress Drop Sub-Events

    NASA Astrophysics Data System (ADS)

    Frankel, A. D.; Wirth, E. A.; Stephenson, W. J.; Moschetti, M. P.; Ramirez-Guzman, L.

    2015-12-01

    We have produced broadband (0-10 Hz) synthetic seismograms for magnitude 9.0 earthquakes on the Cascadia subduction zone by combining synthetics from simulations with a 3D velocity model at low frequencies (≤ 1 Hz) with stochastic synthetics at high frequencies (≥ 1 Hz). We use a compound rupture model consisting of a set of M8 high stress drop sub-events superimposed on a background slip distribution of up to 20m that builds relatively slowly. The 3D simulations were conducted using a finite difference program and the finite element program Hercules. The high-frequency (≥ 1 Hz) energy in this rupture model is primarily generated in the portion of the rupture with the M8 sub-events. In our initial runs, we included four M7.9-8.2 sub-events similar to those that we used to successfully model the strong ground motions recorded from the 2010 M8.8 Maule, Chile earthquake. At periods of 2-10 s, the 3D synthetics exhibit substantial amplification (about a factor of 2) for sites in the Puget Lowland and even more amplification (up to a factor of 5) for sites in the Seattle and Tacoma sedimentary basins, compared to rock sites outside of the Puget Lowland. This regional and more localized basin amplification found from the simulations is supported by observations from local earthquakes. There are substantial variations in the simulated M9 time histories and response spectra caused by differences in the hypocenter location, slip distribution, down-dip extent of rupture, coherence of the rupture front, and location of sub-events. We examined the sensitivity of the 3D synthetics to the velocity model of the Seattle basin. We found significant differences in S-wave focusing and surface wave conversions between a 3D model of the basin from a spatially-smoothed tomographic inversion of Rayleigh-wave phase velocities and a model that has an abrupt southern edge of the Seattle basin, as observed in seismic reflection profiles.
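
    The low-/high-frequency combination described (3D deterministic synthetics below about 1 Hz, stochastic synthetics above) can be sketched with a complementary filter pair. This is a crude pure-Python stand-in for the matched filters used in practice, for illustration only:

    ```python
    import math

    def one_pole_split(x, dt, fc):
        """Split a signal into complementary one-pole low- and high-pass
        components at crossover frequency fc (Hz); the two parts sum back
        to the original signal exactly."""
        alpha = 2.0 * math.pi * fc * dt / (1.0 + 2.0 * math.pi * fc * dt)
        lo, prev = [], 0.0
        for s in x:
            prev = prev + alpha * (s - prev)
            lo.append(prev)
        hi = [s - l for s, l in zip(x, lo)]
        return lo, hi

    def hybrid_broadband(det_synth, stoch_synth, dt, fc=1.0):
        """Low-pass the deterministic 3D synthetic, high-pass the stochastic
        synthetic, and sum sample by sample into a broadband seismogram."""
        lo, _ = one_pole_split(det_synth, dt, fc)
        _, hi = one_pole_split(stoch_synth, dt, fc)
        return [a + b for a, b in zip(lo, hi)]
    ```

    Because the two filter responses sum to unity, feeding the same trace into both inputs reconstructs it, which is a useful sanity check on any such crossover scheme.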

  1. Estimation of the magnitudes and epicenters of Philippine historical earthquakes

    NASA Astrophysics Data System (ADS)

    Bautista, Maria Leonila P.; Oike, Kazuo

    2000-02-01

    The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. Data used were the earthquake reports of 86, recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by Intensities III to IX [ A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 16th to 19th centuries. More than 3000 events are catalogued, interpreted and their intensities determined by considering the possible effects of local site conditions, type of construction and the number and locations of existing towns to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts or with at least a minimum report of damage are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps augmented by information on recent seismicity and location of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few, however, are found to lie along structures that show not much activity during the instrumented period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of
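
    The magnitude-felt-area calibration step can be sketched as an ordinary least-squares fit of M against log10 of the isoseismal area. A schematic version with made-up calibration numbers:

    ```python
    import math

    def fit_magnitude_felt_area(areas_km2, magnitudes):
        """Least-squares fit of M = a + b*log10(A) to calibration events
        with known magnitudes; the fitted relation can then assign
        magnitudes to historical earthquakes from isoseismal areas."""
        xs = [math.log10(a) for a in areas_km2]
        n = len(xs)
        mx, my = sum(xs) / n, sum(magnitudes) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (m - my) for x, m in zip(xs, magnitudes))
        slope = sxy / sxx
        return my - slope * mx, slope  # (intercept a, slope b)
    ```

    Applying the fitted (a, b) to the area enclosed by a given intensity on a historical isoseismal map yields the magnitude estimate for that event.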

  2. Earthquake prediction; new studies yield promising results

    USGS Publications Warehouse

    Robinson, R.

    1974-01-01

    On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.

  3. Calibration of magnitude scales for earthquakes of the Mediterranean

    NASA Astrophysics Data System (ADS)

    Gardini, Domenico; di Donato, Maria; Boschi, Enzo

    In order to provide the tools for uniform size determination for Mediterranean earthquakes over the last 50-year period of instrumental seismology, we have regressed the magnitude determinations for 220 earthquakes of the European-Mediterranean region over the 1977-1991 period, reported by three international centres, 11 national and regional networks and 101 individual stations and observatories, using seismic moments from the Harvard CMTs. We calibrate M(M0) regression curves for the magnitude scales commonly used for Mediterranean earthquakes (ML, MWA, mb, MS, MLH, MLV, MD, M); we also calibrate static corrections or specific regressions for individual observatories and we verify the reliability of the reports of different organizations and observatories. Our analysis shows that the teleseismic magnitudes (mb, MS) computed by international centers (ISC, NEIC) provide good measures of earthquake size, with low standard deviations (0.17-0.23), allowing one to regress stable regional calibrations with respect to the seismic moment and to correct systematic biases such as the hypocentral depth for MS and the radiation pattern for mb; while mb is commonly reputed to be an inadequate measure of earthquake size, we find that the ISC mb is still today the most precise measure to use to regress MW and M0 for earthquakes of the European-Mediterranean region; few individual observatories report teleseismic magnitudes requiring specific dynamic calibrations (BJI, MOS). Regional surface-wave magnitudes (MLV, MLH) reported in Eastern Europe generally provide reliable measures of earthquake size, with standard deviations often in the 0.25-0.35 range; the introduction of a small (±0.1-0.2) static station correction is sometimes required. While the Richter magnitude ML is the measure of earthquake size most commonly reported in the press whenever an earthquake strikes, we find that ML has not been computed in the European-Mediterranean in the last 15 years; the reported local

  4. Magnitude 8.1 Earthquake off the Solomon Islands

    NASA Technical Reports Server (NTRS)

    2007-01-01

    On April 1, 2007, a magnitude 8.1 earthquake rattled the Solomon Islands, 2,145 kilometers (1,330 miles) northeast of Brisbane, Australia. Centered less than ten kilometers beneath the Earth's surface, the earthquake displaced enough water in the ocean above to trigger a small tsunami. Though officials were still assessing damage to remote island communities on April 3, Reuters reported that the earthquake and the tsunami killed an estimated 22 people and left as many as 5,409 homeless. The most serious damage occurred on the island of Gizo, northwest of the earthquake epicenter, where the tsunami damaged the hospital, schools, and hundreds of houses, said Reuters. This image, captured by the Landsat-7 satellite, shows the location of the earthquake epicenter in relation to the nearest islands in the Solomon Island group. Gizo is beyond the left edge of the image, but its triangular fringing coral reefs are shown in the upper left corner. Though dense rain forest hides volcanic features from view, the very shape of the islands testifies to the geologic activity of the region. The circular Kolombangara Island is the tip of a dormant volcano, and other circular volcanic peaks are visible in the image. The image also shows that the Solomon Islands run on a northwest-southeast axis parallel to the edge of the Pacific plate, the section of the Earth's crust that carries the Pacific Ocean and its islands. The earthquake occurred along the plate boundary, where the Australia/Woodlark/Solomon Sea plates slide beneath the denser Pacific plate. Friction between the sinking (subducting) plates and the overriding Pacific plate led to the large earthquake on April 1, said the United States Geological Survey (USGS) summary of the earthquake. Large earthquakes are common in the region, though the section of the plate that produced the April 1 earthquake had not caused any quakes of magnitude 7 or larger since the early 20th century, said the USGS.

  5. Maximum Earthquake Magnitude Assessments by Japanese Government Committees (Invited)

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2013-12-01

    earthquakes. The Nuclear Regulation Authority, established in 2012, makes independent decisions based on the latest scientific knowledge. They assigned a maximum credible earthquake magnitude of 9.6 for the Nankai and Ryukyu troughs, 9.6 for the Kuril-Japan trench, and 9.2 for the Izu-Bonin trench.

  6. Analysis of earthquake body wave spectra for potency and magnitude values: implications for magnitude scaling relations

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda; White, Malcolm C.; Vernon, Frank L.

    2016-11-01

    We develop a simple methodology for reliable automated estimation of the low-frequency asymptote in seismic body wave spectra of small to moderate local earthquakes. The procedure corrects individual P- and S-wave spectra for propagation and site effects and estimates the seismic potency from a stacked spectrum. The method is applied to >11 000 earthquakes with local magnitudes 0 < ML < 4 that occurred in the Southern California plate-boundary region around the San Jacinto fault zone during 2013. Moment magnitude Mw values, derived from the spectra and the scaling relation of Hanks & Kanamori, follow a Gutenberg-Richter distribution with a larger b-value (1.22) than that associated with the ML values (0.93) for the same earthquakes. The completeness magnitude for the Mw values is 1.6, while for ML it is 1.0. The quantity (Mw - ML) increases linearly in the analysed magnitude range as ML decreases. An average earthquake with ML = 0 in the study area has an Mw of about 0.9. The developed methodology and results have important implications for earthquake source studies and statistical seismology.
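The moment-to-magnitude conversion cited here (Hanks & Kanamori) and a b-value estimate of the kind the abstract compares can be sketched as follows. This is a minimal illustration: the Aki/Utsu maximum-likelihood b-value estimator is a standard choice but is not stated in the abstract, and the function names are ours.

```python
import math

def moment_magnitude(m0_dyne_cm):
    """Hanks & Kanamori (1979) scaling: Mw = (2/3) * log10(M0) - 10.7,
    with the seismic moment M0 given in dyne-cm."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

def b_value_mle(magnitudes, m_complete, bin_width=0.1):
    """Aki (1965) maximum-likelihood b-value for a catalog complete above
    m_complete; bin_width applies Utsu's correction for magnitude rounding."""
    mags = [m for m in magnitudes if m >= m_complete]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_complete - bin_width / 2.0))
```

For example, a moment of 10^27 dyne-cm maps to Mw 7.3 under this scaling.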

  7. Localization of intermediate-term earthquake prediction

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.; Keilis-Borok, V. I.; Smith, S. W.

    1990-11-01

    Relative seismic quiescence within a region which has already been diagnosed as having entered a "Time of Increased Probability" (TIP) for the occurrence of a strong earthquake can be used to refine the locality in which the earthquake may be expected to occur. A simple algorithm with parameters fitted from the data in Northern California preceding the 1980 magnitude 7.0 earthquake offshore from Eureka depicts relative quiescence within the region of a TIP. The procedure was tested, without readaptation of parameters, on 17 other strong earthquake occurrences in North America, Japan, and Eurasia, most of which were in regions for which a TIP had been previously diagnosed. The localization algorithm successfully outlined a region within which the subsequent earthquake occurred for 16 of these 17 strong earthquakes. The area of prediction in each case was reduced significantly, ranging between 7% and 25% of the total area covered by the TIP.

  8. Moment Magnitude (MW) and Local Magnitude (ML) Relationship for Earthquakes in Northeast India

    NASA Astrophysics Data System (ADS)

    Baruah, Santanu; Baruah, Saurabh; Bora, P. K.; Duarah, R.; Kalita, Aditya; Biswas, Rajib; Gogoi, N.; Kayal, J. R.

    2012-11-01

    An attempt has been made to examine an empirical relationship between moment magnitude (MW) and local magnitude (ML) for the earthquakes in the northeast Indian region. Some 364 earthquakes that were recorded during 1950-2009 are used in this study. Focal mechanism solutions of these earthquakes include 189 Harvard-CMT solutions (MW ≥ 4.0) for the period 1976-2009, 61 published solutions and 114 solutions obtained for the local earthquakes (2.0 ≤ ML ≤ 5.0) recorded by a 27-station permanent broadband network during 2001-2009 in the region. The MW-ML relationships in seven selected zones of the region are determined by linear regression analysis. A significant variation in the MW-ML relationship and its zone-specific dependence are reported here. It is found that MW is equivalent to ML with an average uncertainty of about 0.13 magnitude units. A single relationship is, however, not adequate to scale the entire northeast Indian region because of the heterogeneous geologic and geotectonic environments where earthquakes occur due to collisions, subduction and complex intra-plate tectonics.

  9. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  10. The effects of earthquake measurement concepts and magnitude anchoring on individuals' perceptions of earthquake risk

    USGS Publications Warehouse

    Celsi, R.; Wolfinbarger, M.; Wald, D.

    2005-01-01

    The purpose of this research is to explore earthquake risk perceptions in California. Specifically, we examine the risk beliefs, feelings, and experiences of lay, professional, and expert individuals to explore how risk is perceived and how risk perceptions are formed relative to earthquakes. Our results indicate that individuals tend to perceptually underestimate the degree that earthquake (EQ) events may affect them. This occurs in large part because individuals' personal felt experience of EQ events is generally overestimated relative to experienced magnitudes. An important finding is that individuals engage in a process of "cognitive anchoring" of their felt EQ experience toward the reported earthquake magnitude. The anchoring effect is moderated by the degree to which individuals comprehend EQ magnitude measurement and EQ attenuation. Overall, the results of this research provide us with a deeper understanding of EQ risk perceptions, especially as they relate to individuals' understanding of EQ measurement and attenuation concepts. © 2005, Earthquake Engineering Research Institute.

  11. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach thus provides a means of predicting long-period strong ground motion.

  12. Radiocarbon test of earthquake magnitude at the Cascadia subduction zone

    USGS Publications Warehouse

    Atwater, B.F.; Stuiver, M.; Yamaguchi, D.K.

    1991-01-01

    The Cascadia subduction zone, which extends along the northern Pacific coast of North America, might produce earthquakes of magnitude 8 or 9 ('great' earthquakes) even though it has not done so during the past 200 years of European observation [1-7]. Much of the evidence for past Cascadia earthquakes comes from former meadows and forests that became tidal mudflats owing to abrupt tectonic subsidence in the past 5,000 years [2,3,6,7]. If due to a great earthquake, such subsidence should have extended along more than 100 km of the coast [2]. Here we investigate the extent of coastal subsidence that might have been caused by a single earthquake, through high-precision radiocarbon dating of coastal trees that abruptly subsided into the intertidal zone. The ages leave the great-earthquake hypothesis intact by limiting to a few decades the discordance, if any, in the most recent subsidence of two areas 55 km apart along the Washington coast. This subsidence probably occurred about 300 years ago.

  13. Nonlinear site response in medium magnitude earthquakes near Parkfield, California

    USGS Publications Warehouse

    Rubinstein, Justin L.

    2011-01-01

    Careful analysis of strong-motion recordings of 13 medium magnitude earthquakes (3.7 ≤ M ≤ 6.5) in the Parkfield, California, area shows that very modest levels of shaking (approximately 3.5% of the acceleration of gravity) can produce observable changes in site response. Specifically, I observe a drop and subsequent recovery of the resonant frequency at sites that are part of the USGS Parkfield dense seismograph array (UPSAR) and Turkey Flat array. While further work is necessary to fully eliminate other models, given that these frequency shifts correlate with the strength of shaking at the Turkey Flat array and only appear for the strongest shaking levels at UPSAR, the most plausible explanation for them is that they are a result of nonlinear site response. Assuming this to be true, the observation of nonlinear site response in small (M M 6.5 San Simeon earthquake and the 2004 M 6 Parkfield earthquake).

  14. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.

  15. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

    The mega-thrust, Mw 9.0, 2011 Tohoku earthquake has re-opened the discussion among the scientific community about the effectiveness of Earthquake Early Warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on the real-time measurement of ground motion parameters in a few-second window after the P-wave arrival. Currently, we are using the initial Peak Displacement (Pd) and the Predominant Period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well-known problem with the real-time estimation of magnitude is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the east coast of Honshu, Japan, on the subduction boundary between the Pacific and Okhotsk plates. The high-quality KiK-net and K-NET networks provided a large quantity of strong motion records of the mainshock, with a wide azimuthal coverage both along the Japan coast and inland. More than 300 3-component accelerograms have been available, with epicentral distances ranging from about 100 km up to more than 500 km. This earthquake thus presents an optimal case study for testing the physical bases of early warning and for investigating the feasibility of a real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the mainshock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing time window up to 30 seconds. The aim is to verify the correlation of these parameters with Peak Ground Velocity and Magnitude

  16. Space geodesy and earthquake prediction

    NASA Technical Reports Server (NTRS)

    Bilham, Roger

    1987-01-01

    Earthquake prediction is discussed from the point of view of a new development in geodesy known as space geodesy, which involves the use of extraterrestrial sources or reflectors to measure earth-based distances. Space geodesy is explained, and its relation to terrestrial geodesy is examined. The characteristics of earthquakes are reviewed, and the ways that they can be exploited by space geodesy to predict earthquakes are demonstrated.

  17. Exaggerated Claims About Earthquake Predictions

    NASA Astrophysics Data System (ADS)

    Kafka, Alan L.; Ebel, John E.

    2007-01-01

    The perennial promise of successful earthquake prediction captures the imagination of a public hungry for certainty in an uncertain world. Yet, given the lack of any reliable method of predicting earthquakes [e.g., Geller, 1997; Kagan and Jackson, 1996; Evans, 1997], seismologists regularly have to explain news stories of a supposedly successful earthquake prediction when it is far from clear just how successful that prediction actually was. When journalists and public relations offices report the latest `great discovery' regarding the prediction of earthquakes, seismologists are left with the much less glamorous task of explaining to the public the gap between the claimed success and the sober reality that there is no scientifically proven method of predicting earthquakes.

  18. Earthquake prediction; fact and fallacy

    USGS Publications Warehouse

    Hunter, R.N.

    1976-01-01

    Earthquake prediction is a young and growing area in the field of seismology. Only a few years ago, experts in seismology were declaring flatly that it was impossible. Now, some successes have been achieved and more are expected. Within a few years, earthquakes may be predicted as routinely as the weather, and possibly with greater accuracy. 

  19. Can We Predict Earthquakes?

    SciTech Connect

    Johnson, Paul

    2016-08-31

    The only thing we know for sure about earthquakes is that one will happen again very soon. Earthquakes pose a vital yet puzzling set of research questions that have confounded scientists for decades, but new ways of looking at seismic information and innovative laboratory experiments are offering tantalizing clues to what triggers earthquakes — and when.

  1. The earthquake prediction experiment at Parkfield, California

    USGS Publications Warehouse

    Roeloffs, E.; Langbein, J.

    1994-01-01

    Since 1985, a focused earthquake prediction experiment has been in progress along the San Andreas fault near the town of Parkfield in central California. Parkfield has experienced six moderate earthquakes since 1857 at average intervals of 22 years, the most recent a magnitude 6 event in 1966. The probability of another moderate earthquake soon appears high, but studies assigning it a 95% chance of occurring before 1993 now appear to have been oversimplified. The identification of a Parkfield fault "segment" was initially based on geometric features in the surface trace of the San Andreas fault, but more recent microearthquake studies have demonstrated that those features do not extend to seismogenic depths. On the other hand, geodetic measurements are consistent with the existence of a "locked" patch on the fault beneath Parkfield that has presently accumulated a slip deficit equal to the slip in the 1966 earthquake. A magnitude 4.7 earthquake in October 1992 brought the Parkfield experiment to its highest level of alert, with a 72-hour public warning that there was a 37% chance of a magnitude 6 event. However, this warning proved to be a false alarm. Most data collected at Parkfield indicate that strain is accumulating at a constant rate on this part of the San Andreas fault, but some interesting departures from this behavior have been recorded. Here we outline the scientific arguments bearing on when the next Parkfield earthquake is likely to occur and summarize geophysical observations to date.

  2. Predicting Predictable: Accuracy and Reliability of Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2014-12-01

    Earthquake forecast/prediction is an uncertain profession. The famous Gutenberg-Richter relationship limits the magnitude range of prediction to about one unit. Otherwise, the statistics of outcomes would be related to the smallest earthquakes and may be misleading when attributed to the largest earthquakes. Moreover, the intrinsic uncertainty of earthquake sizing allows self-deceptive picking of justification "just from below" the targeted magnitude range. This might be important encouraging evidence but by no means can be a "helpful" additive to the statistics of a rigid test that determines the reliability and efficiency of a forecast/prediction method. Usually, earthquake prediction is classified with respect to expectation time, while overlooking term-less identification of earthquake-prone areas as well as spatial accuracy. The forecasts are often made for a "cell" or "seismic region" whose area is not linked to the size of the target earthquakes. This might be another source of a wrong choice in the parameterization of a forecast/prediction method and, eventually, of unsatisfactory performance in a real-time application. Summing up, prediction of the time and location of an earthquake of a certain magnitude range can be classified into the categories listed below.

    Classification of earthquake prediction accuracy:

        Temporal (in years)           Spatial (in source zone sizes, L)
        Long-term          10         Long-range      up to 100
        Intermediate-term   1         Middle-range    5-10
        Short-term    0.01-0.1        Narrow-range    2-3
        Immediate       0.001         Exact           1

    Note that the wide variety of possible combinations is much larger than the usually considered "short-term exact" one. In principle, such an accurate statement about an anticipated seismic extreme might be futile due to the complexities of the Earth's lithosphere, its blocks-and-faults structure, and the evidently nonlinear dynamics of the seismic process. The observed scaling of source size and preparation zone with earthquake magnitude implies exponential scales for

  3. Does low magnitude earthquake ground shaking cause landslides?

    NASA Astrophysics Data System (ADS)

    Brain, Matthew; Rosser, Nick; Vann Jones, Emma; Tunstall, Neil

    2015-04-01

    Estimating the magnitude of coseismic landslide strain accumulation at both local and regional scales is a key goal in understanding earthquake-triggered landslide distributions and landscape evolution, and in undertaking seismic risk assessment. Research in this field has primarily been carried out using the 'Newmark sliding block method' to model landslide behaviour; downslope movement of the landslide mass occurs when seismic ground accelerations are sufficient to overcome shear resistance at the landslide shear surface. The Newmark method has the advantage of simplicity, requiring only limited information on material strength properties, landslide geometry and coseismic ground motion. However, the underlying conceptual model assumes that shear strength characteristics (friction angle and cohesion) calculated using conventional strain-controlled monotonic shear tests are valid under dynamic conditions, and that values describing shear strength do not change as landslide shear strain accumulates. Recent experimental work has begun to question these assumptions, highlighting, for example, the importance of shear strain rate and changes in shear strength properties following seismic loading. However, such studies typically focus on a single earthquake event that is of sufficient magnitude to cause permanent strain accumulation; by doing so, they do not consider the potential effects that multiple low-magnitude ground shaking events can have on material strength. Since such events are more common in nature relative to high-magnitude shaking events, it is important to constrain their geomorphic effectiveness. Using an experimental laboratory approach, we present results that address this key question. We used a bespoke geotechnical testing apparatus, the Dynamic Back-Pressured Shear Box (DynBPS), that uniquely permits more realistic simulation of earthquake ground-shaking conditions within a hillslope. We tested both cohesive and granular materials, both of which
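The Newmark sliding block method that this abstract builds on can be sketched in a few lines: permanent displacement accumulates only while the ground acceleration history keeps the block sliding above its critical (yield) acceleration. This is a minimal one-way rigid-block sketch under our own assumptions, not the authors' laboratory procedure or a production implementation.

```python
def newmark_displacement(accel, dt, a_crit):
    """Newmark rigid sliding-block sketch: integrate ground acceleration in
    excess of the critical (yield) acceleration a_crit to get permanent
    downslope displacement. accel is a time series in m/s^2 sampled at dt
    seconds; one-way (downslope-only) sliding is assumed."""
    vel = 0.0   # relative sliding velocity of block vs. ground (m/s)
    disp = 0.0  # accumulated permanent displacement (m)
    for a in accel:
        if vel > 0.0 or a > a_crit:
            vel += (a - a_crit) * dt  # block is restrained at a_crit while sliding
            if vel < 0.0:
                vel = 0.0             # sliding stops when relative velocity reaches zero
            disp += vel * dt
    return disp
```

Shaking that never exceeds a_crit produces zero displacement, which is exactly the assumption the abstract questions for repeated low-magnitude events.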

  4. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  5. Radon in earthquake prediction research.

    PubMed

    Friedmann, H

    2012-04-01

    The observation of anomalies in the radon concentration in soil gas and ground water before earthquakes initiated systematic investigations on earthquake precursor phenomena. The question of what is needed for a meaningful earthquake prediction, as well as what types of precursory effects can be expected, is briefly discussed. The basic ideas of the dilatancy theory are presented, which in principle can explain the occurrence of earthquake forerunners. The reasons for radon anomalies in soil gas and in ground water are clarified, and a possible classification of radon anomalies is given.

  6. Sociological aspects of earthquake prediction

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Henry Spall talked recently with Denis Mileti, who is in the Department of Sociology, Colorado State University, Fort Collins, Colo. Dr. Mileti is a sociologist involved with research programs that study the socioeconomic impact of earthquake prediction.

  7. Intermediate-term earthquake prediction

    USGS Publications Warehouse

    Knopoff, L.

    1990-01-01

    The problems in predicting earthquakes have been attacked by phenomenological methods from pre-historic times to the present. The associations of presumed precursors with large earthquakes often have been remarked upon. The difficulty in identifying whether such correlations are due to some chance coincidence or are real precursors is that usually one notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or in fact is absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.

  8. Magnitude and location of historical earthquakes in Japan and implications for the 1855 Ansei Edo earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    2005-01-01

    Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42MJMA - 0.00887Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^1/2, and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting plate intensity attenuation model, where intensity is equal to -8.33 + 2.19MJMA - 0.00550Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting plate model. Using the subducting plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
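The two attenuation models quoted in this abstract translate directly into code (distances in km; the function and parameter names are ours):

```python
import math

def jma_intensity_crustal(m_jma, delta_km, h_km):
    """Bakun (2005) model for shallow crustal Honshu earthquakes:
    I = -1.89 + 1.42*M - 0.00887*Dh - 1.66*log10(Dh),
    with slant distance Dh = sqrt(delta^2 + h^2)."""
    dh = math.hypot(delta_km, h_km)
    return -1.89 + 1.42 * m_jma - 0.00887 * dh - 1.66 * math.log10(dh)

def jma_intensity_subducting(m_jma, delta_km, h_km):
    """Bakun (2005) subducting-plate model:
    I = -8.33 + 2.19*M - 0.00550*Dh - 1.14*log10(Dh)."""
    dh = math.hypot(delta_km, h_km)
    return -8.33 + 2.19 * m_jma - 0.00550 * dh - 1.14 * math.log10(dh)
```

As expected of an attenuation relation, predicted intensity falls off with epicentral distance and rises with magnitude; for example, the crustal model gives roughly IJMA 4.6 for MJMA 7.0 at 50 km epicentral distance and 30 km depth.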

  9. Geochemical challenge to earthquake prediction.

    PubMed

    Wakita, H

    1996-04-30

    The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented.

  10. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    Summary To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M=6.65 (A=537 km2).
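The bilinear magnitude-area scaling described here can be sketched as follows. The Hanks & Bakun coefficients below are the commonly quoted 2002 values and the Ellsworth-B coefficient is as commonly quoted from WGCEP (2003); both should be treated as assumptions here, since the Committee used a 2007 update that may differ slightly.

```python
import math

def mag_hanks_bakun(area_km2):
    """Bilinear magnitude-area relation in the form of Hanks & Bakun (2002):
    M = log10(A) + 3.98 for A <= 537 km^2, else M = (4/3)*log10(A) + 3.07.
    Coefficients assumed from the 2002 paper, not the 2007 update."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

def mag_ellsworth_b(area_km2):
    """Ellsworth-B single-slope relation, M = log10(A) + 4.2
    (coefficient assumed from WGCEP, 2003)."""
    return math.log10(area_km2) + 4.2
```

With these coefficients the two branches of the bilinear relation meet continuously at A = 537 km², which is the slope change the abstract refers to.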

  11. The Magnitude Distribution of Earthquakes Near Southern California Faults

    DTIC Science & Technology

    2011-12-16

    Lindh, 1985; Jackson and Kagan, 2006]. We do not consider time dependence in this study, but focus instead on the magnitude distribution for this fault... 90032-7. Bakun, W. H., and A. G. Lindh (1985), The Parkfield, California, earthquake prediction experiment, Science, 229(4714), 619–624, doi:10.1126

  12. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  13. Using an extended historical record to assess the temporal behavior of high magnitude earthquakes

    NASA Astrophysics Data System (ADS)

    Bellone, E.; Muir-Wood, R.

    2012-04-01

    Oscillations in the number of worldwide high-magnitude earthquakes since 1900 have triggered the question of whether the underlying activity rate can be considered constant. Between 1950 and 1965 there were seven earthquakes of magnitude 8.6 or higher in the space of 15 years, followed by a period of 39 years in which there were no earthquakes at or above this size. Including the Mw 9.2 2004 Indian Ocean earthquake, there have now been four earthquakes at or above this threshold (in seven years), including the 2010 Mw 8.8 Maule earthquake in Chile and the Mw 9 Tohoku earthquake in Japan. Previous studies, using the earthquake catalogue from 1900 onwards, came to different conclusions on whether these data support a change in the underlying worldwide rate of large-magnitude earthquakes. To assist in addressing this issue, we have set out to explore an extended catalogue of extreme-magnitude earthquakes spanning at least 300 years. The presentation will report the results of statistical analyses to determine the strength of evidence for temporal clustering of extreme global earthquakes. If we are currently in a period of elevated activity for the largest-magnitude earthquakes, what are the implications for assessing subduction zone earthquake risk - as along the Cascadia coastline of Oregon, Washington State and Vancouver Island, or along the coasts of northern Chile and Peru?

  14. Local magnitude determinations for intermountain seismic belt earthquakes from broadband digital data

    USGS Publications Warehouse

    Pechmann, J.C.; Nava, S.J.; Terra, F.M.; Bernier, J.C.

    2007-01-01

    The University of Utah Seismograph Stations (UUSS) earthquake catalogs for the Utah and Yellowstone National Park regions contain two types of size measurements: local magnitude (ML) and coda magnitude (MC), which is calibrated against ML. From 1962 through 1993, UUSS calculated ML values for southern and central Intermountain Seismic Belt earthquakes using maximum peak-to-peak (p-p) amplitudes on paper records from one to five Wood-Anderson (W-A) seismographs in Utah. For ML determinations of earthquakes since 1994, UUSS has utilized synthetic W-A seismograms from U.S. National Seismic Network and UUSS broadband digital telemetry stations in the region, which numbered 23 by the end of our study period on 30 June 2002. This change has greatly increased the percentage of earthquakes for which ML can be determined. It is now possible to determine ML for all M ≥ 3 earthquakes in the Utah and Yellowstone regions and earthquakes as small as M < 1 in some areas. To maintain continuity in the magnitudes in the UUSS earthquake catalogs, we determined empirical ML station corrections that minimize differences between MLs calculated from paper and synthetic W-A records. Application of these station corrections, in combination with distance corrections from Richter (1958), which have been in use at UUSS since 1962, produces ML values that do not show any significant distance dependence. ML determinations for the Utah and Yellowstone regions for 1981-2002 using our station corrections and Richter's distance corrections have provided a reliable data set for recalibrating the MC scales for these regions. Our revised ML values are consistent with available moment magnitude determinations for Intermountain Seismic Belt earthquakes. To facilitate automatic ML measurements, we analyzed the distribution of the times of maximum p-p amplitudes in synthetic W-A records. A 30-sec time window for maximum amplitudes, beginning 5 sec before the predicted Sg time, encompasses 95% of the
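    The station-correction idea described above can be sketched as a least-squares fit: each station's correction is the mean residual between a reference ML and the uncorrected ML (log amplitude plus the Richter distance term). The station codes, amplitudes, and distance terms below are made-up placeholders, not UUSS values; this is only a minimal illustration of the procedure, under the assumption that residuals are averaged per station.

```python
from collections import defaultdict

# Hypothetical readings: (station, log10 W-A amplitude, Richter -log A0
# distance term, reference ML from the calibrated paper-record network)
readings = [
    ("DUG", 1.20, 2.6, 3.9), ("DUG", 0.45, 2.8, 3.2),
    ("NOQ", 1.05, 2.6, 3.9), ("NOQ", 0.30, 2.8, 3.2),
]

def station_corrections(readings):
    """Per-station correction = mean residual between the reference ML
    and the uncorrected ML (log amplitude + distance term)."""
    resid = defaultdict(list)
    for sta, log_amp, dist_term, ml_ref in readings:
        resid[sta].append(ml_ref - (log_amp + dist_term))
    return {sta: sum(v) / len(v) for sta, v in resid.items()}

def local_magnitude(log_amp, dist_term, corr):
    """ML for one station, with its empirical correction applied."""
    return log_amp + dist_term + corr
```

Averaging corrected single-station values would then give the network ML, with continuity to the older paper-record magnitudes built in by construction.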

  15. The politics of earthquake prediction

    SciTech Connect

    Olson, R.S.

    1989-01-01

    This book gives an account of the politics, scientific and public, generated from the Brady-Spence prediction of a massive earthquake to take place within several years in central Peru. Though the disaster did not happen, this examination of the events serves to highlight American scientific processes and the results of scientific interaction with the media and political bureaucracy.

  16. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window.

    PubMed

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-10-27

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method previously applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and frequency dependence of the main radiating source during the rupture process. Finally we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude.
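    The Pd approach mentioned above generically works by inverting an attenuation relation of the form log10(Pd) = A·M + B·log10(R) + C for magnitude, measuring Pd over progressively longer P-wave windows. The coefficients and the helper names below are placeholders for illustration, not the values fitted in this study.

```python
import math

def pd_magnitude(pd_cm, hypo_dist_km, a=1.37, b=1.62, c=5.39):
    """Magnitude from peak initial P displacement Pd (cm) at hypocentral
    distance R (km), inverting a generic relation
    log10(Pd) = a*M - b*log10(R) - c. Coefficients are placeholders."""
    return (math.log10(pd_cm) + b * math.log10(hypo_dist_km) + c) / a

def progressive_estimates(displacement, dt, hypo_dist_km, windows=(3, 6, 9)):
    """Estimate magnitude from Pd measured over progressively expanding
    P-wave time windows (seconds); `displacement` is a sampled trace."""
    out = []
    for w in windows:
        n = int(w / dt)
        pd = max(abs(x) for x in displacement[:n])
        out.append((w, pd_magnitude(pd, hypo_dist_km)))
    return out
```

For a large rupture whose moment release keeps growing, Pd (and hence the estimate) grows as the window expands, which is the behavior the progressively expanded PTW is designed to capture.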

  17. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window

    PubMed Central

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-01-01

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method previously applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and frequency dependence of the main radiating source during the rupture process. Finally we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude. PMID:25346344

  18. Earthquakes clustering based on the magnitude and the depths in Molluca Province

    SciTech Connect

    Wattimanela, H. J.; Pasaribu, U. S.; Indratno, S. W.; Puspito, A. N. T.

    2015-12-22

    In this paper, we present a model to classify the earthquakes that occurred in Molluca Province. We use the K-Means clustering method to classify the earthquakes based on their magnitude and depth. The results can be used for disaster mitigation and for designing evacuation routes in Molluca Province.
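    K-Means on (magnitude, depth) pairs can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the catalog values are invented, farthest-point initialization is an assumption made here for determinism, and in practice the two features should be scaled comparably before clustering.

```python
def kmeans(points, k=2, iters=20):
    """Plain K-Means on (magnitude, depth_km) pairs with deterministic
    farthest-point initialization."""
    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centers = [points[0]]
    while len(centers) < k:
        # next seed: point farthest from all existing centers
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: d2(p, centers[i]))].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical catalog: shallow moderate events vs. deeper large events
quakes = [(4.1, 10), (4.3, 12), (4.0, 15), (6.2, 150), (6.5, 160), (6.1, 140)]
centers, groups = kmeans(quakes, 2)
```

With well-separated depth populations like these, the clusters recover the shallow and intermediate-depth groups, which is the kind of partition useful for mitigation planning.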

  19. Earthquake prediction: Simple methods for complex phenomena

    NASA Astrophysics Data System (ADS)

    Luen, Bradley

    2010-09-01

    Earthquake predictions are often either based on stochastic models, or tested using stochastic models. Tests of predictions often tacitly assume predictions do not depend on past seismicity, which is false. We construct a naive predictor that, following each large earthquake, predicts another large earthquake will occur nearby soon. Because this "automatic alarm" strategy exploits clustering, it succeeds beyond "chance" according to a test that holds the predictions fixed. Some researchers try to remove clustering from earthquake catalogs and model the remaining events. There have been claims that the declustered catalogs are Poisson on the basis of statistical tests we show to be weak. Better tests show that declustered catalogs are not Poisson. In fact, there is evidence that events in declustered catalogs do not have exchangeable times given the locations, a necessary condition for the Poisson. If seismicity followed a stochastic process, an optimal predictor would turn on an alarm when the conditional intensity is high. The Epidemic-Type Aftershock (ETAS) model is a popular point process model that includes clustering. It has many parameters, but is still a simplification of seismicity. Estimating the model is difficult, and estimated parameters often give a non-stationary model. Even if the model is ETAS, temporal predictions based on the ETAS conditional intensity are not much better than those of magnitude-dependent automatic (MDA) alarms, a much simpler strategy with only one parameter instead of five. For a catalog of Southern Californian seismicity, ETAS predictions again offer only slight improvement over MDA alarms.
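    The magnitude-dependent automatic alarm strategy described above can be sketched directly: after an event of magnitude m, keep an alarm on for a duration that grows exponentially with m, then merge overlapping alarms. The parameter values and catalog below are invented for illustration; only the alarm-duration form u·10^(a·m) follows the strategy named in the abstract.

```python
def mda_alarms(catalog, u=0.01, a=0.5):
    """Magnitude-dependent automatic alarms: after an event of magnitude m
    at time t, an alarm runs for u * 10**(a*m) time units. Returns merged
    (start, end) alarm intervals."""
    intervals = sorted((t, t + u * 10 ** (a * m)) for t, m in catalog)
    merged = []
    for s, e in intervals:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def success_rate(alarms, target_times):
    """Fraction of target event times covered by an alarm."""
    hit = sum(any(s <= t <= e for s, e in alarms) for t in target_times)
    return hit / len(target_times)

# Toy catalog of (time_days, magnitude)
alarms = mda_alarms([(0, 5), (1, 4), (100, 6)])
```

The single tunable pair (u, a) versus ETAS's five-plus parameters is exactly the simplicity trade-off the abstract highlights.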

  20. Microearthquake networks and earthquake prediction

    USGS Publications Warehouse

    Lee, W.H.K.; Steward, S. W.

    1979-01-01

    A microearthquake network is a group of highly sensitive seismographic stations designed primarily to record local earthquakes of magnitudes less than 3. Depending on the application, a microearthquake network will consist of several stations or as many as a few hundred. They are usually classified as either permanent or temporary. In a permanent network, the seismic signal from each station is telemetered to a central recording site to cut down on operating costs and to allow more efficient and up-to-date processing of the data. However, telemetering can restrict the location of sites because of the line-of-sight requirement for radio transmission or the need for telephone lines. Temporary networks are designed to be extremely portable and completely self-contained so that they can be very quickly deployed. They are most valuable for recording aftershocks of a major earthquake or for studies in remote areas.

  1. Bayesian Predictive Distribution for the Magnitude of the Largest Aftershock

    NASA Astrophysics Data System (ADS)

    Shcherbakov, R.

    2014-12-01

    Aftershock sequences, which follow large earthquakes, last hundreds of days and are characterized by well-defined frequency-magnitude and spatio-temporal distributions. The largest aftershocks in a sequence constitute a significant hazard and can inflict additional damage on infrastructure. Therefore, the estimation of the magnitude of the possible largest aftershocks in a sequence is of high importance. In this work, we propose a statistical model based on Bayesian analysis and extreme value statistics to describe the distribution of magnitudes of the largest aftershocks in a sequence. We derive an analytical expression for a Bayesian predictive distribution function for the magnitude of the largest expected aftershock and compute the corresponding confidence intervals. We assume that the occurrence of aftershocks can be modeled, to a good approximation, by a non-homogeneous Poisson process with a temporal event rate given by the modified Omori law. We also assume that the frequency-magnitude statistics of aftershocks can be approximated by Gutenberg-Richter scaling. We apply our analysis to 19 prominent aftershock sequences, which occurred in the last 30 years, in order to compute the Bayesian predictive distributions and the corresponding confidence intervals. In the analysis, we use the information of the early aftershocks in the sequences (in the first 1, 10, and 30 days after the main shock) to estimate retrospectively the confidence intervals for the magnitude of the expected largest aftershocks. We demonstrate by analysing 19 past sequences that in many cases we are able to constrain the magnitudes of the largest aftershocks. For example, this includes the analysis of the Darfield (Christchurch) aftershock sequence. The proposed analysis can be used for the earthquake hazard assessment and forecasting associated with the occurrence of large aftershocks. The improvement in instrumental data associated with early aftershocks can greatly enhance the analysis and
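    The two modeling ingredients named above (modified Omori rate, Gutenberg-Richter magnitudes) combine into a simple point-estimate version of the largest-aftershock distribution: with N expected events above completeness and G-R magnitudes, P(largest > m) = 1 - exp(-N·10^(-b(m-mc))). This sketch omits the paper's Bayesian treatment of parameter uncertainty, and all parameter values here are illustrative assumptions.

```python
import math

def omori_count(K, c, p, t1, t2):
    """Expected number of aftershocks above completeness between times t1
    and t2 (days) under the modified Omori rate K/(t+c)**p, for p != 1."""
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

def prob_largest_exceeds(m, mc, b, n_expected):
    """P(largest aftershock > m) when magnitudes are i.i.d. Gutenberg-Richter
    above completeness mc with b-value b, and counts are Poisson."""
    return 1.0 - math.exp(-n_expected * 10 ** (-b * (m - mc)))

# Illustrative sequence parameters (not fitted to any real sequence)
N = omori_count(K=250.0, c=0.1, p=1.1, t1=1.0, t2=365.0)
```

Replacing the plugged-in (K, c, p, b) with posterior distributions estimated from the first days of a sequence is, in outline, where the Bayesian predictive distribution of the paper comes in.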

  2. Earthquake predictions using seismic velocity ratios

    USGS Publications Warehouse

    Sherburne, R. W.

    1979-01-01

    Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions in reducing human and economic losses, and the value of long-range earthquake prediction to planning, are obvious. Not as clear are the long-range economic and social impacts of earthquake prediction on a specific area. The general consensus among scientists and government officials, however, is that the quest for earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.

  3. Earthquake prediction with electromagnetic phenomena

    SciTech Connect

    Hayakawa, Masashi

    2016-02-01

    Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, which is considered to be one of the most important and urgent topics for humankind. If this short-term prediction is realized, casualties will be drastically reduced. Unlike conventional seismic measurement, we proposed the use of electromagnetic phenomena as precursors to EQs in prediction, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, the ionospheric perturbation as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

  4. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge; the others are the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity. The reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset date of significant earthquakes. The assumption is that each occurred earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (<6.5R) can be predicted with a very good accuracy window (±1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We have developed the FDL numbers for each of those seeds, and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) and for <6.5R. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake.
We observe no improvement only when a planetary trigger coincided with

  5. Dim prospects for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Geller, Robert J.

    I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].

  6. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  7. Recent earthquake prediction research in Japan.

    PubMed

    Mogi, K

    1986-07-18

    Japan has experienced many major earthquake disasters in the past. Early in this century research began that was aimed at predicting the occurrence of earthquakes, and in 1965 an earthquake prediction program was started as a national project. In 1978 a program for constant monitoring and assessment was formally inaugurated with the goal of forecasting the major earthquake that is expected to occur in the near future in the Tokai district of central Honshu Island. The issue of predicting the anticipated Tokai earthquake is discussed in this article, as well as the results of research on major recent earthquakes in Japan: the Izu earthquakes (1978 and 1980) and the Japan Sea earthquake (1983).

  8. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  9. Anomalous pre-seismic transmission of VHF-band radio waves resulting from large earthquakes, and its statistical relationship to magnitude of impending earthquakes

    NASA Astrophysics Data System (ADS)

    Moriya, T.; Mogi, T.; Takada, M.

    2010-02-01

    To confirm the relationship between anomalous transmission of VHF-band radio waves and impending earthquakes, we designed a new data-collection system and have documented the anomalous VHF-band radio-wave propagation beyond the line of sight prior to earthquakes since 2002 December in Hokkaido, northern Japan. Anomalous VHF-band radio waves were recorded before two large earthquakes, the Tokachi-oki earthquake (Mj = 8.0, Mj: magnitude defined by the Japan Meteorological Agency) on 2003 September 26 and the southern Rumoi sub-prefecture earthquake (Mj = 6.1) on 2004 December 14. Radio waves transmitted from a given FM radio station are considered to be scattered, such that they could be received by an observation station beyond the line of sight. A linear relationship was established between the logarithm of the total duration time of anomalous transmissions (Te) and the magnitude (M) or maximum seismic intensity (I) of the impending earthquake, for M4-M5 class earthquakes that occurred at depths of 48-54 km beneath the Hidaka Mountains in Hokkaido in 2004 June and 2005 August. Similar linear relationships are also valid for earthquakes that occurred at different depths. The relationship was shifted to longer Te for shallower earthquakes and to shorter Te for deeper ones. Numerous parameters seem to affect Te, including hypocenter depths and surface conditions of epicentral area (i.e. sea or land). This relationship is important because it means that pre-seismic anomalous transmission of VHF-band waves may be useful in predicting the size of an impending earthquake.

  10. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez, Capera A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location, the magnitude, and the epistemic uncertainties among them. The estimates are calculated using bootstrap-resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap-resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental
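    The bootstrap magnitude procedure described above (median of bootstrap magnitudes, 68% bounds from the resampled distribution) can be sketched generically. The per-site magnitude values below are invented, and the sketch resamples a single set of estimates rather than running the three techniques through a weighted decision tree as the study does.

```python
import random

def bootstrap_magnitude(site_mags, n_boot=5000, seed=1):
    """Resample per-site magnitude estimates with replacement. Returns the
    preferred magnitude (median of bootstrap medians) and bounds enclosing
    the central 68% of the bootstrap medians."""
    rng = random.Random(seed)
    n = len(site_mags)
    meds = []
    for _ in range(n_boot):
        sample = sorted(rng.choice(site_mags) for _ in range(n))
        meds.append(sample[n // 2])
    meds.sort()
    lo = meds[int(0.16 * n_boot)]
    hi = meds[int(0.84 * n_boot)]
    return meds[n_boot // 2], (lo, hi)

# Hypothetical magnitudes inferred from intensity data at individual sites
mags = [5.8, 6.0, 6.1, 5.9, 6.2, 6.0, 5.7, 6.1]
m_pref, (m_lo, m_hi) = bootstrap_magnitude(mags)
```

The same resampling idea, applied to locations instead of magnitudes, yields the bootstrap spatial density whose contours give the location confidence regions.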

  11. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    Seismically induced landslides are a major environmental effect of earthquakes, which may significantly contribute to related losses. Moreover, in paleoseismology landslide event sizes are an important proxy for the estimation of the intensity and magnitude of past earthquakes, thus allowing us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of numbers and size of the affected area, right after an earthquake event has occurred. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970) the combination and relative weight of the factors was calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked.
One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important

  12. Source time function properties indicate a strain drop independent of earthquake depth and magnitude

    NASA Astrophysics Data System (ADS)

    Vallee, Martin

    2014-05-01

    Movement of the tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, i.e. the ratio of seismic slip over the dimension of the ruptured fault. SCARDEC, a recently developed method, gives access to this information through the systematic determination of earthquake source time functions (STFs). STFs describe the integrated spatio-temporal history of the earthquake process, and their maximum value can be related to the amount of stress or strain released during the earthquake. Here I analyse all earthquakes with magnitudes greater than 6 occurring in the last 20 years, and thus provide a catalogue of 1700 STFs which sample all the possible seismic depths. Analysis of this new database reveals that the strain drop remains on average the same for all earthquakes, independent of magnitude and depth. In other words, it is shown that, independent of the earthquake depth, magnitude 6 and larger earthquakes keep on average a similar ratio between seismic slip and dimension of the main slip patch. This invariance implies that deep earthquakes are even more similar than previously thought to their shallow counterparts, a puzzling finding as shallow and deep earthquakes should originate from different physical mechanisms. Concretely, the ratio between slip and patch dimension is on the order of 10^-5 to 10^-4, with extreme values only 8 times lower or larger at the 95% confidence interval. Besides the implications for mechanisms of deep earthquake generation, this limited variability has practical implications for realistic earthquake scenarios.

  13. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolation of a trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  14. Soviet prediction of a major earthquake

    USGS Publications Warehouse

    Simpson, D.W.

    1979-01-01

    On November 1, 1978, a magnitude 7 earthquake occurred north of the Pamir Mountains near the Tadjiskistan-Kirghizia border, 150 kilometers east of Garm in Soviet Central Asia. Although the earthquake was felt in Tashkent, Dushanbe, and the Fergana Valley, the epicentral area was uninhabited at that time of year, and no damage was reported. 

  15. The Magnitude 6.7 Northridge, California, Earthquake of January 17, 1994

    NASA Technical Reports Server (NTRS)

    Donnellan, A.

    1994-01-01

    The most damaging earthquake in the United States since 1906 struck northern Los Angeles on January 17, 1994. The magnitude 6.7 Northridge earthquake produced a maximum of more than 3 meters of reverse (up-dip) slip on a south-dipping thrust fault rooted under the San Fernando Valley and projecting north under the Santa Susana Mountains.

  16. Incorporating Love- and Rayleigh-Wave Magnitudes, Unequal Earthquake and Explosion Variance Assumptions, and Intrastation Complexity for Improved Event Screening

    DTIC Science & Technology

    2009-09-30

    We have applied the Ms (VMAX) analysis (Bonner et al...) for Rayleigh waves, quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves.

  17. The ethics of earthquake prediction.

    PubMed

    Sol, Ayhan; Turan, Halil

    2004-10-01

    Scientists' responsibility to inform the public about their results may conflict with their responsibility not to cause social disturbance by communicating those results. A study of the well-known Brady-Spence and Iben Browning earthquake predictions illustrates this conflict in the publication of scientifically unwarranted predictions. Furthermore, a public policy that treats the public sensitivity caused by such publications as an opportunity to promote public awareness is ethically problematic from (i) a refined consequentialist point of view, on which not every means can be justified by its ends, and (ii) a rights view, according to which individuals should never be treated as mere means to ends. The Parkfield experiment, the so-called paradigm case of cooperation between natural and social scientists and the political authorities in hazard management and risk communication, is open to similar ethical criticism: the people in the Parkfield area were not informed that the whole experiment was based on a contested seismological paradigm.

  18. Prediction of Future Great Earthquake Locations from Cumulative Stresses Released by Prior Earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, J.; Hong, T. K.

    2014-12-01

    Seventeen great earthquakes with magnitudes of 8.5 or greater have occurred worldwide since 1900. These great events cause significant damage to humanity, and the prediction of the potential maximum magnitudes of earthquakes is important for seismic hazard mitigation. In this study, we calculate the Coulomb stress changes around the active plate margins for 507 events with magnitudes greater than 7.0 during 1976-2013 to estimate the cumulative stress releases. We investigate the spatio-temporal variations of the ambient stress field from the cumulative Coulomb stress changes as a function of plate motion speed, plate age and dipping angle. The largest stress drops are observed where plate velocity is relatively high, in the convergent margins between the Nazca and South American plates, between the Pacific and North American plates, between the Philippine Sea and Eurasian plates, and between the Pacific and Australian plates. It is intriguing to note that great earthquakes such as the Tohoku-Oki and Maule earthquakes occurred where plate velocity is highest. On the other hand, large stress drops also occur in margins with relatively slow plate speeds, such as the boundaries between the Cocos and North American plates and between the Indo-Australian and Eurasian plates. Earthquakes occur dominantly in regions with positive Coulomb stress changes, suggesting that subsequent earthquakes are controlled by the stresses released by prior earthquakes. We find strong positive correlations between Coulomb stress changes and plate speeds. This observation suggests that large stress drops are controlled by high plate speed, opening the possibility of predicting the potential maximum magnitudes of events.

  19. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  20. Spatiotemporal evolution of the completeness magnitude of the Icelandic earthquake catalogue from 1991 to 2013

    NASA Astrophysics Data System (ADS)

    Panzera, Francesco; Mignan, Arnaud; Vogfjörð, Kristin S.

    2016-11-01

    In 1991, a digital seismic monitoring network with automatic operation was installed in Iceland. After 20 years of operation, we explore its nationwide performance for the first time by analysing the spatiotemporal variations of the completeness magnitude. We use the Bayesian magnitude of completeness (BMC) method, which combines local completeness magnitude observations with prior information based on the density of seismic stations. Additionally, we test the impact of earthquake location uncertainties on the BMC results by filtering the catalogue using a multivariate analysis that identifies outliers in the hypocentre error distribution. We find that the entire North-to-South active rift zone shows a relatively low magnitude of completeness Mc in the range 0.5-1.0, highlighting the ability of the Icelandic network to detect small earthquakes. This work also demonstrates the influence of earthquake location uncertainties on the spatiotemporal magnitude of completeness analysis.

  1. Maximum earthquake magnitudes along different sections of the North Anatolian fault zone

    NASA Astrophysics Data System (ADS)

    Bohnhoff, Marco; Martínez-Garzón, Patricia; Bulut, Fatih; Stierle, Eva; Ben-Zion, Yehuda

    2016-04-01

    Constraining the maximum likely magnitude of future earthquakes on continental transform faults has fundamental consequences for the expected seismic hazard. Since the recurrence time for those earthquakes is typically longer than a century, such estimates rely primarily on well-documented historical earthquake catalogs, when available. Here we discuss the maximum observed earthquake magnitudes along different sections of the North Anatolian Fault Zone (NAFZ) in relation to the age of the fault activity, cumulative offset, slip rate and maximum length of coherent fault segments. The findings are based on a newly compiled catalog of historical earthquakes in the region, using the extensive literary sources that exist owing to the long civilization record. We find that the largest M7.8-8.0 earthquakes are exclusively observed along the older eastern part of the NAFZ that also has longer coherent fault segments. In contrast, the maximum observed events on the younger western part where the fault branches into two or more strands are smaller. No first-order relations between maximum magnitudes and fault offset or slip rates are found. The results suggest that the maximum expected earthquake magnitude in the densely populated Marmara-Istanbul region would probably not exceed M7.5. The findings are consistent with available knowledge for the San Andreas Fault and Dead Sea Transform, and can help in estimating hazard potential associated with different sections of large transform faults.

  2. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief in the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bound by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and the anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  3. An Updated Catalog of Taiwan Earthquakes (1900-2011) with Homogenized Mw Magnitudes

    NASA Astrophysics Data System (ADS)

    Chen, K.; Tsai, Y.; Chang, W.

    2012-12-01

    A complete and consistent earthquake catalog provides good data for studying the distribution of earthquakes in a region as a function of space, time and magnitude. It is therefore a basic tool for studying and mitigating seismic hazard, since the seismicity at magnitudes equal to or greater than a chosen Mw can be obtained from the data set. For completeness and consistency, we use a catalog of earthquakes from 1900 to 2006 with homogenized magnitudes (Mw) (Chen and Tsai, 2008) as a base and refer to Hsu (1989) to incorporate available supplementary data (188 events) for the period 1900-1935. The supplementary data lower the cutoff threshold magnitude from Mw 5.5 to 5.0, enriching the catalog above magnitude 5.0. For this study, the catalog has been updated to include earthquakes up to 2011 and is complete for Mw > 5.0, which increases its reliability for seismic hazard studies. We find that the original catalog of Taiwan earthquakes is saturated for magnitudes > 6.5 when compared with the Harvard Mw or USGS M. Although we converted the original catalog to seismic moment magnitude Mw, this does not overcome the saturation. For Mw < 6.5, however, our unified Mw values are mostly greater than the Harvard Mw or USGS M, indicating that the unified Mw fills the gap above magnitude 6.0, and in places above magnitude 5.5, in the original catalog during the period 1973-1991. It is therefore better to report earthquake magnitudes in Mw.

  4. Location and magnitudes of earthquakes in Central Asia from seismic intensity data: model calibration and validation

    NASA Astrophysics Data System (ADS)

    Bindi, Dino; Capera, Augusto A. Gómez; Parolai, Stefano; Abdrakhmatov, Kanatbek; Stucchi, Massimiliano; Zschau, Jochen

    2013-02-01

    In this study, we estimate the locations and magnitudes of Central Asian earthquakes from macroseismic intensity data. A set of 2373 intensity observations from 15 earthquakes is analysed to calibrate non-parametric models for the source and attenuation with distance, the distance being computed from the instrumental epicentres located according to the International Seismological Centre (ISC) catalogue. In a second step, the non-parametric source model is regressed against different magnitude values (e.g. MLH, mb, MS, Mw) as listed in various instrumental catalogues. The reliability of the calibrated model is then assessed by applying the methodology to macroseismic intensity data from 29 validation earthquakes for which both MLH and mb are available from the Central Asian Seismic Risk Initiative (CASRI) project and the ISC catalogue. An overall agreement is found for both the location and magnitude of these events, with the distribution of the differences between instrumental and intensity-based magnitudes having almost a zero mean, and standard deviations equal to 0.30 and 0.44 for mb and MLH, respectively. The largest discrepancies are observed for the location of the 1985, MLH = 7.0 southern Xinjiang earthquake, whose location is outside the area covered by the intensity assignments, and for the magnitude of the 1974, mb = 6.2 Markansu earthquake, which shows a difference in magnitude greater than one unit in terms of MLH. Finally, the relationships calibrated for the non-parametric source model are applied to assign different magnitude-scale values to earthquakes that lack instrumental information. In particular, an intensity-based moment magnitude is assigned to all of the validation earthquakes.
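Regressing a source term against catalog magnitudes, as described above, is an ordinary least-squares problem. A minimal sketch with hypothetical numbers (the data and the function name are illustrative, not from the study):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ~ a + b*x, as one might regress a
    non-parametric macroseismic source term against instrumental
    magnitudes.  All data below are hypothetical."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

mags = [5.0, 5.5, 6.0, 6.5, 7.0]    # hypothetical instrumental magnitudes
source = [6.2, 7.0, 7.8, 8.6, 9.4]  # hypothetical intensity source terms
a, b = linear_fit(mags, source)
print(round(a, 2), round(b, 2))  # → -1.8 1.6
```

Once calibrated, the inverse of such a relation assigns an intensity-based magnitude to events lacking instrumental data.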

  5. The magnitude 6.7 Northridge, California, earthquake of 17 January 1994

    USGS Publications Warehouse

    Jones, L.; Aki, K.; Boore, D.; Celebi, M.; Donnellan, A.; Hall, J.; Harris, R.; Hauksson, E.; Heaton, T.; Hough, S.; Hudnut, K.; Hutton, K.; Johnston, M.; Joyner, W.; Kanamori, H.; Marshall, G.; Michael, A.; Mori, J.; Murray, M.; Ponti, D.; Reasenberg, P.; Schwartz, D.; Seeber, L.; Shakal, A.; Simpson, R.; Thio, H.; Tinsley, J.; Todorovska, M.; Trifunac, M.; Wald, D.; Zoback, M.L.

    1994-01-01

    The most costly American earthquake since 1906 struck Los Angeles on 17 January 1994. The magnitude 6.7 Northridge earthquake resulted from more than 3 meters of reverse slip on a 15-kilometer-long south-dipping thrust fault that raised the Santa Susana mountains by as much as 70 centimeters. The fault appears to be truncated by the fault that broke in the 1971 San Fernando earthquake at a depth of 8 kilometers. Of these two events, the Northridge earthquake caused many times more damage, primarily because its causative fault is directly under the city. Many types of structures were damaged, but the fracture of welds in steel-frame buildings was the greatest surprise. The Northridge earthquake emphasizes the hazard posed to Los Angeles by concealed thrust faults and the potential for strong ground shaking in moderate earthquakes.

  6. Earthquake Prediction: Is It Better Not to Know?

    ERIC Educational Resources Information Center

    MOSAIC, 1977

    1977-01-01

    Discusses economic, social and political consequences of earthquake prediction. Reviews impact of prediction on China's recent (February, 1975) earthquake. Diagrams a chain of likely economic consequences from predicting an earthquake. (CS)

  7. Seismic Safety Margins Research Program. Regional relationships among earthquake magnitude scales

    SciTech Connect

    Chung, D. H.; Bernreuter, D. L.

    1980-05-01

    The seismic body-wave magnitude mb of an earthquake is strongly affected by regional variations in the Q structure, composition, and physical state within the earth. Therefore, because of differences in attenuation of P-waves between the western and eastern United States, a problem arises when comparing mb's for the two regions. A regional mb magnitude bias exists which, depending on where the earthquake occurs and where the P-waves are recorded, can lead to magnitude errors as large as one-third unit. There is also a significant difference between mb and ML values for earthquakes in the western United States. An empirical link between the mb of an eastern US earthquake and the ML of an equivalent western earthquake is given by ML = 0.57 + 0.92(mb)East. This result is important when comparing ground motion between the two regions and for choosing a set of real western US earthquake records to represent eastern earthquakes. 48 refs., 5 figs., 2 tabs.
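The empirical link above translates directly into code. A minimal sketch (the function name is illustrative, not from the report):

```python
def ml_from_mb_east(mb_east):
    """Estimate the western-US local magnitude ML of an earthquake
    equivalent to an eastern-US event of body-wave magnitude mb,
    using the empirical relation ML = 0.57 + 0.92 * mb."""
    return 0.57 + 0.92 * mb_east

# An eastern-US event of mb 5.0 maps to roughly ML 5.2
print(round(ml_from_mb_east(5.0), 2))  # → 5.17
```

Note the relation is calibrated for eastern-US mb values only; applying it outside that region would reintroduce the attenuation bias the report describes.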

  8. The Magnitude Frequency Distribution of Induced Earthquakes and Its Implications for Crustal Heterogeneity and Hazard

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.

    2015-12-01

    Earthquake activity in the central United States has increased dramatically since 2009, principally driven by injection of wastewater coproduced with oil and gas. The elevation of pore pressure from the collective influence of many disposal wells has created an unintended experiment that probes both the state of stress and the architecture of the fluid plumbing and fault systems through the earthquakes it induces. These earthquakes primarily release tectonic stress rather than accommodation stresses from injection. Results to date suggest that the aggregated magnitude-frequency distribution (MFD) of these earthquakes differs from that of natural tectonic earthquakes in the same region, for which the b-value is ~1.0. In Kansas, Oklahoma and Texas alone, more than 1100 earthquakes of Mw ≥ 3 occurred between January 2014 and June 2015, but only 32 were Mw ≥ 4 and none were as large as Mw 5. Why is this so? Either the b-value is high (> 1.5) or the MFD deviates from log-linear form at large magnitude. Where catalogs from local networks are available, such as in southern Kansas, b-values are normal (~1.0) for small-magnitude events (M < 3). The deficit in larger-magnitude events could be an artifact of a short observation period, or could reflect a decreased potential for large earthquakes. According to the prevailing paradigm, injection will induce an earthquake when (1) the pressure change encounters a preexisting fault favorably oriented in the tectonic stress field; and (2) the pore-pressure perturbation at the hypocenter is sufficient to overcome the frictional strength of the fault. Most induced earthquakes occur where the injection pressure has attenuated to a small fraction of the seismic stress drop, implying that the nucleation point was highly stressed. The population statistics of faults satisfying (1) could be the cause of this MFD if there are many small faults (dimension < 1 km) and few large ones in a critically stressed crust
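The b-value referenced above is commonly estimated with Aki's (1965) maximum-likelihood formula. A minimal sketch with made-up sample magnitudes (not data from the abstract):

```python
import math

def b_value_aki(magnitudes, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for a catalog assumed
    complete above m_c: b = log10(e) / (mean(M) - m_c).
    (A common refinement subtracts half the magnitude bin width
    from m_c; it is omitted here for brevity.)"""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_c)

# Illustrative catalog, assumed complete above M 3.0
sample = [3.0, 3.1, 3.2, 3.9, 3.5, 3.4, 3.0, 3.6, 3.1, 3.3]
print(round(b_value_aki(sample, 3.0), 2))  # → 1.4
```

A catalog deficient in large events, as described above, yields a mean magnitude closer to m_c and hence a larger apparent b.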

  9. Research in earthquake prediction - the Parkfield prediction experiment

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    The 15-mile-long Parkfield, California, section of the San Andreas fault is the best understood earthquake source region in the world. Moderate-sized earthquakes of local magnitude 5 3/4 occurred at Parkfield in 1881, 1901, 1922, 1934, and 1966.

  10. A Probabilistic Estimate of the Most Perceptible Earthquake Magnitudes in the NW Himalaya and Adjoining Regions

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Koravos, G. Ch.; Tsapanos, T. M.; Vougiouka, G. E.

    2015-02-01

    NW Himalaya and its neighboring region (25°-40°N and 65°-85°E) is one of the most seismically hazardous regions in the Indian subcontinent, a region that has historically experienced large to great damaging earthquakes. In the present study, the most perceptible earthquake magnitudes, Mp, are estimated for intensity I = VII, horizontal peak ground acceleration a = 300 cm/s2 and horizontal peak ground velocity v = 10 cm/s in 28 seismogenic zones using the two earthquake recurrence models of Kijko and Sellevoll (Bulletin of the Seismological Society of America 82(1):120-134, 1992) and Gumbel's third asymptotic distribution of extremes (GIII). Both methods deal with maximum magnitudes. The earthquake perceptibility is calculated by combining earthquake recurrence models with ground motion attenuation relations at a particular level of intensity, acceleration and velocity. The estimated results reveal that the values of Mp for velocity v = 10 cm/s are higher than the corresponding values for intensity I = VII and acceleration a = 300 cm/s2. It is also observed that the differences between perceptible magnitudes calculated by the Kijko-Sellevoll method and GIII statistics are significantly high, up to 0.7, 0.6 and 1.7 for intensity, acceleration and velocity, respectively, revealing the importance of earthquake recurrence model selection. The estimated most perceptible earthquake magnitudes, Mp, in the present study vary from MW 5.1 to 7.7 over the entire study area. Results for perceptible magnitudes are also represented as spatial maps of the 28 seismogenic zones for the aforementioned threshold levels of intensity, acceleration and velocity, estimated from the two recurrence models. The spatial maps show that the Quetta region of Pakistan, the Hindukush-Pamir Himalaya, the Caucasus mountain belt and the Himalayan frontal thrust belt (Kashmir-Kangra-Uttarkashi-Chamoli regions) exhibit higher values of the most perceptible earthquake magnitudes (M

  11. Model parameter estimation bias induced by earthquake magnitude cut-off

    NASA Astrophysics Data System (ADS)

    Harte, D. S.

    2016-02-01

    We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models: first, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude; and second, the assumption in the Gutenberg-Richter relationship that the number of events increases exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.
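The Gutenberg-Richter assumption discussed above, that event numbers increase exponentially as magnitude decreases, can be reproduced by inverse-transform sampling. A minimal sketch (illustrative only, not the simulation code used in the paper):

```python
import math
import random

def sample_gr(n, m_min, b=1.0, seed=42):
    """Draw n magnitudes from an untruncated Gutenberg-Richter
    distribution above m_min via inverse-transform sampling of the
    survival function P(M > m) = 10**(-b * (m - m_min))."""
    rng = random.Random(seed)
    beta = b * math.log(10.0)
    # 1 - random() lies in (0, 1], so the log is always defined
    return [m_min - math.log(1.0 - rng.random()) / beta for _ in range(n)]

mags = sample_gr(100000, m_min=2.0)
# Aki's maximum-likelihood estimator recovers b ~ 1.0 on the full
# (untruncated) catalog; truncating it would bias the estimate.
b_hat = math.log10(math.e) / (sum(mags) / len(mags) - 2.0)
print(round(b_hat, 2))
```

Re-running the estimate on a magnitude-truncated subset of `mags` is a quick way to see the kind of bias the paper analyses for the full ETAS model.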

  12. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

  13. Tectonic summaries of magnitude 7 and greater earthquakes from 2000 to 2015

    USGS Publications Warehouse

    Hayes, Gavin P.; Meyers, Emma K.; Dewey, James W.; Briggs, Richard W.; Earle, Paul S.; Benz, Harley M.; Smoczyk, Gregory M.; Flamme, Hanna E.; Barnhart, William D.; Gold, Ryan D.; Furlong, Kevin P.

    2017-01-11

    This paper describes the tectonic summaries for all magnitude 7 and larger earthquakes in the period 2000–2015, as produced by the U.S. Geological Survey National Earthquake Information Center during their routine response operations to global earthquakes. The goal of such summaries is to provide important event-specific information to the public rapidly and concisely, such that recent earthquakes can be understood within a global and regional seismotectonic framework. We compile these summaries here to provide a long-term archive for this information, and so that the variability in tectonic setting and earthquake history from region to region, and sometimes within a given region, can be more clearly understood.

  14. The energy-magnitude scaling law for Ms ≤ 5.5 earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2015-04-01

    The scaling law of seismic radiation energy, Es, versus surface-wave magnitude, Ms, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with Ms > 5.5. In this review study, we examine whether this law is valid for 0 < Ms ≤ 5.5, using earthquakes occurring in different regions. A comparison of the data points of log(Es) versus Ms with Gutenberg and Richter's law leads to the conclusion that the law is still valid for earthquakes with 0 < Ms ≤ 5.5.
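The Gutenberg-Richter (1956) law examined above is commonly quoted as log10(Es) = 1.5 Ms + 11.8, with Es in ergs; the abstract does not restate the coefficients, so they are assumed here:

```python
def radiated_energy_ergs(ms):
    """Seismic radiated energy Es (ergs) from surface-wave magnitude Ms,
    using the commonly quoted Gutenberg-Richter (1956) form
    log10(Es) = 1.5 * Ms + 11.8."""
    return 10.0 ** (1.5 * ms + 11.8)

# Each unit of Ms multiplies the radiated energy by 10**1.5 ~ 31.6
ratio = radiated_energy_ergs(5.0) / radiated_energy_ergs(4.0)
print(round(ratio, 1))  # → 31.6
```

The review's question is whether this straight line in log(Es)-Ms space continues to fit observations below Ms 5.5.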

  15. How to assess magnitudes of paleo-earthquakes from multiple observations

    NASA Astrophysics Data System (ADS)

    Hintersberger, Esther; Decker, Kurt

    2016-04-01

    An important aspect of fault characterisation for seismic hazard assessment is the magnitude of paleo-earthquakes. Especially in regions with low or moderate seismicity, paleo-magnitudes are normally much larger than those of historical earthquakes and therefore provide essential information about the seismic potential and expected maximum magnitudes of a region. In general, paleo-earthquake magnitudes are based either on surface rupture length or on surface displacement observed at trenching sites. Several well-established correlations make it possible to link the observed surface displacement to a magnitude. However, the combination of more than one observation is still rare and not well established. We present here a method, based on a probabilistic approach proposed by Biasi and Weldon (2006), to combine several observations to better constrain the possible magnitude range of a paleo-earthquake. Extrapolating the approach of Biasi and Weldon (2006), the single-observation probability density functions (PDFs) are assumed to be independent of each other. Following this line, the common PDF for all surface displacements observed for one earthquake is the product of all single-displacement PDFs. To test our method, we use surface displacement data for modern earthquakes whose magnitudes have been determined by instrumental records. For randomly selected "observations", we calculated the associated PDF at each "observation point" and combined the PDFs into one common PDF for an increasing number of "observations". Plotting the most probable magnitudes against the number of combined "observations", the resultant range of most probable magnitudes is very close to the magnitude derived by instrumental methods. Testing our method with real trenching observations, we used the results of a paleoseismological investigation within the Vienna Pull-Apart Basin (Austria), where three trenches were opened along the normal
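The product-of-PDFs combination described above can be sketched numerically. Here each observation's PDF is taken as a Gaussian over magnitude purely for illustration; in the method itself the per-observation PDFs come from displacement-magnitude regressions, and all numbers below are hypothetical:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density, standing in for a displacement-based magnitude PDF."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def combine_observations(obs, grid):
    """Multiply single-observation magnitude PDFs (assumed independent,
    in the spirit of the Biasi & Weldon-style approach above) and
    renormalize on the magnitude grid.  Each obs is (mu, sigma)."""
    step = grid[1] - grid[0]
    joint = [math.prod(gaussian_pdf(m, mu, s) for mu, s in obs) for m in grid]
    total = sum(joint) * step
    return [p / total for p in joint]

grid = [6.0 + 0.01 * i for i in range(201)]   # magnitudes 6.0 to 8.0
obs = [(6.8, 0.3), (7.0, 0.25), (6.9, 0.35)]  # three hypothetical trench sites
pdf = combine_observations(obs, grid)
best = grid[max(range(len(grid)), key=lambda i: pdf[i])]
print(round(best, 2))  # → 6.91
```

As more observations are multiplied in, the joint PDF narrows, which is the effect the authors exploit to tighten the paleo-magnitude range.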

  16. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip-rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D(c) apply to faults in nature. However, the scaling of D(c) is presently an open question, and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from the sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. In the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. In the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
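The Omori aftershock decay law mentioned above has a standard modified form, n(t) = K / (t + c)**p. A minimal sketch with illustrative parameter values (not values from the paper):

```python
def omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Modified Omori law for aftershock rate (events per day) at time
    t days after a mainshock: n(t) = K / (t + c)**p.
    K, c and p here are illustrative, not fitted values."""
    return K / (t + c) ** p

# With p near 1, the rate falls by roughly a factor of ten
# for each tenfold increase in time since the mainshock.
print(round(omori_rate(1.0) / omori_rate(10.0), 1))
```

In the seismicity formulation above, the parameters K, c and p acquire physical interpretations in terms of the stressing history rather than being purely empirical fitting constants.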

  17. Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.

    2011-12-01

    It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience who may not be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using static "snapshot" images to represent earthquake data is becoming obsolete, and the favored way to explain complex wave propagation inside the solid earth and interactions among earthquakes is now visualization that includes auditory information. Here, we convert seismic data into visualizations that include sounds, the latter known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong-motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc).
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not

  18. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated from an earthquake catalog spanning longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; accounting for this variation contributes significantly to a precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced a statistical model of the magnitude-frequency distribution of earthquakes covering the entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of the Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this is the same dataset analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering Japan and its surroundings was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
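The Ogata & Katsura (1993) construction cited above models the observable magnitude-frequency density as the Gutenberg-Richter law thinned by a detection-rate function, commonly a cumulative normal q(M) = Φ((M − μ)/σ), where μ is the 50%-detection magnitude; Mc can then be read off as the magnitude where q reaches a high quantile. A minimal sketch, with parameter values that are purely illustrative (not from the study):

```python
import math

def detection_rate(m, mu, sigma):
    """q(M): cumulative-normal probability that a magnitude-m event is
    detected. mu is the 50%-detection magnitude; sigma sets the roll-off."""
    return 0.5 * (1.0 + math.erf((m - mu) / (sigma * math.sqrt(2.0))))

def observed_density(m, b, mu, sigma):
    """Un-normalized observed magnitude-frequency density: the
    Gutenberg-Richter term exp(-beta*M) thinned by the detection rate."""
    beta = b * math.log(10.0)
    return math.exp(-beta * m) * detection_rate(m, mu, sigma)

def completeness_magnitude(mu, sigma, quantile=0.99):
    """Mc as the magnitude where the detection rate reaches `quantile`,
    found by bisection on [mu, mu + 10*sigma]."""
    lo, hi = mu, mu + 10.0 * sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if detection_rate(mid, mu, sigma) < quantile:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative values: weekday cultural noise raises mu relative to weekends,
# so the weekday Mc is higher.
mc_weekday = completeness_magnitude(mu=1.2, sigma=0.3)
mc_weekend = completeness_magnitude(mu=1.0, sigma=0.3)
```

In the study itself μ varies smoothly over the week via a Bayesian smoothing spline; the sketch only shows how Mc follows from the fitted detection parameters.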

  19. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587-1996)

    NASA Astrophysics Data System (ADS)

    Beauval, Céline; Yepes, Hugo; Bakun, William H.; Egred, José; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-06-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower-magnitude but shallower, and potentially more destructive, earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (~2.5 million inhabitants). A total population of ~6 million currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory are not available for periods earlier than 1990 (the beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587-1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). The intensity data available per historical earthquake vary between 10 observations (Quito, 1587, Intensity >=VI) and 117 (Riobamba, 1797, Intensity >=III). The bootstrap resampling technique is coupled to the B&W method to derive geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extent of the area delineating the intensity centre location at the 67 per cent confidence level (±1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching degrees IX, X and XI. Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding

  20. Intensity, magnitude, location and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, Walter; Hough, Susan; Martin, Stacey; Bilham, Roger

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly
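The Bakun and Wentworth (1997) method used in both of these intensity studies scores a grid of trial epicenters: at each trial point every intensity observation is inverted through the attenuation relation to a per-site magnitude, the trial magnitude is their mean, and the preferred intensity centre minimizes the spread of the per-site estimates. A schematic, unweighted version is sketched below; the attenuation coefficients are placeholders, not the calibrated Indian (or Californian) values.

```python
import math

# Placeholder intensity-attenuation coefficients (illustrative only):
#   I = A + B*M - C*log10(dist_km) - D*dist_km
A, B, C, D = 1.5, 1.7, 1.6, 0.003

def magnitude_from_intensity(intensity, dist_km):
    """Invert the attenuation relation for one site's magnitude estimate."""
    return (intensity - A + C * math.log10(max(dist_km, 1.0)) + D * dist_km) / B

def bakun_wentworth(obs, trial_points):
    """obs: list of (x_km, y_km, intensity); trial_points: candidate
    epicenters. Returns (best_point, magnitude, rms), minimizing the rms
    spread of the per-site magnitude estimates."""
    best = None
    for (tx, ty) in trial_points:
        mags = [magnitude_from_intensity(i, math.hypot(x - tx, y - ty))
                for (x, y, i) in obs]
        m = sum(mags) / len(mags)
        rms = math.sqrt(sum((mi - m) ** 2 for mi in mags) / len(mags))
        if best is None or rms < best[2]:
            best = ((tx, ty), m, rms)
    return best

# Synthetic check: intensities generated at a known epicenter and magnitude
# should be recovered by the grid search.
true_xy, true_m = (40.0, 60.0), 6.5
sites = [(0, 0), (100, 0), (0, 100), (80, 90), (150, 60)]
obs = [(x, y, A + B * true_m
        - C * math.log10(math.hypot(x - true_xy[0], y - true_xy[1]))
        - D * math.hypot(x - true_xy[0], y - true_xy[1]))
       for (x, y) in sites]
grid = [(gx, gy) for gx in range(0, 101, 10) for gy in range(0, 101, 10)]
best_xy, best_m, best_rms = bakun_wentworth(obs, grid)
```

The published method also distance-weights observations and bootstraps the observation set to draw the confidence contours discussed above; both refinements layer naturally on this skeleton.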

  2. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional M0 or regional Mw. Conceptually, the method combines an earthquake source model with attenuation and geometrical-spreading models that account for propagation, so that regional amplitudes of any phase and frequency can be used. Amplitudes are corrected to yield a source term from which the seismic moment can be estimated. Moment estimates from whatever phase amplitudes are observed, rather than from a predetermined set, can then be averaged to determine the magnitude robustly. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional M0 to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller ones. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find the magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields the more physically meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
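The final step of any moment-based method, recasting an averaged seismic moment as Mw, uses the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1) for M0 in N·m. A small helper is sketched below; the per-phase moment estimates are purely illustrative, not values from the paper.

```python
import math

def moment_to_mw(m0_newton_meters):
    """Hanks-Kanamori: Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def robust_mw(per_phase_moments):
    """Average per-phase/per-frequency moment estimates in log space
    (i.e., take the geometric mean), then convert once to Mw."""
    logs = [math.log10(m0) for m0 in per_phase_moments]
    return moment_to_mw(10.0 ** (sum(logs) / len(logs)))

# Illustrative: Pn, Pg, Sn, Lg amplitude-derived moments (N*m) for one event.
estimates = [3.2e17, 2.5e17, 4.1e17, 2.9e17]
mw = robust_mw(estimates)
```

Averaging in log space keeps a single outlier amplitude from dominating the estimate, which is the sense in which many-phase averaging is "robust" here.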

  3. Earthquake potential and magnitude limits inferred from a geodetic strain-rate model for southern Europe

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Bird, P.; Jackson, D. D.

    2016-04-01

    The project Seismic Hazard Harmonization in Europe (SHARE), completed in 2013, presents significant improvements over previous regional seismic hazard modeling efforts. The Global Strain Rate Map v2.1, sponsored by the Global Earthquake Model Foundation and built on a large set of self-consistent geodetic GPS velocities, was released in 2014. To check the SHARE seismic source models, which were based mainly on historical earthquakes and active fault data, we first evaluate the SHARE historical earthquake catalogues and demonstrate that the earthquake magnitudes are acceptable. Then, we construct an earthquake potential model using the Global Strain Rate Map data. SHARE models provided parameters from which magnitude-frequency distributions can be specified for each of 437 seismic source zones covering most of Europe. Because we are interested in proposed magnitude limits, and the original zones had insufficient data for accurate estimates, we combine zones into five groups according to SHARE's estimates of maximum magnitude. Using the strain rates, we calculate tectonic moment rates for each group. Next, we infer seismicity rates from the tectonic moment rates and compare them with historical and SHARE seismicity rates. For two of the groups, the tectonic moment rates are higher than the seismic moment rates of the SHARE models. Consequently, the rates of large earthquakes forecast by the SHARE models are lower than those inferred from the tectonic moment rate. In fact, the SHARE models forecast higher seismicity rates than the historical rates, which indicates that the authors of SHARE were aware of potentially higher seismic activity in these zones. For one group, the tectonic moment rate is lower than the seismic moment rate forecast by the SHARE models. As a result, the rates of large earthquakes in that group forecast by the SHARE model are higher than those inferred from the tectonic moment rate, but lower than what the historical data show. For the other two
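The strain-rate-to-moment-rate conversion underlying such comparisons is commonly done with the Savage and Simpson (1997) scalar, Ṁ0 = 2μHA·max(|ε̇1|, |ε̇2|, |ε̇1 + ε̇2|), where μ is the shear modulus, H the seismogenic thickness, A the cell area, and ε̇1, ε̇2 the principal horizontal strain rates. A sketch with made-up values (not SHARE or GSRM numbers):

```python
import math

def tectonic_moment_rate(eps1, eps2, area_m2, thickness_m=15e3, mu_pa=3.0e10):
    """Savage & Simpson (1997) scalar moment rate (N*m per yr) for one cell.
    eps1, eps2: principal horizontal strain rates in 1/yr."""
    return 2.0 * mu_pa * thickness_m * area_m2 * max(
        abs(eps1), abs(eps2), abs(eps1 + eps2))

def equivalent_annual_mw(moment_rate):
    """Mw of a single event releasing one year's accumulated moment
    (Hanks-Kanamori, moment in N*m)."""
    return (2.0 / 3.0) * (math.log10(moment_rate) - 9.1)

# Illustrative cell: 100 km x 100 km, seismogenic thickness 15 km,
# principal strain rates +3e-8 and -1e-8 per year (values made up).
rate = tectonic_moment_rate(3e-8, -1e-8, area_m2=1e10)
mw_eq = equivalent_annual_mw(rate)
```

Summing such cell rates over a zone group and partitioning the budget across a magnitude-frequency distribution gives the "seismicity rates inferred from tectonic moment rates" that the abstract compares against the SHARE forecasts.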

  4. Foreshocks Are Not Predictive of Future Earthquake Size

    NASA Astrophysics Data System (ADS)

    Page, M. T.; Felzer, K. R.; Michael, A. J.

    2014-12-01

    The standard model for the origin of foreshocks is that they are earthquakes that trigger aftershocks larger than themselves (Reasenberg and Jones, 1989). This can be formally expressed in terms of a cascade model. In this model, aftershock magnitudes follow the Gutenberg-Richter magnitude-frequency distribution, regardless of the size of the triggering earthquake, and aftershock timing and productivity follow Omori-Utsu scaling. An alternative hypothesis is that foreshocks are triggered incidentally by a nucleation process, such as pre-slip, that scales with mainshock size. If this were the case, foreshocks would potentially have predictive power of the mainshock magnitude. A number of predictions can be made from the cascade model, including the fraction of earthquakes that are foreshocks to larger events, the distribution of differences between foreshock and mainshock magnitudes, and the distribution of time lags between foreshocks and mainshocks. The last should follow the inverse Omori law, which will cause the appearance of an accelerating seismicity rate if multiple foreshock sequences are stacked (Helmstetter and Sornette, 2003). All of these predictions are consistent with observations (Helmstetter and Sornette, 2003; Felzer et al. 2004). If foreshocks were to scale with mainshock size, this would be strong evidence against the cascade model. Recently, Bouchon et al. (2013) claimed that the expected acceleration in stacked foreshock sequences before interplate earthquakes is higher prior to M≥6.5 mainshocks than smaller mainshocks. Our re-analysis fails to support the statistical significance of their results. In particular, we find that their catalogs are not complete to the level assumed, and their ETAS model underestimates inverse Omori behavior. To conclude, seismicity data to date is consistent with the hypothesis that the nucleation process is the same for earthquakes of all sizes.

  5. Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake

    USGS Publications Warehouse

    Hill, D.P.; Reasenberg, P.A.; Michael, A.; Arabaz, W.J.; Beroza, G.; Brumbaugh, D.; Brune, J.N.; Castro, R.; Davis, S.; Depolo, D.; Ellsworth, W.L.; Gomberg, J.; Harmsen, S.; House, L.; Jackson, S.M.; Johnston, M.J.S.; Jones, L.; Keller, Rebecca Hylton; Malone, S.; Munguia, L.; Nava, S.; Pechmann, J.C.; Sanford, A.; Simpson, R.W.; Smith, R.B.; Stark, M.; Stickney, M.; Vidal, A.; Walter, S.; Wong, V.; Zollweg, J.

    1993-01-01

    The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma).

  6. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  7. Reconstructing the magnitude for Earth's greatest earthquakes with microfossil measures of sudden coastal subsidence

    NASA Astrophysics Data System (ADS)

    Engelhart, S. E.; Horton, B. P.; Nelson, A. R.; Wang, K.; Wang, P.; Witter, R. C.; Hawkes, A.

    2012-12-01

    Tidal marsh sediments in estuaries along the Cascadia coast archive stratigraphic evidence of Holocene great earthquakes (magnitude 8-9) that record abrupt relative sea-level (RSL) changes. Quantitative microfossil-based RSL reconstructions produce precise estimates of sudden coastal subsidence or uplift during great earthquakes because of the strong relationship between species distributions and elevation within the intertidal zone. We have developed a regional foraminiferal-based transfer function that is validated against simulated coseismic subsidence from a marsh transplant experiment, demonstrating accuracy to within 5 cm. Two case studies demonstrate the utility of high-precision microfossil-based RSL reconstructions at the Cascadia subduction zone. One approach in early Cascadia paleoseismic research was to describe the stratigraphic evidence of the great AD 1700 earthquake and then assume that earlier earthquakes were of similar magnitude. All but the most recent (transfer function) estimates of the amount of coseismic subsidence at Cascadia are too imprecise (errors of >±0.5 m) to distinguish, for example, coseismic from postseismic land-level movements, or to infer differences in amounts of subsidence or uplift from one earthquake cycle to the next. Reconstructions of RSL rise from stratigraphic records at multiple locations for the four most recent earthquake cycles show variability in the amount of coseismic subsidence. The penultimate earthquake at Siletz Bay, around 800 to 900 years ago, produced one-third of the coseismic subsidence of AD 1700. Most earthquake rupture models have used a uniform-slip distribution along the megathrust to explain poorly constrained paleoseismic estimates of coastal subsidence during the AD 1700 Cascadia earthquake. Here, we test models of heterogeneous slip for the AD 1700 Cascadia earthquake that are similar to the slip distributions inferred for instrumentally recorded great subduction earthquakes worldwide. We use

  8. Rock friction and its implications for earthquake prediction examined via models of Parkfield earthquakes.

    PubMed Central

    Tullis, T E

    1996-01-01

    The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact, the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks. PMID:11607668

  10. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M>=6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
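The two equally weighted relations can be written compactly: Ellsworth-B is Mw = log10 A + 4.2, and Hanks and Bakun is bilinear with a slope change near A = 537 km². The coefficients below are as commonly quoted in the literature; treat them as assumptions and verify against the cited papers before reuse.

```python
import math

def ellsworth_b(area_km2):
    """Ellsworth-B (WGCEP, 2003): Mw = log10(A) + 4.2, A in km^2."""
    return math.log10(area_km2) + 4.2

def hanks_bakun(area_km2):
    """Hanks & Bakun bilinear relation with a slope change at A = 537 km^2.
    Coefficients as commonly quoted; check the original papers."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

def weighted_mw(area_km2):
    """Equal weighting of the two relations, as in Earthquake Rate Model 2."""
    return 0.5 * (ellsworth_b(area_km2) + hanks_bakun(area_km2))

# Example: a 100 km long rupture with a 12 km down-dip width W.
area = 100.0 * 12.0     # A = L * W, in km^2
mw = weighted_mw(area)
```

Note how the Hanks-Bakun branch steepens above the transition area, which is the area-to-length scaling change the abstract attributes to large continental transform ruptures.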

  11. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use both the original catalog, to which no declustering method has been applied, and a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence limits are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals calculated for the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone, at reasonable levels of confidence, is almost impossible.
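The core computation can be illustrated with a simplified confidence construction for the doubly truncated Gutenberg-Richter model (the Holschneider et al. derivation differs in detail). Given n magnitudes above m0, a trial m_max is retained while the probability of observing no event above the observed maximum is at least α = 1 − confidence; because that probability has a positive limit as m_max → ∞, the upper limit can fail to exist, which is exactly the "unbounded interval" situation described above.

```python
import math

def trunc_gr_cdf(m, m0, mmax, b):
    """CDF of the doubly truncated Gutenberg-Richter law on [m0, mmax]."""
    num = 1.0 - 10.0 ** (-b * (m - m0))
    den = 1.0 - 10.0 ** (-b * (mmax - m0))
    return num / den

def mmax_upper_limit(m_obs_max, n, m0, b=1.0, confidence=0.95):
    """Upper confidence limit for m_max: the largest trial m_max still
    consistent (at level `confidence`) with seeing no event above
    m_obs_max in n samples. Returns None when the interval is unbounded."""
    alpha = 1.0 - confidence
    # Coverage probability in the limit m_max -> infinity:
    p_inf = (1.0 - 10.0 ** (-b * (m_obs_max - m0))) ** n
    if p_inf >= alpha:
        return None  # unbounded interval: the data carry no upper bound
    lo, hi = m_obs_max, m_obs_max + 50.0
    for _ in range(200):  # bisection on the monotone coverage probability
        mid = 0.5 * (lo + hi)
        if trunc_gr_cdf(m_obs_max, m0, mid, b) ** n >= alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Many events tighten the bound; demanding high confidence from few events
# leaves it unbounded (all values illustrative).
bounded = mmax_upper_limit(m_obs_max=6.5, n=500, m0=4.0, confidence=0.60)
unbounded = mmax_upper_limit(m_obs_max=6.5, n=50, m0=4.0, confidence=0.95)
```

The second call returning "unbounded" for a modest catalog at 95% confidence mirrors the abstract's conclusion that m_max cannot be pinned down from catalog data alone at reasonable confidence levels.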

  12. Statistical relations among earthquake magnitude, surface rupture length, and surface fault displacement

    USGS Publications Warehouse

    Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.

    1984-01-01

    In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating M with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set.
Regression of MS on rupture area did not result in a marked improvement
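The recommended regressions take the form M = a + b·log10 L (or log10 D), fitted by ordinary least squares since the stochastic variance dominates the measurement errors. A minimal OLS sketch, with made-up (length, magnitude) pairs that are not the Bonilla et al. data set:

```python
import math

def ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Made-up surface rupture lengths (km) and surface-wave magnitudes,
# only to exercise the regression.
lengths_km = [10.0, 25.0, 60.0, 150.0, 300.0]
mags = [6.1, 6.6, 7.0, 7.5, 7.9]

a, b = ols([math.log10(L) for L in lengths_km], mags)

def predict(length_km):
    """Magnitude predicted from rupture length via M = a + b*log10(L)."""
    return a + b * math.log10(length_km)
```

The standard deviation of the residuals about this line, not the measurement error, is then the appropriate uncertainty to quote for a prehistoric or future earthquake.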

  13. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, the historical earthquakes of South Korea reveal that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate the potential seismic hazard in the Korean Peninsula, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. A total of 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of broadband seismograms shows less scatter than the peak velocity; for accelerograms, the scatter of the peak displacement and that of the peak velocity are similar. The peak displacement of seismograms differs from that of accelerograms, which means that separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave is estimated after applying low-pass filters at 3 Hz and 10 Hz; the 10 Hz filter yields better estimates than the 3 Hz filter. It is found that the peak amplitude and the maximum predominant period can usually be estimated within 1 s of triggering.

  14. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5 ≤ Mw ≤ 7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (the magnitude estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
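The quoted scaling follows from Top tracking source duration: under scale-invariant stress drop, M0 scales with duration cubed, so Mw = (2/3)log10 M0 + const gives Mw = 2 log10 Top + C. A sketch of the estimator is below; the constant C is a placeholder to be calibrated by regression against a regional data set, not the paper's value.

```python
import math

# Placeholder calibration constant (in practice fitted by regressing Mw
# against Top over 5 <= Mw <= 7).
C = 6.4

def mw_from_top(top_seconds):
    """Magnitude from the onset-to-peak time of high-frequency amplitude:
    Mw = 2*log10(Top) + C, the scaling expected if Top is proportional to
    source duration and stress drop is scale invariant."""
    return 2.0 * math.log10(top_seconds) + C

def running_estimate(onset_s, peak_times_s):
    """Early-warning style: re-estimate M as later, larger peaks arrive,
    each new peak extending the provisional Top."""
    return [mw_from_top(t - onset_s) for t in peak_times_s]

# A tenfold increase in Top raises the estimate by two magnitude units.
m_small = mw_from_top(2.0)    # Top = 2 s
m_large = mw_from_top(20.0)   # Top = 20 s
```

The running form is what makes the measure useful for giant events: the estimate can only grow as the rupture continues, converging on the final magnitude once the true peak has arrived.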

  15. The role of the Federal government in the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Filson, J.R.

    1988-01-01

    Earthquake prediction research in the United States is carried out under the aegis of the National Earthquake Hazards Reduction Act of 1977. One of the objectives of the act is "the implementation in all areas of high or moderate seismic risk, of a system (including personnel and procedures) for predicting damaging earthquakes and for identifying, evaluating, and accurately characterizing seismic hazards." Among the four Federal agencies working under the 1977 act, the U.S. Geological Survey (USGS) is responsible for earthquake prediction research and technological implementation. The USGS has adopted a goal that is stated quite simply: predict the time, place, and magnitude of damaging earthquakes. The Parkfield earthquake prediction experiment represents the most concentrated and visible effort to date to test progress toward this goal.

  16. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
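
    The quoted significance levels follow from a standard binomial argument, sketched below: under the null hypothesis that target events fall into alarms at random, the hit count is Binomial(N, tau), with tau the alarm fraction of the space-time volume. The numbers mirror the abstract (5 of 5 M8.0+ events in 36% of the volume for M8; 4 of 5 in 18% for MSc); the function itself is a generic illustration, not the authors' exact measure.

```python
from math import comb

# Sketch of the standard significance test for alarm-based prediction:
# hits under the null hypothesis are Binomial(n_total, tau), where tau is
# the fraction of the space-time volume occupied by alarms.
def alarm_significance(n_hit, n_total, tau):
    """Confidence level 1 - P(X >= n_hit) for X ~ Binomial(n_total, tau)."""
    p_tail = sum(comb(n_total, k) * tau**k * (1 - tau)**(n_total - k)
                 for k in range(n_hit, n_total + 1))
    return 1.0 - p_tail

print(alarm_significance(5, 5, 0.36))  # M8: 5 of 5 hits in 36% of the volume
print(alarm_significance(4, 5, 0.18))  # MSc: 4 of 5 hits in 18% of the volume
```

    Both numbers come out above 0.99, consistent with the "beyond 99%" claim in the abstract.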

  17. Analytical Conditions for Compact Earthquake Prediction Approaches

    NASA Astrophysics Data System (ADS)

    Sengor, T.

    2009-04-01

    The atmosphere and ionosphere include non-uniform electric charge and current distributions during earthquake activity. These charges and currents move irregularly when an earthquake is impending. The electromagnetic characteristics of the region over the earth change in domains where irregular transport of non-uniform electric charges is observed; therefore, the electromagnetics of plasma that moves irregularly and contains non-uniform charge distributions is studied. Such plasmas are called irregular and non-uniform plasmas; an irregular, non-uniform plasma is called a seismo-plasma if it corresponds to a real earthquake activity that will come to pass. Some signals involving the above-mentioned coupling effects generate analytical conditions giving the predictability of seismic processes [1]-[5]. These conditions are discussed in this paper. References: [1] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes," IUGG Perugia 2007. [2] T. Sengor, "The electromagnetic device optimization modeling of seismo-electromagnetic processes for Marmara Sea earthquakes," EGU 2008. [3] T. Sengor, "On the exact interaction mechanism of electromagnetically generated phenomena with significant earthquakes and the observations related the exact predictions before the significant earthquakes at July 1999-May 2000 period," Helsinki Univ. Tech. Electrom. Lab. Rept. 368, May 2001. [4] T. Sengor, "The Observational Findings Before The Great Earthquakes Of December 2004 And The Mechanism Extraction From Associated Electromagnetic Phenomena," Book of XXVIIIth URSI GA 2005, pp. 191, EGH.9 (01443) and Proceedings 2005 CD, New Delhi, India, Oct. 23-29, 2005. [5] T. Sengor, "The interaction mechanism among electromagnetic phenomena and geophysical-seismic-ionospheric phenomena with extraction for exact earthquake prediction genetics," 10

  18. Numerical simulation of the Kamaishi repeating earthquake sequence: Change in magnitude due to the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Yoshida, Shingo; Kato, Naoyuki; Fukuda, Jun'ichi

    2015-05-01

    We conducted numerical simulations of a repeating earthquake sequence on the plate boundary off the shore of Kamaishi, northeastern Japan, assuming rate- and state-dependent friction laws. Uchida et al. (2014) reported that the Kamaishi repeating earthquakes showed an increase in magnitude and a decrease in recurrence interval after the 2011 Tohoku-oki earthquake (M9), but an approximately constant magnitude (M ~ 4.9) and a regular recurrence interval (~ 5.5 years) before the Tohoku-oki earthquake. A M5.9 event occurred just after the M9 event and was followed by a M5.5 event. We considered a fault patch of velocity-weakening friction, with frictional parameters leading to seismic slip confined to the central part of the patch. Afterslip due to the M9 event was incorporated in the model to increase the loading rate on the patch. The simulation successfully reproduced the increasing magnitude and decreasing recurrence time caused by the afterslip. A M6-class event, in which seismic slip spread over the entire area of the patch, occurred just after the M9 event for both the aging law and the Nagata law. When we assumed the aging law with frictional parameters near the boundary between slip on the entire patch and slip confined to its central part, the M6-class event was followed by a M5.5-class event. Furthermore, we examined a conditionally stable large patch that contained a small unstable patch. This model also reproduced a M6-class event after the M9 event. In these models, stress outside the confined area of the patch is released before a dynamic event under a constant low loading rate, whereas the stress perturbation due to afterslip within the seismic cycle induces a dynamic event on the entire patch.
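
    Two ingredients of such rate- and state-dependent simulations can be sketched in a few lines: the steady-state friction law (velocity weakening when b > a, the condition for stick-slip on the patch) and the aging law for state evolution. All parameter values below are illustrative assumptions, not those used in the study.

```python
from math import log

# Sketch of two rate-and-state ingredients with illustrative parameters.
def mu_ss(v, mu0=0.6, a=0.008, b=0.012, v0=1e-6):
    """Steady-state friction mu0 + (a - b)*ln(v/v0); b > a weakens with v."""
    return mu0 + (a - b) * log(v / v0)

def aging_law_step(theta, v, Dc, dt):
    """Explicit Euler step of the aging law: d(theta)/dt = 1 - v*theta/Dc."""
    return theta + dt * (1.0 - v * theta / Dc)

print(mu_ss(1e-6), mu_ss(1e-3))   # friction drops at 1000x faster slip

theta, v, Dc = 1.0, 1e-6, 1e-4    # state (s), slip rate (m/s), Dc (m)
for _ in range(100_000):
    theta = aging_law_step(theta, v, Dc, dt=1.0)
print(theta)                      # relaxes toward the steady value Dc/v = 100 s
```

    In a full simulation these pieces are coupled to elastic loading on the patch; increasing the loading rate (as afterslip does) shortens the recurrence interval, which is the effect the study reproduces.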

  19. Intermediate- and long-term earthquake prediction.

    PubMed Central

    Sykes, L R

    1996-01-01

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study. PMID:11607658

  20. Intermediate- and long-term earthquake prediction.

    PubMed

    Sykes, L R

    1996-04-30

    Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study.

  1. Seismomagnetic observation during the 8 July 1986 magnitude 5.9 North Palm Springs earthquake

    USGS Publications Warehouse

    Johnston, M.J.S.; Mueller, R.J.

    1987-01-01

    A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period.

  2. Seismomagnetic observation during the 8 July 1986 magnitude 5.9 North Palm Springs earthquake.

    PubMed

    Johnston, M J; Mueller, R J

    1987-09-04

    A differentially connected array of 24 proton magnetometers has operated along the San Andreas fault since 1976. Seismomagnetic offsets of 1.2 and 0.3 nanotesla were observed at epicentral distances of 3 and 9 kilometers, respectively, after the 8 July 1986 magnitude 5.9 North Palm Springs earthquake. These seismomagnetic observations are the first obtained of this elusive but long-anticipated effect. The data are consistent with a seismomagnetic model of the earthquake for which right-lateral rupture of 20 centimeters is assumed on a 16-kilometer segment of the Banning fault between the depths of 3 and 10 kilometers in a region with average magnetization of 1 ampere per meter. Alternative explanations in terms of electrokinetic effects and earthquake-generated electrostatic charge redistribution seem unlikely because the changes are permanent and complete within a 20-minute period.

  3. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
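
    The departure-time idea can be sketched schematically: early P displacements of events of different sizes follow a common growth curve, and an event departs (saturates) earlier the smaller it is, so Tdp tracks the final magnitude. The growth model and threshold below are assumptions for illustration, not the paper's measured curves.

```python
import numpy as np

# Hedged sketch of detecting the departure time Tdp from a common
# similar-growth curve. Growth model and threshold are illustrative.
def departure_time(t, disp, reference, tol=0.2):
    """First time |disp| falls below (1 - tol) times the reference curve."""
    departed = np.abs(disp) < (1.0 - tol) * reference
    return t[np.argmax(departed)]

t = np.linspace(0.01, 10.0, 1000)
reference = t ** 2                   # assumed common similar-growth curve
small = np.minimum(t ** 2, 1.0)      # smaller event saturates at amplitude 1
large = np.minimum(t ** 2, 25.0)     # larger event saturates later
print(departure_time(t, small, reference),
      departure_time(t, large, reference))  # small event departs earlier
```

    Because Tdp is observed before rupture completes, an estimator built on it can update the magnitude while the event is still in progress, which is the early-warning value the abstract emphasizes.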

  4. HYPOELLIPSE; a computer program for determining local earthquake hypocentral parameters, magnitude, and first-motion pattern

    USGS Publications Warehouse

    Lahr, John C.

    1999-01-01

    This report provides Fortran source code and program manuals for HYPOELLIPSE, a computer program for determining hypocenters and magnitudes of near regional earthquakes and the ellipsoids that enclose the 68-percent confidence volumes of the computed hypocenters. HYPOELLIPSE was developed to meet the needs of U.S. Geological Survey (USGS) scientists studying crustal and sub-crustal earthquakes recorded by a sparse regional seismograph network. The program was extended to locate hypocenters of volcanic earthquakes recorded by seismographs distributed on and around the volcanic edifice, at elevations above and below the hypocenter. HYPOELLIPSE was used to locate events recorded by the USGS southern Alaska seismograph network from October 1971 to the early 1990s. Both UNIX and PC/DOS versions of the source code of the program are provided along with sample runs.

  5. Earthquake Magnitude Prediction Using Artificial Neural Network in Northern Red Sea Area

    NASA Astrophysics Data System (ADS)

    Alarifi, A. S.; Alarifi, N. S.

    2009-12-01

    Earthquakes are natural hazards that do not happen very often; however, they may cause huge losses of life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16,000 events cover a time span of 1970 to 2009, with magnitudes ranging from greater than 0 to less than 7.2 and depths from greater than 0 to less than 100 km. We propose a new artificial-intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a feedforward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for other areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as moving averages over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression, with a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods: it achieves an average absolute error of 2.6%, compared with 3.8%, 7.3%, and 6.17% for the moving average, linear regression, and cubic regression, respectively. In this work, we show an analysis
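
    The baseline comparison described above can be sketched on a synthetic catalog: predict the next event's magnitude from a sliding window, either by moving average or by extrapolating a fitted line, and score with mean absolute error. The synthetic series and window length are illustrative stand-ins, not the Red Sea data or the paper's models.

```python
import numpy as np

# Hedged sketch of window-based magnitude baselines on a synthetic series.
rng = np.random.default_rng(1)
mags = 3.0 + 0.8 * rng.standard_normal(2000)  # synthetic magnitude series
w = 10                                        # sliding-window length

def moving_average_pred(x, w):
    """Predict the next magnitude as the mean of the previous w events."""
    return np.array([x[i - w:i].mean() for i in range(w, len(x))])

def linear_trend_pred(x, w):
    """Predict by extrapolating a line fit to the previous w events."""
    t = np.arange(w)
    preds = []
    for i in range(w, len(x)):
        slope, intercept = np.polyfit(t, x[i - w:i], 1)
        preds.append(slope * w + intercept)   # one step beyond the window
    return np.array(preds)

actual = mags[w:]
mae_ma = np.mean(np.abs(moving_average_pred(mags, w) - actual))
mae_lt = np.mean(np.abs(linear_trend_pred(mags, w) - actual))
print(f"moving average MAE: {mae_ma:.3f}, linear trend MAE: {mae_lt:.3f}")
```

    On an uncorrelated series the extrapolated trend amplifies noise, so the moving average scores better; a learned model is worthwhile only if it beats such simple baselines, which is the comparison the abstract reports.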

  6. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors; Journal compilation © 2006 RAS.
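
    The Bakun & Wentworth-style grid search can be sketched as follows: each trial epicenter converts the site intensities to magnitudes through an attenuation model, and the preferred location minimizes the spread of those site magnitudes. The model form I = a*M - c*log10(r) and its coefficients below are illustrative assumptions, not the French regional models.

```python
import numpy as np

# Hedged sketch of a grid search for the intensity magnitude and epicenter.
# Attenuation coefficients and the synthetic event are illustrative only.
def site_magnitudes(trial, sites, intensities, a=1.68, c=3.0):
    """Per-site magnitudes from an assumed model I = a*M - c*log10(r)."""
    r = np.hypot(sites[:, 0] - trial[0], sites[:, 1] - trial[1]) + 1.0
    return (intensities + c * np.log10(r)) / a

sites = np.array([[0.0, 30.0], [40.0, 0.0], [-30.0, -20.0], [25.0, 25.0]])
true_M = 6.0                          # synthetic event at the origin
r_true = np.hypot(sites[:, 0], sites[:, 1]) + 1.0
intensities = 1.68 * true_M - 3.0 * np.log10(r_true)

# Trial point minimizing the spread of site magnitudes; its mean is M_I:
best = min(((x, y) for x in range(-50, 51, 5) for y in range(-50, 51, 5)),
           key=lambda p: site_magnitudes(p, sites, intensities).std())
print(best, site_magnitudes(best, sites, intensities).mean())
```

    Bootstrap resampling of the intensity assignments, repeated through this same search, is what produces the study's reproducible confidence regions for location.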

  7. Prospects for earthquake prediction and control

    USGS Publications Warehouse

    Healy, J.H.; Lee, W.H.K.; Pakiser, L.C.; Raleigh, C.B.; Wood, M.D.

    1972-01-01

    The San Andreas fault is viewed, according to the concepts of seafloor spreading and plate tectonics, as a transform fault that separates the Pacific and North American plates and along which relative movements of 2 to 6 cm/year have been taking place. The resulting strain can be released by creep, by earthquakes of moderate size, or (as near San Francisco and Los Angeles) by great earthquakes. Microearthquakes, as mapped by a dense seismograph network in central California, generally coincide with zones of the San Andreas fault system that are creeping. Microearthquakes are few and scattered in zones where elastic energy is being stored. Changes in the rate of strain, as recorded by tiltmeter arrays, have been observed before several earthquakes of about magnitude 4. Changes in fluid pressure may control timing of seismic activity and make it possible to control natural earthquakes by controlling variations in fluid pressure in fault zones. An experiment in earthquake control is underway at the Rangely oil field in Colorado, where the rates of fluid injection and withdrawal in experimental wells are being controlled. © 1972.

  8. A moment-tensor catalog for intermediate magnitude earthquakes in Mexico

    NASA Astrophysics Data System (ADS)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Martínez-Peláez, Liliana; Franco, Sara; Iglesias Mendoza, Arturo

    2016-04-01

    Located among five tectonic plates, Mexico is one of the world's most seismically active regions. The earthquake focal mechanisms provide important information on the active tectonics. A widespread technique for estimating the earthquake magnitude and focal mechanism is the inversion for the moment tensor, obtained by minimizing a misfit function that estimates the difference between synthetic and observed seismograms. An important element in the estimation of the moment tensor is an appropriate velocity model, which allows for the calculation of accurate Green's functions so that the differences between observed and synthetic seismograms are due to the source of the earthquake rather than the velocity model. However, calculating accurate synthetic seismograms gets progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes (M>5.0) excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these events, using 1D velocity models to compute Green's functions works well, and they are well characterized by seismic moment tensors reported in global catalogs (e.g., USGS fast moment tensor solutions and GCMT). The opposite occurs for small and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle. Accurately modeling the Green's functions for the smaller events in a large heterogeneous area requires 3D or regionalized 1D models. To obtain a rapid estimate of earthquake magnitude, the National Seismological Survey in Mexico (Servicio Sismológico Nacional, SSN) automatically calculates seismic moment tensors for events in the Mexican territory (Franco et al., 2002; Nolasco-Carteño, 2006). However, for intermediate-magnitude and small earthquakes the signal-to-noise ratio can be low at many of the seismic stations, and without careful selection and filtering of the data, obtaining a stable focal mechanism

  9. Reported geomagnetic and ionospheric precursors to earthquakes: Summary, reanalysis, and implications for short-term prediction

    NASA Astrophysics Data System (ADS)

    Thomas, J. N.; Masci, F.; Love, J. J.; Johnston, M. J.

    2012-12-01

    Earthquakes are one of the most devastating natural phenomena on Earth, causing high death tolls and large financial losses each year. If precursory signals could be regularly and reliably identified, then the hazardous effects of earthquakes might be mitigated. Unfortunately, it is not at all clear that short-term earthquake prediction is either possible or practical, and the entire subject remains controversial. Still, many claims of successful earthquake precursor observations have been published, and among these are reports of geomagnetic and ionospheric anomalies prior to earthquake occurrence. Given the importance of earthquake prediction, reports of earthquake precursors need to be analyzed and checked for reliability and reproducibility. We have done this for numerous such reports, including the Loma Prieta, Guam, Hector Mine, Tohoku, and L'Aquila earthquakes. We have found that these reported earthquake precursors: 1) often lack time series observations from long before and long after the earthquakes and near and far from the earthquakes, 2) are not statistically correlated with the earthquakes and do not relate to the earthquake source mechanisms, 3) are not followed by similar, but much larger, signals during the subsequent earthquake when the primary energy release occurs, 4) are nonuniform in that they occur at different spatial and temporal regimes relative to the earthquakes and with different magnitudes and frequencies, and 5) can often be explained by other non-earthquake related mechanisms or normal geomagnetic activity. Thus we conclude that these reported precursors could not be used to predict the time or location of the earthquakes. Based on our findings, we suggest a protocol for examining precursory reports, something that will help guide future research in this area.

  10. Frequency-magnitude statistics and spatial correlation dimensions of earthquakes at Long Valley caldera, California

    USGS Publications Warehouse

    Barton, D.J.; Foulger, G.R.; Henderson, J.R.; Julian, B.R.

    1999-01-01

    Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.
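
    The b-value contrast drawn above is conventionally estimated with the Aki (1965) maximum-likelihood formula, b = log10(e) / (mean(M) - Mc). The sketch below applies it to two synthetic samples that stand in for the hydrothermal zone (high b) and the mature South Moat fault (low b); they are illustrations, not the Long Valley catalog.

```python
import numpy as np

# Hedged sketch of the maximum-likelihood b-value estimate on synthetic data.
def b_value(mags, mc):
    """Aki (1965) maximum-likelihood b-value for magnitudes above Mc."""
    return np.log10(np.e) / (np.mean(mags) - mc)

rng = np.random.default_rng(2)
mc = 1.0                             # assumed completeness magnitude
# Gutenberg-Richter magnitudes are exponential above Mc; sample b=1.3, b=0.7:
swarm = mc + rng.exponential(np.log10(np.e) / 1.3, 5000)
fault = mc + rng.exponential(np.log10(np.e) / 0.7, 5000)
print(b_value(swarm, mc), b_value(fault, mc))  # near 1.3 and 0.7
```

    High b means small events dominate, as on the immature, finely fractured structure; low b means relatively more large events, as on the mature fault plane.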

  11. Artificial neural network model for earthquake prediction with radon monitoring.

    PubMed

    Külahci, Fatih; Inceöz, Murat; Doğru, Mahmut; Aksoy, Ercan; Baykara, Oktay

    2009-01-01

    Apart from linear monitoring studies of the relationship between radon and earthquakes, an artificial neural network (ANN) model approach is presented, starting from the non-linear changes of eight different parameters during earthquake occurrence. A three-layer feedforward network trained with the Levenberg-Marquardt learning algorithm is used to model the earthquake prediction process in the East Anatolian Fault System (EAFS). The proposed ANN system employs an individual training strategy with fixed-weight and supervised models leading to estimations. The average relative error between the magnitudes of the earthquakes acquired by the ANN and the measured data is about 2.3%. The relative error between the test and earthquake data varies between 0% and 12%. In addition, factor analysis was applied to all data and the model output values to examine the statistical variation. A total variance of 80.18% was explained by four factors in this analysis. Consequently, it can be concluded that the ANN approach is a potential alternative to other models involving complex mathematical operations.
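
    The evaluation metric quoted above, the average relative error between estimated and measured magnitudes, can be written directly. The two arrays are hypothetical placeholders, not the EAFS measurements or the network's outputs.

```python
import numpy as np

# Sketch of the average-relative-error metric on hypothetical magnitudes.
def avg_relative_error(measured, estimated):
    """Mean of |estimated - measured| / measured, as a percentage."""
    return float(np.mean(np.abs(estimated - measured) / measured) * 100.0)

measured = np.array([4.1, 3.7, 5.2, 4.8])    # hypothetical magnitudes
estimated = np.array([4.0, 3.8, 5.1, 4.9])   # hypothetical model outputs
print(avg_relative_error(measured, estimated))
```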

  12. Intermediate-term prediction in advance of the Loma Prieta earthquake

    SciTech Connect

    Keilis-Borok, V.I.; Kossobokov, V.; Rotvain, I. ); Knopoff, L. )

    1990-08-01

    The Loma Prieta earthquake of October 17, 1989 was predicted by the use of two pattern recognition algorithms, CN and M8. The prediction with algorithm CN was that an earthquake with magnitude greater than or equal to 6.4 was expected to occur in a roughly four-year interval starting in midsummer 1986 in a polygonal spatial window of approximate average dimensions 600 × 450 km, encompassing Northern California and Northern Nevada. The prediction with algorithm M8 was that an earthquake with magnitude greater than or equal to 7.0 was expected to occur within 5 to 7 years after 1985, in a spatial window of approximate average dimensions 800 × 560 km. The predictions were communicated in advance of the earthquake. In previous, mainly retrospective applications of these algorithms, successful predictions occurred in about 80% of the cases.

  13. Physically based prediction of earthquake induced landsliding

    NASA Astrophysics Data System (ADS)

    Marc, Odin; Meunier, Patrick; Hovius, Niels; Gorum, Tolga; Uchida, Taro

    2015-04-01

    Earthquakes are an important trigger of landslides and can contribute significantly to sedimentary or organic matter fluxes. We present a new physically based expression for the prediction of total area and volume of populations of earthquake-induced landslides. This model implements essential seismic processes, linking key parameters such as ground acceleration, fault size, earthquake source depth and seismic moment. To assess the model we have compiled and normalized a database of landslide inventories for 40 earthquakes. We have found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict within a factor of 2 the landslide areas and associated volumes for about two thirds of the cases in our database. This is a significant improvement on a previously published empirical expression based only on earthquake moment, even though the prediction of total landslide area is more difficult than that of volume because it is affected by additional parameters such as the depth and continuity of soil cover. Some outliers in terms of observed landslide intensity are likely to be associated with exceptional rock mass properties in the epicentral area. Others may be related to seismic source complexities ignored by the model. However, most cases in our catalogue seem to be relatively unaffected by these two effects despite the variety of lithologies and tectonic settings they cover. This makes the model suitable for integration into landscape evolution models, and application to the assessment of secondary hazards and risks associated with earthquakes.

  14. Adjusting the M8 algorithm to earthquake prediction in the Iranian plateau

    NASA Astrophysics Data System (ADS)

    Mojarab, Masoud; Memarian, Hossein; Zare, Mehdi; Kossobokov, Vladimir

    2017-03-01

    Earthquake prediction is one of the challenging problems of seismology. The present study set up a routine prediction of major earthquakes in the Iranian plateau using a modification of the intermediate-term middle-range algorithm M8, whose original version has demonstrated high performance in a real-time Global Test over the last two decades. An investigation of the earthquake catalog covering the entire Iranian plateau through 2012 has shown that a modification of the M8 algorithm, adjusted for the rather low level of earthquake occurrence reported in the region, is capable of targeting magnitude 7.5+ events. The occurrence of the April 16, 2013, M7.7 Saravan and the September 24, 2013, M7.7 Awaran earthquakes at the time of writing this paper (14 months after the prediction preceding the Saravan earthquake) confirmed the results of the investigation and demonstrated the need for further studies in this region. Earlier tests, applying M8 over all of Iran, showed that the 2013 Saravan and Awaran earthquakes may precede a great earthquake of magnitude 8+ in the Makran region. To verify this statement, the M8 algorithm was applied once again to a catalog updated through September 2013. The result indicated that although the study region recently experienced two magnitude 7.5+ earthquakes, it remains prone to a major earthquake. The present study confirms the applicability of the M8 algorithm for predicting earthquakes in the Iranian plateau and establishes an opportunity for routine monitoring of seismic activity aimed at prediction of the largest earthquakes, which can play a significant role in mitigation of damage due to natural hazards.

  15. An application of earthquake prediction algorithm M8 in eastern Anatolia at the approach of the 2011 Van earthquake

    NASA Astrophysics Data System (ADS)

    Mojarab, Masoud; Kossobokov, Vladimir; Memarian, Hossein; Zare, Mehdi

    2015-07-01

    On 23 October 2011, an M7.3 earthquake near the Turkish city of Van killed more than 600 people, injured over 4000, and left about 60,000 homeless. It demolished hundreds of buildings and caused great damage to thousands of others in Van, Ercis, Muradiye, and Çaldıran. The earthquake's epicenter is located about 70 km from that of a preceding M7.3 earthquake that occurred in November 1976, which destroyed several villages near the Turkey-Iran border and killed thousands of people. This study, by means of retrospective application of the M8 algorithm, checks whether the 2011 Van earthquake could have been predicted. The algorithm is based on pattern recognition of Times of Increased Probability (TIPs) of a target earthquake from the transient seismic sequence at lower magnitude ranges in a Circle of Investigation (CI). Specifically, we applied a modified M8 algorithm adjusted to the rather low level of earthquake detection in the region, following three different approaches to determining seismic transients. In the first approach, CI centers are placed at intersections of morphostructural lineaments recognized as prone to magnitude 7+ earthquakes. In the second approach, CI centers are placed at local extremes of the seismic density distribution, and in the third, CI centers are distributed uniformly on the nodes of a 1° × 1° grid. According to the results of the M8 algorithm application, the 2011 Van earthquake could have been predicted under any of the three approaches. We note that it is possible to consider the intersection of TIPs instead of their union to improve the certainty of the prediction results. Our study confirms the applicability of a modified version of the M8 algorithm for predicting earthquakes on the Iranian-Turkish plateau, as well as for mitigation of damage in seismic events, in which pattern recognition algorithms may play an important role.

  16. Stress drop in the sources of intermediate-magnitude earthquakes in northern Tien Shan

    NASA Astrophysics Data System (ADS)

    Sycheva, N. A.; Bogomolov, L. M.

    2014-05-01

    The paper is devoted to estimating the dynamic parameters of 14 earthquakes of intermediate magnitude (energy class 11 to 14) that occurred in the northern Tien Shan. To obtain estimates of these parameters, including the stress drop, which could then be applied in crustal stress reconstruction by the technique suggested by Yu.L. Rebetsky (Schmidt Institute of Physics of the Earth, Russian Academy of Sciences), we improved the algorithms and programs for calculating the spectra of the seismograms. The updated routines account for site responses and for spectral transformations during the propagation of seismic waves through the medium (the effect of finite Q-factor). By applying the new approach to the analysis of seismograms recorded by the KNET seismic network, we calculated the source radii (Brune radius), scalar seismic moments, and stress drops (releases) for the 14 earthquakes studied. The analysis revealed a scatter in source radius and stress drop even among earthquakes of almost identical energy class. The stress drop for different earthquakes ranges from 1 to 75 bar. We also determined the focal mechanisms and the stress regime of the Earth's crust. It is worth noting that during the period considered, strong seismic events with energy class above 14 were absent within the segment covered by the KNET stations.

  17. Strong nonlinear dependence of the spectral amplification factors on deep Vrancea earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Marmureanu, Gheorghe; Ortanza Cioflan, Carmen; Marmureanu, Alexandru

    2010-05-01

    Nonlinear effects in ground motion during large earthquakes have long been a controversial issue between seismologists and geotechnical engineers. Aki wrote in 1993: "Nonlinear amplification at sediments sites appears to be more pervasive than seismologists used to think… Any attempt at seismic zonation must take into account the local site condition and this nonlinear amplification" (Local site effects on weak and strong ground motion, Tectonophysics, 218, 93-111). In other words, the seismological detection of nonlinear site effects requires a simultaneous understanding of the effects of the earthquake source, propagation path, and local geological site conditions. The difficulty for seismologists in demonstrating nonlinear site effects has been that the effect is overshadowed by the overall patterns of shock generation and path propagation. To make quantitative evidence of large nonlinear effects, researchers at the National Institute for Earth Physics introduced the spectral amplification factor (SAF), the ratio between the maximum spectral absolute acceleration (Sa), relative velocity (Sv), or relative displacement (Sd) from response spectra at a given fraction of critical damping at the fundamental period and the peak values of acceleration (a-max), velocity (v-max), and displacement (d-max), respectively, from the processed strong-motion record, and pointed out that there is a strong nonlinear dependence on earthquake magnitude and site conditions. The spectral amplification factors (SAF) are computed for absolute accelerations at a 5% fraction of critical damping (β = 5%) at five seismic stations: Bucharest-INCERC (soft soils, Quaternary layers with a total thickness of 800 m); Bucharest-Magurele (dense sand and loess over 350 m); the Cernavoda Nuclear Power Plant site (marl, loess, limestone over 270 m); Bacau (gravel and loess over 20 m); and Iassy (loess, sand, clay, gravel over 60 m) for the last strong and deep Vrancea earthquakes: March 4, 1977 (MGR = 7.2 and h = 95 km); August 30
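    The SAF defined above can be illustrated numerically: compute a pseudo-spectral acceleration Sa for a damped single-degree-of-freedom oscillator at the fundamental period, then divide by the record's peak ground acceleration. A minimal sketch (simple semi-implicit Euler integration, not the institute's actual processing chain):

```python
import math

def spectral_amplification_factor(accel, dt, period, damping=0.05):
    """SAF = Sa / a_max: pseudo-spectral acceleration of a damped SDOF
    oscillator at the given period, divided by peak ground acceleration.
    Integration is semi-implicit Euler, adequate for small dt."""
    wn = 2.0 * math.pi / period     # natural circular frequency
    u = v = 0.0                     # relative displacement and velocity
    u_max = 0.0
    for ag in accel:
        a = -ag - 2.0 * damping * wn * v - wn * wn * u
        v += a * dt
        u += v * dt
        u_max = max(u_max, abs(u))
    sa = wn * wn * u_max            # pseudo-spectral acceleration
    pga = max(abs(x) for x in accel)
    return sa / pga

# A harmonic record at the oscillator's own period is amplified by
# roughly 1/(2*damping), i.e. about 10 at 5% damping.
record = [math.sin(2.0 * math.pi * t / 0.5)
          for t in (i * 0.001 for i in range(10000))]
saf = spectral_amplification_factor(record, 0.001, 0.5)
```

    Real SAF studies of course use recorded strong-motion accelerograms rather than a synthetic harmonic input.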

  18. A radon detector for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Dacey, James

    2010-04-01

    Recent events in Haiti and Chile remind us of the devastation that can be wrought by an earthquake, especially when it strikes without warning. For centuries, people living in seismically active regions have reported a number of strange occurrences immediately prior to a quake, including unexpected weather phenomena and even unusual behaviour among animals. In more recent times, some scientists have suggested other precursors, such as sporadic bursts of electromagnetic radiation from the fault zone. Unfortunately, none of these suggestions has led to a robust, scientific method for earthquake prediction. Now, however, a group of physicists, led by physics Nobel laureate Georges Charpak, has developed a new detector that could measure one of the more testable earthquake precursors - the suggestion that radon gas is released from fault zones prior to earth slipping, writes James Dacey.

  19. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.
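    Intensity-based magnitude estimation of this kind (Bakun, 2006b) is, in essence, a grid search over trial magnitudes that minimizes the misfit between observed MMI values and an intensity attenuation model. A schematic sketch with illustrative placeholder coefficients, not the calibrated relation used in the paper:

```python
import math

def predicted_mmi(mag, dist_km, c0=3.67, c1=1.17, c2=3.19):
    """Illustrative intensity attenuation: MMI = c0 + c1*M - c2*log10(d).
    Coefficients are placeholders, not the paper's calibrated values."""
    return c0 + c1 * mag - c2 * math.log10(dist_km)

def best_magnitude(observations):
    """observations: (distance_km, observed_MMI) pairs.
    Returns the trial magnitude minimizing RMS intensity misfit."""
    trial_mags = [m / 10.0 for m in range(40, 90)]  # trial M 4.0 .. 8.9
    def rms(m):
        errs = [(predicted_mmi(m, d) - mmi) ** 2 for d, mmi in observations]
        return math.sqrt(sum(errs) / len(errs))
    return min(trial_mags, key=rms)
```

    Applied to a set of reevaluated intensities such as the 98 values for the 1887 event, a search of this form yields the optimal magnitude under the chosen attenuation relation.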

  20. Earthquake Magnitude: A Teaching Module for the Spreadsheets Across the Curriculum Initiative

    NASA Astrophysics Data System (ADS)

    Wetzel, L. R.; Vacher, H. L.

    2006-12-01

    Spreadsheets Across the Curriculum (SSAC) is a library of computer-based activities designed to reinforce or teach quantitative-literacy or mathematics concepts and skills in context. Each activity (called a "module" in the SSAC project) consists of a PowerPoint presentation with embedded Excel spreadsheets. Each module focuses on one or more problems for students to solve. Each student works through a presentation, thinks about the in-context problem, figures out how to solve it mathematically, and builds the spreadsheets to calculate and examine answers. The emphasis is on mathematical problem solving. The intention is for the in-context problems to span the entire range of subjects where quantitative thinking, number sense, and math non-anxiety are relevant. The self-contained modules aim to teach quantitative concepts and skills in a wide variety of disciplines (e.g., health care, finance, biology, and geology). For example, in the Earthquake Magnitude module students create spreadsheets and graphs to explore earthquake magnitude scales, wave amplitude, and energy release. In particular, students realize that earthquake magnitude scales are logarithmic. Because each step in magnitude represents a 10-fold increase in wave amplitude and approximately a 30-fold increase in energy release, large earthquakes are much more powerful than small earthquakes. The module has been used as laboratory and take-home exercises in small structural geology and solid earth geophysics courses with upper-level undergraduates. Anonymous pre- and post-tests assessed students' familiarity with Excel as well as other quantitative skills. The SSAC library consists of 27 modules created by a community of educators who met for one-week "module-making workshops" in Olympia, Washington, in July of 2005 and 2006. The educators designed the modules at the workshops both to use in their own classrooms and to make available for others to adopt and adapt at other locations and in other classes.
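    The 10-fold amplitude step and roughly 30-fold energy step per magnitude unit that the module asks students to discover follow directly from the logarithmic definitions (radiated energy grows by about a factor of 10^1.5 per magnitude unit). A minimal sketch of the arithmetic:

```python
def amplitude_ratio(delta_m):
    """Wave-amplitude ratio for a magnitude difference: a factor of 10 per unit."""
    return 10.0 ** delta_m

def energy_ratio(delta_m):
    """Approximate radiated-energy ratio: log10(E) grows as ~1.5 M,
    so one magnitude unit is a factor of 10**1.5, about 31.6."""
    return 10.0 ** (1.5 * delta_m)

# An M7 event vs an M5 event: 100x the wave amplitude, ~1000x the energy.
```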

  1. 76 FR 19123 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ....S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S... Earthquake Prediction Evaluation Council (NEPEC) will hold a 1-day meeting on April 16, 2011. The meeting... the Director of the U.S. Geological Survey on proposed earthquake predictions, on the completeness...

  2. Detecting precursory patterns to enhance earthquake prediction in Chile

    NASA Astrophysics Data System (ADS)

    Florido, E.; Martínez-Álvarez, F.; Morales-Esteban, A.; Reyes, J.; Aznarte-Mellado, J. L.

    2015-03-01

    The prediction of earthquakes is a task of utmost difficulty that has been widely addressed using many different strategies, with no particularly good results thus far. Seismic time series of the four most active zones of Chile, the country with the largest seismic activity, are analyzed in this study in order to discover precursory patterns for large earthquakes. First, raw data are transformed by removing aftershocks and foreshocks, since the goal is to predict only main shocks. New attributes, based on the well-known b-value, are also generated. These data are then labeled, and consequently discretized, by the application of a clustering algorithm, following suggestions found in the recent literature. Earthquakes with magnitude larger than 4.4 are identified in the time series. Finally, the sequences of labels acting as precursory patterns for such earthquakes are searched for within the datasets. Accuracies verging on 70% on average are reported, leading to the conclusion that the proposed methodology is suitable for application in other zones with similar seismicity.

  3. Signals of ENPEMF Used in Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Hao, G.; Dong, H.; Zeng, Z.; Wu, G.; Zabrodin, S. M.

    2012-12-01

    The signals of the Earth's natural pulse electromagnetic field (ENPEMF) are a combination of abnormal crustal magnetic field pulses affected by earthquakes, the induced field of the Earth's endogenous magnetic field, the induced magnetic field of the exogenous variation magnetic field, geomagnetic pulsation disturbances, and other energy-coupling processes between the Sun and the Earth. As an instantaneous disturbance of the variation field of natural geomagnetism, ENPEMF can be used to predict earthquakes. This theory was introduced by A.A. Vorobyov, who expressed the hypothesis that pulses can arise not only in the atmosphere but within the Earth's crust due to processes of tectonic-to-electric energy conversion (Vorobyov, 1970; Vorobyov, 1979). The global field time scale of ENPEMF signals has specific stability. Although the wave curves may not overlap completely at different regions, the smoothed diurnal ENPEMF patterns always exhibit the same trend per month. This feature is a good reference for observing abnormalities of the Earth's natural magnetic field in a specific region. The frequencies of ENPEMF signals generally lie in the kilohertz range, and frequencies within the 5-25 kHz range can be applied to monitor earthquakes. In Wuhan, the best observation frequency is 14.5 kHz. Two special devices are placed in accordance with the S-N and W-E directions. Dramatic variation between the pulse waveforms obtained from the instruments and the normal reference envelope diagram indicates a high possibility of an earthquake. The proposed ENPEMF-based earthquake detection method can improve the geodynamic monitoring effect and can enrich earthquake prediction methods. We suggest that prospective further research address the exact source composition of ENPEMF signals, the distinction between noise and useful signals, and the effect of the Earth's gravity tide and solid tidal waves. This method may also provide a promising application in

  4. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  5. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near-epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification, and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power law than by exponential distributions, although they exhibit very high b values of ~5 or greater. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (MW < 1.5, MW ≥ 2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by
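    The moment-duration scaling test described above compares observed LFE durations against the self-similar expectation that source duration grows as M0^(1/3) at constant stress drop; magnitudes follow from the standard Hanks-Kanamori moment-magnitude relation. A brief sketch of both relations:

```python
import math

def moment_magnitude(m0_nm):
    """Standard moment magnitude (Hanks & Kanamori, 1979):
    Mw = (2/3) * (log10(M0) - 9.1), with M0 in newton-meters."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def self_similar_duration_ratio(m0_a, m0_b):
    """Under self-similarity (constant stress drop), source duration
    scales as M0**(1/3); LFEs show a much weaker dependence."""
    return (m0_a / m0_b) ** (1.0 / 3.0)
```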

  6. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus earthquakes follow a period of controlled, constant-rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface are induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ˜25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability, which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998), and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets. The gradually accelerating precursory aseismic slip observed in the region of

  7. Is It Possible to Predict Strong Earthquakes?

    NASA Astrophysics Data System (ADS)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using the groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to the local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of the recent scientific reports confirming that rodents sense imminent earthquakes and the population-genetic model of Kirschvink (Bull Seismol Soc Am 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over the period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal animal behavior observations, enables one to apply the standard "input-sensor-response" approach to determine what input signals trigger specific seismic escape brain activity responses.

  8. On the earthquake predictability of fault interaction models

    PubMed Central

    Marzocchi, W; Melini, D

    2014-01-01

    Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex post by physics-based models in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems rooted in the physics of fault interaction through Coulomb stress changes, such modeling often does not significantly increase earthquake predictability. The earthquake predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process. PMID:26074643

  9. On the earthquake predictability of fault interaction models

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Melini, D.

    2014-12-01

    Space-time clustering is the most striking departure of the occurrence process of large earthquakes from randomness. These clusters are usually described ex post by physics-based models in which earthquakes are triggered by Coulomb stress changes induced by other surrounding earthquakes. Notwithstanding the popularity of this kind of modeling, its ex ante skill in terms of earthquake predictability gain is still unknown. Here we show that even in synthetic systems rooted in the physics of fault interaction through Coulomb stress changes, such modeling often does not significantly increase earthquake predictability. The earthquake predictability of a fault may increase only when the Coulomb stress change induced by a nearby earthquake is much larger than the stress changes caused by earthquakes on other faults and by the intrinsic variability of the earthquake occurrence process.

  10. A local earthquake coda magnitude and its relation to duration, moment M sub O, and local Richter magnitude M sub L

    NASA Technical Reports Server (NTRS)

    Suteau, A. M.; Whitcomb, J. H.

    1977-01-01

    A relationship was found between the seismic moment, M sub O, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming, following Aki, that the end of the coda is composed of backscattered surface waves due to lateral heterogeneity in the shallow crust. Using the linear relationship between the logarithm of M sub O and the local Richter magnitude M sub L, a relationship between M sub L and t was found. This relationship was used to calculate a coda magnitude M sub C, which was compared to M sub L for Southern California earthquakes that occurred during the period from 1972 to 1975.
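    Coda magnitudes of this kind are typically linear in log10 of signal duration with a small distance correction. A sketch using the classic Lee et al. (1972) coefficients for illustration; the Southern California relation derived in the paper above is not reproduced here:

```python
import math

def coda_magnitude(duration_s, epicentral_km, a=-0.87, b=2.00, c=0.0035):
    """Duration (coda) magnitude: Mc = a + b*log10(t) + c*distance.
    Coefficients are the classic Lee et al. (1972) values, shown for
    illustration; regional calibrations replace them in practice."""
    return a + b * math.log10(duration_s) + c * epicentral_km

# e.g. a 100 s coda at 20 km epicentral distance gives Mc = 3.2
```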

  11. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    NASA Astrophysics Data System (ADS)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-02-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled records of interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions was developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (significant and bracketed) in terms of earthquake magnitude, hypocentral distance, and site conditions (rock and soil sites) using data compiled from the intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero durations) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with both the magnitude and the hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the
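    The functional form of such significant-duration models can be sketched as follows. The coefficients below are purely illustrative placeholders (the paper's NLME-fitted values are not reproduced), with the sign of the site term chosen to match the finding that durations are predicted higher at rock sites:

```python
import math

def significant_duration(mag, r_hyp_km, soil=False):
    """Illustrative model of the common form
    ln(D) = c0 + c1*M + c2*ln(R) + c3*S, with S = 1 for soil sites.
    Coefficients are placeholders, not the paper's fitted values."""
    c0, c1, c2, c3 = -2.5, 0.75, 0.35, -0.12  # c3 < 0: rock sites longer
    s = 1.0 if soil else 0.0
    return math.exp(c0 + c1 * mag + c2 * math.log(r_hyp_km) + c3 * s)
```

    The qualitative trends reported above (duration increasing with magnitude and hypocentral distance, shorter at soil sites) are built into the signs of the placeholder coefficients.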

  12. 78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey, Interior. ACTION: Notice of meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake... proposed earthquake predictions, on the completeness and scientific validity of the available data...

  13. 77 FR 53225 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: Department of the... National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1\\1/2\\ day meeting on September 17 and 18, 2012, at the U.S. Geological Survey National Earthquake Information Center (NEIC),...

  14. Local geodetic and seismic energy balance for shallow earthquake prediction

    NASA Astrophysics Data System (ADS)

    Cannavó, Flavio; Arena, Alessandra; Monaco, Carmelo

    2015-01-01

    Earthquake analysis for prediction purposes is a delicate and still open problem, largely debated among scientists. In this work, we show that a successful time-predictable model is possible if it is based on large volumes of instrumental data from dense monitoring networks. To this aim, we propose a new, simple, data-driven and quantitative methodology that takes into account the accumulated geodetic strain and the seismically released strain to calculate a balance of energies. The proposed index quantifies the state of energy of the selected area and allows us to better evaluate the ongoing potential seismic risk, giving a new tool for reading the recurrence of small-scale, shallow earthquakes. In spite of its intrinsically simple formulation, the methodology has been successfully tested on the eastern flank of Mt. Etna (Italy) by tuning it on the period 2007-2011 and testing it on the period 2012-2013, allowing us to predict, within days, the earthquakes with the highest magnitudes.

  15. The 2009 earthquake, magnitude mb 4.8, in the Pantanal Wetlands, west-central Brazil.

    PubMed

    Dias, Fábio L; Assumpção, Marcelo; Facincani, Edna M; França, George S; Assine, Mario L; Paranhos, Antônio C; Gamarra, Roberto M

    2016-09-01

    The main goal of this paper is to characterize the Coxim earthquake that occurred on June 15, 2009, in the Pantanal Basin and to discuss the relationship between its faulting mechanism and the Transbrasiliano Lineament. The earthquake had a maximum intensity of MM V, causing damage to farmhouses, and was felt in several surrounding cities, including Campo Grande and Goiânia. The event had a magnitude of mb 4.8 and a depth of 6 km, i.e., it occurred in the upper crust, within the basement and 5 km below the Cenozoic sedimentary cover. The mechanism, thrust faulting with a lateral component of motion, was obtained from P-wave first-motion polarities and confirmed by regional waveform modelling. The two nodal planes have orientations (strike/dip) of 300°/55° and 180°/55°, and the orientation of the P-axis is approximately NE-SW. The results are similar to those of the Pantanal earthquake of 1964, with mb 5.4 and an NE-SW compressional axis. Both events show that the Pantanal Basin is a seismically active area under compressional stress. The focal mechanisms of the 1964 and 2009 events have no nodal plane that could be directly associated with the main SW-NE trending Transbrasiliano system, indicating that a direct link between the Transbrasiliano and the seismicity in the Pantanal Basin is improbable.

  16. Collective properties of injection-induced earthquake sequences: 2. Spatiotemporal evolution and magnitude frequency distributions

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Suckale, Jenny; Huang, Yihe

    2016-05-01

    Probabilistic seismic hazard assessment for induced seismicity depends on reliable estimates of the locations, rate, and magnitude frequency properties of earthquake sequences. The purpose of this paper is to investigate how variations in these properties emerge from interactions between an evolving fluid pressure distribution and the mechanics of rupture on heterogeneous faults. We use an earthquake sequence model, developed in the first part of this two-part series, that computes pore pressure evolution, hypocenter locations, and rupture lengths for earthquakes triggered on 1-D faults with spatially correlated shear stress. We first consider characteristic features that emerge from a range of generic injection scenarios and then focus on the 2010-2011 sequence of earthquakes linked to wastewater disposal into two wells near the towns of Guy and Greenbrier, Arkansas. Simulations indicate that one reason for an increase of the Gutenberg-Richter b value for induced earthquakes is the different rates of reduction of static and residual strength as fluid pressure rises. This promotes fault rupture at lower stress than equivalent tectonic events. Further, b value is shown to decrease with time (the induced seismicity analog of b value reduction toward the end of the seismic cycle) and to be higher on faults with lower initial shear stress. This suggests that faults in the same stress field that have different orientations, and therefore different levels of resolved shear stress, should exhibit seismicity with different b values. A deficit of large-magnitude events is noted when injection occurs directly onto a fault, and this is shown to depend on the geometry of the pressure plume. Finally, we develop models of the Guy-Greenbrier sequence that capture approximately the onset, rise and fall, and southwest migration of seismicity on the Guy-Greenbrier fault. Constrained by the migration rate, we estimate the permeability of a 10 m thick critically stressed basement

  17. The local magnitude of the 18 October 1989 Santa Cruz Mountains earthquake is ML = 6.9

    SciTech Connect

    McNally, K.C.; Yellin, J.; Protti-Quesada, M.; Malavassi, E.; Schillinger, W.; Terdiman, R.; Zhang, Z. ); Simila, G. )

    1990-09-01

    It is critical that local magnitudes, M{sub L} (Richter, 1935), be carefully determined for large earthquakes. M{sub L} is the calibration standard for many catalogs of historic earthquakes upon which other magnitude scales and measures of strong ground shaking are based. Also, M{sub L} is measured in the frequency range of 1-10 Hz, the range most relevant for engineering and emergency response applications. The earthquake catalogs constitute the basis for both pure and applied research on the statistical properties of earthquakes and earthquake processes. Although large earthquakes are the most important in terms of energy release, the catalogs contain only a few of them because they are relatively rare. The authors find that the local magnitude, M{sub L}, of the 18 October 1989 (U.T.) earthquake is 6.9, not 7.0-7.1 as has been reported. This value agrees with the moment magnitude, M{sub w}=6.9, found by Kanamori and Satake (1990).

  18. Spatial variations in the frequency-magnitude distribution of earthquakes at Mount Pinatubo volcano

    USGS Publications Warehouse

    Sanchez, J.J.; McNutt, S.R.; Power, J.A.; Wyss, M.

    2004-01-01

    The frequency-magnitude distribution of earthquakes measured by the b-value is mapped in two and three dimensions at Mount Pinatubo, Philippines, to a depth of 14 km below the summit. We analyzed 1406 well-located earthquakes with magnitudes MD ≥ 0.73, recorded from late June through August 1991, using the maximum likelihood method. We found that b-values are higher than normal (b = 1.0) and range between b = 1.0 and b = 1.8. The computed b-values are lower in the areas adjacent to and west-southwest of the vent, whereas two prominent regions of anomalously high b-values (b ≥ 1.7) are resolved, one located 2 km northeast of the vent between 0 and 4 km depth and a second located 5 km southeast of the vent below 8 km depth. The statistical differences between selected regions of low and high b-values are established at the 99% confidence level. The high b-value anomalies are spatially well correlated with low-velocity anomalies derived from earlier P-wave travel-time tomography studies. Our dataset was not suitable for analyzing changes in b-values as a function of time. We infer that the high b-value anomalies around Mount Pinatubo are regions of increased crack density, and/or high pore pressure, related to the presence of nearby magma bodies.
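    The maximum-likelihood b-value estimation used in studies like this one is typically the Aki-Utsu estimator. A minimal sketch in Python; the bin width, completeness magnitude, and synthetic catalog below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for a catalog complete above
    the magnitude bin centered at mc, with Utsu's bin-width correction."""
    m = np.asarray(mags)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    # Shi & Bolt (1982) standard error
    se = 2.3 * b**2 * m.std(ddof=1) / np.sqrt(len(m))
    return b, se

# Synthetic Gutenberg-Richter catalog with true b = 1.0, binned to 0.1 units
rng = np.random.default_rng(42)
cont = (1.0 - 0.05) + rng.exponential(scale=np.log10(np.e), size=20000)
mags = np.round(cont / 0.1) * 0.1
b_est, b_err = b_value_mle(mags, mc=1.0)
```

On such a synthetic catalog the estimator recovers b close to 1.0, with the standard error shrinking as the number of events grows.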

  19. Improving the Level of Seismic Hazard Parameters in Saudi Arabia Using Earthquake Location and Magnitude Calibration

    NASA Astrophysics Data System (ADS)

    Al-Amri, A. M.; Rodgers, A. J.

    2004-05-01

    Saudi Arabia is poorly characterized seismically, and little data is available for the region. While the Arabian Shield and Arabian Platform are for the most part aseismic, the area is ringed by regional seismic sources: the tectonically active areas of Iran and Turkey to the northeast, the Red Sea Rift bordering the Shield to the southwest, and the Dead Sea Transform fault zone to the north. This paper therefore aims to improve seismic hazard parameters by improving earthquake location and magnitude estimates with the Saudi Arabian National Digital Seismic Network (SANDSN). We analyzed earthquake data, travel times, and seismic waveform data from the SANDSN. KACST operates the 38-station SANDSN, consisting of 27 broadband and 11 short-period stations. The SANDSN has good signal detection capabilities because the sites are relatively quiet. Noise surveys at a few stations indicate that seismic noise levels at SANDSN stations are quite low for frequencies between 0.1 and 1.0 Hz; however, cultural noise appears to affect some stations at frequencies above 1.0 Hz. Locations of regional earthquakes estimated by KACST were compared with locations from global bulletins. Large differences between KACST and global catalog locations are likely the result of inadequacies of the global average earth model (iasp91) used by the KACST system. While this model is probably adequate for locating distant (teleseismic) events in continental regions, it leads to large location errors, as much as 50-100 km, for regional events. We present detailed analysis of some events and Dead Sea explosions for which we found gross errors in estimated locations. Velocity models are presented that should improve estimated locations of regional events in three specific regions: (1) the Gulf of Aqabah and Dead Sea region, (2) the Arabian Shield, and (3) the Arabian Platform. These models have recently been applied to the SANDSN to improve local and teleseismic event locations.

  20. A Comprehensive Mathematical Model for the Correlation of Earthquake Magnitude with Geochemical Measurements. A Case Study: the Nisyros Volcano in Greece

    SciTech Connect

    Verros, G. D.; Latsos, T.; Liolios, C.; Anagnostou, K. E.

    2009-08-13

    A comprehensive mathematical model for the correlation of geological phenomena such as earthquake magnitude with geochemical measurements is presented in this work. This model is validated against measurements, well established in the literature, of {sup 220}Rn/{sup 222}Rn in the fumarolic gases of the Nisyros Island, Aegean Sea, Greece. It is believed that this model may be further used to develop a generalized methodology for the prediction of geological phenomena such as earthquakes and volcanic eruptions in the vicinity of the Nisyros Island.

  1. Magnitudes and Moment-Duration Scaling of Low-Frequency Earthquakes Beneath Southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A.; Rubin, A. M.; Savard, G.; Chuang, L. Y.

    2015-12-01

    We employ 130 low-frequency-earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 300,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P- and S-waves at near epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatio-temporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free-surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single ETS template. The spatio-temporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 hours of LFE activity during an ETS episode, when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs, and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power-law than exponential distributions, although they exhibit very high b-values ≥ 6. We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges MW < 1.5 and MW ≥ 2.0. LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in dimension and that moment variation is dominated by slip. This behaviour implies

  2. Maximum Magnitude and Probabilities of Induced Earthquakes in California Geothermal Fields: Applications for a Science-Based Decision Framework

    NASA Astrophysics Data System (ADS)

    Weiser, Deborah Anne

    Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess if there are induced earthquakes in California geothermal fields; there are three sites with clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and also to address uncertainties through simulations. I test if an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during pumping time is consistent with the past earthquake record, or if injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.

  3. Predicting Ground Motion from Induced Earthquakes in Geothermal Areas

    NASA Astrophysics Data System (ADS)

    Douglas, J.; Edwards, B.; Convertito, V.; Sharma, N.; Tramelli, A.; Kraaijpoel, D.; Cabrera, B. M.; Maercklin, N.; Troise, C.

    2013-06-01

    Induced seismicity from anthropogenic sources can be a significant nuisance to a local population and in extreme cases lead to damage to vulnerable structures. One type of induced seismicity of particular recent concern, which, in some cases, can limit development of a potentially important clean energy source, is that associated with geothermal power production. A key requirement for the accurate assessment of seismic hazard (and risk) is a ground-motion prediction equation (GMPE) that predicts the level of earthquake shaking (in terms of, for example, peak ground acceleration) of an earthquake of a certain magnitude at a particular distance. Few such models currently exist in regard to geothermal-related seismicity, and consequently the evaluation of seismic hazard in the vicinity of geothermal power plants is associated with high uncertainty. Various ground-motion datasets of induced and natural seismicity (from Basel, Geysers, Hengill, Roswinkel, Soultz, and Voerendaal) were compiled and processed, and moment magnitudes for all events were recomputed homogeneously. These data are used to show that ground motions from induced and natural earthquakes cannot be statistically distinguished. Empirical GMPEs are derived from these data; and, although they have similar characteristics to recent GMPEs for natural and mining-related seismicity, the standard deviations are higher. To account for epistemic uncertainties, stochastic models subsequently are developed based on a single corner frequency and with parameters constrained by the available data. Predicted ground motions from these models are fitted with functional forms to obtain easy-to-use GMPEs. These are associated with standard deviations derived from the empirical data to characterize aleatory variability. As an example, we demonstrate the potential use of these models using data from Campi Flegrei.
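    The GMPE concept described in this abstract can be illustrated with a generic functional form that maps magnitude and distance to a median ground-motion level. The coefficients and pseudo-depth below are placeholders chosen for illustration only, not the paper's fitted values:

```python
import numpy as np

def gmpe_log10_pga(mw, r_km, c1=-1.7, c2=0.5, c3=-1.3, h=5.0):
    """Generic GMPE form: log10(PGA, in g) as a linear function of
    moment magnitude and the log of an effective hypocentral distance.
    All coefficients here are illustrative placeholders."""
    r_eff = np.sqrt(r_km**2 + h**2)  # pseudo-depth term avoids a singularity at r = 0
    return c1 + c2 * mw + c3 * np.log10(r_eff)

# Median PGA for a hypothetical Mw 3.5 induced event at 5 km epicentral distance
pga_g = 10.0 ** gmpe_log10_pga(3.5, 5.0)
```

The aleatory variability discussed in the abstract enters as a standard deviation on the log10 residuals about such a median curve.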

  4. Estimating Seismic Hazards from the Catalog of Taiwan Earthquakes from 1900 to 2014 in Terms of Maximum Magnitude

    NASA Astrophysics Data System (ADS)

    Chen, Kuei-Pao; Chang, Wen-Yen

    2017-02-01

    Maximum expected earthquake magnitude is an important parameter when designing mitigation measures for seismic hazards. This study calculated the maximum magnitude of potential earthquakes for each cell in a 0.1° × 0.1° grid of Taiwan. Two zones vulnerable to maximum magnitudes of Mw ≥ 6.0, which will cause extensive building damage, were identified: one extends from Hsinchu southward to Taichung, Nantou, Chiayi, and Tainan in western Taiwan; the other extends from Ilan southward to Hualian and Taitung in eastern Taiwan. These zones are also characterized by low b values, which are consistent with high peak ground shaking. We also employed an innovative method to calculate (at intervals of Mw 0.5) the bounds and median of recurrence time for earthquakes of magnitude Mw 6.0-8.0 in Taiwan.
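    A recurrence-time calculation of the kind described above follows directly from an annual Gutenberg-Richter relation. A minimal sketch; the a and b values are illustrative, not the paper's Taiwan estimates:

```python
def gr_recurrence_years(mag, a, b):
    """Mean recurrence time (years) of earthquakes with magnitude >= mag,
    from an annual Gutenberg-Richter relation log10 N(>=M) = a - b*M."""
    n_per_year = 10.0 ** (a - b * mag)
    return 1.0 / n_per_year

# Illustrative a and b (assumed values, not fitted to the Taiwan catalog)
recurrence_mw6 = gr_recurrence_years(6.0, a=5.0, b=1.0)  # 10 years
recurrence_mw8 = gr_recurrence_years(8.0, a=5.0, b=1.0)  # 1000 years
```

Bounds on the recurrence time then follow from propagating the uncertainties in a and b through the same expression.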

  5. τp^max magnitude estimation, the case of the April 6, 2009 L'Aquila earthquake

    NASA Astrophysics Data System (ADS)

    Olivieri, Marco

    2013-04-01

    Rapid magnitude estimation procedures are a crucial part of proposed earthquake early warning systems. Most of these estimates are focused on the first part of the P-wave train, the earlier and less destructive part of the ground motion that follows an earthquake. Allen and Kanamori (Science 300:786-789, 2003) proposed to use the predominant period of the P-wave to determine the magnitude of a large earthquake at local distance, and Olivieri et al. (Bull Seismol Soc Am 185:74-81, 2008) calibrated a specific relation for the Italian region. The Mw 6.3 earthquake that hit central Italy on April 6, 2009, and its largest aftershocks provide a useful dataset to validate the proposed relation and discuss the risks connected to the extrapolation of magnitude relations from a poor dataset of large-earthquake waveforms. A large discrepancy between the local magnitude (ML) estimated by means of τp^max evaluation and standard ML (6.8 ± 1.5 vs. 5.9 ± 0.4) suggests using caution when ML vs. τp^max calibrations do not include a relevant dataset of large earthquakes. Effects of large residuals could be mitigated or removed by introducing selection rules on the τp function, by regionalizing the ML vs. τp^max relation in the presence of significant tectonic or geological heterogeneity, and by using probabilistic and evolutionary methods.

  6. Persistency of rupture directivity in moderate-magnitude earthquakes in Italy: Implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Rovelli, A.; Calderoni, G.

    2012-12-01

    A simple method based on EGF deconvolution in the frequency domain is applied to detect the occurrence of unilateral ruptures in recent damaging earthquakes in Italy. The spectral ratio between event pairs with different magnitudes at individual stations shows large azimuthal variations above the corner frequency when the target event is affected by source directivity and the EGF is not, or vice versa. The analysis is applied to seismograms and accelerograms recorded during the seismic sequence following the 20 May 2012, Mw 5.6 main shock in Emilia, northern Italy, the 6 April 2009, Mw 6.1 earthquake of L'Aquila, central Italy, and the 26 September 1997, Mw 5.7 and 6.0 shocks in Umbria-Marche, central Italy. Events of each seismic sequence are selected as having consistent focal mechanisms, and the station selection obeys the constraint of a similar source-to-receiver path for the event pairs. The analyzed dataset for L'Aquila consists of 962 broad-band seismograms from 69 normal-faulting earthquakes (3.3 ≤ MW ≤ 6.1, according to Herrmann et al., 2011); stations are selected in the distance range 100 to 250 km to minimize differences in propagation paths. The seismogram analysis reveals that a strong along-strike (toward SE) source directivity characterized all three Mw > 5.0 shocks. Source directivity was also persistent down to the smallest magnitudes: 65% of the earthquakes under study showed evidence of directivity toward SE, whereas only one (Mw 3.7) event showed directivity in the opposite direction. The Mw 5.6 main shock of 20 May 2012 in Emilia also shows large azimuthal spectral variations, indicating unilateral rupture propagation toward SE. According to the reconstructed geometry of the thrust-fault plane, the inferred directivity direction suggests top-down rupture propagation. The analysis of the Emilia aftershock sequence is in progress. The third seismic sequence, dated 1997-1998, occurred in the northern Apennines and, similarly

  7. An evaluation of the seismic-window theory for earthquake prediction.

    USGS Publications Warehouse

    McNutt, M.; Heaton, T.H.

    1981-01-01

    Reports studies designed to determine whether earthquakes in the San Francisco Bay area respond to a fortnightly fluctuation in tidal amplitude. It does not appear that the tide is capable of triggering earthquakes, and in particular the seismic window theory fails as a relevant method of earthquake prediction. -J.Clayton

  8. Prediction of Earthquakes by Lunar Cycles

    NASA Astrophysics Data System (ADS)

    Rodriguez, G.

    2007-05-01

    Prediction of Earthquakes by Lunar Cycles. Author: Guillermo Rodriguez Rodriguez. Affiliation: geophysicist and astrophysicist, retired. I have presented this idea at many meetings of the EGS, UGS, and IUGG (1995), dating back to 1980-1983, and at AGU 2002 (Washington) and 2003 (Nice). I consider three levels of approximation in time. First, earthquakes happen on the same day of the year every 18 or 19 years (the Saros cycle), sometimes in the same place and sometimes in another very far away. At other times of the year the cycles can be 14, 26, or 32 years, or multiples of 18.61 years, especially 55, 93, 150, 224, and 300. This gives the day of the year. Second, over the cycle of one lunation (days after the date of the new moon), great earthquakes happen at different intervals of days in successive lunations (approximately one month), as can be seen in the enclosed graphic. This gives the day of the month. Third, for each day, I have found that approximately every 28 days the same hour and minute, the same longitude, and the same latitude repeat in all earthquakes, including the small ones. This is important because it would allow the simple precaution of waiting in the streets or squares. Sometimes the cycles can be longer or shorter; this is my particular scientific method. As a consequence of the first and second principles, one can look for correlations between years separated by cycles of the first type, for example 1984 and 2002 or 2003 and consecutive years, including 2007. For 30 years I have examined the dates. I sense the pattern intuitively, but I have not been able to cast it in a formal scientific framework.

  9. Spatial variations in the frequency-magnitude distribution of earthquakes in the southwestern Okinawa Trough

    NASA Astrophysics Data System (ADS)

    Lin, J.-Y.; Sibuet, J.-C.; Lee, C.-S.; Hsu, S.-K.; Klingelhoefer, F.

    2007-04-01

    The relations between the frequency of occurrence and the magnitude of earthquakes are established in the southern Okinawa Trough for 2823 relocated earthquakes recorded during a passive ocean bottom seismometer experiment. Three high b-values areas are identified: (1) for an area offshore of the Ilan Plain, south of the andesitic Kueishantao Island from a depth of 50 km to the surface, thereby confirming the subduction component of the island andesites; (2) for a body lying along the 123.3°E meridian at depths ranging from 0 to 50 km that may reflect the high temperature inflow rising up from a slab tear; (3) for a third cylindrical body about 15 km in diameter beneath the Cross Backarc Volcanic Trail, at depths ranging from 0 to 15 km. This anomaly might be related to the presence of a magma chamber at the base of the crust already evidenced by tomographic and geochemical results. The high b-values are generally linked to magmatic and geothermal activities, although most of the seismicity is linked to normal faulting processes in the southern Okinawa Trough.

  10. Reexamination of the magnitudes for the 1906 and 1922 Chilean earthquakes using Japanese tsunami amplitudes: Implications for source depth constraints

    NASA Astrophysics Data System (ADS)

    Carvajal, M.; Cisternas, M.; Gubler, A.; Catalán, P. A.; Winckler, P.; Wesson, R. L.

    2017-01-01

    Far-field tsunami records from the Japanese tide gauge network allow the reexamination of the moment magnitudes (Mw) for the 1906 and 1922 Chilean earthquakes, which to date rely on limited information, mainly from seismological observations alone. Tide gauges along the Japanese coast provide extensive records of tsunamis triggered by six great (Mw > 8) Chilean earthquakes with instrumentally determined moment magnitudes. These tsunami records are used to explore the dependence of tsunami amplitudes in Japan on the magnitude of the parent earthquake of Chilean origin. Using the resulting regression parameters together with tide gauge amplitudes measured in Japan, we estimate apparent moment magnitudes of Mw 8.0-8.2 for the 1906 central Chile earthquake and Mw 8.5-8.6 for the 1922 north-central Chile earthquake. The large discrepancy between the 1906 magnitude estimated from the tsunami observed in Japan and those previously determined from seismic waves (Ms 8.4) suggests a deeper-than-average source with reduced tsunami excitation. A deep dislocation along the Chilean megathrust would favor uplift of the coast rather than of the seafloor, giving rise to a smaller tsunami and producing effects consistent with those observed in 1906. The 1922 magnitude inferred from far-field tsunami amplitudes appears to better explain the large extent of damage and the destructive tsunami that were locally observed following the earthquake than the lower seismic magnitudes (Ms 8.3), which were likely affected by the well-known saturation effects. Thus, a repeat of the large 1922 earthquake poses seismic and tsunami hazards in a region identified as a mature seismic gap.
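    The regression-and-inversion step described above can be sketched as a log-linear fit of amplitude against magnitude that is then inverted to yield an apparent Mw. The calibration pairs below are fabricated for illustration and are not the Japanese tide-gauge data:

```python
import numpy as np

# Hypothetical calibration pairs: instrumental Mw of a Chilean earthquake
# versus a representative tsunami amplitude (cm) on Japanese tide gauges.
mw_cal = np.array([8.1, 8.3, 8.5, 8.8, 9.5])
amp_cm = np.array([25.0, 37.0, 56.0, 105.0, 450.0])

# Fit log10(amplitude) = p0 + p1 * Mw across the calibration events
p1, p0 = np.polyfit(mw_cal, np.log10(amp_cm), 1)

def apparent_mw(a_cm):
    """Apparent moment magnitude implied by a far-field tsunami amplitude."""
    return (np.log10(a_cm) - p0) / p1

mw_back = apparent_mw(amp_cm)  # inverting the fit roughly recovers mw_cal
```

Applied to the historical 1906 and 1922 amplitudes, such an inversion gives the "apparent" magnitudes the abstract compares against the seismic estimates.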

  11. Material contrast does not predict earthquake rupture propagation direction

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    2005-01-01

    Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.

  12. Moment magnitude, local magnitude and corner frequency of small earthquakes nucleating along a low angle normal fault in the Upper Tiber valley (Italy)

    NASA Astrophysics Data System (ADS)

    Munafo, I.; Malagnini, L.; Chiaraluce, L.; Valoroso, L.

    2015-12-01

    The relation between moment magnitude (MW) and local magnitude (ML) is still a debated issue (Bath, 1966, 1981; Ristau et al., 2003, 2005). Theoretical considerations and empirical observations show that, in the magnitude range between 3 and 5, MW and ML scale 1:1, while for smaller magnitudes this 1:1 scaling breaks down (Bethmann et al. 2011). To accomplish this task, we analyzed the source parameters of about 1500 well-located small earthquakes (30,000 waveforms) that occurred in the Upper Tiber Valley (Northern Apennines) in the range -1.5 ≤ ML ≤ 3.8. Among these earthquakes are 300 events that repeatedly ruptured the same fault patch, generally twice within a short time interval (less than 24 hours; Chiaraluce et al., 2007). We use high-resolution short-period and broadband recordings acquired between 2010 and 2014 by 50 permanent seismic stations deployed to monitor the activity of a regional low angle normal fault (named the Alto Tiberina fault, ATF) in the framework of The Alto Tiberina Near Fault Observatory project (TABOO; Chiaraluce et al., 2014). For this study the direct determination of MW for small earthquakes is essential, but unfortunately the computation of MW for small earthquakes (MW < 3) is not a routine procedure in seismology. We apply the contributions of source, site, and crustal attenuation computed for this area in order to obtain precise spectral corrections to be used in the calculation of small-earthquake spectral plateaus. The aim of this analysis is to obtain moment magnitudes of small events through a procedure that uses our previously calibrated crustal attenuation parameters (geometrical spreading g(r), quality factor Q(f), and the residual parameter k) to correct for path effects. We determine the MW-ML relationships in two selected fault zones (on-fault and fault-hanging-wall) of the ATF by an orthogonal regression analysis, providing a semi-automatic and robust procedure for moment magnitude determination within a
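    The orthogonal regression mentioned above differs from ordinary least squares in that it minimizes perpendicular distances, which is appropriate when both MW and ML carry comparable errors. A minimal total-least-squares sketch on synthetic data (the trend coefficients below are assumptions, not the paper's results):

```python
import numpy as np

def orthogonal_fit(x, y):
    """Total-least-squares (orthogonal) line fit: the slope follows the
    principal eigenvector of the 2x2 covariance matrix, minimizing
    perpendicular rather than vertical distances."""
    xm, ym = x.mean(), y.mean()
    cov = np.cov(x, y)
    evals, evecs = np.linalg.eigh(cov)
    vx, vy = evecs[:, np.argmax(evals)]  # principal direction
    slope = vy / vx
    return slope, ym - slope * xm

# Synthetic MW-ML pairs with an assumed trend MW = 0.67*ML + 0.9
rng = np.random.default_rng(1)
ml = rng.uniform(-1.0, 3.8, 500)
mw = 0.67 * ml + 0.9 + rng.normal(0.0, 0.05, 500)
slope, intercept = orthogonal_fit(ml, mw)
```

With comparable scatter on both axes, the orthogonal slope is unbiased where ordinary least squares of MW on ML would be attenuated toward zero.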

  13. Field survey of earthquake effects from the magnitude 4.0 southern Maine earthquake of October 16, 2012

    USGS Publications Warehouse

    Amy L. Radakovich,; Alex J. Fergusen,; Boatwright, John

    2016-06-02

    The magnitude 4.0 earthquake that occurred on October 16, 2012, near Hollis Center and Waterboro in southwestern Maine surprised and startled local residents but caused only minor damage. A two-person U.S. Geological Survey (USGS) team was sent to Maine to conduct an intensity survey and document the damage. The only damage we observed was the failure of a chimney and plaster cracks in two buildings in East and North Waterboro, 6 kilometers (km) west of the epicenter. We photographed the damage and interviewed residents to determine the intensity distribution in the epicentral area. The damage and shaking reports are consistent with a maximum Modified Mercalli Intensity (MMI) of 5–6 for an area 1–8 km west of the epicenter, slightly higher than the maximum Community Decimal Intensity (CDI) of 5 determined by the USGS “Did You Feel It?” Web site. The area of strong shaking in East Waterboro corresponds to updip rupture on a fault plane that dips steeply east. 

  14. Current affairs in earthquake prediction in Japan

    NASA Astrophysics Data System (ADS)

    Uyeda, Seiya

    2015-12-01

    As of mid-2014, the main organizations of the earthquake (EQ hereafter) prediction program, including the Seismological Society of Japan (SSJ) and the MEXT Headquarters for EQ Research Promotion, hold the official position that they neither can nor want to make any short-term prediction. This is an extraordinary stance for responsible authorities when the nation, after the devastating 2011 M9 Tohoku EQ, most urgently needs whatever information may exist on forthcoming EQs. Japan's national project for EQ prediction started in 1965, but it has had no success, mainly because of its failure to capture precursors. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this stance was further fortified by the 2011 M9 Tohoku mega-quake. This paper tries to explain how this situation came about and suggests that it may in fact be a legitimate one that should have come a long time ago. Substantial positive changes are now taking place: some promising signs are arising from cooperation between researchers and the private sector, and there is a move to establish an "EQ Prediction Society of Japan". From now on, maintaining high scientific standards in EQ prediction will be of crucial importance.

  15. A regional surface wave magnitude scale for the earthquakes of Russia's Far East

    NASA Astrophysics Data System (ADS)

    Chubarova, O. S.; Gusev, A. A.

    2017-01-01

    The modified scale Ms(20R) is developed for the magnitude classification of earthquakes in Russia's Far East based on surface-wave amplitudes at regional distances. It extends the applicability of the classical Gutenberg scale Ms(20) to small epicentral distances (0.7°-20°). The magnitude is determined from the amplitude of the signal after bandpass filtering to extract the components with periods close to 20 s. The amplitude is measured either for the surface waves or, at fairly short distances of 0.7°-3°, for the inseparable wave group of surface and shear waves. The main difference between the Ms(20R) scale and the traditional Ms(BB) Soloviev-Vanek scale is its firm spectral anchoring. This approach practically eliminates the problem of the significant (up to -0.5) regional and station anomalies characteristic of the Ms(BB) scale in the conditions of the Far East. The absence of significant station and regional anomalies, together with the strict spectral anchoring, makes the Ms(20R) scale advantageous for prompt decision making in tsunami warnings for the coasts of Russia's Far East.
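    For context, the classical 20 s surface-wave magnitude that Ms(20R) extends follows the Prague (IASPEI) formula. A minimal sketch; the regional distance-term recalibration for 0.7°-20° described in the abstract is not reproduced here:

```python
import numpy as np

def ms_20(amp_um, period_s, delta_deg):
    """Classical 20 s surface-wave magnitude (Prague formula):
    Ms = log10(A/T) + 1.66*log10(delta) + 3.3, with A the surface-wave
    amplitude in micrometres, T the period near 20 s, and delta the
    epicentral distance in degrees."""
    return np.log10(amp_um / period_s) + 1.66 * np.log10(delta_deg) + 3.3

# A 20-micrometre, 20 s surface wave recorded at 100 degrees
ms_example = ms_20(20.0, 20.0, 100.0)  # -> 6.62
```

A regional scale such as Ms(20R) keeps the same spectral anchoring at 20 s but replaces the distance coefficient with one calibrated for short epicentral distances.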

  16. Magnitude Uncertainty and Ground Motion Simulations of the 1811-1812 New Madrid Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Graves, R. W.; Olsen, K. B.; Boyd, O. S.; Hartzell, S.; Ni, S.; Somerville, P. G.; Williams, R. A.; Zhong, J.

    2011-12-01

    We present a study of a set of three-dimensional earthquake simulation scenarios in the New Madrid Seismic Zone (NMSZ). This is a collaboration among three simulation groups with different numerical modeling approaches and computational capabilities. The study area covers a portion of the Central United States (~400,000 km²) centered on the New Madrid seismic zone, which includes several metropolitan areas such as Memphis, TN, and St. Louis, MO. We computed synthetic seismograms to a frequency of 1 Hz by using a regional 3D velocity model (Ramirez-Guzman et al., 2010), two different kinematic source generation approaches (Graves et al., 2010; Liu et al., 2006), and one methodology in which sources were generated using dynamic rupture simulations (Olsen et al., 2009). The set of 21 hypothetical earthquakes included different magnitudes (Mw 7, 7.6, and 7.7) and epicenters for two faults associated with the seismicity trends in the NMSZ: the Axial (Cottonwood Grove) and the Reelfoot faults. Broadband synthetic seismograms were generated by combining high-frequency synthetics computed in a one-dimensional velocity model with the low-frequency motions at a crossover frequency of 1 Hz. Our analysis indicates that about 3 to 6 million people living near the fault ruptures would experience Mercalli intensities from VI to VIII if events similar to those of the early nineteenth century occurred today. In addition, the analysis demonstrates the importance of 3D geologic structures, such as the Reelfoot Rift and the Mississippi Embayment, which can channel and focus the radiated wave energy, and of rupture directivity effects, which can strongly amplify motions in the forward direction of the ruptures. Both of these effects have a significant impact on the pattern and level of the simulated intensities, which suggests an increased uncertainty in the magnitude estimates of the 1811-1812 sequence based only on historic intensity reports. We conclude that additional constraints such as
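    The broadband-combination step (deterministic synthetics below the 1 Hz crossover, stochastic synthetics above it) can be sketched with complementary zero-phase filters. This is a simplified stand-in for the matched filtering used in broadband simulation codes, with filter order and signals chosen for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def broadband_merge(low_f, high_f, dt, fc=1.0, order=4):
    """Combine a low-frequency (deterministic) synthetic with a
    high-frequency (stochastic) synthetic at crossover frequency fc,
    using complementary zero-phase Butterworth filters."""
    nyq = 0.5 / dt
    b_lo, a_lo = butter(order, fc / nyq, btype="low")
    b_hi, a_hi = butter(order, fc / nyq, btype="high")
    return filtfilt(b_lo, a_lo, low_f) + filtfilt(b_hi, a_hi, high_f)

# Toy check: a 0.2 Hz and a 5 Hz sinusoid each survive their own band
dt = 0.01
t = np.arange(0.0, 20.0, dt)
merged = broadband_merge(np.sin(2 * np.pi * 0.2 * t),
                         np.sin(2 * np.pi * 5.0 * t), dt)
```

Zero-phase filtering preserves arrival times across the crossover, which matters when the merged traces are used to compute intensities.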

  17. Research on earthquake prediction from infrared cloud images

    NASA Astrophysics Data System (ADS)

    Fan, Jing; Chen, Zhong; Yan, Liang; Gong, Jing; Wang, Dong

    2015-12-01

    In recent years, large earthquakes have occurred frequently all over the world. In the face of these inevitable natural disasters, earthquake prediction is particularly important to avoid further loss of life and property. Many achievements in predicting earthquakes from remote sensing images have been obtained in the last few decades, but the traditional prediction methods cannot forecast the epicenter location accurately and automatically. To address this problem, a new earthquake prediction method based on extracting the texture and frequency of occurrence of earthquake clouds is proposed in this paper. First, the infrared cloud images are enhanced. Second, the texture feature vector of each pixel is extracted. Then, the pixels are classified and grouped into several small suspect areas. Finally, the suspect areas are tracked to estimate the possible epicenter location. An inversion experiment on the Ludian earthquake shows that this approach can forecast the seismic center feasibly and accurately.

  18. Global correlations between maximum magnitudes of subduction zone interface thrust earthquakes and physical parameters of subduction zones

    NASA Astrophysics Data System (ADS)

    Schellart, W. P.; Rawlinson, N.

    2013-12-01

    The maximum earthquake magnitude recorded for subduction zone plate boundaries varies considerably on Earth, with some subduction zone segments producing giant subduction zone thrust earthquakes (e.g. Chile, Alaska, Sumatra-Andaman, Japan) and others producing relatively small earthquakes (e.g. Mariana, Scotia). Here we show how such variability might depend on various subduction zone parameters. We present 24 physical parameters that characterize these subduction zones in terms of their geometry, kinematics, geology and dynamics. We have investigated correlations between these parameters and the maximum recorded moment magnitude (MW) for subduction zone segments in the period 1900-June 2012. The investigations were done for one dataset using a geological subduction zone segmentation (44 segments) and for two datasets (rupture zone dataset and epicenter dataset) using a 200 km segmentation (241 segments). All linear correlations for the rupture zone dataset and the epicenter dataset (|R| = 0.00-0.30) and for the geological dataset (|R| = 0.02-0.51) are negligible-low, indicating that even for the highest correlation the best-fit regression line can only explain 26% of the variance. A comparative investigation of the observed ranges of the physical parameters for subduction segments with MW > 8.5 and the observed ranges for all subduction segments gives more useful insight into the spatial distribution of giant subduction thrust earthquakes. For segments with MW > 8.5 distinct (narrow) ranges are observed for several parameters, most notably the trench-normal overriding plate deformation rate (vOPD⊥, i.e. the relative velocity between forearc and stable far-field backarc), trench-normal absolute trench rollback velocity (vT⊥), subduction partitioning ratio (vSP⊥/vS⊥, the fraction of the subduction velocity that is accommodated by subducting plate motion), subduction thrust dip angle (δST), subduction thrust curvature (CST), and trench curvature angle (

  19. A Study of Low-Frequency Earthquake Magnitudes in Northern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Chuang, L. Y.; Bostock, M. G.

    2015-12-01

    Tectonic tremor and low frequency earthquakes (LFE) have been extensively studied in recent years in northern Washington and southern Vancouver Island (VI). However, far less attention has been directed to northern VI, where the behavior of tremor and LFEs is less well documented. We investigate LFE properties in this latter region by assembling templates using data from the POLARIS-NVI and Sea-JADE experiments. The POLARIS-NVI experiment comprised 27 broadband seismometers arranged along two mutually perpendicular arms with an aperture of ~60 km centered near station WOS (lat. 50.16, lon. -126.57). It recorded two ETS events, in June 2006 and May 2007, each with a duration of less than a week. For these two episodes, we constructed 68 independent, high signal-to-noise ratio LFE templates representing spatially distinct asperities on the plate boundary in NVI, along with a catalogue of more than 30 thousand detections. A second template set is being prepared from the complementary 2014 Sea-JADE data set. The precisely located LFE templates represent simple direct P-waves and S-waves at many stations, thereby enabling magnitude estimation of individual detections. After correcting for radiation pattern, 1-D geometrical spreading, attenuation and free-surface magnification, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single LFE template. LFE magnitudes range up to 2.54 and, like those in southern VI, are characterized by high b-values (b~8). In addition, we will quantify LFE moment-duration scaling and compare with southern Vancouver Island, where LFE moments appear to be controlled by slip, largely independent of fault area.
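    The joint solve for path corrections and per-detection magnitudes described above amounts to a large sparse least-squares problem. A minimal sketch of that kind of inversion, under the assumption that each corrected log-amplitude observation decomposes into an event magnitude term plus a path/station correction (variable names are illustrative, not from the paper):

    ```python
    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import lsqr

    def invert_magnitudes(obs, n_events, n_stations):
        """obs: list of (event_idx, station_idx, corrected log-amplitude).
        Solves log A_ij ~ m_i + c_j in the least-squares sense."""
        G = lil_matrix((len(obs), n_events + n_stations))
        d = np.zeros(len(obs))
        for row, (ev, st, log_amp) in enumerate(obs):
            G[row, ev] = 1.0                 # event magnitude term
            G[row, n_events + st] = 1.0      # path/station correction term
            d[row] = log_amp
        sol = lsqr(G.tocsr(), d)[0]
        return sol[:n_events], sol[n_events:]  # magnitudes, corrections
    ```

    Note that this system has a trade-off (a constant can be exchanged between magnitudes and corrections), so in practice an absolute reference is needed; relative magnitudes between detections are nonetheless well determined.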

  20. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
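    A core step of such homogenization is fitting an empirical conversion from a catalogue's native scale to the target (moment) magnitude using events common to both catalogues, then applying it to the rest. A minimal sketch of that step with a simple linear least-squares model (the tools described here support more general regression models; `ms` and `mw` names are illustrative):

    ```python
    import numpy as np

    def fit_conversion(ms_common, mw_common):
        """Fit Mw = a * Ms + b from events present in both catalogues."""
        a, b = np.polyfit(ms_common, mw_common, 1)
        return a, b

    def to_mw(ms, a, b):
        """Apply the fitted conversion to native-scale magnitudes."""
        return a * np.asarray(ms) + b
    ```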

  1. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren Eda; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. Earthquake hazard can be defined by several approaches, from which earthquake parameters such as the peak ground acceleration expected in the area of interest can be determined. In an earthquake-prone area, identifying seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss associated with an earthquake occurrence. As a powerful and flexible framework for characterizing temporal changes in seismicity and revealing unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), chosen because of its important geographic location. Bilecik lies close to the North Anatolian Fault Zone and between Ankara and Istanbul, the two biggest cities of Turkey; consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes with magnitude (M) of at least 4.0 that occurred within a 100 km radius of Bilecik, from January 1900 to December 2012, are modeled using a Poisson-HMM. The hazard for the next 35 years, from 2013 to 2047, is then obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
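    The forecasting step of a Poisson-HMM can be sketched with a forward filter followed by a one-step-ahead prediction. The two-state transition matrix, Poisson rates and initial distribution below are illustrative placeholders, not the values fitted for Bilecik:

    ```python
    import numpy as np
    from scipy.stats import poisson

    def forecast_rate(counts, trans, rates, start):
        """Forward-filter annual event counts through a Poisson-HMM and
        return the expected count for the next year."""
        alpha = start * poisson.pmf(counts[0], rates)
        alpha /= alpha.sum()
        for c in counts[1:]:
            alpha = (alpha @ trans) * poisson.pmf(c, rates)
            alpha /= alpha.sum()
        next_state = alpha @ trans      # predictive state distribution
        return float(next_state @ rates)  # expected annual frequency
    ```

    Iterating this prediction (and propagating the state distribution forward without new data) gives multi-year forecasts of the kind reported for 2013-2047.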

  3. Seismic response of the Katmai volcanoes to the 6 December 1999 magnitude 7.0 Karluk Lake earthquake, Alaska

    USGS Publications Warehouse

    Power, J.A.; Moran, S.C.; McNutt, S.R.; Stihler, S.D.; Sanchez, J.J.

    2001-01-01

    A sudden increase in earthquake activity was observed beneath volcanoes in the Katmai area on the Alaska Peninsula immediately following the 6 December 1999 magnitude (Mw) 7.0 Karluk Lake earthquake beneath southern Kodiak Island, Alaska. The observed increase in earthquake activity consisted of small (ML < 1.3), shallow (Z < 5.0 km) events. These earthquakes were located beneath Mount Martin, Mount Mageik, Trident Volcano, and the Katmai caldera and began within the coda of the Karluk Lake mainshock. All of these earthquakes occurred in areas and magnitude ranges that are typical for the background seismicity observed in the Katmai area. Seismicity rates returned to background levels 8 to 13 hours after the Karluk Lake mainshock. The close temporal relationship with the Karluk Lake mainshock, the onset of activity within the mainshock coda, and the simultaneous increase beneath four separate volcanic centers all suggest these earthquakes were remotely triggered. Modeling of the Coulomb stress changes from the mainshock for optimally oriented faults suggests negligible change in static stress beneath the Katmai volcanoes. This result favors models that involve dynamic stresses as the mechanism for triggered seismicity at Katmai.

  4. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    SciTech Connect

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-06-20

    A new approach to determining magnitude from the displacement amplitude (A), epicenter distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or the moment magnitude of the P wave from teleseismic seismograms in the 10-60 second period range. In this research, a new approach has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near earthquakes. The duration of the high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process in near earthquakes: the P wave mixes with other waves (the S wave) before the duration ends, so it is difficult to separate out or determine the end of the P wave. From an application to 68 earthquakes recorded by station CISI, Garut, West Java, the following relationship is obtained: Mw = 0.78 log (A) + 0.83 log (Δ) + 0.69 log (t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitudes from this new approach are quite reliable, and the processing is faster, so it is useful for early warning.
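    The empirical relationship obtained above can be evaluated directly; the example values below are illustrative inputs, not data from the paper.

    ```python
    import math

    # Empirical relation from the paper (A in metres, delta in km, t in s):
    #   Mw = 0.78 log10(A) + 0.83 log10(delta) + 0.69 log10(t) + 6.46
    def moment_magnitude(a_m, delta_km, t_s):
        return (0.78 * math.log10(a_m)
                + 0.83 * math.log10(delta_km)
                + 0.69 * math.log10(t_s)
                + 6.46)
    ```

    For instance, a displacement amplitude of 10⁻⁴ m at 100 km with a 10 s high-frequency duration gives Mw = 0.78·(−4) + 0.83·2 + 0.69·1 + 6.46 = 5.69.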

  5. The magnitude of events following a strong earthquake: a pattern recognition approach applied to Italian seismicity

    NASA Astrophysics Data System (ADS)

    Gentili, Stefania; Di Giovambattista, Rita

    2016-04-01

    In this study, we propose an analysis of the earthquake clusters that occurred in Italy from 1980 to 2015. In particular, given a strong earthquake, we are interested in identifying statistical clues that forecast whether a subsequent strong earthquake will follow. We apply a pattern recognition approach to verify possible precursors of a following strong earthquake. Part of the analysis is based on observation of the cluster during the first hours/days after the first large event; the features adopted include, among others, the number of earthquakes, the radiated energy and the equivalent source area. The other part of the analysis is based on the characteristics of the first strong earthquake, such as its magnitude, depth, focal mechanism and the tectonic position of the source zone. The location of the cluster within the Italian territory is of particular interest. In order to characterize the precursors depending on the cluster type, we use decision trees as classifiers on each single precursor separately. The performance of the classification is tested by the leave-one-out method. The analysis is done using different time spans after the first strong earthquake, in order to simulate the increase of information available as time passes during the seismic clusters. The performance is assessed in terms of precision, recall and goodness of the single classifiers, and the ROC graph is shown.
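    The single-precursor evaluation scheme (one decision tree per feature, scored by leave-one-out) can be sketched as below. The feature values and labels in the test are synthetic; the paper's actual features, tree settings and scoring metrics differ.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import LeaveOneOut

    def loo_accuracy(x, y):
        """Leave-one-out accuracy of a one-feature decision stump,
        illustrating a per-precursor classifier evaluation."""
        x = np.asarray(x, dtype=float).reshape(-1, 1)
        y = np.asarray(y)
        hits = 0
        for train, test in LeaveOneOut().split(x):
            clf = DecisionTreeClassifier(max_depth=1)
            clf.fit(x[train], y[train])
            hits += int(clf.predict(x[test])[0] == y[test][0])
        return hits / len(y)
    ```

    Running this for each candidate precursor separately, and at several time spans after the first strong event, mirrors the study design of scoring precursors one at a time.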

  6. Relationship between isoseismal area and magnitude of historical earthquakes in Greece by a hybrid fuzzy neural network method

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-A.; Sokos, E.

    2012-01-01

    In this paper we suggest the use of diffusion neural networks (neural networks with intrinsic fuzzy-logic abilities) to assess the relationship between isoseismal area and earthquake magnitude for the region of Greece. It is of particular importance to study historical earthquakes, for which we often have macroseismic information in the form of isoseismals, but the data are statistically incomplete for assessing magnitudes from an isoseismal area or for training conventional artificial neural networks for magnitude estimation. Fuzzy relationships are developed and used to train a feed-forward neural network with a back-propagation algorithm to obtain the final relationships. Seismic intensity data from 24 earthquakes in Greece have been used. Special attention is paid to the incompleteness and contradictory patterns in scanty historical earthquake records. The results show that the proposed processing model is very effective and better than applying classical artificial neural networks, since the magnitude-macroseismic intensity target function has a strong nonlinearity and in most cases the macroseismic datasets are very small.

  7. Upper-plate controls on co-seismic slip in the 2011 magnitude 9.0 Tohoku-oki earthquake.

    PubMed

    Bassett, Dan; Sandwell, David T; Fialko, Yuri; Watts, Anthony B

    2016-03-03

    The March 2011 Tohoku-oki earthquake was only the second giant (moment magnitude Mw ≥ 9.0) earthquake to occur in the last 50 years and is the most recent to be recorded using modern geophysical techniques. Available data place high-resolution constraints on the kinematics of earthquake rupture, which have challenged prior knowledge about how much a fault can slip in a single earthquake and the seismic potential of a partially coupled megathrust interface. But it is not clear what physical or structural characteristics controlled either the rupture extent or the amplitude of slip in this earthquake. Here we use residual topography and gravity anomalies to constrain the geological structure of the overthrusting (upper) plate offshore northeast Japan. These data reveal an abrupt southwest-northeast-striking boundary in upper-plate structure, across which gravity modelling indicates a south-to-north increase in the density of rocks overlying the megathrust of 150-200 kilograms per cubic metre. We suggest that this boundary represents the offshore continuation of the Median Tectonic Line, which onshore juxtaposes geological terranes composed of granite batholiths (in the north) and accretionary complexes (in the south). The megathrust north of the Median Tectonic Line is interseismically locked, has a history of large earthquakes (18 with Mw > 7 since 1896) and produced peak slip exceeding 40 metres in the Tohoku-oki earthquake. In contrast, the megathrust south of this boundary has higher rates of interseismic creep, has not generated an earthquake with MJ > 7 (local magnitude estimated by the Japan Meteorological Agency) since 1923, and experienced relatively minor (if any) co-seismic slip in 2011. We propose that the structure and frictional properties of the overthrusting plate control megathrust coupling and seismogenic behaviour in northeast Japan.

  8. Time-predictable recurrence model for large earthquakes

    SciTech Connect

    Shimazaki, K.; Nakata, T.

    1980-04-01

    We present historical and geomorphological evidence of a regularity in earthquake recurrence at three different sites of plate convergence around the Japan arcs. The regularity shows that the larger an earthquake is, the longer is the following quiet period. In other words, the time interval between two successive large earthquakes is approximately proportional to the amount of coseismic displacement of the preceding earthquake, not of the following earthquake. This regularity enables us, in principle, to predict the approximate occurrence time of earthquakes. The data set includes 1) a historical document describing repeated measurements of water depth at Murotsu near the focal region of Nankaido earthquakes, 2) precise levelling and ¹⁴C dating of Holocene uplifted terraces in the southern Boso Peninsula facing the Sagami trough, and 3) similar geomorphological data on exposed Holocene coral reefs on Kikai Island along the Ryukyu arc.
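    The time-predictable model above reduces to simple arithmetic: the quiet period after an event is its coseismic slip divided by the long-term loading rate. A worked sketch with illustrative (not published) values:

    ```python
    # Time-predictable recurrence: quiet period t = u / v, where u is the
    # coseismic slip of the preceding event and v is the long-term
    # (plate-loading) slip rate. Numbers below are illustrative only.
    def quiet_period_years(coseismic_slip_m, loading_rate_m_per_yr):
        return coseismic_slip_m / loading_rate_m_per_yr
    ```

    For example, 4 m of coseismic slip at a loading rate of 4 cm/yr implies roughly a century of quiescence before the next large event.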

  9. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain sets of 48 and 20 modified Mercalli intensity values for the two aftershocks, respectively. For the dawn aftershock, we infer a Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer a Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.

  10. An updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized Mw magnitudes

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Chen, Kuei-Pao; Tsai, Yi-Ben

    2016-03-01

    The main goal of this study was to develop an updated and refined catalog of earthquakes in Taiwan (1900-2014) with homogenized Mw magnitudes that are compatible with the Harvard Mw. We hope that such a catalog of earthquakes will provide a fundamental database for definitive studies of the distribution of earthquakes in Taiwan as a function of space, time, and magnitude, as well as for realistic assessments of seismic hazards in Taiwan. In this study, for completeness and consistency, we start with a previously published catalog of earthquakes from 1900 to 2006 with homogenized Mw magnitudes. We update the earthquake data through 2014 and supplement the database with 188 additional events for the time period of 1900-1935 that were found in the literature. The additional data lower the magnitude threshold of the catalog from Mw 5.5 to 5.0. The broadband-based Harvard Mw, United States Geological Survey (USGS) M, and Broadband Array in Taiwan for Seismology (BATS) Mw are preferred in this study. Accordingly, we use empirical relationships with the Harvard Mw to transform our old converted Mw values to new converted Mw values and to transform the original BATS Mw values to converted BATS Mw values. For individual events, the adopted Mw is chosen in the following order: Harvard Mw > USGS M > converted BATS Mw > new converted Mw. Finally, we find that use of the adopted Mw removes a data gap at magnitudes greater than or equal to 5.0 in the original catalog during 1985-1991. The new catalog is now complete for Mw ≥ 5.0 and significantly improves the quality of data for definitive study of seismicity patterns, as well as for realistic assessment of seismic hazards in Taiwan.
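    The adopted-Mw selection rule stated above is a simple priority ordering, which can be sketched as a first-available lookup (the dictionary keys are illustrative names, not the catalog's actual field names):

    ```python
    # Priority order from the study:
    #   Harvard Mw > USGS M > converted BATS Mw > new converted Mw
    PRIORITY = ["harvard_mw", "usgs_m", "bats_mw_converted", "mw_converted_new"]

    def adopted_mw(event):
        """event: dict mapping magnitude-source name to a value or None.
        Returns the first available magnitude and its source."""
        for source in PRIORITY:
            mag = event.get(source)
            if mag is not None:
                return mag, source
        return None, None
    ```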

  11. User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes

    USGS Publications Warehouse

    Klein, Fred W.

    2002-01-01

    Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version and supersedes the earlier documents. It serves as a detailed user's guide to the current version running on Unix and VAX-Alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of

  12. Classification of magnitude 7 earthquakes which occurred after 1885 in Tokyo Metropolitan area

    NASA Astrophysics Data System (ADS)

    Ishibe, T.; Satake, K.; Shimazaki, K.; Nishiyama, A.

    2010-12-01

    The Tokyo metropolitan area is situated in a tectonically complex region: both the Pacific (PAC) and Philippine Sea (PHS) plates are subducting beneath the Kanto region, from the east and south, respectively. As a result, various types of earthquakes occur in this region: shallow crustal earthquakes; intraplate (slab) earthquakes within the PHS and within the PAC; and interplate earthquakes between the continental plate and the PHS, and between the PHS and the PAC. Among these, the largest are the Kanto earthquakes (M~8) occurring between the continental plate and the PHS. The average recurrence interval is estimated to be 200-400 years (Earthq. Res. Comm., 2004); hence, the urgency of the next Kanto earthquake is thought to be low considering the lapse time (~87 yrs) since the most recent Kanto earthquake in 1923. However, the urgency of the other types of earthquakes with M~7 is high; Earthq. Res. Comm. (2004) calculated the probability of occurrence during the next 30 years as 70%, based on the fact that five M~7 earthquakes (the 1894 Meiji-Tokyo, 1895 and 1921 Ibaraki-Ken-Nanbu, 1922 Uraga channel, and 1987 Chiba-Ken Toho-Oki earthquakes) have occurred since 1885. However, the types of these earthquakes are not well known, especially for the 1894 Meiji-Tokyo and 1895 Ibaraki-Ken-Nanbu earthquakes, due to the low quality of the data. Thus, it is important to classify these earthquakes into the above-described intraplate or interplate types and to estimate their occurrence frequency. Ishibe et al. (2009a, 2009b) compiled previous studies and data for these five earthquakes. In this study, we report preliminary results on the focal depth and mechanism of the 1895 and 1921 Ibaraki-Ken-Nanbu earthquakes. The epicenter of the 1895 Ibaraki-Ken-Nanbu earthquake (M 7.2; Utsu, 1979) is discussed in various studies (e.g., Usami, 1973; Ishibashi, 1975; Katsumata, 1975; Utsu, 1979). However, few studies have discussed the hypocentral depth. The hypocentral depth is estimated to be 75-85 km using S-P time at Tokyo

  13. Low Frequency (<1 Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

    We present the analysis of low-frequency (<1 Hz) simulations of historical and hypothetical earthquakes in Central Mexico, using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding of the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters that Mexico has faced in the last decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates of the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g. Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, and noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006), and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a

  14. Four Examples of Short-Term and Imminent Prediction of Earthquakes

    NASA Astrophysics Data System (ADS)

    Zeng, Zuoxun; Liu, Genshen; Wu, Dabin; Sibgatulin, Victor

    2014-05-01

    We show here four examples of short-term and imminent prediction of earthquakes in China last year: the Nima earthquake (Ms 5.2), the Minxian earthquake (Ms 6.6), the Nantou earthquake (Ms 6.7) and the Dujiangyan earthquake (Ms 4.1). Imminent prediction of the Nima earthquake (Ms 5.2): Based on a comprehensive analysis of the prediction of Victor Sibgatulin using natural electromagnetic pulse anomalies, the prediction of Song Song and Song Kefu using observation of a precursory halo, and an observation of the locations of a degasification of the earth in Naqu, Tibet by Zeng Zuoxun himself, the first author predicted an earthquake of around Ms 6 within 10 days in the area of the degasification point (31.5N, 89.0E) at 0:54 on May 8th, 2013. He supplied another degasification point (31N, 86E) for the epicenter prediction at 8:34 of the same day. At 18:54:30 on May 15th, 2013, an earthquake of Ms 5.2 occurred in Nima County, Naqu, China. Imminent prediction of the Minxian earthquake (Ms 6.6): At 7:45 on July 22nd, 2013, an earthquake of magnitude Ms 6.6 occurred at the border between Minxian and Zhangxian of Dingxi City (34.5N, 104.2E), Gansu province. We review the imminent prediction process and its basis for this earthquake using the fingerprint method. Anomalous component-time curves for 9 or 15 channels can be output from the SW monitor for earthquake precursors; these components include geomagnetism, geoelectricity, crustal stresses, resonance and crustal inclination. When the time axis is compressed, the output curves become different geometric images. The precursor images differ for earthquakes in different regions, and alike or similar images correspond to earthquakes in a certain region. According to seven years of observation of the precursor images and their corresponding earthquakes, we usually obtain the fingerprint 6 days before the corresponding earthquake. 
The magnitude prediction requires comparison between the amplitudes of the fingerprints from the same

  15. Numerical Shake Prediction for Earthquake Early Warning: More Precise and Rapid Prediction even for Deviated Distribution of Ground Shaking of M6-class Earthquakes

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2015-12-01

    In many of the present EEW systems, the hypocenter and magnitude are determined quickly, and then the strengths of ground motion are predicted from the hypocentral distance and magnitude using a ground motion prediction equation (GMPE), which usually leads to the prediction of a concentric distribution. However, actual ground shaking is not always concentric, even when site amplification is corrected. At a common site, the strength of shaking may differ greatly among earthquakes even when their hypocentral distances and magnitudes are almost the same; in some cases, PGA differs by more than a factor of 10, which leads to imprecise prediction in EEW. Recently, the Numerical Shake Prediction method was proposed (Hoshiba and Aoki, 2015), in which the present ongoing wavefield of ground shaking is estimated using a data assimilation technique, and the future wavefield is then predicted based on the physics of wave propagation. Information on the hypocentral location and magnitude is not required in this method. Because the future is predicted from the present condition, the method can address the issue of a non-concentric distribution: once a deviated distribution is actually observed in the ongoing wavefield, the future distribution is predicted accordingly to be non-concentric. We show examples of M6-class earthquakes that occurred in central Japan, in which the strengths of shaking were observed to be non-concentrically distributed, and we present their predictions using the Numerical Shake Prediction method. The deviated distribution may be explained by an inhomogeneous distribution of attenuation. Even without an attenuation structure, the issue of a non-concentric distribution can be addressed to some extent once the deviated distribution is actually observed in the ongoing wavefield; if an attenuation structure is introduced, the deviation can be predicted before it is actually observed. The information of the attenuation structure thus leads to more precise and rapid prediction in the Numerical Shake Prediction method for EEW.
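    The core idea (predict the future wavefield from the currently observed one, instead of from hypocenter + magnitude + GMPE) can be illustrated with a deliberately simplified toy: a 1-D field advected at a known speed. This is only a caricature of the method; the real system assimilates dense observations and propagates them with full wave physics.

    ```python
    import numpy as np

    def predict_wavefield(current, steps, c=1):
        """Toy forecast: advect the currently observed 1-D wavefield
        forward by `steps` time steps at speed `c` cells/step."""
        field = np.asarray(current, dtype=float)
        for _ in range(steps):
            field = np.roll(field, c)  # propagate the ongoing shaking
        return field
    ```

    Because the forecast starts from the observed field, any non-concentric (here, asymmetric) amplitude pattern in the input is carried forward automatically, which is the property the abstract emphasizes.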

  16. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, although its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the times and slips of these events are predicted quite well by the fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed-recurrence and fixed-slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences at Parkfield, California. This correlation is not found in other regions, and the sequences with this correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit it.
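The model comparison in this abstract can be sketched numerically: a fixed-recurrence model predicts each inter-event time as the mean of the preceding intervals, while the time-predictable model predicts it as the preceding slip divided by a loading rate. The sequence below is synthetic and the loading rate is an assumed value, purely to show the shape of the comparison, not the paper's data.

```python
# Compare prediction errors of a fixed-recurrence model and the
# time-predictable model on a synthetic repeating-earthquake sequence.
import statistics

# Hypothetical inter-event times (years) and per-event slips (cm)
times = [24.0, 26.5, 23.8, 25.2, 24.9, 26.1]
slips = [48.0, 53.0, 47.5, 50.5, 49.8, 52.2]

def fixed_recurrence_errors(times):
    """Predict each interval as the mean of all preceding intervals."""
    errs = []
    for i in range(1, len(times)):
        pred = statistics.mean(times[:i])
        errs.append(abs(times[i] - pred))
    return errs

def time_predictable_errors(times, slips, loading_rate=2.0):
    """Time-predictable model: next interval = preceding slip / loading rate."""
    errs = []
    for i in range(1, len(times)):
        pred = slips[i - 1] / loading_rate
        errs.append(abs(times[i] - pred))
    return errs

fr = statistics.mean(fixed_recurrence_errors(times))
tp = statistics.mean(time_predictable_errors(times, slips))
print(f"mean abs error, fixed recurrence:  {fr:.2f} yr")
print(f"mean abs error, time-predictable:  {tp:.2f} yr")
```

Running the same comparison on real catalog intervals and moments is how one would score the competing models event by event.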

  17. 76 FR 69761 - National Earthquake Prediction Evaluation Council (NEPEC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ....S. Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey. ACTION: Notice of Meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake... Government. The Council shall advise the Director of the U.S. Geological Survey on proposed...

  18. Advance Prediction of the March 11, 2011 Great East Japan Earthquake: A Missed Opportunity for Disaster Preparedness

    NASA Astrophysics Data System (ADS)

    Davis, C. A.; Keilis-Borok, V. I.; Kossobokov, V. G.; Soloviev, A.

    2012-12-01

    There was a missed opportunity for implementing important disaster preparedness measures following an earthquake prediction that was announced as an alarm in mid-2001. This intermediate-term middle-range prediction initiated a chain of alarms that successfully detected the time, region, and magnitude range of the magnitude 9.0 March 11, 2011 Great East Japan Earthquake. The prediction chain was made using the M8 algorithm and is the latest of many predictions tested worldwide for more than 25 years, the results of which show at least a 70% success rate. The detection could have been used to implement measures and improve earthquake preparedness in advance; unfortunately this was not done, partly because of the prediction's limited distribution and partly because existing methods for using intermediate-term predictions to make decisions were not applied. The resulting earthquake and induced tsunami caused tremendous devastation to northeast Japan. Methods that were known in advance of the prediction, and further advanced during the prediction timeframe, are presented in a scenario describing how the 2001 prediction might have been used to reduce significant damage, including damage to the Fukushima nuclear power plant, and to show that prudent, cost-effective actions can be taken if the prediction certainty is known, even if it is not high. The purpose of this presentation is to show how prediction information can be used strategically to enhance disaster preparedness and reduce future impacts from the world's largest earthquakes.

  19. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck the Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save life in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  20. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  1. Analysis of Italian Earthquake catalogs in the context of intermediate-term prediction problem

    NASA Astrophysics Data System (ADS)

    Romashkova, Leontina; Peresan, Antonella

    2013-06-01

    We perform a comparative analysis of regional and global earthquake catalogs currently available for the territory of Italy. We consider: (a) the instrumental seismic catalogs provided by the Istituto Nazionale di Geofisica e Vulcanologia, Roma (INGV) for the earthquake forecasting experiment in Italy within the Collaboratory for the Study of Earthquake Predictability (CSEP); (b) the Global Hypocenters' Data provided by the USGS/NEIC, currently used in the real-time earthquake prediction experiment by the CN and M8S algorithms in Italy; and (c) the seismological Bulletin provided by the International Seismological Centre (ISC). We discuss the advantages and shortcomings of these catalogs in the context of the intermediate-term middle-range earthquake prediction problem in Italy, including the possibility of their combined or integrated use. Magnitude errors in a catalog can distort success-to-failure scoring statistics and eventually falsify testing results; therefore, the analysis of systematic and random magnitude errors presented in the Appendices can be of significance in its own right.

  2. Giant Seismites and Megablock Uplift in the East African Rift: Evidence for Late Pleistocene Large Magnitude Earthquakes

    PubMed Central

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M.

    2015-01-01

    In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic ‘megablock complex’ that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m-wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions. PMID:26042601

  3. Giant seismites and megablock uplift in the East African Rift: evidence for Late Pleistocene large magnitude earthquakes.

    PubMed

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M

    2015-01-01

    In lieu of comprehensive instrumental seismic monitoring, short historical records, and limited fault trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~ 28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable, clastic 'megablock complex' that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m-wide. Documentation of these seismites expands the database of seismogenic sedimentary structures, and attests to large magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and other tectonically active, developing regions.

  4. Preliminary Results on Earthquake Recurrence Intervals, Rupture Segmentation, and Potential Earthquake Moment Magnitudes along the Tahoe-Sierra Frontal Fault Zone, Lake Tahoe, California

    NASA Astrophysics Data System (ADS)

    Howle, J.; Bawden, G. W.; Schweickert, R. A.; Hunter, L. E.; Rose, R.

    2012-12-01

    Utilizing high-resolution bare-earth LiDAR topography, field observations, and earlier results of Howle et al. (2012), we estimate latest Pleistocene/Holocene earthquake-recurrence intervals, propose scenarios for earthquake-rupture segmentation, and estimate potential earthquake moment magnitudes for the Tahoe-Sierra frontal fault zone (TSFFZ), west of Lake Tahoe, California. We have developed a new technique to estimate the vertical separation for the most recent and the previous ground-rupturing earthquakes at five sites along the Echo Peak and Mt. Tallac segments of the TSFFZ. At these sites are fault scarps with two bevels separated by an inflection point (compound fault scarps), indicating that the cumulative vertical separation (VS) across the scarp resulted from two events. This technique, modified from the modeling methods of Howle et al. (2012), uses the far-field plunge of the best-fit footwall vector and the fault-scarp morphology from high-resolution LiDAR profiles to estimate the per-event VS. From these data, we conclude that the adjacent and overlapping Echo Peak and Mt. Tallac segments have ruptured coseismically twice during the Holocene. The right-stepping, en echelon range-front segments of the TSFFZ show progressively greater VS rates and shorter earthquake-recurrence intervals from southeast to northwest. Our preliminary estimates suggest latest Pleistocene/Holocene earthquake-recurrence intervals of 4.8 ± 0.9 × 10³ years for a coseismic rupture of the Echo Peak and Mt. Tallac segments, located at the southeastern end of the TSFFZ. For the Rubicon Peak segment, northwest of the Echo Peak and Mt. Tallac segments, our preliminary estimate of the maximum earthquake-recurrence interval is 2.8 ± 1.0 × 10³ years, based on data from two sites. The correspondence between high VS rates and short recurrence intervals suggests that earthquake sequences along the TSFFZ may initiate in the northwest part of the zone and then occur to the southeast with a lower

  5. Earthquake prediction: The interaction of public policy and science

    USGS Publications Warehouse

    Jones, L.M.

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.

  6. Earthquake prediction: the interaction of public policy and science.

    PubMed Central

    Jones, L M

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656

  7. Earthquake prediction: the interaction of public policy and science.

    PubMed

    Jones, L M

    1996-04-30

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.

  8. Dynamic triggering of low magnitude earthquakes in the Middle American Subduction Zone

    NASA Astrophysics Data System (ADS)

    Escudero, C. R.; Velasco, A. A.

    2010-12-01

    We analyze global and Middle American Subduction Zone (MASZ) seismicity from 1998 to 2008 to quantify the effects of transient stresses at teleseismic distances. We use the Bulletin of the International Seismological Centre Catalog (ISCCD) published by the Incorporated Research Institutions for Seismology (IRIS). To identify MASZ seismicity changes due to distant, large (Mw > 7) earthquakes, we first identify local earthquakes that occurred before and after the mainshocks. We then group the local earthquakes within a cluster radius of 75 to 200 km. We obtain statistics based on characteristics of both the mainshocks and the local earthquake clusters, such as local cluster-mainshock azimuth, mainshock focal mechanism, and the position of the local earthquake clusters within the MASZ. Because of lateral variations in the dip of the subducted oceanic plate, we divide the Mexican subduction zone into four segments. We then apply the Paired Samples Statistical Test (PSST) to the sorted data to identify increases, decreases, or no change in the local seismicity associated with distant large earthquakes. We identify dynamic triggering in all MASZ segments produced by large earthquakes arriving from specific azimuths, as well as decreases in seismicity in some cases. We find no dependence of the seismicity changes on the mainshock focal mechanism.
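The paired-samples comparison described here can be sketched as follows: for each cluster, count local events in equal windows before and after a distant mainshock and test whether the mean paired difference departs from zero. The function name `paired_t`, the counts, and the simple t statistic are illustrative assumptions, not the authors' actual PSST implementation.

```python
# Paired-samples t statistic on per-cluster event counts before/after
# distant mainshocks. Large |t| suggests a seismicity-rate change.
import math
import statistics

def paired_t(before, after):
    """t statistic for paired counts (requires nonconstant differences)."""
    diffs = [a - b for a, b in zip(after, before)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return mean / (sd / math.sqrt(len(diffs)))

# Hypothetical counts of local events per cluster
before = [5, 7, 6, 8, 4, 9]
after = [9, 10, 11, 12, 6, 13]
t = paired_t(before, after)
print(f"t = {t:.2f}")
```

In practice the statistic would be compared against the t distribution with n − 1 degrees of freedom to decide significance at a chosen level.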

  9. Predictability of Great Earthquakes: The 25 April 2015 M7.9 Gorkha (Nepal)

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2015-12-01

    Understanding of the seismic process in terms of the non-linear dynamics of a hierarchical system of blocks and faults and deterministic chaos has already led to reproducible intermediate-term middle-range prediction of great and significant earthquakes. The technique, based on monitoring characteristics of seismic statistics in an area proportional to the source size of the incipient earthquake, is confirmed at a confidence level above 99% by the statistics of Global Testing in forward application from 1992 to the present. The semi-annual predictions determined for the next half-year by the M8 algorithm, aimed (i) at magnitude 8+ earthquakes in 262 circles of investigation (CIs), each of 667-km radius, and (ii) at magnitude 7.5+ earthquakes in 180 CIs, each of 427-km radius, are communicated each January and July to the Global Test Observers (about 150 today). The pre-fixed locations of the CIs cover all seismic regions where the M8 algorithm could run in its original version, which requires an annual activity rate of 16 or more main shocks. According to the predictions released in January 2015 for the first half of 2015, the 25 April 2015 Nepal MwGCMT = 7.9 earthquake falls outside the test area for M7.5+, while its epicenter is within the accuracy limits of the alarm area for M8.0+ that spreads along 1300 km of the Himalayas. We note that (i) the earthquake confirms the identification of areas prone to strong earthquakes in the Himalayas by pattern recognition (Bhatia et al. 1992) and (ii) it would have been predicted by the modified version of the M8 algorithm aimed at M7.5+. The modified version is adjusted to a low level of earthquake detection, about 10 main shocks per year, and was tested successfully by Mojarab et al. (2015) in application to recent earthquakes in Eastern Anatolia (the 23 October 2011 M7.3 Van earthquake) and the Iranian Plateau (the 16 April 2013 M7.7 Saravan and the 24 September 2013 M7.7 Awaran earthquakes).
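The geometric part of scoring such an alarm, checking whether an epicenter falls inside a circle of investigation of fixed radius, reduces to a great-circle distance test. The sketch below uses a spherical-Earth haversine formula; the epicenter is the approximate Gorkha location, while the CI centers are hypothetical, not the actual pre-fixed M8 circles.

```python
# Does an epicenter fall inside a circle of investigation (CI)?
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance on a spherical Earth of radius 6371 km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def in_circle(epicenter, ci_center, radius_km):
    return haversine_km(*epicenter, *ci_center) <= radius_km

gorkha = (28.23, 84.73)          # approximate 25 April 2015 epicenter
ci_near = (28.0, 82.0)           # hypothetical Himalayan CI center
ci_far = (33.0, 76.0)            # hypothetical western-Himalaya CI center
print(in_circle(gorkha, ci_near, 667.0))   # 667 km: M8.0+ CI radius
print(in_circle(gorkha, ci_far, 667.0))
```

The same test run over all 262 pre-fixed circles is what determines whether an event confirms or misses a semi-annual alarm.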

  10. Multiple asperity model for earthquake prediction

    USGS Publications Warehouse

    Wyss, M.; Johnston, A.C.; Klein, F.W.

    1981-01-01

    Large earthquakes often occur as multiple ruptures reflecting strong variations of stress level along faults. Dense instrument networks with which the volcano Kilauea is monitored provided detailed data on changes of seismic velocity, strain accumulation and earthquake occurrence rate before the 1975 Hawaii 7.2-mag earthquake. During the ~4 yr of preparation time the mainshock source volume had separated into crustal volumes of high stress levels embedded in a larger low-stress volume, showing respectively high- and low-stress precursory anomalies. © 1981 Nature Publishing Group.

  11. Geotechnical effects of the 2015 magnitude 7.8 Gorkha, Nepal, earthquake and aftershocks

    USGS Publications Warehouse

    Moss, Robb E. S.; Thompson, Eric; Kieffer, D Scott; Tiwari, Binod; Hashash, Youssef M A; Acharya, Indra; Adhikari, Basanta; Asimaki, Domniki; Clahan, Kevin B.; Collins, Brian D.; Dahal, Sachindra; Jibson, Randall W.; Khadka, Diwakar; Macdonald, Amy; Madugo, Chris L M; Mason, H Benjamin; Pehlivan, Menzer; Rayamajhi, Deepak; Uprety, Sital

    2015-01-01

    This article summarizes the geotechnical effects of the 25 April 2015 M 7.8 Gorkha, Nepal, earthquake and aftershocks, as documented by a reconnaissance team that undertook a broad engineering and scientific assessment of the damage and collected perishable data for future analysis. Brief descriptions are provided of ground shaking, surface fault rupture, landsliding, soil failure, and infrastructure performance. The goal of this reconnaissance effort, led by Geotechnical Extreme Events Reconnaissance, is to learn from earthquakes and mitigate hazards in future earthquakes.

  12. Magnitude-based discrimination of man-made seismic events from naturally occurring earthquakes in Utah, USA

    NASA Astrophysics Data System (ADS)

    Koper, Keith D.; Pechmann, James C.; Burlacu, Relu; Pankow, Kristine L.; Stein, Jared; Hale, J. Mark; Roberson, Paul; McCarter, Michael K.

    2016-10-01

    We investigate using the difference between local (ML) and coda/duration (MC) magnitude to discriminate man-made seismic events from naturally occurring tectonic earthquakes in and around Utah. For 6846 well-located earthquakes in the Utah region, we find that ML-MC is on average 0.44 magnitude units smaller for mining-induced seismicity (MIS) than for tectonic seismicity (TS). Our interpretation of this observation is that MIS occurs within near-surface low-velocity layers that act as a waveguide and preferentially increase coda duration relative to peak amplitude, while the vast majority of TS occurs beneath the near-surface waveguide. A second data set of 3723 confirmed or probable explosions in the Utah region also has significantly lower ML-MC values than TS, likely for the same reason as the MIS. These observations suggest that ML-MC is useful as a depth indicator and could discriminate small explosions and mining-induced earthquakes from deeper, naturally occurring earthquakes at local-to-regional distances.

  13. Earthquake ground-motion prediction equations for eastern North America

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    2006-01-01

    New earthquake ground-motion relations for hard-rock and soil sites in eastern North America (ENA), including estimates of their aleatory uncertainty (variability), have been developed based on a stochastic finite-fault model. The model incorporates new information obtained from ENA seismographic data gathered over the past 10 years, including three-component broadband data that provide new information on ENA source and path effects. Our new prediction equations are similar to the previous ground-motion prediction equations of Atkinson and Boore (1995), which were based on a stochastic point-source model. The main difference is that high-frequency amplitudes (f ≥ 5 Hz) are less than previously predicted (by about a factor of 1.6 within 100 km), because of a slightly lower average stress parameter (140 bars versus 180 bars) and a steeper near-source attenuation. At frequencies less than 5 Hz, the predicted ground motions from the new equations are generally within 25% of those predicted by Atkinson and Boore (1995). The prediction equations agree well with available ENA ground-motion data as evidenced by near-zero average residuals (within a factor of 1.2) for all frequencies, and the lack of any significant residual trends with distance. However, there is a tendency to positive residuals for moderate events at high frequencies in the distance range from 30 to 100 km (by as much as a factor of 2). This indicates epistemic uncertainty in the prediction model. The positive residuals for moderate events at < 100 km could be eliminated by an increased stress parameter, at the cost of producing negative residuals in other magnitude-distance ranges; adjustment factors to the equations are provided that may be used to model this effect.
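A GMPE of this kind is, functionally, a regression giving log ground motion as a function of magnitude and distance. The sketch below shows a generic functional form with an effective distance that saturates near the source via a pseudo-depth; the coefficients are made-up round numbers for illustration, not the Atkinson and Boore (2006) coefficients.

```python
# Generic GMPE functional form:
#   ln Y = c0 + c1*M + c2*ln(R_eff) + c3*R_eff,  R_eff = sqrt(r^2 + h^2)
# (geometric spreading via the ln term, anelastic attenuation via c3).
import math

def gmpe_ln_y(mag, r_hypo_km, c=(-1.5, 1.2, -1.0, -0.004), h_km=10.0):
    """Illustrative ln(ground motion) from magnitude and hypocentral distance."""
    c0, c1, c2, c3 = c
    r_eff = math.sqrt(r_hypo_km ** 2 + h_km ** 2)
    return c0 + c1 * mag + c2 * math.log(r_eff) + c3 * r_eff

# Motion decays with distance and grows with magnitude
print(gmpe_ln_y(6.0, 10.0), gmpe_ln_y(6.0, 100.0), gmpe_ln_y(7.0, 100.0))
```

Residual analysis of the kind described in the abstract amounts to comparing observed ln Y against this predicted value across magnitude-distance bins.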

  14. Earthquake prediction on boundaries of the Arabian Plate: premonitory chains of small earthquakes

    NASA Astrophysics Data System (ADS)

    Yaniv, M.; Agnon, A.; Shebalin, P.

    2009-12-01

    The RTP method is a probabilistic prediction method for strong earthquakes (Keilis-Borok et al., 2004). Based on simple pattern-recognition algorithms and tuned on historical seismic catalogs, RTP has been running as a prediction-in-advance experiment since 1997. We present a similar system aimed at improving the algorithm and tuning it to regional catalogs, focusing on the Arabian Plate. RTP is based on recognition of "earthquake chains", microseismic patterns that capture a rise in activity and in correlation range. A chain is defined as a closed set of "neighbor events" with epicenters and times of occurrence separated by less than a spatial parameter R0 and a temporal parameter τ, respectively. The seismic catalog can be viewed as a non-directional graph, with earthquakes as vertices, neighbor pairs as edges, and chains as connected components of the graph. Various algorithms were tried, based on different concepts: some use graph-theory concepts, others focus on the data structure of the catalog. All algorithms aim at recognizing neighboring pairs of events and combining the pairs into chains. They rely on a number of parameters: the minimum length L0 for a valid chain; weights for the spatial and temporal thresholds; the target magnitude (the minimum magnitude we aim to predict); and the cutoff value (the minimum magnitude to be taken into account). The output of each algorithm is a set of chains, filtered for chains longer than L0. The 2-D parameter space was mapped: for every pair of R0 and τ, three characteristics were calculated: the number of chains found, the mean number of events in a chain, and the mean size of chains (the maximum distance between events in a chain). Each of these plots as a surface, showing the dependence on the parameters R0 and τ. The most recent version of the algorithm was run on the NEIC catalog. It recognizes three chains longer than 15 events, with target events, as shown in the figure. In the GII catalog only two chains are found. Both start with a
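The graph view described above maps directly onto a connected-components computation: events are vertices, pairs within R0 and τ are edges, and chains are the components of length at least L0. The sketch below uses union-find and a flat-Earth distance approximation for brevity; the event list and parameter values are illustrative, not catalog data.

```python
# Chains as connected components: events are (t_days, lat, lon) tuples.
import math

def dist_km(e1, e2):
    """Approximate epicentral distance for nearby events (flat Earth)."""
    dlat = (e2[1] - e1[1]) * 111.2
    dlon = (e2[2] - e1[2]) * 111.2 * math.cos(math.radians((e1[1] + e2[1]) / 2))
    return math.hypot(dlat, dlon)

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def chains(events, r0_km, tau_days, l0):
    """Return components (index lists) of the neighbor graph with >= l0 events."""
    parent = list(range(len(events)))
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            if (abs(events[j][0] - events[i][0]) < tau_days
                    and dist_km(events[i], events[j]) < r0_km):
                parent[find(parent, i)] = find(parent, j)   # union
    comps = {}
    for i in range(len(events)):
        comps.setdefault(find(parent, i), []).append(i)
    return [c for c in comps.values() if len(c) >= l0]

# Three nearby events in space-time plus one distant outlier
events = [(0.0, 31.0, 35.2), (5.0, 31.1, 35.3), (9.0, 31.2, 35.2),
          (6.0, 35.0, 40.0)]
print(chains(events, r0_km=50.0, tau_days=30.0, l0=3))
```

Mapping the parameter space then amounts to re-running `chains` over a grid of (R0, τ) pairs and tabulating the three characteristics for each.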

  15. Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.

    2013-01-01

    Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible origin to generate rare moderate-magnitude VTs at Kīlauea by reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.

  16. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance, and started the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the official CSEP suite of tests for evaluating forecast performance. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. In the 1-day testing class, all models passed all the CSEP evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models: all models performed well in forecasting magnitude, but when many earthquakes occurred at a single spot, the observed spatial distribution was hardly consistent with most models. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region, and the testing centre is improving the evaluation system for the 1-day class so that forecasting and testing results are finished within one day. The special issue of 1st part titled Earthquake Forecast
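One of the standard CSEP consistency checks, the number test (N-test), compares the observed earthquake count in a testing region and period with a model's forecast rate under a Poisson assumption. The sketch below follows that common formulation; the quantile names and example numbers are illustrative, not the Japanese testing centre's code.

```python
# Poisson N-test sketch: delta1 = P(N >= observed), delta2 = P(N <= observed).
# Very small delta1 flags under-prediction; very small delta2, over-prediction.
import math

def poisson_cdf(k, lam):
    """P(N <= k) for a Poisson variable with mean lam."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def n_test(forecast_rate, observed):
    delta1 = 1.0 - (poisson_cdf(observed - 1, forecast_rate)
                    if observed > 0 else 0.0)
    delta2 = poisson_cdf(observed, forecast_rate)
    return delta1, delta2

d1, d2 = n_test(forecast_rate=12.0, observed=18)
print(f"delta1 = {d1:.3f}, delta2 = {d2:.3f}")
```

A model passes this test at, say, the 0.05 level when neither quantile falls below 0.025; the full CSEP suite adds spatial, magnitude, and likelihood tests on top.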

  17. Estimating the magnitude of prediction uncertainties for the APLE model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  18. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Rizza, M.; Ritz, J.-F.; Braucher, R.; Vassallo, R.; Prentice, C.; Mahan, S.; McGill, S.; Chauvet, A.; Marco, S.; Todbileg, M.; Demberel, S.; Bourles, D.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. From west to east, these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans, particularly well preserved in the arid environment of the Gobi region, allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr-1 along the WIB and EIB segments and ~0.5 mm yr-1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78-7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ~2500-5200 yr for past

  19. Incorporating Love- and Rayleigh-wave magnitudes, unequal earthquake and explosion variance assumptions and interstation complexity for improved event screening

    SciTech Connect

    Anderson, Dale N; Bonner, Jessie L; Stroujkova, Anastasia; Shumway, Robert

    2009-01-01

    Our objective is to improve seismic event screening using the properties of surface waves. We are accomplishing this through (1) the development of a Love-wave magnitude formula that is complementary to the Russell (2006) formula for Rayleigh waves and (2) quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. We have applied the Ms(VMAX) analysis (Bonner et al., 2006) using both Love and Rayleigh waves to events in the Middle East and on the Korean Peninsula. For the Middle East dataset, consisting of approximately 100 events, the Love-wave Ms(VMAX) is greater than the Rayleigh-wave Ms(VMAX) estimated for individual stations for the majority of the events and azimuths, with the exception of the measurements for the smaller events from European stations to the northeast. It is unclear whether these smaller events suffer from magnitude bias for the Love waves or whether the paths, which include the Caspian and Mediterranean, have variable attenuation for Love and Rayleigh waves. For the Korean Peninsula, we have estimated Rayleigh- and Love-wave magnitudes for 31 earthquakes and two nuclear explosions, including the 25 May 2009 event. For 25 of the earthquakes, the network-averaged Love-wave magnitude is larger than the Rayleigh-wave estimate. For the 2009 nuclear explosion, the Love-wave Ms(VMAX) was 3.1 while the Rayleigh-wave magnitude was 3.6. We are also exploiting the observation that the variances of Ms estimates differ significantly between earthquake and explosion populations. We have considered two possible methods for incorporating unequal variances into the discrimination problem and compared the performance of various approaches on a population of 73 western United States earthquakes and 131 Nevada Test Site explosions. One approach proposes replacing the Ms component by Ms + a*σ, where σ denotes the interstation standard deviation obtained from the

  20. New models for frequency content prediction of earthquake records based on Iranian ground-motion data

    NASA Astrophysics Data System (ADS)

    Yaghmaei-Sabegh, Saman

    2015-10-01

    This paper presents the development of new and simple empirical models for predicting the frequency content of ground-motion records, to resolve the limitations on the usable magnitude range assumed in previous studies. Three period values are used in the analysis to describe the frequency content of earthquake ground motions: the average spectral period (Tavg), the mean period (Tm), and the smoothed spectral predominant period (T0). The proposed models predict these scalar indicators as functions of magnitude, closest site-to-source distance and local site condition. Three site classes (rock, stiff soil, and soft soil) have been considered in the analysis. The results of the proposed relationships have been compared with those of other published models. It has been found that the resulting regression equations can be used to predict scalar frequency content estimators over a wide range of magnitudes, including magnitudes below 5.5.
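
    Of the three indicators above, the mean period Tm has a compact standard definition (Rathje et al.): Tm = Σ(Ci²/fi) / Σ(Ci²), summing squared Fourier amplitudes Ci over roughly 0.25-20 Hz. A minimal sketch of that computation on a synthetic record (the band limits follow that definition; the test signal is illustrative and not from the paper):

```python
import numpy as np

def mean_period(accel, dt, fmin=0.25, fmax=20.0):
    """Mean period T_m = sum(C_i**2 / f_i) / sum(C_i**2), summed over
    Fourier amplitudes C_i in the band [fmin, fmax] Hz."""
    n = len(accel)
    freqs = np.fft.rfftfreq(n, d=dt)
    amps = np.abs(np.fft.rfft(accel))
    band = (freqs >= fmin) & (freqs <= fmax)
    c2 = amps[band] ** 2
    return float(np.sum(c2 / freqs[band]) / np.sum(c2))

# A pure 2 Hz sinusoid should yield T_m close to its period, 0.5 s.
dt = 0.01
t = np.arange(0, 20, dt)
tm = mean_period(np.sin(2 * np.pi * 2.0 * t), dt)
```

Because Tm weights each frequency by spectral power, it shifts toward longer periods for larger, more distant events, which is what makes it a useful regression target against magnitude and distance.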

  1. Scientific goals of the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Thatcher, W.

    1988-01-01

    Several unique circumstances of the Parkfield experiment provide unprecedented opportunities for significant advances in understanding the mechanics of earthquakes. To our knowledge, there is no other seismic zone anywhere where the time, place, and magnitude of an impending earthquake are specified as precisely. Moreover, the epicentral region is located on continental crust, is readily accessible, and can support a range of dense monitoring networks that are sited either on or very close to the expected rupture surface. As a result, the networks located at Parkfield are several orders of magnitude more sensitive than any previously deployed for monitoring earthquake precursors (preearthquake changes in strain, seismicity, and other geophysical parameters). In this respect the design of the Parkfield experiment resembles the rationale for constructing a new, more powerful nuclear particle accelerator: in both cases increased capabilities will test existing theories, reveal new phenomena, and suggest new research directions.

  2. Strong ground motion prediction for southwestern China from small earthquake records

    NASA Astrophysics Data System (ADS)

    Tao, Z. R.; Tao, X. X.; Cui, A. P.

    2015-09-01

    For regions lacking strong ground motion records, a method is developed to predict strong ground motion from small earthquake records of local broadband digital earthquake networks. The Sichuan and Yunnan regions, located in southwestern China, are selected as targets. Five regional source and crustal medium parameters are inverted using a micro-Genetic Algorithm. These parameters are adopted to predict strong ground motion for moment magnitudes (Mw) 5.0, 6.0 and 7.0. The results are compared with strong ground motion data; most of the predictions pass through the cluster of data points well, except for Mw 7.0 in the Sichuan region, which shows distinctly slow attenuation. For further application, this result is adopted in probabilistic seismic hazard assessment (PSHA) and in near-field strong ground motion synthesis of the Wenchuan Earthquake.

  3. A Magnitude 7.1 Earthquake in the Tacoma Fault Zone-A Plausible Scenario for the Southern Puget Sound Region, Washington

    USGS Publications Warehouse

    Gomberg, Joan; Sherrod, Brian; Weaver, Craig; Frankel, Art

    2010-01-01

    The U.S. Geological Survey and cooperating scientists have recently assessed the effects of a magnitude 7.1 earthquake on the Tacoma Fault Zone in Pierce County, Washington. A quake of comparable magnitude struck the southern Puget Sound region about 1,100 years ago, and similar earthquakes are almost certain to occur in the future. The region is now home to hundreds of thousands of people, who would be at risk from the shaking, liquefaction, landsliding, and tsunamis caused by such an earthquake. The modeled effects of this scenario earthquake will help emergency planners and residents of the region prepare for future quakes.

  4. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred after 320 cal yr BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely prehistoric event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic hazard estimates of the maximum-magnitude background earthquake (an earthquake not associated with a known fault) for normal faults in the Intermountain West.

  5. The marine-geological fingerprint of the 2011 Magnitude 9 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Strasser, M.; Ikehara, K.; Usami, K.; Kanamatsu, T.; McHugh, C. M.

    2015-12-01

    The 2011 Tohoku-oki earthquake was the first great subduction zone earthquake for which the entire sequence was recorded by offshore geophysical, seismological and geodetic instruments, and for which direct observation of sediment re-suspension and re-deposition was documented across the entire margin. Furthermore, the resulting tsunami and the subsequent tragic incident at the Fukushima nuclear power station introduced short-lived radionuclides that can be used for tracer experiments in natural offshore sedimentary systems. Here we present a summary of present knowledge of the 2011 event beds in the offshore environment and integrate data from offshore instruments with sedimentological, geochemical and physical property data on core samples. We report various types of event deposits resulting from earthquake-triggered submarine landslides, downslope sediment transport by turbidity currents, surficial sediment remobilization from the agitation and resuspension of unconsolidated surface sediments by the earthquake ground motion, as well as tsunami-induced sediment transport from shallow waters to the deep sea. The rapidly growing data set from offshore Tohoku further allows discussion of (i) what we can learn from this well-documented event for submarine paleoseismology in general and (ii) the potential of using the geological record of the Japan Trench to reconstruct a long-term history of great subduction zone earthquakes.

  6. A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.

    2015-12-01

    Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of the seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insight into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated against observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
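
    The trilinear geometrical-spreading form described above can be sketched as a piecewise-linear function of log distance: a steep near-source decay, a flat "Moho bounce" leg, and a gentler far-field decay. The hinge distances and slopes below are placeholders for illustration; the study derives its own values from regional data, which are not given here:

```python
import math

def trilinear_spreading(r, r1=50.0, r2=140.0, b1=-1.3, b2=0.0, b3=-0.5):
    """Trilinear geometrical-spreading term (log10 amplitude decay):
    slope b1 out to r1 km, b2 between r1 and r2 (flat Moho-bounce leg),
    b3 beyond r2. Hinges and slopes here are illustrative only."""
    if r <= r1:
        return b1 * math.log10(r)
    if r <= r2:
        return b1 * math.log10(r1) + b2 * math.log10(r / r1)
    return (b1 * math.log10(r1) + b2 * math.log10(r2 / r1)
            + b3 * math.log10(r / r2))
```

Chaining each segment onto the value at the previous hinge, as above, keeps the predicted amplitude continuous across the hinge distances, which regression over a merged earthquake/blast dataset requires.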

  7. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse

    ,

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflicts in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  8. Earthquake prediction rumors can help in building earthquake awareness: the case of May the 11th 2011 in Rome (Italy)

    NASA Astrophysics Data System (ADS)

    Amato, A.; Arcoraci, L.; Casarotti, E.; Cultrera, G.; Di Stefano, R.; Margheriti, L.; Nostro, C.; Selvaggi, G.; May-11 Team

    2012-04-01

    Banner headlines in an Italian newspaper on May 11, 2011 read: "Absence boom in offices: the urban legend in Rome becomes psychosis". This was the effect of a prediction of a large-magnitude earthquake in Rome on May 11, 2011. The prediction was never officially released, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this increased the credibility of the earthquake prediction. Given the echo of this prediction, INGV decided to organize an Open Day at its headquarters in Rome on May 11 (the same day the earthquake was predicted to happen) to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded by a press conference two days before, attended by about 40 journalists from newspapers, local and national TV, press agencies and web news magazines. Hundreds of articles appeared in the following two days, advertising the 11 May Open Day. On May 11 the INGV headquarters was peacefully invaded by over 3,000 visitors from 9am to 9pm: families, students, civil protection groups and many journalists. The program included conferences on a wide variety of subjects (from the social impact of rumors to seismic risk reduction) and distribution of books and brochures, in addition to several activities: meetings with INGV researchers to discuss scientific issues, visits to the seismic monitoring room (open 24/7 all year), and guided tours through interactive exhibitions on earthquakes and the Earth's deep structure. During the same day, thirteen new videos were also posted on our youtube/INGVterremoti channel to explain the earthquake process and hazard, and to provide periodic real-time updates on seismicity in Italy. On May 11 no large earthquake happened in Italy. The initiative, built up in a few weeks, had a very large feedback

  9. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Prentice, Carol S.; Rizza, M.; Ritz, J.F.; Baucher, R.; Vassallo, R.; Mahan, S.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. From west to east, these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans, particularly well preserved in the arid environment of the Gobi region, allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr–1 along the WIB and EIB segments and ~0.5 mm yr–1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78–7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ~2500

  10. Foreshock Sequences and Short-Term Earthquake Predictability on East Pacific Rise Transform Faults

    NASA Astrophysics Data System (ADS)

    McGuire, J. J.; Boettcher, M. S.; Jordan, T. H.

    2004-12-01

    A predominant view of continental seismicity postulates that all earthquakes initiate in a similar manner regardless of their eventual size and that earthquake triggering can be described by an Epidemic-Type Aftershock Sequence (ETAS) model [e.g. Ogata, 1988; Helmstetter and Sornette, 2002]. These null hypotheses cannot be rejected as an explanation for the relative abundances of foreshocks and aftershocks to large earthquakes in California [Helmstetter et al., 2003]. An alternative location for testing this hypothesis is mid-ocean ridge transform faults (RTFs), which have many properties that are distinct from continental transform faults: most plate motion is accommodated aseismically, many large earthquakes are slow events enriched in low-frequency radiation, and the seismicity shows depleted aftershock sequences and high foreshock activity. Here we use the 1996-2001 NOAA-PMEL hydroacoustic seismicity catalog for equatorial East Pacific Rise transform faults to show that the foreshock/aftershock ratio is two orders of magnitude greater than the ETAS prediction based on global RTF aftershock abundances. We can thus reject the null hypothesis that there is no fundamental distinction between foreshocks, mainshocks, and aftershocks on RTFs. We further demonstrate (retrospectively) that foreshock sequences on East Pacific Rise transform faults can be used to achieve statistically significant short-term prediction of large earthquakes (magnitude ≥ 5.4) with good spatial (15-km) and temporal (1-hr) resolution using the NOAA-PMEL catalogs. Our very simplistic approach produces a large number of false alarms, but it successfully predicts the majority (70%) of M ≥ 5.4 earthquakes while covering only a tiny fraction (0.15%) of the total potential space-time volume with alarms. It therefore achieves a large probability gain (about a factor of 500) over random guessing, despite not using any near-field data. The predictability of large EPR transform earthquakes suggests
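
    The ETAS model invoked above (Ogata, 1988) specifies the conditional earthquake rate as a constant background term plus an Omori-law contribution from every past event, weighted exponentially by that event's magnitude. A minimal sketch of the rate; the parameter values are illustrative, not fitted to any catalog:

```python
def etas_rate(t, history, mu=0.2, K=0.02, alpha=0.8, c=0.01, p=1.1, m_c=4.0):
    """Conditional intensity of the Epidemic-Type Aftershock Sequence model:
    lambda(t) = mu + sum over past events (t_i, M_i) of
                K * 10**(alpha * (M_i - m_c)) * (t - t_i + c)**(-p).
    All parameter values here are illustrative placeholders."""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * 10 ** (alpha * (m_i - m_c)) * (t - t_i + c) ** (-p)
    return rate

# Rate just after a M6 event at t=10 greatly exceeds the background rate mu.
events = [(10.0, 6.0)]
```

Under ETAS, foreshocks are simply mainshocks whose aftershocks happen to be bigger, so an observed foreshock/aftershock ratio far above the model's prediction, as reported for RTFs above, is evidence against that null hypothesis.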

  11. Fuzzy Discrimination Analysis Method for Earthquake Energy K-Class Estimation with respect to Local Magnitude Scale

    NASA Astrophysics Data System (ADS)

    Mumladze, T.; Gachechiladze, J.

    2014-12-01

    The purpose of the present study is to establish a relation between earthquake energy K-class (the relative energy characteristic, defined as the logarithm of seismic wave energy E in joules, obtained from analog station data) and local (Richter) magnitude ML obtained from digital seismograms. Because these data contain uncertainties, the effective tools of fuzzy discrimination analysis are suggested for subjective estimates. Application of fuzzy analysis methods is an innovative approach to the complicated problem of constructing a uniform energy scale through the whole earthquake catalogue; it also avoids many of the data collection problems associated with probabilistic approaches, and it can handle incomplete information, partial inconsistency and fuzzy descriptions of data in a natural way. Another important task is to obtain the frequency-magnitude relation based on the K parameter, calculate the Gutenberg-Richter parameters (a, b) and examine seismic activity in Georgia. Earthquake data files are used for the periods 1985-1990 and 2004-2009 for the area φ = 41.0°-43.5°, λ = 41.0°-47.0°.
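
    For the Gutenberg-Richter step mentioned above, a common route is Aki's maximum-likelihood b-value with Utsu's binning correction, with the a-value then fixed by the event count above the completeness magnitude. A sketch under those standard formulas (the binning width and the synthetic catalogue are illustrative; the paper's own fuzzy procedure may differ):

```python
import math

def gutenberg_richter_params(mags, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for a catalogue complete above m_min,
    with magnitudes binned at width dm:
        b = log10(e) / (mean(M) - (m_min - dm/2))
    The a-value follows from the count: log10 N(M >= m_min) = a - b * m_min."""
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    b = math.log10(math.e) / (mean_m - (m_min - dm / 2.0))
    a = math.log10(len(sample)) + b * m_min
    return a, b

# A catalogue whose mean magnitude sits log10(e) above the corrected minimum
# yields b close to 1, the value typical of regional seismicity.
a_val, b_val = gutenberg_richter_params([4.3843] * 100, 4.0)
```

The maximum-likelihood estimator is preferred over least-squares fitting of the cumulative curve because the cumulative counts are not independent data points.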

  12. Prediction of long-period ground motions from huge subduction earthquakes in Osaka, Japan

    NASA Astrophysics Data System (ADS)

    Kawabe, H.; Kamae, K.

    2008-04-01

    There is a high possibility of recurrence of the Tonankai and Nankai earthquakes along the Nankai Trough in Japan. It is very important to predict the long-period ground motions from the next Tonankai and Nankai earthquakes, with moment magnitudes of 8.1 and 8.4, respectively, to mitigate their disastrous effects. In this study, long-period (>2.5 s) ground motions were predicted using an earthquake scenario proposed by the Headquarters for Earthquake Research Promotion in Japan. The calculations were performed using a fourth-order finite difference method with a variable-spacing staggered grid in the frequency range 0.05-0.4 Hz. The attenuation characteristic (Q) in the finite difference simulations was assumed to be proportional to frequency (f) and S-wave velocity (Vs), represented by Q = f · Vs / 2. This optimum attenuation characteristic for the sedimentary layers in the Osaka basin was obtained empirically by comparing the motions observed during an actual M5.5 event with the modeling results. We used a velocity structure model of the Osaka basin consisting of three sedimentary layers on bedrock. The characteristics of the predicted long-period ground motions from the next Tonankai and Nankai earthquakes depend significantly on the complex thickness distribution of the sediments inside the basin. The duration of the predicted long-period ground motions in the city of Osaka is more than 4 min, and the largest peak ground velocities (PGVs) exceed 80 cm/s. The predominant period is 5 to 6 s. These preliminary results indicate the possibility of damage from future subduction earthquakes to large-scale structures such as tall buildings, long-span bridges, and oil storage tanks in the Osaka area.

  13. Source Parameters of Large Magnitude Subduction Zone Earthquakes Along Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Fannon, M. L.; Bilek, S. L.

    2014-12-01

    Subduction zones host temporally and spatially varying seismogenic activity including megathrust earthquakes, slow slip events (SSE), nonvolcanic tremor (NVT), and ultra-slow velocity layers (USL). We explore these variations by determining source parameters for large earthquakes (M > 5.5) along the Oaxaca segment of the Mexico subduction zone, an area that encompasses the wide range of activity noted above. We use waveform data for 36 earthquakes that occurred between January 1, 1990 and June 1, 2014, obtained from the IRIS DMC, generate synthetic Green's functions for the available stations, and deconvolve these from the observed records to determine a source time function for each event. From these source time functions, we measured rupture durations and scaled them by the cube root to calculate a normalized duration for each event. Within our dataset, four events located updip from the SSE, USL, and NVT areas have longer rupture durations than the other events in this analysis. Two of these four events, along with one other event, are located within the SSE and NVT areas. The results of this study show that large earthquakes just updip from SSE and NVT have slower rupture characteristics than other events along the subduction zone not adjacent to SSE, USL, and NVT zones. Based on our results, we suggest a transitional zone for the seismic behavior rather than a distinct change at a particular depth. This study will help in understanding the seismogenic behavior that occurs along subduction zones and the rupture characteristics of earthquakes near areas of slow slip processes.
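
    The cube-root scaling above is conventionally the cube root of seismic moment (our reading; the abstract does not spell it out), so that self-similar ruptures of any size collapse to a single normalized duration. A sketch referenced to an arbitrary Mw 6.0; the reference magnitude is our assumption:

```python
def seismic_moment(mw):
    """Moment magnitude -> seismic moment in N*m (Hanks and Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def normalized_duration(duration_s, mw, mw_ref=6.0):
    """Scale a rupture duration by the cube root of the seismic-moment ratio so
    events of different size are comparable; mw_ref is an arbitrary reference."""
    return duration_s * (seismic_moment(mw_ref) / seismic_moment(mw)) ** (1.0 / 3.0)

# Under self-similar scaling, duration grows as M0**(1/3), so a M7 event lasting
# about 3.16x longer than a M6 event normalizes to the same value.
```

An event whose normalized duration still stands out after this correction, like the four updip events in the abstract, is anomalously slow for its size rather than simply large.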

  14. Discrimination of DPRK M5.1 February 12th, 2013 Earthquake as Nuclear Test Using Analysis of Magnitude, Rupture Duration and Ratio of Seismic Energy and Moment

    NASA Astrophysics Data System (ADS)

    Salomo Sianipar, Dimas; Subakti, Hendri; Pribadi, Sugeng

    2015-04-01

    On the morning of February 12th, 2013 at 02:57 UTC, an earthquake occurred with its epicenter in North Korea, near the Sungjibaegam Mountains. Monitoring stations of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) and several other seismic networks detected this shallow seismic event. Analyzing the seismograms recorded after this event can discriminate between a natural earthquake and an explosion. Zhao et al. (2014) successfully discriminated this seismic event of the 2013 North Korean nuclear test from ordinary earthquakes based on network P/S spectral ratios, using broadband regional seismic data recorded in China, South Korea and Japan. The P/S-type spectral ratios were powerful discriminants for separating explosions from earthquakes (Zhao et al., 2014). Pribadi et al. (2014) characterized 27 earthquake-generated tsunamis (tsunamigenic earthquakes or tsunami earthquakes) from 1991 to 2012 in Indonesia using W-phase inversion analysis, the ratio between the seismic energy (E) and the seismic moment (Mo), the moment magnitude (Mw), the rupture duration (To) and the distance of the hypocenter to the trench. We used some of these methods to characterize the nuclear test event. We discriminate the DPRK M5.1 February 12th, 2013 earthquake from a natural earthquake using analysis of the magnitudes mb, Ms and Mw, the ratio of seismic energy to seismic moment, and the rupture duration. We used waveform data of the seismicity within a radius of 5 degrees of the DPRK M5.1 February 12th, 2013 epicenter at 41.29, 129.07 (Zhang and Wen, 2013), from 2006 to 2014, with magnitude M ≥ 4.0. We conclude that this earthquake was a shallow seismic event with explosion characteristics that can be discriminated from a natural or tectonic earthquake. Keywords: North Korean nuclear test; magnitudes mb, Ms, Mw; ratio between seismic energy and moment; rupture duration
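
    The energy-to-moment ratio used above is often condensed into the discriminant Θ = log10(E/M0) (Newman and Okal): ordinary events cluster near Θ ≈ -4.9, while anomalously slow ruptures such as tsunami earthquakes fall at or below about -5.5. A sketch (the cutoff comes from that literature; using it as a bare classifier, and the example values, are our assumptions):

```python
import math

def theta(energy_j, moment_nm):
    """Energy-to-moment discriminant Theta = log10(E / M0), with E in joules
    and M0 in N*m (Newman and Okal)."""
    return math.log10(energy_j / moment_nm)

def is_slow_rupture(energy_j, moment_nm, threshold=-5.5):
    """Flag events with anomalously low Theta; the -5.5 cutoff follows the
    tsunami-earthquake literature and is an assumption here."""
    return theta(energy_j, moment_nm) <= threshold

# An ordinary event sits near Theta = -4.9; an energy-deficient one falls
# below -5.5, and an explosion is energy-rich relative to its moment.
```

Because Θ is a simple logarithmic ratio of two routinely measured quantities, it complements the mb/Ms and rupture-duration discriminants applied to the DPRK event above.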

  15. Risk and return: evaluating Reverse Tracing of Precursors earthquake predictions

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2010-09-01

    In 2003, the Reverse Tracing of Precursors (RTP) algorithm attracted the attention of seismologists and international news agencies when researchers claimed two successful predictions of large earthquakes. These researchers had begun applying RTP to seismicity in Japan, California, the eastern Mediterranean and Italy; they have since applied it to seismicity in the northern Pacific, Oregon and Nevada. RTP is a pattern recognition algorithm that uses earthquake catalogue data to declare alarms, and these alarms indicate that RTP expects a moderate to large earthquake in the following months. The spatial extent of alarms is highly variable and each alarm typically lasts 9 months, although the algorithm may extend alarms in time and space. We examined the record of alarms and outcomes since the prospective application of RTP began, and in this paper we report on the performance of RTP to date. To analyse these predictions, we used a recently developed approach based on a gambling score, and we used a simple reference model to estimate the prior probability of target earthquakes for each alarm. Formally, we believe that RTP investigators did not rigorously specify the first two 'successful' predictions in advance of the relevant earthquakes; because this issue is contentious, we consider analyses with and without these alarms. When we included contentious alarms, RTP predictions demonstrate statistically significant skill. Under a stricter interpretation, the predictions are marginally unsuccessful.

  16. Imaging of the Rupture Zone of the Magnitude 6.2 Karonga Earthquake of 2009 using Electrical Resistivity Surveys

    NASA Astrophysics Data System (ADS)

    Clappe, B.; Hull, C. D.; Dawson, S.; Johnson, T.; Laó-Dávila, D. A.; Abdelsalam, M. G.; Chindandali, P. R. N.; Nyalugwe, V.; Atekwana, E. A.; Salima, J.

    2015-12-01

The 2009 Karonga earthquakes occurred in an area where active faults had not previously been known to exist. Over 5000 buildings were destroyed in the area and at least 4 people lost their lives as a direct result of the 19 December magnitude 6.2 earthquake. The earthquake swarms occurred in the hanging wall of the main Livingstone border fault along segmented, west-dipping faults that are synthetic to the Livingstone fault. The faults have a general trend of 290-350 degrees. Electrical resistivity surveys were conducted to investigate the nature of the known rupture and seismogenic zones produced by the 2009 earthquakes in the Karonga, Malawi area. The goal of this study was to produce high-resolution images below the epicenter and nearby areas of liquefaction to determine changes in conductivity/resistivity signatures in the subsurface. An Iris Syscal Pro was used to collect dipole-dipole resistivity measurements at six farmland locations. Each transect was 710 meters long with an electrode spacing of 10 meters. RES2DINV software was used to create 2-D inversion images of the rupture and seismogenic zones. We observed three distinct geoelectrical layers north of the rupture zone and two south of it, with the discontinuity between the two marked by the location of the surface rupture. The rupture zone is characterized by an ~80-meter-wide, 5-m-thick area of enhanced conductivity underlain by a more resistive, west-dipping layer. We interpret this as fine-grained sands and silts brought from depth to near the surface by shearing along the fault rupture or by liquefaction. Electrical resistivity surveys are valuable, yet under-utilized, tools for imaging the near-surface effects of earthquakes.

  17. Coseismic and postseismic velocity changes detected by Passive Image Interferometry: Comparison of five strong earthquakes (magnitudes 6.6 - 6.9) and one great earthquake (magnitude 9.0) in Japan

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Wegler, Ulrich; Shiomi, Katsuhiko; Nakahara, Hisashi

    2015-04-01

We analyzed ambient seismic noise near five strong onshore crustal earthquakes in Japan as well as for the great Tohoku offshore earthquake. Green's functions were computed for station pairs (cross-correlations) as well as for different components of a single station (single-station cross-correlations) using a filter bank of five different bandpass filters between 0.125 Hz and 4 Hz. Noise correlations for different time periods were treated as repeated measurements, and coda wave interferometry was applied to estimate coseismic as well as postseismic velocity changes. We used all possible component combinations and analyzed periods from a minimum of 3.5 years (Iwate region) up to 8.25 years (Niigata region). Generally, the single-station and station-pair cross-correlations show similar results, but the single-station method is more reliable at higher frequencies (f > 0.5 Hz), whereas the station-pair method is more reliable at lower frequencies (f < 0.5 Hz). For all six earthquakes we found similar behavior of the velocity change as a function of time. We observe coseismic velocity drops at the times of the respective earthquakes followed by postseismic recovery for all earthquakes. Additionally, most stations show a seasonal velocity variation. This seasonal variation was removed by curve fitting, and only velocity changes of tectonic origin were analyzed in our study. The postseismic velocity changes can be described by an exponential recovery model, where for all areas about half of the coseismic velocity drop recovers on a time scale of the order of half a year. The other half of the coseismic velocity drop remains as a permanent change. The coseismic velocity drops are stronger at higher frequencies for all earthquakes. We assume that these changes are concentrated in the superficial layers but for some stations may also reach depths of a few kilometers. The coseismic velocity drops for the strong earthquakes (magnitudes 6.6-6.9) …

  18. The 7.2 magnitude earthquake, November 1975, Island of Hawaii

    USGS Publications Warehouse

    1976-01-01

It was centered about 5 km beneath the Kalapana area on the southeastern coast of Hawaii, the largest island of the Hawaiian chain (Fig. 1), and was preceded by numerous foreshocks. The event was accompanied, or followed shortly, by a tsunami, large-scale ground movements, hundreds of aftershocks, and an eruption in the summit caldera of Kilauea Volcano. The earthquake and the tsunami it generated produced about 4.1 million dollars in property damage, and the tsunami caused two deaths. Although we have some preliminary findings about the cause and effects of the earthquake, detailed scientific investigations will take many more months to complete. This article is condensed from a recent preliminary report (Tilling and others, 1976).

  19. Predicting earthquakes by analyzing accelerating precursory seismic activity

    USGS Publications Warehouse

    Varnes, D.J.

    1989-01-01

During 11 sequences of earthquakes that in retrospect can be classed as foreshocks, the accelerating rate at which seismic moment is released follows, at least in part, a simple equation. This equation (1) is dΣ/dt = C/(t_f - t)^n, where Σ is the cumulative sum until time t of the square roots of seismic moments of individual foreshocks computed from reported magnitudes; C and n are constants; and t_f is a limiting time at which the rate of seismic moment accumulation becomes infinite. The possible time of a major foreshock or main shock, t_f, is found by the best fit of equation (1), or its integral, to step-like plots of Σ versus time using successive estimates of t_f in linearized regressions until the maximum coefficient of determination, r^2, is obtained. Analyzed examples include sequences preceding earthquakes at Cremasta, Greece, 2/5/66; Haicheng, China, 2/4/75; Oaxaca, Mexico, 11/29/78; Petatlan, Mexico, 3/14/79; and Central Chile, 3/3/85. In 29 estimates of main-shock time, made as the sequences developed, the errors in 20 were less than one half, and in 9 less than one tenth, the time remaining between the time of the last data used and the main shock. Some precursory sequences, or parts of them, yield no solution. Two sequences appear to include in their first parts the aftershocks of a previous event; plots using the integral of equation (1) show that the sequences are easily separable into aftershock and foreshock segments. Synthetic seismic sequences of shocks at equal time intervals were constructed to follow equation (1), using four values of n. In each series the resulting distributions of magnitudes closely follow the linear Gutenberg-Richter relation log N = a - bM, and the product of n and b for each series is the same constant. In various forms and for decades, equation (1) has been used successfully to predict failure times of stressed metals and ceramics, landslides in soil and rock slopes, and volcanic eruptions.
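
    The successive-estimates procedure described above can be sketched as follows (a toy reconstruction, not the author's code; the fixed exponent n = 2, the candidate t_f grid, and the synthetic cumulative sums are illustrative assumptions):

    ```python
    def linfit_r2(x, y):
        """Ordinary least squares; returns (slope, intercept, r^2)."""
        m = len(x)
        mx, my = sum(x) / m, sum(y) / m
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        syy = sum((yi - my) ** 2 for yi in y)
        slope = sxy / sxx
        return slope, my - slope * mx, (sxy * sxy) / (sxx * syy)

    def estimate_tf(times, sigma, n=2.0, step=0.01, tries=2000):
        """Successive estimates of t_f: for each candidate after the last
        event, regress the cumulative sum sigma against (t_f - t)^(1 - n)
        and keep the candidate with the largest r^2."""
        best_tf, best_r2 = None, -1.0
        for k in range(1, tries + 1):
            tf = times[-1] + step * k
            x = [(tf - t) ** (1.0 - n) for t in times]
            _, _, r2 = linfit_r2(x, sigma)
            if r2 > best_r2:
                best_tf, best_r2 = tf, r2
        return best_tf, best_r2

    # Synthetic sequence built to follow the integral of eq. (1) with t_f = 10
    times = [0.5 * i for i in range(1, 19)]          # event times 0.5 .. 9.0
    sigma = [100.0 / (10.0 - t) for t in times]      # cumulative sqrt-moment sums
    tf, r2 = estimate_tf(times, sigma)
    print(round(tf, 2), round(r2, 3))
    ```

    On this idealized input the grid search recovers the planted failure time; real foreshock sequences are noisier, which is why the paper reports that some sequences yield no solution.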

  20. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed

    Aki, K

    1996-04-30

The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q^-1 and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q^-1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q^-1 and conductivity at the lower crust support the hypothesis that coda Q^-1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity.

  1. Application of decision trees to the analysis of soil radon data for earthquake prediction.

    PubMed

    Zmazek, B; Todorovski, L; Dzeroski, S; Vaupotic, J; Kobal, I

    2003-06-01

    Different regression methods have been used to predict radon concentration in soil gas on the basis of environmental data, i.e. barometric pressure, soil temperature, air temperature and rainfall. Analyses of the radon data from three stations in the Krsko basin, Slovenia, have shown that model trees outperform other regression methods. A model has been built which predicts radon concentration with a correlation of 0.8, provided it is influenced only by the environmental parameters. In periods with seismic activity this correlation is much lower. This decrease in predictive accuracy appears 1-7 days before earthquakes with local magnitude 0.8-3.3.
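
    The precursor signal described here, a drop in predictive accuracy during seismically active periods, can be mimicked with a sliding-window correlation check (a hedged sketch: the 7-day window, the 0.8 threshold borrowed from the reported calm-period correlation, and the synthetic series are assumptions, and the model-tree regressor itself is not reproduced):

    ```python
    def pearson(a, b):
        """Pearson correlation coefficient of two equal-length series."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5

    def flag_anomalies(predicted, observed, window=7, threshold=0.8):
        """Return window end-days where the model/observation correlation
        falls below the calm-period baseline (0.8 in the study)."""
        flags = []
        for i in range(window, len(observed) + 1):
            r = pearson(predicted[i - window:i], observed[i - window:i])
            if r < threshold:
                flags.append(i - 1)
        return flags

    # Synthetic demo: observation tracks the model for 14 days, then diverges
    predicted = list(range(20))
    observed = predicted[:14] + [20 - t for t in predicted[14:]]
    print(flag_anomalies(predicted, observed))
    ```

    In this toy series the flagged days begin exactly when the observed radon decouples from its environmental predictors, which is the behavior the study associates with impending local earthquakes.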

  2. Probability of inducing given-magnitude earthquakes by perturbing finite volumes of rocks

    NASA Astrophysics Data System (ADS)

    Shapiro, Serge A.; Krüger, Oliver S.; Dinske, Carsten

    2013-07-01

    Fluid-induced seismicity results from an activation of finite rock volumes. The finiteness of perturbed volumes influences frequency-magnitude statistics. Previously we observed that induced large-magnitude events at geothermal and hydrocarbon reservoirs are frequently underrepresented in comparison with the Gutenberg-Richter law. This is an indication that the events are more probable on rupture surfaces contained within the stimulated volume. Here we theoretically and numerically analyze this effect. We consider different possible scenarios of event triggering: rupture surfaces located completely within or intersecting only the stimulated volume. We approximate the stimulated volume by an ellipsoid or cuboid and derive the statistics of induced events from the statistics of random thin flat discs modeling rupture surfaces. We derive lower and upper bounds of the probability to induce a given-magnitude event. The bounds depend strongly on the minimum principal axis of the stimulated volume. We compare the bounds with data on seismicity induced by fluid injections in boreholes. Fitting the bounds to the frequency-magnitude distribution provides estimates of a largest expected induced magnitude and a characteristic stress drop, in addition to improved estimates of the Gutenberg-Richter a and b parameters. The observed frequency-magnitude curves seem to follow mainly the lower bound. However, in some case studies there are individual large-magnitude events clearly deviating from this statistic. We propose that such events can be interpreted as triggered ones, in contrast to the absolute majority of the induced events following the lower bound.
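
    The geometric intuition, that rupture surfaces must fit inside the finite stimulated volume, can be explored with a small Monte Carlo sketch (illustrative assumptions throughout: a spherical volume rather than the paper's ellipsoid or cuboid, a circular-crack relation M0 = (16/7) Δσ r^3 with an assumed 3 MPa stress drop, and a worst-case orientation fit criterion d + r <= R):

    ```python
    import math
    import random

    def rupture_radius(mw, stress_drop=3e6):
        """Circular-crack rupture radius (m) from moment magnitude,
        assuming M0 = (16/7)*stress_drop*r^3 and Mw = (2/3)(log10 M0 - 9.1)."""
        m0 = 10 ** (1.5 * mw + 9.1)
        return (7.0 * m0 / (16.0 * stress_drop)) ** (1.0 / 3.0)

    def p_fit_inside(mw, R, trials=100_000, seed=1):
        """Monte Carlo probability that a rupture disc with a uniformly
        random centre inside a sphere of radius R fits fully inside it
        for any orientation (centre distance d must satisfy d + r <= R)."""
        r = rupture_radius(mw)
        if r >= R:
            return 0.0
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            # uniform point in the sphere via rejection sampling
            while True:
                x, y, z = (rng.uniform(-R, R) for _ in range(3))
                if x * x + y * y + z * z <= R * R:
                    break
            if math.sqrt(x * x + y * y + z * z) + r <= R:
                hits += 1
        return hits / trials

    # Larger magnitudes need larger discs, so they fit with rapidly
    # shrinking probability inside a 2 km stimulated region:
    for mw in (3.0, 4.0, 5.0):
        print(mw, round(p_fit_inside(mw, 2000.0, trials=20_000), 3))
    ```

    The steep decay with magnitude reproduces, in caricature, the underrepresentation of large induced events relative to the Gutenberg-Richter law that motivates the paper's bounds.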

  3. Kinematic earthquake source inversion and tsunami runup prediction with regional geophysical data

    NASA Astrophysics Data System (ADS)

    Melgar, D.; Bock, Y.

    2015-05-01

    Rapid near-source earthquake source modeling relying only on strong motion data is limited by instrumental offsets and magnitude saturation, adversely affecting subsequent tsunami prediction. Seismogeodetic displacement and velocity waveforms estimated from an optimal combination of high-rate GPS and strong motion data overcome these limitations. Supplementing land-based data with offshore wave measurements by seafloor pressure sensors and GPS-equipped buoys can further improve the image of the earthquake source and prediction of tsunami extent, inundation, and runup. We present a kinematic source model obtained from a retrospective real-time analysis of a heterogeneous data set for the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake. Our model is consistent with conceptual models of subduction zones, exhibiting depth dependent behavior that is quantified through frequency domain analysis of slip rate functions. The stress drop distribution is found to be significantly more correlated with aftershock locations and mechanism types when off-shore data are included. The kinematic model parameters are then used as initial conditions in a fully nonlinear tsunami propagation analysis. Notably, we include the horizontal advection of steeply sloping bathymetric features. Comparison with post-event on-land survey measurements demonstrates that the tsunami's inundation and runup are predicted with considerable accuracy, only limited in scale by the resolution of available topography and bathymetry. We conclude that it is possible to produce credible and rapid, kinematic source models and tsunami predictions within minutes of earthquake onset time for near-source coastal regions most susceptible to loss of life and damage to critical infrastructure, regardless of earthquake magnitude.

  4. Did the November 17, 2009 Queen Charlotte Island (QCI) earthquake fill a predicted seismic gap?

    NASA Astrophysics Data System (ADS)

    Vasudevan, K.; Eaton, D. W.; Iverson, A.

    2010-12-01

Seismicity in the Queen Charlotte Fault (QCF) zone occurs along the transform boundary between the Pacific and North American lithospheric plates and is the region where the largest recorded earthquake in Canada (Ms = 8.1) occurred, on August 22, 1949. Right-lateral relative motion across the QCF, in conjunction with minor convergence, has been suggested to play a role in the source characteristics of earthquakes in this region. A segment of the QCF between the inferred rupture zone of the 1949 earthquake and that of a magnitude 7.4 earthquake in 1970 has been identified as a seismic gap that, if fully ruptured, is capable of producing an M ~ 7 earthquake. On November 17, 2009, an Mw 6.6 earthquake occurred within this seismicity gap and was well recorded by regional seismograph stations in Canada and the U.S., including three recently installed temporary broadband seismograph stations in northern Alberta. The distribution of aftershocks from the 2009 earthquake, as well as maps of calculated Coulomb stresses from the previous events, are compatible with the seismic gap hypothesis. In addition, we have computed a seismic moment tensor for this event by least-squares waveform fitting, primarily of surface waves, which shows a predominantly strike-slip focal mechanism. Our integrated results of source parameters and Coulomb failure stress changes provide the first direct confirmation that the 2009 event occurred within the predicted seismic gap between the 1949 and 1970 earthquakes. This evidence is important for hazard assessment in this region, where offshore oil and gas drilling has been proposed.

  5. Sun-earth environment study to understand earthquake prediction

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.

    2007-05-01

Earthquake prediction may be aided by monitoring the locations of active sunspots before they direct energy toward Earth. Earth is a restless planet, and that restlessness occasionally turns deadly. Of all natural hazards, earthquakes are the most feared. For centuries, scientists working in seismically active regions have noted premonitory signals. Changes in the thermosphere, ionosphere, atmosphere and hydrosphere are noted before changes in the geosphere. Historical records speak of changes in the water level in wells, of strange weather, of ground-hugging fog, and of unusual behaviour of animals (attributed to changes in the magnetic field of the Earth) that seem to feel the approach of a major earthquake. With the advent of modern science and technology, the understanding of these pre-earthquake signals has become strong enough to develop a methodology of earthquake prediction. A correlation of Earth-directed coronal mass ejections (CMEs) from active sunspots has been developed as a precursor of earthquakes. Occasional changes in the local magnetic field and in planetary indices (Kp values) occur in the lower atmosphere, accompanied by the formation of haze and a reduction of moisture in the air. Large patches, often tens to hundreds of thousands of square kilometres in size, are seen in night-time infrared satellite images where the land surface temperature seems to fluctuate rapidly. Perturbations in the ionosphere at 90-120 km altitude have been observed before the occurrence of earthquakes. These changes affect the transmission of radio waves, and radio blackouts have been observed due to CMEs. Another heliophysical parameter, electron flux (Eflux), has been monitored before the occurrence of earthquakes. More than a hundred case studies show that the atmospheric temperature increases and then suddenly drops before the occurrence of an earthquake. These changes are being monitored using the Solar and Heliospheric Observatory (SOHO).

  6. Earthquake prediction in Japan and natural time analysis of seismicity

    NASA Astrophysics Data System (ADS)

    Uyeda, S.; Varotsos, P.

    2011-12-01

The M9 super-giant earthquake with its huge tsunami devastated East Japan on 11 March, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was predicted neither short term nor long term. Seismologists were shocked because it was not even considered possible at the East Japan subduction zone. However, it was not the only unpredicted earthquake. In fact, throughout several decades of the National Earthquake Prediction Project, not a single earthquake was predicted. In reality, practically no effective research has been conducted on the most important goal, short-term prediction. This happened because the Japanese national project was devoted to the construction of elaborate seismic networks, which was not the best way to pursue short-term prediction. After the Kobe disaster, in order to parry the mounting criticism of their history of no success, they defiantly changed their policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which in effect meant obtaining more funding for no-prediction research. The public were, and are, not informed about this change. Obviously, earthquake prediction will be possible only when reliable precursory phenomena are caught, and we have insisted that this would most likely be achieved through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake has an adverse effect on our case, although its epicenter was far offshore, out of the range of operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, in ordinary seismological catalogs. In the framework of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced. This is the variance of natural time χ weighted by the normalised energy release at χ. In the case that Seismic Electric Signals …
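
    The order parameter mentioned above has a compact definition: with events indexed k = 1..N, natural time χ_k = k/N, and weights p_k equal to each event's normalised energy release, κ1 = Σ p_k χ_k² − (Σ p_k χ_k)². A minimal sketch (the equal-energy demo input is an assumption for illustration):

    ```python
    def kappa1(energies):
        """Variance of natural time chi_k = k/N weighted by the
        normalised energy release p_k of the k-th event."""
        N = len(energies)
        total = sum(energies)
        p = [e / total for e in energies]
        chi = [(k + 1) / N for k in range(N)]
        m1 = sum(pk * ck for pk, ck in zip(p, chi))
        m2 = sum(pk * ck * ck for pk, ck in zip(p, chi))
        return m2 - m1 * m1

    # Equal energy release per event: kappa1 approaches the uniform value 1/12
    print(round(kappa1([1.0] * 1000), 4))
    ```

    For uniform energy release κ1 tends to 1/12 ≈ 0.0833; in the natural-time literature, values near 0.070 computed over a seismicity window have been proposed as indicating criticality.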

  7. Slip Rates, Recurrence Intervals and Earthquake Event Magnitudes for the southern Black Mountains Fault Zone, southern Death Valley, California

    NASA Astrophysics Data System (ADS)

    Fronterhouse Sohn, M.; Knott, J. R.; Bowman, D. D.

    2005-12-01

The normal-oblique Black Mountain Fault zone (BMFZ) is part of the Death Valley fault system. Strong ground motion generated by earthquakes on the BMFZ poses a serious threat to the Las Vegas, NV area (pop. ~1,428,690), Death Valley National Park (max. pop. ~20,000) and Pahrump, NV (pop. ~30,000). Fault scarps offset Holocene alluvial-fan deposits along most of the 80-km length of the BMFZ. However, slip rates, recurrence intervals, and event magnitudes for the BMFZ are poorly constrained due to a lack of age control. Also, Holocene scarp heights along the BMFZ range from <1 m to >6 m, suggesting that geomorphic sections have different earthquake histories. Along the southernmost section, the BMFZ steps basinward, preserving three post-late Pleistocene fault scarps. Surveys completed with a total-station theodolite show scarp heights of 5.5, 5.0 and 2 meters offsetting the late Pleistocene, early to middle Holocene, and middle-late Holocene surfaces, respectively. Regression plots of vertical offset versus maximum scarp angle suggest event ages of <10-2 ka, with a post-late Pleistocene slip rate of 0.1 mm/yr to 0.3 mm/yr and a recurrence of <3300 years/event. Regression equations for the estimated geomorphically constrained rupture length of the southernmost section and the surveyed event displacements provide estimated moment magnitudes (Mw) between 6.6 and 7.3 for the BMFZ.
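
    Regression estimates of this kind typically take the form Mw = a + b·log10(SRL). A minimal sketch using the Wells and Coppersmith (1994) all-slip-type surface-rupture-length coefficients (an assumption; the abstract does not state which regression the authors used):

    ```python
    import math

    def mw_from_rupture_length(srl_km, a=5.08, b=1.16):
        """Moment magnitude from surface rupture length (km), using an
        empirical regression of the form Mw = a + b*log10(SRL).
        Default coefficients are the Wells & Coppersmith (1994)
        all-slip-type values; other studies give different pairs."""
        return a + b * math.log10(srl_km)

    for length_km in (10.0, 40.0, 80.0):
        print(length_km, round(mw_from_rupture_length(length_km), 1))
    ```

    With these coefficients, rupture of the full 80-km fault length yields Mw ≈ 7.3, consistent with the upper end of the range quoted in the abstract.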

  8. [Comment on “A misuse of public funds: U.N. support for geomagnetic forecasting of earthquakes and meteorological disasters”] Comment: Earthquake prediction is worthy of study

    NASA Astrophysics Data System (ADS)

    Freund, Friedmann

Imagine a densely populated region in the contiguous United States haunted over the past 25 years by nine big earthquakes of magnitudes 5.5 to 7.8, killing hundreds of thousands of people. Imagine further that in a singularly glorious instance a daring prediction effort, based on some scientifically poorly understood natural phenomena, led to the evacuation of a major city just 13 hours before an M = 7.8 earthquake hit. None of the inhabitants of the evacuated city died, while in the surrounding, nonevacuated communities 240,000 were killed and about 600,000 seriously injured. Imagine at last that, tragically, the prediction of the next earthquake of a similar magnitude failed, as did the following one, at great loss of life. If this were an American scenario, the scientific community and the public at large would buzz with the glory of that one successful, life-saving earthquake prediction effort and with praise for American ingenuity. The fact that the next predictions failed would likely have energized the public, the political bodies, the scientists, and the funding agencies alike to go after a recalcitrant Earth, to poke into her deep secrets with all means at the scientists' disposal, and to retrieve even the faintest signals that our restless planet may send out prior to unleashing her deadly punches.

  9. Paleomagnetic Definition of Crustal Segmentation, Quaternary Block Rotations and Limits on Earthquake Magnitudes in Northwestern Metropolitan Los Angeles

    NASA Astrophysics Data System (ADS)

    Levi, S.; Yeats, R. S.; Nabelek, J.

    2004-12-01

Paleomagnetic studies of the Pliocene-Quaternary Saugus Formation, in the San Fernando Valley and east Ventura Basin, show that the crust is segmented into small domains, 10-20 km in linear dimension, identified by rotation of reverse-fault blocks. Two domains, southwest of and adjacent to the San Gabriel fault, are rotated clockwise: 1) the Magic Mountain domain, 30 +/- 5 degrees, and 2) the Merrick syncline domain, 34 +/- 6 degrees. The Magic Mountain domain has rotated since 1 Ma. Both rotated sections occur in the hanging walls of active reverse faults: the Santa Susana and San Fernando faults, respectively. Two additional domains are unrotated: 1) the Van Norman Lake domain, directly south of the Santa Susana fault, and 2) the Soledad Canyon domain in the San Gabriel block immediately across the San Gabriel fault from Magic Mountain, suggesting that the San Gabriel fault might be a domain boundary. Plio-Pleistocene fragmentation and clockwise rotations continue at present, based on geodetic data, and represent crustal response to diffuse, oblique dextral shearing within the San Andreas fault system. The horizontal dimensions of the blocks are similar to the thickness of the seismogenic layer. The maximum magnitude of an earthquake based on this size of blocks is Mw = 6.7, comparable to the 1971 San Fernando and 1994 Northridge earthquakes and consistent with paleoseismic trenching and surface ruptures of the 1971 earthquake. The paleomagnetic results suggest that the blocks have retained their configuration for the past ~0.8 million years. It is unlikely that multiple blocks in the study area combined to trigger much larger shocks during this period, in contrast to adjacent regions where events with magnitudes greater than 7 have been postulated based on paleoseismic excavations.

  10. Radon measurements for earthquake prediction in northern India

    SciTech Connect

Singh, B.; Virk, H.S.

    1992-01-01

Earthquake prediction is based on the observation of precursory phenomena, and radon has emerged as a useful precursor in recent years. In India, where 55% of the land area is in active seismic zones, considerable destruction was caused by the earthquakes of Kutch (1819), Shillong (1897), Kangra (1905), Bihar-Nepal (1934), Assam (1956), Koyna (1967), Bihar-Nepal (1988), and Uttarkashi (1991). Radon (²²²Rn) is produced by the decay of radium (²²⁶Ra) in the uranium decay series and is present in trace amounts almost everywhere on the earth, being distributed in soil, groundwater, and the lower levels of the atmosphere. The purpose of this study is to assess the value of radon monitoring for earthquake prediction.

  11. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes.

    PubMed

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

We propose a version of the pure temporal epidemic-type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. Instead, the magnitude of the triggered shocks is assumed to be probabilistically dependent on that of the relative mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis made on some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016), doi:10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for the real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani and used there for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events, we must obtain again the Gutenberg-Richter law. This ensures the validity of this law at every generation of events when past seismicity is ignored. The ETAS model with correlated magnitudes is then theoretically analyzed here. In particular, we use the tools of the probability generating function and Palm theory in order to derive an approximation of the probability of zero events in a small time interval and to interpret the results in terms of the interevent time between consecutive shocks, the latter being a very useful random variable in the assessment of seismic hazard.

  12. Magnitude-dependent epidemic-type aftershock sequences model for earthquakes

    NASA Astrophysics Data System (ADS)

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-04-01

We propose a version of the pure temporal epidemic-type aftershock sequences (ETAS) model: the ETAS model with correlated magnitudes. As in the standard case, we assume the Gutenberg-Richter law to be the probability density for the magnitudes of the background events. Instead, the magnitude of the triggered shocks is assumed to be probabilistically dependent on that of the relative mother events. This probabilistic dependence is motivated by some recent works in the literature and by the results of a statistical analysis made on some seismic catalogs [Spassiani and Sebastiani, J. Geophys. Res. 121, 903 (2016), doi:10.1002/2015JB012398]. On the basis of the experimental evidence obtained in the latter paper for the real catalogs, we theoretically derive the probability density function for the magnitudes of the triggered shocks proposed in Spassiani and Sebastiani and used there for the analysis of two simulated catalogs. To this aim, we impose a fundamental condition: averaging over all the magnitudes of the mother events, we must obtain again the Gutenberg-Richter law. This ensures the validity of this law at every generation of events when past seismicity is ignored. The ETAS model with correlated magnitudes is then theoretically analyzed here. In particular, we use the tools of the probability generating function and Palm theory in order to derive an approximation of the probability of zero events in a small time interval and to interpret the results in terms of the interevent time between consecutive shocks, the latter being a very useful random variable in the assessment of seismic hazard.
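
    The fundamental condition in this model, that magnitudes average back to the Gutenberg-Richter law, can be checked numerically for the background events (a hedged sketch: inverse-CDF sampling from the GR exponential density plus Aki's maximum-likelihood b-value estimate; the parameter values are illustrative):

    ```python
    import math
    import random

    def sample_gr(n, b=1.0, m0=2.0, seed=42):
        """Draw magnitudes from the Gutenberg-Richter law, i.e. the
        exponential density beta*exp(-beta*(m - m0)) with beta = b*ln(10),
        via inverse-CDF sampling."""
        rng = random.Random(seed)
        beta = b * math.log(10)
        # 1 - rng.random() lies in (0, 1], avoiding log(0)
        return [m0 - math.log(1.0 - rng.random()) / beta for _ in range(n)]

    def aki_b(mags, m0=2.0):
        """Aki (1965) maximum-likelihood b-value estimate."""
        return 1.0 / (math.log(10) * (sum(mags) / len(mags) - m0))

    mags = sample_gr(50_000)
    print(round(aki_b(mags), 2))
    ```

    Recovering b ≈ 1 from the sample confirms the sampler is GR-consistent; in the paper's model, the triggered-event magnitude density is constructed so that the same check passes after averaging over mother-event magnitudes.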

  13. Scientific investigation of macroscopic phenomena before the 2008 Wenchuan earthquake and its implication to prediction and tectonics

    NASA Astrophysics Data System (ADS)

    Huang, F.; Yang, Y.; Pan, B.

    2013-12-01

… tectonic/faults near the epicentral area. According to the statistical relationship, intensity VI-VII in the meizoseismal area is equivalent to magnitude 5. This implies that, generally, macroscopic anomalies readily occur before earthquakes of magnitude greater than 5 in the near-epicentral area. This information can serve as supporting clues to earthquake occurrence in a tectonic area. Based on the above scientific investigation and statistical research, we re-examined other historical earthquakes that occurred from 1937 to 1996 in the Chinese mainland and obtained similar results (Compilation of Macroscopic Anomalies before Earthquakes, Seismological Press, 2009). These can serve as important basic data for earthquake prediction. This work was supported by NSFC project No. 41274061.

  14. Long-term predictability of regions and dates of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M>5.5 can be determined several months in advance of the event. The magnitude and region of an approaching earthquake can be specified within a time frame of a month before the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity for the century time frame. This date analysis can be performed 15-20 years in advance. The data are verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods with different prediction horizons. Days of potential earthquakes with M5.5+ are determined using astronomical data. Earthquakes occur on days of oppositions of Solar System planets (arranged in a single line); the strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed in practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of the minimum daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (the RAMES method). The time difference between the predicted and actual date is no more than one day. This indicator is registered 104 days before the earthquake, so it was named Harmonic 104, or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period, which gives insight into the correlation between the anomalies of Earth orientation …

  15. Determination of Love- and Rayleigh-Wave Magnitudes for Earthquakes and Explosions and Other Studies

    DTIC Science & Technology

    2012-12-30

    However, tectonic release (Toksöz and Kehrer, 1972) near the explosion source often results in Love ...bias in magnitude estimation. Significant heterogeneities along the plate boundaries are the most likely causes of such scattering. We have applied ...areas with strong lateral velocity variations, including active tectonic belts, continental shelves, etc. Strike-slip mechanisms are usually better

  16. Three Millennia of Seemingly Time-Predictable Earthquakes, Tell Ateret

    NASA Astrophysics Data System (ADS)

    Agnon, Amotz; Marco, Shmuel; Ellenblum, Ronnie

    2014-05-01

    Among various idealized recurrence models of large earthquakes, the "time-predictable" model has a straightforward mechanical interpretation, consistent with simple friction laws. On a time-predictable fault, the time interval between an earthquake and its predecessor is proportional to the slip during the predecessor. The alternative "slip-predictable" model states that the slip during earthquake rupture is proportional to the preceding time interval. Verifying these models requires extended records of high-precision data for both the timing and the amount of slip. The precision of paleoearthquake data can rarely confirm or rule out predictability, and recent papers argue for either time- or slip-predictable behavior. The Ateret site, on the trace of the Dead Sea fault at the Jordan Gorge segment, offers unique precision for determining space-time patterns. Five consecutive slip events, each associated with deformed and offset sets of walls, are correlated with historical earthquakes. Two correlations are based on detailed archaeological, historical, and numismatic evidence; the other three are tentative. The offsets of three of the events are determined with high precision; the other two are less certain. Accepting all five correlations, the fault exhibits a striking time-predictable behavior, with a long-term slip rate of 3 mm/yr. However, the ~0.5 m rupture of 30 October 1759 predicts a subsequent rupture along the Jordan Gorge toward the end of the last century. We speculate that earthquakes on secondary faults (the 25 November 1759 event on the Rachaya branch and the 1 January 1837 event on the Roum branch, both M ≥ 7) have disrupted the 3 kyr time-predictable pattern.
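The time-predictable arithmetic described in this abstract can be sketched in a few lines. The function and values below are illustrative only, using the abstract's ~0.5 m slip and 3 mm/yr long-term slip rate:

```python
# Minimal sketch of the time-predictable recurrence model: the interval
# following an earthquake is proportional to its slip, so the next event is
# "due" at t_next = t_event + slip / long_term_slip_rate.

def predicted_next_event(event_year, slip_m, slip_rate_m_per_yr=0.003):
    """Year the next rupture is expected under time predictability."""
    return event_year + slip_m / slip_rate_m_per_yr

# The ~0.5 m slip of the 30 October 1759 rupture at 3 mm/yr predicts a
# follow-up roughly 167 years later, i.e. toward the end of the last century.
print(round(predicted_next_event(1759, 0.5)))  # → 1926
```

The slip-predictable alternative simply inverts the proportionality: expected slip equals elapsed time multiplied by the slip rate.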

  17. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Small Business Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of small businesses in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each business establishment size category to each Instrumental Intensity level. The analysis concerns the direct effect of the earthquake on small businesses. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by business establishment size.

  18. By How Much Can Physics-Based Earthquake Simulations Reduce the Uncertainties in Ground Motion Predictions?

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Wang, F.

    2014-12-01

    Probabilistic seismic hazard analysis (PSHA) is the scientific basis for many engineering and social applications: performance-based design, seismic retrofitting, resilience engineering, insurance-rate setting, disaster preparation, emergency response, and public education. The uncertainties in PSHA predictions can be expressed as an aleatory variability that describes the randomness of the earthquake system, conditional on a system representation, and an epistemic uncertainty that characterizes errors in the system representation. Standard PSHA models use empirical ground motion prediction equations (GMPEs) that have a high aleatory variability, primarily because they do not account for the effects of crustal heterogeneities, which scatter seismic wavefields and cause local amplifications in strong ground motions that can exceed an order of magnitude. We show how much this variance can be lowered by simulating seismic wave propagation through 3D crustal models derived from waveform tomography. Our basic analysis tool is the new technique of averaging-based factorization (ABF), which uses a well-specified seismological hierarchy to decompose exactly and uniquely the logarithmic excitation functional into a series of uncorrelated terms that include unbiased averages of the site, path, hypocenter, and source-complexity effects (Feng & Jordan, Bull. Seismol. Soc. Am., 2014, doi:10.1785/0120130263). We apply ABF to characterize the differences in ground motion predictions between the standard GMPEs employed by the National Seismic Hazard Maps and the simulation-based CyberShake hazard model of the Southern California Earthquake Center. The ABF analysis indicates that, at low seismic frequencies (< 1 Hz), CyberShake site and path effects unexplained by the GMPEs account for 40-50% of the total residual variance. Therefore, accurate earthquake simulations have the potential for reducing the aleatory variance of the strong-motion predictions by about a factor of two, which would
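The factor-of-two figure follows from simple variance arithmetic: if simulations explain a fraction f of the residual variance, the remaining variance is (1 − f) of the original and the standard deviation shrinks by √(1 − f). A hedged sketch (the sigma value is illustrative, not from the paper):

```python
import math

# If a fraction `explained_fraction` of the residual variance is removed by
# simulation, the remaining aleatory sigma is sigma * sqrt(1 - f).
def reduced_sigma(sigma, explained_fraction):
    return sigma * math.sqrt(1.0 - explained_fraction)

sigma0 = 0.6  # illustrative ln-ground-motion standard deviation
for f in (0.4, 0.5):
    print(f, round(reduced_sigma(sigma0, f), 3))

# Explaining 50% of the variance halves it (a factor-of-two reduction in
# variance), which lowers sigma by a factor of sqrt(2) ≈ 1.41.
```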

  19. 75 FR 63854 - National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... Geological Survey National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee AGENCY: U.S... Earthquake Prediction Evaluation Council (NEPEC) will hold a 2-day meeting on November 3 and 4, 2010. The... the Director of the U.S. Geological Survey (USGS) on proposed earthquake predictions, on...

  20. Analysing earthquake slip models with the spatial prediction comparison test

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Mai, P. Martin; Thingbaijam, Kiran K. S.; Razafindrakoto, Hoby N. T.; Genton, Marc G.

    2015-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field ('model') and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
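The core quantity in a prediction comparison test of this kind is the pointwise loss differential between two competing models measured against the reference. The sketch below uses a squared-error loss and a naive z-statistic that ignores spatial correlation; the actual SPCT accounts for the spatial correlation of the differential field, and the data here are synthetic:

```python
# For competing slip models A and B and a reference model, form
# d = loss(ref, A) - loss(ref, B) at each point and test whether its
# mean differs from zero.
def loss_differential(ref, model_a, model_b):
    return [(r - a) ** 2 - (r - b) ** 2 for r, a, b in zip(ref, model_a, model_b)]

def naive_z(d):
    """z-statistic assuming independent points (the real SPCT does not)."""
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / (var / n) ** 0.5

ref = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]   # reference slip values (synthetic)
a = [0.1, 1.1, 2.1, 3.1, 2.1, 1.1]     # model A: uniformly close to ref
b = [0.5, 0.4, 2.8, 2.2, 1.1, 0.2]     # model B: rougher misfit
d = loss_differential(ref, a, b)
print(naive_z(d) < 0)  # negative mean: model A fits the reference better
```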

  1. Strong motion PGA prediction for southwestern China from small earthquake records

    NASA Astrophysics Data System (ADS)

    Tao, Zhengru; Tao, Xiaxin; Cui, Anping

    2016-05-01

    For regions without enough strong ground motion records, a seismology-based method is adopted to predict PGA (peak ground acceleration) values of strong ground motion on rock sites, with parameters derived from small-earthquake data recorded by regional broadband digital monitoring networks. The Sichuan and Yunnan regions in southwestern China are selected for this case study. Five regional parameters of the source spectrum and attenuation are acquired from a joint inversion by the micro-genetic algorithm. PGAs are predicted for earthquakes with moment magnitudes (Mw) 5.0, 6.0, and 7.0 over a series of distances. The results are compared with the limited regional strong motion data in the corresponding interval Mw ± 0.5. Most predictions pass through the data clusters, except the case of Mw 7.0 in the Sichuan region, which shows an obviously slow attenuation due to a lack of observed data from larger earthquakes (Mw ≥ 7.0). For further application, the parameters are adopted in strong motion synthesis at two near-fault stations during the great M8.0 Wenchuan Earthquake of 2008.
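Predictions of the kind described above are typically evaluated through an attenuation relation giving PGA as a function of magnitude and distance. The sketch below shows only the generic functional form; the coefficients a, b, c, h are hypothetical placeholders, not the five regional parameters inverted in the study:

```python
import math

# Generic magnitude-distance attenuation form: ln(PGA) = a + b*Mw - c*ln(R + h).
# Coefficients here are illustrative stand-ins, NOT the study's values.
def predict_pga(mw, r_km, a=-3.5, b=1.0, c=1.3, h=10.0):
    """Return an illustrative PGA (in g) for magnitude mw at distance r_km."""
    return math.exp(a + b * mw - c * math.log(r_km + h))

# PGA grows with magnitude and decays with distance, as in the study's curves.
for mw in (5.0, 6.0, 7.0):
    print(mw, [round(predict_pga(mw, r), 4) for r in (10, 50, 100)])
```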

  2. Predictability of population displacement after the 2010 Haiti earthquake.

    PubMed

    Lu, Xin; Bengtsson, Linus; Holme, Petter

    2012-07-17

    Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 d before, to 341 d after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and size of people's movement trajectories grew after the earthquake. These findings, in combination with the disorder that was present after the disaster, suggest that people's movements would have become less predictable. Instead, the predictability of people's trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake was highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city, as well as the time for their return, all followed a skewed, fat-tailed distribution. The findings suggest that population movements during disasters may be significantly more predictable than previously thought.

  4. Impact of Channel-like Erosion Patterns on the Frequency-Magnitude Distribution of Earthquakes

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Aochi, H.

    2015-12-01

    Reactive flow at depth (either related to underground activities such as enhanced hydrocarbon recovery and CO2 storage, or to natural flow as in hydrothermal zones) can alter fracture topography, which might in turn change the seismic response. Depending on the flow and reaction rates, instability of the dissolution front can lead to a wormhole-like pronounced erosion pattern (Szymczak & Ladd, JGR, 2009). Within a fractal structure of the rupture process (Ide & Aochi, JGR, 2005), we ask how a perturbation consisting of well-spaced long channels alters rupture propagation initiated on a weak plane, and ultimately the statistics of rupture occurrence in the Frequency-Magnitude Distribution, FMD (Rohmer & Aochi, GJI, 2015). Contrary to intuition, a spatially uniform dissolution is not the most remarkable case, since it affects all events in proportion to their sizes, leading to a downward translation of the FMD: the slope of the FMD (b-value) remains unchanged. An in-depth parametric study was carried out considering different pattern characteristics: spacing S varying from 0 to 100, length L from 50 to 800, and a fixed width w = 1. The figure shows that there is a region of optimum channel characteristics, with a spacing-to-length ratio of the order of ~1/40, for which the b-value of the Gutenberg-Richter law is significantly modified with p-value ~10% (corresponding to the area with red-coloured boundaries): large-magnitude events are more strongly affected, leading to an imbalanced distribution across the magnitude bins of the FMD. The larger the spacing, the lower the channels' influence. The decrease in b-value between intact and altered fractures can reach values down to -0.08. Furthermore, a spatial analysis shows that the local seismicity anomaly concentrates in a limited zone around the channels: this opens a perspective for detecting these eroded regions through high-resolution imaging surveys.
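The b-value discussed in this abstract is the slope of the Gutenberg-Richter frequency-magnitude distribution, log10 N(≥M) = a − b·M. A standard way to estimate it from a catalog complete above a magnitude Mc is Aki's (1965) maximum-likelihood estimator, sketched here with synthetic magnitudes rather than the study's simulated events:

```python
import math

# Aki (1965) maximum-likelihood b-value: b = log10(e) / (mean(M) - Mc),
# computed over events at or above the completeness magnitude Mc.
def aki_b_value(magnitudes, mc):
    mags = [m for m in magnitudes if m >= mc]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - mc)

# Synthetic three-event catalog, complete above Mc = 4.0:
print(round(aki_b_value([4.0, 4.5, 5.0], 4.0), 3))  # → 0.869
```

A b-value drop of 0.08, as reported for the channelled fractures, corresponds to relatively more large-magnitude events in the altered case.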

  5. Tsunami forecasting and warning in the Australian region for the Magnitude 8.8 Chilean Earthquake of 27 February 2010

    NASA Astrophysics Data System (ADS)

    Allen, S. C.; Simanjuntak, A.; Greenslade, D. J.

    2010-12-01

    The Joint Australian Tsunami Warning Centre (JATWC) is responsible for issuing tsunami warnings within the Australian region. To a large extent, these are based on numerical guidance provided by the T2 tsunami scenario database, which has recently been implemented for operational use within the JATWC. During an event, the closest T2 scenario is selected and modelled tsunami amplitude values near the Australian coastline from that scenario are used as a proxy for impact in order to derive an appropriate level of warning. The Chilean earthquake of 27 February 2010 and the associated tsunami were locally devastating and resulted in the issuance of public warnings of possible tsunami impact on an ocean-wide scale, including for a large part of the Australian coastline. In this presentation we will evaluate the application of T2 and the resulting tsunami warnings in the Australian region for this event. This evaluation will include comparisons with sea-level observations and assessment of the tsunami forecast. Hindsight knowledge shows that the actual earthquake rupture of the event was quite different from the pre-computed ruptures within the T2 scenario database for events of this magnitude. Alternative tsunami model simulations are therefore computed, with ruptures more closely resembling the actual event. The resulting tsunami forecasts and warnings will be examined to assess the implications for tsunami warnings in the Australian region.

  6. Determination of fault planes and dimensions for low-magnitude earthquakes - A case study in eastern Taiwan

    NASA Astrophysics Data System (ADS)

    Mozziconacci, Laetitia; Delouis, Bertrand; Huang, Bor-Shouh

    2017-03-01

    We present a modified version of the FMNEAR method for determining the focal mechanisms and fault plane geometries of small earthquakes. Our improvements allow determination of the fault plane and dimensions using the near-field components of only a few local records. The limiting factor is the number of stations: a minimum of five to six stations is required to discriminate between the fault plane and auxiliary plane. This limitation corresponds to events with magnitudes ML > 3.5 in eastern Taiwan, but strongly depends on station coverage in the study area. Once a fault plane is identified, it is provided along with its source time function and fault slip distribution. The proposed approach is validated by synthetic tests, and applied to real cases from a seismic crisis that occurred in the Longitudinal Valley of eastern Taiwan in April 2006. The fault geometries and faulting types of test events closely match the fault system of the main shock and reveal a minor one inside the faults zone of the Longitudinal Valley. Tested on a larger scale, this approach enables the fault geometries of main and secondary fault systems to be recovered from small earthquakes, allowing subsurface faults to be mapped in detail without waiting for a large, damaging event.

  7. Non-extensive statistical physics applied to heat flow and the earthquake frequency-magnitude distribution in Greece

    NASA Astrophysics Data System (ADS)

    Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter

    2016-08-01

    This study investigates seismicity in Greece and its relation to heat flow, based on the science of complex systems. Greece is characterised by a complex tectonic setting, which is represented mainly by active subduction, lithospheric extension and volcanism. The non-extensive statistical physics formalism is a generalisation of Boltzmann-Gibbs statistical physics and has been successfully used for the analysis of a variety of complex systems, where fractality and long-range interactions are important. Consequently, in this study, the frequency-magnitude distribution analysis was performed in a non-extensive statistical physics context, and the non-extensive parameter, qM, which is related to the frequency-magnitude distribution, was used as an index of the physical state of the studied area. Examination of the spatial distribution of qM revealed its relation to the spatial distribution of seismicity during the period 1976-2009. For focal depths ≤40 km, we observe that strong earthquakes coincide with high qM values. In addition, heat flow anomalies in Greece are known to be strongly related to crustal thickness; a thin crust and significant heat flow anomalies characterise the central Aegean region. Moreover, the data studied indicate that high heat flow is consistent with the absence of strong events and consequently with low qM values (high b-values) in the central Aegean region and around the volcanic arc. However, the eastern part of the volcanic arc exhibits strong earthquakes and high qM values whereas low qM values are found along the North Aegean Trough and southwest of Crete, despite the fact that strong events are present during the period 1976-2009 in both areas.
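The abstract's statement that low qM corresponds to high b-values can be made concrete through a mapping often cited for the fragment-asperity model of non-extensive seismicity (Telesca, 2012): b = 2(2 − q)/(q − 1). This sketch assumes that mapping applies; the qM values below are illustrative:

```python
# Relation between the Tsallis entropic index q and the Gutenberg-Richter
# b-value under the fragment-asperity model (Telesca, 2012):
#   b = 2 * (2 - q) / (q - 1)
def b_from_q(qm):
    return 2.0 * (2.0 - qm) / (qm - 1.0)

# b decreases as qM increases, so high-qM regions host relatively more
# strong events, consistent with the spatial pattern described above.
for qm in (1.5, 1.6, 1.7):
    print(qm, round(b_from_q(qm), 2))
```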

  8. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere

    USGS Publications Warehouse

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun

    2012-01-01

    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  9. Implications for prediction and hazard assessment from the 2004 Parkfield earthquake.

    PubMed

    Bakun, W H; Aagaard, B; Dost, B; Ellsworth, W L; Hardebeck, J L; Harris, R A; Ji, C; Johnston, M J S; Langbein, J; Lienkaemper, J J; Michael, A J; Murray, J R; Nadeau, R M; Reasenberg, P A; Reichle, M S; Roeloffs, E A; Shakal, A; Simpson, R W; Waldhauser, F

    2005-10-13

    Obtaining high-quality measurements close to a large earthquake is not easy: one has to be in the right place at the right time with the right instruments. Such a convergence happened, for the first time, when the 28 September 2004 Parkfield, California, earthquake occurred on the San Andreas fault in the middle of a dense network of instruments designed to record it. The resulting data reveal aspects of the earthquake process never before seen. Here we show what these data, when combined with data from earlier Parkfield earthquakes, tell us about earthquake physics and earthquake prediction. The 2004 Parkfield earthquake, with its lack of obvious precursors, demonstrates that reliable short-term earthquake prediction still is not achievable. To reduce the societal impact of earthquakes now, we should focus on developing the next generation of models that can provide better predictions of the strength and location of damaging ground shaking.

  10. Magnitudes and locations of the 1811-1812 New Madrid, Missouri, and the 1886 Charleston, South Carolina, earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Hopper, M.G.

    2004-01-01

    We estimate locations and moment magnitudes M and their uncertainties for the three largest events in the 1811-1812 sequence near New Madrid, Missouri, and for the 1 September 1886 event near Charleston, South Carolina. The intensity magnitude MI, our preferred estimate of M, is 7.6 for the 16 December 1811 event that occurred in the New Madrid seismic zone (NMSZ) on the Bootheel lineament or on the Blytheville seismic zone. MI is 7.5 for the 23 January 1812 event for a location on the New Madrid north zone of the NMSZ and 7.8 for the 7 February 1812 event that occurred on the Reelfoot blind thrust of the NMSZ. Our preferred locations for these events are located on those NMSZ segments preferred by Johnston and Schweig (1996). Our estimates of M are 0.1-0.4 M units less than those of Johnston (1996b) and 0.3-0.5 M units greater than those of Hough et al. (2000). MI is 6.9 for the 1 September 1886 event for a location at the Summerville-Middleton Place cluster of recent small earthquakes located about 30 km northwest of Charleston.

  11. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Labor Market Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of economic Super Sectors in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each Super Sector to each Instrumental Intensity level. The analysis concerns the direct effect of the scenario earthquake on economic sectors and provides a baseline for the indirect and interactive analysis of an input-output model of the regional economy. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by the North American Industry Classification System. According to the analysis results, nearly 225,000 business

  12. Evidence for the recurrence of large-magnitude earthquakes along the Makran coast of Iran and Pakistan

    USGS Publications Warehouse

    Page, W.D.; Alt, J.N.; Cluff, L.S.; Plafker, G.

    1979-01-01

    The presence of raised beaches and marine terraces along the Makran coast indicates episodic uplift of the continental margin resulting from large-magnitude earthquakes. The uplift occurs as incremental steps similar in height to the 1-3 m of measured uplift resulting from the November 28, 1945 (M 8.3) earthquake at Pasni and Ormara, Pakistan. The data support an E-W-trending, active subduction zone off the Makran coast. The raised beaches and wave-cut terraces along the Makran coast are extensive, with some terraces 1-2 km wide, 10-15 km long, and up to 500 m in elevation. The terraces are generally capped with shelly sandstones 0.5-5 m thick. Wave-cut cliffs, notches, and associated boulder breccia and swash troughs are locally preserved. Raised Holocene accretion beaches, lagoonal deposits, and tombolos are found up to 10 m in elevation. The number and elevation of raised wave-cut terraces along the Makran coast increase eastward from one at Jask, the entrance to the Persian Gulf, at a few meters elevation, to nine at Konarak, 250 km to the east. Multiple terraces are found on the prominent headlands as far east as Karachi. The wave-cut terraces are locally tilted and cut by faults with a few meters of displacement. Long-term average rates of uplift were calculated from present elevation, estimated elevation at time of deposition, and 14C and U-Th dates obtained on shells. Uplift rates in centimeters per year at various locations from west to east are as follows: Jask, 0 (post-Sangamon); Konarak, 0.031-0.2 (Holocene), 0.01 (post-Sangamon); Ormara, 0.2 (Holocene).

  13. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    NASA Astrophysics Data System (ADS)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw = 7-7.5. For larger earthquakes, the analysis is more complex, because of the non-validity of the point-source approximation and of the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. We here propose an automated deconvolutive approach, which does not impose any simplifying assumptions about the rupture process, thus being well adapted to large earthquakes. We first determine the source duration based on the length of the high frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real data body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs gives directly the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but values are found consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
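The final step named in this abstract, going from the integrated source time function to a moment magnitude, uses the standard IASPEI relation Mw = (2/3)(log10 M0 − 9.1) with M0 in N·m. A minimal sketch with a synthetic triangular STF (not a SCARDEC-derived one):

```python
import math

# Standard moment-magnitude relation, M0 in newton-metres.
def moment_magnitude(m0_newton_m):
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

# Trapezoidal integration of a synthetic triangular moment-rate function
# peaking at 2e19 N*m/s over a 10 s duration: M0 = 0.5 * peak * duration.
dt = 0.01
times = [i * dt for i in range(int(10 / dt) + 1)]
rate = [2e19 * (1 - abs(t - 5) / 5) for t in times]
m0 = sum(0.5 * (rate[i] + rate[i + 1]) * dt for i in range(len(rate) - 1))
print(round(moment_magnitude(m0), 2))  # ~7.27 for M0 ≈ 1e20 N*m
```

A 0.2-unit difference in Mw, the largest discrepancy quoted between SCARDEC and Global CMT, corresponds to a factor of about 2 in seismic moment.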

  14. Effects of magnitude and magnitude predictability of postural perturbations on preparatory cortical activity in older adults with and without Parkinson's disease.

    PubMed

    Smith, Beth A; Jacobs, Jesse V; Horak, Fay B

    2012-10-01

    The goal of this study was to identify whether impaired cortical preparation may relate to impaired scaling of postural responses of people with Parkinson's disease (PD). We hypothesized that impaired scaling of postural responses in participants with PD would be associated with impaired set-dependent cortical activity in preparation for perturbations of predictable magnitudes. Participants performed postural responses to backward surface translations. We examined the effects of perturbation magnitude (predictable small vs. predictable large) and predictability of magnitude (predictable vs. unpredictable-in-magnitude) on postural responses (center-of-pressure (CoP) displacements) and on preparatory electroencephalographic (EEG) measures of contingent negative variation (CNV) and alpha and beta event-related desynchronization (ERD). Our results showed that unpredictability of perturbation magnitude, but not the magnitude of the perturbation itself, was associated with increased CNV amplitude at the CZ electrode in both groups. While control participants scaled their postural responses to the predicted magnitude of the perturbation, their condition-related changes in CoP displacements were not correlated with condition-related changes in EEG preparatory activity (CNV or ERD). In contrast, participants with PD did not scale their postural responses to the predicted magnitude of the perturbation, but they did demonstrate greater beta ERD in the condition of predictably small-magnitude perturbations and greater beta ERD than the control participants at the CZ electrode. In addition, increased beta ERD in PD was associated with decreased adaptability of postural responses, suggesting that preparatory cortical activity may have a more direct influence on postural response scaling for people with PD than for control participants.

  15. Simulation of Parallel Interacting Faults and Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Mora, P.; Weatherley, D.; Klein, B.

    2003-04-01

    Numerical shear experiments of a granular region using the lattice solid model often exhibit accelerating energy release in the lead-up to large events (Mora et al., 2000) and a growth in correlation lengths in the stress field (Mora and Place, 2002). While these results provide evidence for a Critical Point-like mechanism in elasto-dynamic systems and the possibility of earthquake forecasting, they do not prove that such a mechanism occurs in the crust. Cellular automaton (CA) simulations exhibit either accelerating energy release prior to large events or unpredictable behaviour in which large events may occur at any time, depending on tuning parameters such as the dissipation ratio and stress transfer ratio (Weatherley and Mora, 2003). The mean stress plots from the particle simulations are most similar to the CA mean stress plots near the boundary of the predictable and unpredictable regimes, suggesting that elasto-dynamic systems may be close to the borderline between predictable and unpredictable. To progress in resolving the question of whether more realistic fault system models exhibit predictable behaviour, and to determine whether they also have predictable and unpredictable regimes depending on tuning parameters as seen in CA simulations, we developed a 2D elasto-dynamic model of parallel interacting faults. The friction is slip weakening until a critical slip distance. Henceforth, the friction is at the dynamic value until the slip rate drops below the value it attained when the critical slip distance was exceeded. As the slip rate continues to drop, the friction increases back to the static value as a function of slip rate. Numerical shear experiments are conducted in a model with 41 parallel interacting faults. Calculations of the inverse metric defined in Klein et al. (2000) indicate that the system is non-ergodic. Furthermore, by calculating the correlation between the stress fields at different times we determine that the system exhibits so-called ``glassy

  16. Evolving magnitude-frequency distributions during the Guy-Greenbrier (2010-11) induced earthquake sequence: Insights into the physical mechanisms of b-value shifts and large-magnitude curvature

    NASA Astrophysics Data System (ADS)

    Dempsey, D.; Suckale, J.; Huang, Y.

    2015-12-01

    In 2010-11, a sequence of earthquakes occurred on an unmapped basement fault near Guy, Arkansas. The events are likely to have been triggered by a nine month period of wastewater disposal during which 4.5x10^5 m^3 of water was injected at two nearby wells. Magnitude-frequency distributions (MFD) for the induced sequence show two interesting properties: (i) a low Gutenberg-Richter (GR) b-value of ~0.8 during injection, increasing to 1.0 post-injection; and (ii) downward curvature of the MFD at the upper magnitude limit. We use a coupled model of injection-triggering and earthquake rupture to show how the evolving MFD can be understood in terms of an effective stress increase on the fault, which arises from overpressuring and strength reduction. Reservoir simulation is used to model injection into a horizontally extensive aquifer that overlies an impermeable basement containing a single permeable fault. Earthquake triggering occurs when the static strength, reduced by the modeled pressure increase, satisfies a Mohr-Coulomb criterion. Pressure evolution is also incorporated in a model of fault rupture, which is based on an expanding bilateral crack approximation to quasidynamic rupture propagation and static/dynamic friction evolution. An earthquake sequence is constructed as an ensemble of triggered ruptures for many realizations of a heterogeneous fractal stress distribution. During injection, there is a steady rise in fluid pressure on the fault. In addition to its role in triggering earthquakes, rising pressure affects the rupture process by reducing the dynamic strength relative to fault shear stress; this is equivalent to tectonic stress increase in natural seismicity. As mean stress increases, larger events are more frequent and this is reflected in a lower b-value. The largest events, however, occur late in the loading cycle at very high stress; their absence in the early stages of injection manifests as downward curvature in the MFD at large magnitudes.
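The b-value shifts discussed above are commonly estimated with the Aki maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc). A minimal sketch on a synthetic Gutenberg-Richter catalog (the catalog parameters here are illustrative, not the Guy-Greenbrier data):

```python
import numpy as np

def b_value_mle(mags, mc):
    """Aki (1965) maximum-likelihood b-value for magnitudes at or above completeness mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic GR catalog: M - mc is exponentially distributed with mean log10(e)/b
rng = np.random.default_rng(0)
true_b, mc = 0.8, 1.0
mags = mc + rng.exponential(np.log10(np.e) / true_b, size=20000)
b_est = b_value_mle(mags, mc)    # should recover roughly 0.8
```

In practice, binned magnitudes require a half-bin correction to Mc, and the estimate is only meaningful above the catalog's completeness magnitude.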

  17. A theoretical study of correlation between scaled energy and earthquake magnitude based on two source displacement models

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2013-12-01

    The correlation of the scaled energy, ê = E_s/M_0, versus earthquake magnitude, M_s, is studied based on two models: (1) Model 1, based on the time function of the average displacements, with an ω^-2 source spectrum, across a fault plane; and (2) Model 2, based on the time function of the average displacements, with an ω^-3 source spectrum, across a fault plane. For the second model, there are two cases: (a) as τ ≒ T, where τ is the rise time and T the rupture time, lg(ê) ~ -M_s; and (b) as τ ≪ T, lg(ê) ~ -(1/2)M_s. The second model leads to a negative value of ê. This means that Model 2 cannot work for studying the present problem. The results obtained from Model 1 suggest that the source model is a factor, yet not a unique one, in controlling the correlation of ê versus M_s.

  18. Spatial variations in the frequency-magnitude distribution of earthquakes at Soufriere Hills Volcano, Montserrat, West Indies

    USGS Publications Warehouse

    Power, J.A.; Wyss, M.; Latchman, J.L.

    1998-01-01

    The frequency-magnitude distribution of earthquakes measured by the b-value is determined as a function of space beneath Soufriere Hills Volcano, Montserrat, from data recorded between August 1, 1995 and March 31, 1996. A volume of anomalously high b-values (b > 3.0) with a 1.5 km radius is imaged at depths of 0 and 1.5 km beneath English's Crater and Chance's Peak. This high b-value anomaly extends southwest to Gage's Soufriere. At depths greater than 2.5 km, volumes of comparatively low b-values (b ≈ 1) are found beneath St. George's Hill, Windy Hill, and below 2.5 km depth to the south of English's Crater. We speculate that the depth of high b-value anomalies under volcanoes may be a function of silica content, modified by some additional factors, with the most siliceous volcanoes having these highly fractured or high pore-pressure volumes at the shallowest depths. Copyright 1998 by the American Geophysical Union.

  19. Possibility of Earthquake-prediction by analyzing VLF signals

    NASA Astrophysics Data System (ADS)

    Ray, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta

    2016-07-01

    Prediction of seismic events is one of the most challenging jobs for the scientific community. Conventional ways to predict earthquakes are to monitor crustal structure movements, though this method has not yet yielded satisfactory results, and it fails to give any short-term prediction. Recently, it has been noticed that prior to a seismic event a huge amount of energy is released, which may create disturbances in the lower part of the D-layer/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the waveguide formed by the lower ionosphere and Earth's surface, this signal may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find out the correlations, if any, between VLF signal anomalies and seismic activities. We have done both case-by-case studies and a statistical analysis using a whole year of data. In both methods we found that the night-time amplitude of VLF signals fluctuated anomalously three days before the seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards night time a few days before major seismic events. We calculate the D-layer preparation time and D-layer disappearance time from the VLF signals. We have observed that the D-layer preparation time and D-layer disappearance time become anomalously high 1-2 days before seismic events. We also found strong evidence indicating that it may be possible in the future to predict the location of earthquake epicenters by analyzing VLF signals for multiple propagation paths.

  20. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  1. Earthquakes

    MedlinePlus

    Earthquakes are sudden rolling or shaking events caused ... at any time of the year. Before An Earthquake Look around places where you spend time. Identify ...

  2. Recent development of the earthquake strong motion-intensity catalog and intensity prediction equations for Iran

    NASA Astrophysics Data System (ADS)

    Zare, Mehdi

    2016-12-01

    This study aims to develop a new earthquake strong motion-intensity catalog as well as intensity prediction equations for Iran based on the available data. For this purpose, all the sites which had both recorded strong motion and intensity values throughout the region were first searched. Then, the data belonging to the 306 identified sites were processed, and the results were compiled as a new strong motion-intensity catalog. Based on this new catalog, two empirical equations between the values of intensity and the ground motion parameters (GMPs) for the Iranian earthquakes were calculated. At the first step, earthquake "intensity" was considered as a function of five independent GMPs including "Log (PHA)," "moment magnitude (MW)," "distance to epicenter," "site type," and "duration," and a multiple stepwise regression was calculated. Regarding the correlations between the parameters and the effectiveness coefficients of the predictors, the Log (PHA) was recognized as the most effective parameter on the earthquake "intensity," while the parameter "site type" was removed from the equations since it was determined to be the least significant variable. Then, at the second step, a simple ordinary least squares (OLS) regression was fitted only between the parameters intensity and the Log (PHA), which resulted in more over- or underestimated intensity values compared to the results of the multiple intensity-GMP regression. However, for rapid response purposes, the simple OLS regression may be more useful than the multiple regression due to its data availability and simplicity. In addition, according to 50 selected earthquakes, an empirical relation between the macroseismic intensity (I0) and MW was developed.
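The simple OLS step described above, regressing intensity on Log(PHA) alone, can be sketched as follows; the coefficients, noise level, and synthetic data are illustrative assumptions, not the values fitted from the Iranian catalog:

```python
import numpy as np

# Synthetic stand-in for the 306-site catalog: I = a + b*log10(PHA) + noise
rng = np.random.default_rng(42)
log_pha = rng.uniform(0.5, 3.0, size=306)                 # log10 of peak horizontal acceleration
intensity = 1.5 + 2.0 * log_pha + rng.normal(0, 0.3, size=306)

b, a = np.polyfit(log_pha, intensity, 1)                   # OLS: slope b, intercept a

def predict_intensity(pha_log10):
    """Rapid-response style intensity estimate from Log(PHA) alone."""
    return a + b * pha_log10
```

A multiple regression would simply add columns (MW, distance, duration) to the design matrix; the trade-off noted in the abstract is that those extra predictors are often not available in the first minutes after an event.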

  3. Ground motion prediction and earthquake scenarios in the volcanic region of Mt. Etna (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Langer, Horst; Tusa, Giuseppina; Luciano, Scarfi; Azzaro, Raffaela

    2013-04-01

    One of the principal issues in the assessment of seismic hazard is the prediction of relevant ground motion parameters, e.g., peak ground acceleration, radiated seismic energy, and response spectra, at some distance from the source. Here we first present ground motion prediction equations (GMPE) for horizontal components for the area of Mt. Etna and adjacent zones. Our analysis is based on 4878 three-component seismograms related to 129 seismic events with local magnitudes ranging from 3.0 to 4.8, hypocentral distances up to 200 km, and focal depths shallower than 30 km. Accounting for the specific seismotectonic and geological conditions of the considered area we have divided our data set into three sub-groups: (i) Shallow Mt. Etna Events (SEE), i.e., typically volcano-tectonic events in the area of Mt. Etna having a focal depth less than 5 km; (ii) Deep Mt. Etna Events (DEE), i.e., events in the volcanic region, but with a depth greater than 5 km; (iii) Extra Mt. Etna Events (EEE), i.e., purely tectonic events falling outside the area of Mt. Etna. The predicted PGAs for the SEE are lower than those predicted for the DEE and the EEE, reflecting their lower high-frequency energy content. We explain this observation as due to lower stress drops. The attenuation relationships are compared to the ones most commonly used, such as Sabetta and Pugliese (1987) for Italy, or Ambraseys et al. (1996) for Europe. Whereas our GMPEs are based on small earthquakes, the two above-mentioned attenuation relationships cover moderate to large magnitudes (up to 6.8 and 7.9, respectively). We show that the extrapolation of our GMPEs to magnitudes beyond the range covered by the data is misleading; at the same time, the aforementioned relationships also fail to predict ground motion parameters for our data set. Despite these discrepancies, we can exploit our data for setting up scenarios for strong earthquakes for which no instrumental recordings are
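A GMPE of the general kind described here can be sketched as a least-squares fit of log10(PGA) = a + b·ML - c·log10(R); the functional form and all coefficients below are illustrative assumptions, not the authors' regression for Etna:

```python
import numpy as np

# Synthetic data set mimicking the magnitude/distance ranges in the abstract
rng = np.random.default_rng(7)
n = 500
ml = rng.uniform(3.0, 4.8, n)                   # local magnitudes
r = rng.uniform(5.0, 200.0, n)                  # hypocentral distance, km
log_pga = 0.5 + 0.7 * ml - 1.3 * np.log10(r) + rng.normal(0, 0.2, n)

# Design matrix for log10(PGA) = a + b*ML - c*log10(R)
A = np.column_stack([np.ones(n), ml, -np.log10(r)])
coef, *_ = np.linalg.lstsq(A, log_pga, rcond=None)
a, b, c = coef                                   # should recover ~0.5, ~0.7, ~1.3
```

The extrapolation warning in the abstract corresponds to evaluating such a fit at, say, ML 6.8: nothing in the data constrains the magnitude scaling there, so the prediction is formally defined but unreliable.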

  4. Calibration of the landsliding numerical model SLIPOS and prediction of the seismically induced erosion for several large earthquakes scenarios

    NASA Astrophysics Data System (ADS)

    Jeandet, Louise; Lague, Dimitri; Steer, Philippe; Davy, Philippe; Quigley, Mark

    2016-04-01

    Coseismic landsliding is an important contributor to the long-term erosion of mountain belts. But while the scaling between earthquake magnitude and the volume of sediments eroded is well known, geomorphic consequences such as divide migration or valley infilling remain poorly understood, and the prediction of the location of landslide sources and deposits is a challenging issue. To progress on this topic, algorithms that correctly resolve the interaction between landsliding and ground shaking are needed. Peak Ground Acceleration (PGA) has been shown to control the landslide density at first order. But it can trigger landslides by two mechanisms: the direct effect of seismic acceleration on the force balance, and a transient decrease in hillslope strength parameters. The relative importance of both effects on slope stability is not well understood. We use SLIPOS, an algorithm of bedrock landsliding based on a simple stability analysis applied at local scale. The model is capable of reproducing the area/volume scaling and area distribution of natural landslides. We aim to include the effects of earthquakes in SLIPOS by simulating the PGA effect via a spatially variable cohesion decrease. We run the model (i) on the Mw 7.6 Chi-Chi earthquake (1999) to quantitatively test the accuracy of the predictions and (ii) on earthquake scenarios (Mw 6.5 to 8) on the New Zealand Alpine fault to infer the volume of landslides associated with large events. For the Chi-Chi earthquake, we predict the observed total landslide area within a factor of 2. Moreover, we show with the New Zealand fault case that simulating ground acceleration by cohesion decrease leads to a realistic scaling between the volume of sediments and the earthquake magnitude.
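The kind of local stability analysis with a shaking-dependent term can be illustrated with a generic pseudo-static infinite-slope factor of safety; this is a textbook formulation, not the actual SLIPOS criterion, and all parameter values are assumptions:

```python
import numpy as np

def factor_of_safety(slope_deg, cohesion_pa, friction_deg,
                     gamma=26000.0, z=5.0, k=0.0):
    """Pseudo-static infinite-slope FS for a dry slope.
    gamma: unit weight (N/m^3), z: failure depth (m),
    k: horizontal seismic coefficient (~PGA/g)."""
    th = np.radians(slope_deg)
    phi = np.radians(friction_deg)
    w = gamma * z                                              # overburden stress
    tau = w * (np.sin(th) * np.cos(th) + k * np.cos(th) ** 2)  # driving shear stress
    sigma = w * (np.cos(th) ** 2 - k * np.sin(th) * np.cos(th))  # effective normal stress
    return (cohesion_pa + sigma * np.tan(phi)) / tau

fs_static = factor_of_safety(30.0, 0.0, 30.0)          # = 1.0 at limit equilibrium
fs_shaken = factor_of_safety(30.0, 0.0, 30.0, k=0.3)   # < 1.0: fails under shaking
```

Note the two triggering mechanisms from the abstract map onto this expression: the seismic coefficient k alters the force balance directly, while a transient strength decrease would lower cohesion_pa or friction_deg.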

  5. Raising the science awareness of first year undergraduate students via an earthquake prediction seminar

    NASA Astrophysics Data System (ADS)

    Gilstrap, T. D.

    2011-12-01

    The public is fascinated with and fearful of natural hazards such as earthquakes. After every major earthquake there is a surge of interest in earthquake science and earthquake prediction. Yet many people do not understand the challenges of earthquake prediction and the need to fund earthquake research. An earthquake prediction seminar is offered to first year undergraduate students to improve their understanding of why earthquakes happen, how earthquake research is done and more specifically why it is so challenging to issue short-term earthquake prediction. Some of these students may become scientists but most will not. For the majority this is an opportunity to learn how science research works and how it is related to policy and society. The seminar is seven weeks long, two hours per week and has been taught every year for the last four years. The material is presented conceptually; there is very little quantitative work involved. The class starts with a field trip to the Randolph College Seismic Station where students learn about seismographs and the different types of seismic waves. Students are then provided with basic background on earthquakes. They learn how to pick arrival times using real seismograms, how to use earthquake catalogues, how to predict the arrival of an earthquake wave at any location on Earth. Next they learn about long, intermediate, short and real time earthquake prediction. Discussions are an essential part of the seminar. Students are challenged to draw their own conclusions on the pros and cons of earthquake prediction. Time is designated to discuss the political and economic impact of earthquake prediction. At the end of the seven weeks students are required to write a paper and discuss the need for earthquake prediction. The class is not focused on the science but rather the links between the science issues and their economical and political impact. Weekly homework assignments are used to aid and assess students' learning. Pre and

  6. The 26 May 2006 magnitude 6.4 Yogyakarta earthquake south of Mt. Merapi volcano: Did lahar deposits amplify ground shaking and thus lead to the disaster?

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Wang, R.; Luehr, B.-G.; Wassermann, J.; Behr, Y.; Parolai, S.; Anggraini, A.; Günther, E.; Sobiesiak, M.; Grosser, H.; Wetzel, H.-U.; Milkereit, C.; Sri Brotopuspito, P. J. K.; Harjadi, P.; Zschau, J.

    2008-05-01

    Indonesia is repeatedly unsettled by severe volcano- and earthquake-related disasters, which are geologically coupled to the 5-7 cm/a tectonic convergence of the Australian plate beneath the Sunda Plate. On Saturday, 26 May 2006, the southern coast of central Java was struck by an earthquake at 2254 UTC in the Sultanate Yogyakarta. Although the magnitude reached only Mw = 6.4, it left more than 6,000 fatalities and up to 1,000,000 homeless. The main disaster area was south of Mt. Merapi Volcano, located within a narrow topographic and structural depression along the Opak River. The earthquake disaster area within the depression is underlain by thick volcaniclastic deposits commonly derived in the form of lahars from Mt. Merapi Volcano, which had a major influence leading to the disaster. In order to more precisely understand this earthquake and its consequences, a 3-month aftershock measurement campaign was performed from May to August 2006. We here present the first location results, which suggest that the Yogyakarta earthquake occurred at 10-20 km distance east of the disaster area, outside of the topographic depression. Using simple model calculations taking material heterogeneity into account we illustrate how soft volcaniclastic deposits may locally amplify ground shaking at distance. As the high degree of observed damage may have been augmented by the seismic response of the volcaniclastic Mt. Merapi deposits, this work implies that the volcano had an indirect effect on the level of earthquake destruction.

  7. Earthquake forecasting and warning

    SciTech Connect

    Rikitake, T.

    1983-01-01

    This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings of precursory phenomena are included. A summary of circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all magnitudes ≥ 7.0) in China and the 1978 Izu-Oshima earthquake in Japan is presented. This book fails to comprehensively summarize recent advances in earthquake prediction research.

  8. Large magnitude (M > 7.5) offshore earthquakes in 2012: few examples of absent or little tsunamigenesis, with implications for tsunami early warning

    NASA Astrophysics Data System (ADS)

    Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano

    2013-04-01

    We take into account some examples of offshore earthquakes that occurred worldwide in 2012 and were characterised by a "large" magnitude (Mw equal to or larger than 7.5) but which produced no or little tsunami effects. Here, "little" is intended as "lower than expected on the basis of the parent earthquake magnitude". The examples we analyse include three earthquakes that occurred along the Pacific coasts of Central America (20 March, Mw=7.8, Mexico; 5 September, Mw=7.6, Costa Rica; 7 November, Mw=7.5, Mexico), the Mw=7.6 and Mw=7.7 earthquakes that occurred on 31 August and 28 October offshore Philippines and offshore Alaska, respectively, and the two Indian Ocean earthquakes registered on a single day (11 April) and characterised by Mw=8.6 and Mw=8.2. For each event, we approach the problem of its tsunamigenic potential from two different perspectives. The first can be considered purely scientific and coincides with the question: why was the ensuing tsunami so weak? The answer can be related partly to the particular tectonic setting in the source area, partly to the particular position of the source with respect to the coastline, and finally to the focal mechanism of the earthquake and the slip distribution on the ruptured fault. The first two pieces of information are available soon after the earthquake occurrence, while the third requires time periods on the order of tens of minutes. The second perspective is more "operational" and coincides with the tsunami early warning perspective, for which the question is: will the earthquake generate a significant tsunami and if so, where will it strike? The Indian Ocean events of 11 April 2012 are perfect examples of the fact that information on the earthquake magnitude and position alone may not be sufficient to produce reliable tsunami warnings. We emphasise that it is of utmost importance that the focal mechanism determination is obtained in the future much more quickly than it is at present and that this

  9. Network of seismo-geochemical monitoring observatories for earthquake prediction research in India

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Hirok; Barman, Chiranjib; Iyengar, A.; Ghose, Debasis; Sen, Prasanta; Sinha, Bikash

    2013-08-01

    This paper briefly reviews the research carried out to develop multi-parametric gas-geochemical monitoring facilities dedicated to earthquake prediction research in India by installing a network of seismo-geochemical monitoring observatories in different regions of the country. In an attempt to detect earthquake precursors, the concentrations of helium, argon, nitrogen, methane, radon-222 (222Rn), polonium-218 (218Po), and polonium-214 (214Po) emanating from hydrothermal systems are monitored continuously and round the clock at these observatories. In this paper, we make a cross-correlation study of a number of geochemical anomalies recorded at these observatories. With the data received from each of these observatories we attempt a time-series analysis relating anomaly magnitude and epicentral distance through statistical methods and empirical formulations that relate the area of influence to earthquake magnitude. Application of linear and nonlinear statistical techniques to the recorded geochemical data sets reveals a clear signature of long-range correlation in the data.
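One standard nonlinear technique for detecting the long-range correlation mentioned above is detrended fluctuation analysis (DFA). The sketch below applies first-order DFA to white noise, for which the scaling exponent should be near 0.5; a long-range correlated geochemical series would give a larger exponent. This is a generic illustration under assumed parameters, not the authors' analysis pipeline:

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order DFA: returns the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))               # integrated profile of the series
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        t = np.arange(s)
        f2 = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # Slope of log F(s) vs log s is the DFA exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(3)
alpha = dfa_exponent(rng.normal(size=20000), [16, 32, 64, 128, 256])  # ~0.5 for white noise
```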

  10. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  11. Large-magnitude, late Holocene earthquakes on the Genoa fault, West-Central Nevada and Eastern California

    USGS Publications Warehouse

    Ramelli, A.R.; Bell, J.W.; DePolo, C.M.; Yount, J.C.

    1999-01-01

    The Genoa fault, a principal normal fault of the transition zone between the Basin and Range Province and the northern Sierra Nevada, displays a large and conspicuous prehistoric scarp. Three trenches excavated across this scarp exposed two large-displacement, late Holocene events. Two of the trenches contained multiple layers of stratified charcoal, yielding radiocarbon ages suggesting the most recent and penultimate events on the main part of the fault occurred 500-600 cal B.P., and 2000-2200 cal B.P., respectively. Normal-slip offsets of 3-5.5 m per event along much of the rupture length are comparable to the largest historical Basin and Range Province earthquakes, suggesting these paleoearthquakes were on the order of magnitude 7.2-7.5. The apparent late Holocene slip rate (2-3 mm/yr) is one of the highest in the Basin and Range Province. Based on structural and behavioral differences, the Genoa fault is here divided into four principal sections (the Sierra, Diamond Valley, Carson Valley, and Jacks Valley sections) and is distinguished from three northeast-striking faults in the Carson City area (the Kings Canyon, Carson City, and Indian Hill faults). The conspicuous scarp extends for nearly 25 km, the combined length of the Carson Valley and Jacks Valley sections. The Diamond Valley section lacks the conspicuous scarp, and older alluvial fans and bedrock outcrops on the downthrown side of the fault indicate a lower activity rate. Activity further decreases to the south along the Sierra section, which consists of numerous distributed faults. All three northeast-striking faults in the Carson City area ruptured within the past few thousand years, and one or more may have ruptured during recent events on the Genoa fault.

  12. Prediction model of earthquake with the identification of earthquake source polarity mechanism through the focal classification using ANFIS and PCA technique

    NASA Astrophysics Data System (ADS)

    Setyonegoro, W.

    2016-05-01

    The incidence of earthquake disasters has caused considerable human and material losses. This research aims to predict the return period of earthquakes, together with identification of the earthquake source polarity mechanism, for a case study area in Sumatra. Earthquakes are predicted using the ANFIS technique trained on historical earthquake data. In this technique the historical data set is compiled into intervals of daily average earthquake occurrence in a year. The output to be obtained is a model of the daily average earthquake return period in a year. After the return period model has been learned by ANFIS, polarity recognition is performed through image recognition techniques on the focal sphere using the principal component analysis (PCA) method. As a result, the model's predicted return period of earthquake events for the average monthly return period showed a correlation coefficient of 0.014562.
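The abstract gives no details of the PCA step, so the following is only a hedged sketch of how PCA via SVD reduces flattened focal-sphere images to a few features that a classifier such as ANFIS could consume; the image size, count, and component number are all hypothetical:

```python
import numpy as np

# 40 hypothetical 16x16 focal-sphere images, flattened to feature vectors
rng = np.random.default_rng(1)
images = rng.normal(size=(40, 16 * 16))

x = images - images.mean(axis=0)               # center each pixel feature
u, s, vt = np.linalg.svd(x, full_matrices=False)
components = vt[:5]                            # top-5 principal directions
scores = x @ components.T                      # low-dimensional features per image
```

The singular values s come out in descending order, so the first rows of vt capture the directions of greatest variance across the image set.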

  13. Individual differences in electrophysiological responses to performance feedback predict AB magnitude.

    PubMed

    MaClean, Mary H; Arnell, Karen M

    2013-06-01

    The attentional blink (AB) is observed when report accuracy for a second target (T2) is reduced if T2 is presented within approximately 500 ms of a first target (T1), but accuracy is relatively unimpaired at longer T1-T2 separations. The AB is thought to represent a transient cost of attending to a target, and reliable individual differences have been observed in its magnitude. Some models of the AB have suggested that cognitive control contributes to production of the AB, such that greater cognitive control is associated with larger AB magnitudes. Performance-monitoring functions are thought to modulate the strength of cognitive control, and those functions are indexed by event-related potentials in response to both endogenous and exogenous performance evaluation. Here we examined whether individual differences in the amplitudes to internal and external response feedback predict individual AB magnitudes. We found that electrophysiological responses to externally provided performance feedback, measured in two different tasks, did predict individual differences in AB magnitude, such that greater feedback-related N2 amplitudes were associated with larger AB magnitudes, regardless of the valence of the feedback.

  14. The Ordered Network Structure and Prediction Summary for M≥7 Earthquakes in Xinjiang Region of China

    NASA Astrophysics Data System (ADS)

    Men, Ke-Pei; Zhao, Kai

    2014-12-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11 ~ 12 a, 41 ~ 43 a, 18 ~ 19 a, and 5 ~ 6 a. Guided by the information forecasting theory of Wen-Bo Weng and building on previous research results, we combine ordered network structure analysis with complex network technology, focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, and add new information to further optimize the network, constructing 2D and 3D ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region during the past 210 years. On this basis, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction opinion is presented: the next two M ≥ 7 earthquakes will probably occur around 2019 - 2020 and 2025 - 2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.

  15. The 1170 and 1202 CE Dead Sea Rift earthquakes and long-term magnitude distribution of the Dead Sea Fault zone

    USGS Publications Warehouse

    Hough, S.E.; Avni, R.

    2009-01-01

    In combination with the historical record, paleoseismic investigations provide a record of large earthquakes in the Dead Sea Rift extending back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of relationships developed for other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision; they illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historical record and show that it is consistent with a Gutenberg-Richter distribution with a b-value of 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results imply that an earthquake of M7.8 is expected within the zone on average every 1000 years. © 2011 Science From Israel/LPP Ltd.
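
    The stated b-value of 1 and the ~1000-year recurrence of M7.8 events fix the Gutenberg-Richter relation log10 N = a - bM for the zone, so recurrence intervals at other magnitudes follow directly. A minimal sketch (the calibration numbers come from the abstract; the function name is ours):

```python
import math

def gr_annual_rate(m, a, b=1.0):
    """Annual rate of earthquakes with magnitude >= m, from log10 N = a - b*m."""
    return 10 ** (a - b * m)

# Calibrate a so that an M >= 7.8 event recurs on average every 1000 years,
# as estimated for the Dead Sea Rift zone in the abstract.
b = 1.0
a = b * 7.8 + math.log10(1.0 / 1000.0)   # forces gr_annual_rate(7.8) = 1/1000

recurrence_m7 = 1.0 / gr_annual_rate(7.0, a, b)
print(round(recurrence_m7))   # recurrence interval (years) for M >= 7.0
```

    With b = 1, each unit drop in magnitude raises the rate tenfold, so M ≥ 7.0 events would recur roughly every 160 years under this calibration.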

  16. Update of the Graizer-Kalkan ground-motion prediction equations for shallow crustal continental earthquakes

    USGS Publications Warehouse

    Graizer, Vladimir; Kalkan, Erol

    2015-01-01

    A ground-motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration and 5-percent-damped pseudo-spectral acceleration response ordinates of the maximum horizontal component of randomly oriented ground motions was developed by Graizer and Kalkan (2007, 2009) for seismic hazard analyses and engineering applications. This GMPE was derived from the greatly expanded Next Generation of Attenuation (NGA)-West1 database. In this study, Graizer and Kalkan’s GMPE is revised to include (1) an anelastic attenuation term as a function of quality factor (Q0), to capture regional differences in large-distance attenuation, and (2) a new frequency-dependent sedimentary-basin scaling term as a function of depth to the 1.5-km/s shear-wave velocity isosurface, to improve ground-motion predictions for sites on deep sedimentary basins. The new model (GK15), designed to be simple, is applicable to the western United States and other regions of shallow continental crust in active tectonic environments, and may be used for earthquakes with moment magnitudes 5.0–8.0, distances 0–250 km, average shear-wave velocities 200–1,300 m/s, and spectral periods 0.01–5 s. Directivity effects are not explicitly modeled but are included through the variability of the data. Our aleatory variability model captures inter-event variability, which decreases with magnitude and increases with distance. A mixed-effects residuals analysis shows no trend in the GK15 residuals with respect to the independent parameters. The GK15 is a significant improvement over Graizer and Kalkan (2007, 2009), providing a demonstrably reliable description of ground-motion amplitudes recorded from shallow crustal earthquakes in active tectonic regions over a wide range of magnitudes, distances, and site conditions.
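
    The role of an anelastic attenuation term can be illustrated with a generic GMPE shape. This is a hypothetical sketch, not the GK15 functional form: the coefficients c0 and c1, the frequency f_hz, and the shear velocity beta_kms are all invented for illustration; only the -pi*f*r/(Q0*beta) whole-path attenuation form is standard:

```python
import numpy as np

def ln_pga(m, r_km, q0, c0=-1.0, c1=1.2, beta_kms=3.5, f_hz=5.0):
    """Toy GMPE: magnitude scaling + geometric spreading + anelastic decay."""
    geometric = -np.log(np.sqrt(r_km**2 + 10.0**2))      # spreading with near-source saturation
    anelastic = -np.pi * f_hz * r_km / (q0 * beta_kms)   # whole-path attenuation, depends on Q0
    return c0 + c1 * m + geometric + anelastic

# Same event and distance, two regional quality factors: a low-Q0 region
# attenuates large-distance ground motion substantially more.
diff = ln_pga(6.5, 200.0, q0=150.0) - ln_pga(6.5, 200.0, q0=600.0)
print(diff)   # negative: lower Q0 gives weaker predicted motion at 200 km
```

    The magnitude and geometric terms cancel in the comparison, isolating the regional Q0 effect the revised model is designed to capture.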

  17. Numerical shake prediction for Earthquake Early Warning: data assimilation, real-time shake-mapping, and simulation of wave propagation

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Aoki, S.

    2014-12-01

    In many current Earthquake Early Warning (EEW) systems, the hypocenter and magnitude are determined quickly and the strength of ground motion is then predicted. The 2011 Tohoku earthquake (Mw 9.0), however, revealed technical issues with these conventional methods: under-prediction due to the large extent of the fault rupture, and over-prediction due to confusion of the system by multiple simultaneous aftershocks. To address these issues, a new concept is proposed for EEW: using a data assimilation technique, the present wavefield is estimated precisely in real time (real-time shake mapping), and the future wavefield is then predicted time-evolutionally from the physical process of seismic wave propagation. Information about hypocenter location and magnitude is not required, a fundamental difference from the conventional approach. In the proposed method, data assimilation is applied to estimate the current spatial distribution of the wavefield, using not only actual observations but also the wavefield anticipated from one time step before. Real-time application of the data assimilation technique yields a real-time estimate of the wavefield, which corresponds to real-time shake mapping. Once the present situation is estimated precisely, the future situation is predicted using a simulation of wave propagation. The proposed method is applied to the 2011 Tohoku earthquake (Mw 9.0) and the 2004 Mid-Niigata earthquake (Mw 6.7). The future wavefield is predicted precisely, and the prediction improves as the lead time shortens: for example, the error of a 10 s prediction is smaller than that of a 20 s prediction, and that of a 5 s prediction is smaller still. With this method it becomes possible to predict ground motion precisely even for a large fault-rupture extent or multiple simultaneous earthquakes. The proposed method is based on a simulation of the physical process from the precisely estimated present condition. This
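
    The assimilate-then-predict cycle can be sketched in one dimension. This is a toy stand-in for the method described above: pure advection replaces the full wave-propagation simulation, and the station spacing, noise level, and blending weight alpha are invented for illustration:

```python
import numpy as np

n, c, alpha = 200, 1, 0.5          # grid size, cells advected per step, blend weight
stations = np.arange(0, n, 10)     # gridpoints where observations are available

def step(u):
    """One time step of the physical model (advection by c cells, periodic)."""
    return np.roll(u, c)

rng = np.random.default_rng(0)
true = np.exp(-0.5 * ((np.arange(n) - 30) / 5.0) ** 2)   # "true" wave packet
est = np.zeros(n)                                        # assimilated estimate

for _ in range(50):                # real-time loop: predict one step, then assimilate
    true = step(true)
    est = step(est)                # anticipated wavefield from one time step before
    obs = true[stations] + 0.01 * rng.standard_normal(stations.size)
    est[stations] += alpha * (obs - est[stations])       # blend in noisy observations

# Prediction: run the physical model forward with no further data.
forecast, future = est.copy(), true.copy()
for _ in range(20):
    forecast, future = step(forecast), step(future)

print(np.abs(forecast - future).max())   # small residual: future wavefield predicted
```

    Note that no hypocenter or magnitude ever enters: the forecast quality rests entirely on how well the current wavefield was assimilated.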

  18. Earthquake prediction research at the Seismological Laboratory, California Institute of Technology

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Nevertheless, basic earthquake-related information has always been of consuming interest to the public and the media in this part of California (fig. 2). So it is not surprising that earthquake prediction continues to be a significant research program at the laboratory. Several of the current spectrum of projects related to prediction are discussed below.

  19. Earthquakes

    MedlinePlus


  20. Improved instrumental magnitude prediction expected from version 2 of the NASA SKY2000 master star catalog

    NASA Technical Reports Server (NTRS)

    Sande, C. B.; Brasoveanu, D.; Miller, A. C.; Home, A. T.; Tracewell, D. A.; Warren, W. H., Jr.

    1998-01-01

    The SKY2000 Master Star Catalog (MC), Version 2, and its predecessors have been designed to provide the basic astronomical input data needed for satellite acquisition and attitude determination on NASA spacecraft. Stellar positions and proper motions are the primary MC data required for operations support, followed closely by stellar brightness observed in various standard astronomical passbands. The instrumental red-magnitude prediction subsystem (REDMAG) in the MMSCAT software package computes the expected instrumental color index (CI) [sensor color correction] from an observed astronomical stellar magnitude in the MC and the characteristics of the stellar spectrum, astronomical passband, and sensor sensitivity curve. The computation is more error prone the greater the mismatch between the sensor sensitivity curve characteristics and those of the observed astronomical passbands. This paper presents a preliminary performance analysis of a typical red-sensitive CCD star tracker (ST) during acquisition of sensor data from the two Ball CT-601 STs onboard the Rossi X-Ray Timing Explorer (RXTE). A comparison is made of relative star positions measured in the ST FOV coordinate system with the expected results computed from the recently released Tycho Catalogue. The comparison is repeated for a group of observed stars with nearby, bright neighbors in order to determine the tracker behavior in the presence of an interfering near neighbor (NN). The results of this analysis will be used to help define a new photoelectric photometric instrumental sensor magnitude system (S) based on several thousand bright-star magnitudes observed with the RXTE STs. This new system will be implemented in Version 2 of the SKY2000 MC to provide improved predicted magnitudes in the mission run catalogs.

  1. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    SciTech Connect

    Takeda, Fumihide; Takeo, Makoto

    2004-12-09

    We have developed a short-term deterministic earthquake (EQ) forecasting system similar to those used for typhoons and hurricanes, which has been under test operation at the website http://www.tec21.jp/ since June 2003. We use the focal and crust-displacement data recently opened to the public by the Japanese seismograph and global positioning system (GPS) networks, respectively. Our system divides the forecasting area into five regional areas of Japan, each of which is about 5 deg. by 5 deg. We have found that it can forecast the focus, date of occurrence, and magnitude (M) of an impending EQ (whose M is larger than about 6), all within narrow limits. We describe the system with two examples. One is the 2003/09/26 EQ of M 8 in the Hokkaido area, which is a hindcast. The other is a successful rollout of the most recent forecast, of the 2004/05/30 EQ of M 6.7 off the coast of the southern Kanto (Tokyo) area.

  2. Validation of a ground motion synthesis and prediction methodology for the 1988, M=6.0, Saguenay Earthquake

    SciTech Connect

    Hutchings, L.; Jarpe, S.; Kasameyer, P.; Foxall, W.

    1998-01-01

    We model the 1988, M=6.0, Saguenay earthquake. We utilize an approach that has been developed to predict strong ground motion. This approach involves developing a set of rupture scenarios based upon bounds on rupture parameters. Rupture parameters include rupture geometry, hypocenter, rupture roughness, rupture velocity, healing velocity (rise times), slip distribution, asperity size and location, and slip vector. Scenario here refers to specific values of these parameters for a hypothesized earthquake. Synthetic strong ground motions are then generated for each rupture scenario. A sufficient number of scenarios are run to span the variability in strong ground motion due to the source uncertainties. By having a suite of rupture scenarios of hazardous earthquakes for a fixed magnitude and identifying the hazard to the site from the one-standard-deviation value of engineering parameters, we have introduced a probabilistic component to the deterministic hazard calculation. For this study we developed bounds on rupture scenarios from previous research on this earthquake. The time history closest to the observed ground motion was selected as a model for the Saguenay earthquake.

  3. Pipeline experiment co-located with USGS Parkfield earthquake prediction project

    SciTech Connect

    Isenberg, J.; Richardson, E.

    1995-12-31

    A field experiment to investigate the response of buried pipelines to lateral offsets and traveling waves has been operational since June 1988 at the Owens' Pasture site near Parkfield, CA, where the US Geological Survey has predicted a M6 earthquake. Although the predicted earthquake has not yet occurred, the 1989 Loma Prieta earthquake and a 1992 M4.7 earthquake near Parkfield produced measurable response at the pipeline experiment. The present paper describes upgrades to the experiment which were introduced after Loma Prieta and which performed successfully in the 1992 event.

  4. Earthquake Forecasting Methodology Catalogue - A collection and comparison of the state-of-the-art in earthquake forecasting and prediction methodologies

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2015-04-01

    Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The methods have been categorized into time-independent, time-dependent, and hybrid methods, the last group comprising methods that use more than just historical earthquake statistics. Such a categorization is necessary to distinguish pure statistical approaches, in which historical earthquake data represent the only direct data source, from algorithms that incorporate further information, e.g. spatial data on fault distributions, or that incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account, to identify methods that can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan, or California. Many more elements have been reviewed, including the application of established theories and methods, e.g. for the determination of the completeness magnitude, or whether the modified Omori law was used or not. Target temporal scales are identified, as well as the publication history. All these aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state of the art.

  5. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model for usual cases or the Omori-Utsu formula for the case of forecasting aftershocks, which gives probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, like a gambler starting with a certain number of reputation points, bets 1 reputation point on ``Yes'' or ``No'' according to his forecast, or bets nothing if he makes an NA-prediction. If the forecaster bets 1 reputation point on ``Yes'' and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on ``Yes''. In this way, if the reference model is correct, the expected return that he gains from this bet is 0. The rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on ``Yes'' and 1-p on ``No''. In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. The method also extends to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest.
We also calculate the upper bound of the gambling score when
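
    The betting rule above can be written out directly. A minimal sketch (function and variable names ours), including the probability-forecast variant in which 1 reputation point is split between ``Yes'' and ``No'':

```python
def gambling_return(bet, p0, success):
    """Pay-off for betting `bet` reputation points on "Yes" against a
    reference model that assigns probability p0 to the event."""
    return bet * (1 - p0) / p0 if success else -bet

p0 = 0.1   # reference-model probability of at least one event in the window

# Expected pay-off is zero when the reference model is correct:
expected = p0 * gambling_return(1, p0, True) + (1 - p0) * gambling_return(1, p0, False)

# Probability forecast p: split 1 point into p on "Yes" and 1-p on "No".
p = 0.4
pay_event = p * (1 - p0) / p0 - (1 - p)                 # event occurs
pay_none = -p + (1 - p) * p0 / (1 - p0)                 # no event occurs
expected_split = p0 * pay_event + (1 - p0) * pay_none   # also zero

print(expected, expected_split)   # both ~0 (up to floating point)
```

    The fairness property is exactly the one stated in the abstract: under the reference model, no betting strategy has positive expected return.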

  6. Do submarine landslides and turbidites provide a faithful record of large magnitude earthquakes in the Western Mediterranean?

    NASA Astrophysics Data System (ADS)

    Clare, Michael

    2016-04-01

    Large earthquakes and associated tsunamis pose a potential risk to coastal communities. Earthquakes may trigger submarine landslides that mix with surrounding water to produce turbidity currents. Recent studies offshore Algeria have shown that earthquake-triggered turbidity currents can break important communication cables. If large earthquakes reliably trigger landslides and turbidity currents, then their deposits can be used as a long-term record to understand temporal trends in earthquake activity. It is important to understand in which settings this approach can be applied. We provide some suggestions for future Mediterranean palaeoseismic studies, based on learnings from three sites. Two long piston cores from the Balearic Abyssal Plain provide long-term (<150 ka) records of large-volume turbidites. The frequency distribution of turbidite recurrence indicates a constant hazard rate through time and is similar to the Poisson distribution attributed to large earthquake recurrence on a regional basis. Turbidite thickness varies in response to sea level, which is attributed to proximity and availability of sediment. While mean turbidite recurrence is similar to that of the seismogenic El Asnam fault in Algeria, geochemical analysis reveals that not all turbidites were sourced from the Algerian margin. The basin plain record is instead an amalgamation of flows from Algeria, Sardinia, and river-fed systems further to the north, many of which were not earthquake-triggered. Thus, such distal basin plain settings are not ideal sites for turbidite palaeoseismology. Boxcores from the eastern Algerian slope reveal a thin silty turbidite dated to ~700 ya. Given its similar appearance across a widespread area and correlative age, the turbidite is inferred to have been earthquake-triggered. More recent earthquakes that have affected the Algerian slope are not recorded, however. Unlike the central and western Algerian slopes, the eastern part lacks canyons and had limited sediment

  7. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions over the long and short term. Preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach to earthquake prediction research. Such extrapolations contain many uncertainties; however, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings.
The approach described is different from the usual

  8. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    NASA Astrophysics Data System (ADS)

    Böse, Maren; Graves, Robert W.; Gill, David; Callaghan, Scott; Maechling, Philip J.

    2014-09-01

    Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) and broad-band (0-10 Hz) data sets. CyberShake encompasses 3-D wave-propagation simulations of >415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a `proof of concept', being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (~20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least `moderate', `strong' or `very strong' shaking in the Los Angeles (LA) basin

  9. Prospectively Evaluating the Collaboratory for the Study of Earthquake Predictability: An Evaluation of the UCERF2 and Updated Five-Year RELM Forecasts

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria

    2016-04-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying consistent log-likelihoods (L-test) and magnitude distributions (M-test) with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W- tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. 
Though there is no significant difference between the UCERF2 and NSHMP

  10. Historical precipitation predictably alters the shape and magnitude of microbial functional response to soil moisture.

    PubMed

    Averill, Colin; Waring, Bonnie G; Hawkes, Christine V

    2016-05-01

    Soil moisture constrains the activity of decomposer soil microorganisms, and in turn the rate at which soil carbon returns to the atmosphere. While increases in soil moisture are generally associated with increased microbial activity, historical climate may constrain current microbial responses to moisture. However, it is not known if variation in the shape and magnitude of microbial functional responses to soil moisture can be predicted from historical climate at regional scales. To address this problem, we measured soil enzyme activity at 12 sites across a broad climate gradient spanning 442-887 mm mean annual precipitation. Measurements were made eight times over 21 months to maximize sampling during different moisture conditions. We then fit saturating functions of enzyme activity to soil moisture and extracted half saturation and maximum activity parameter values from model fits. We found that 50% of the variation in maximum activity parameters across sites could be predicted by 30-year mean annual precipitation, an indicator of historical climate, and that the effect is independent of variation in temperature, soil texture, or soil carbon concentration. Based on this finding, we suggest that variation in the shape and magnitude of soil microbial response to soil moisture due to historical climate may be remarkably predictable at regional scales, and this approach may extend to other systems. If historical contingencies on microbial activities prove to be persistent in the face of environmental change, this approach also provides a framework for incorporating historical climate effects into biogeochemical models simulating future global change scenarios.
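
    The parameter-extraction step described above, fitting a saturating function of soil moisture and reading off maximum-activity and half-saturation values, can be sketched on synthetic data. A hypothetical illustration using a Michaelis-Menten form and a coarse grid search (all numbers are invented; this is not the study's fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.linspace(0.05, 0.6, 40)                # soil moisture (fraction, synthetic)
vmax_true, k_true = 12.0, 0.15                # "true" max activity and half-saturation
y = vmax_true * w / (k_true + w) + 0.05 * rng.standard_normal(w.size)

# Coarse grid search for the least-squares (Vmax, K) pair.
vmax_grid = np.linspace(5, 20, 301)
k_grid = np.linspace(0.01, 0.5, 301)
V, K = np.meshgrid(vmax_grid, k_grid)
resid = y[:, None, None] - V * w[:, None, None] / (K + w[:, None, None])
sse = (resid ** 2).sum(axis=0)                # sum of squared errors per (Vmax, K)
i, j = np.unravel_index(sse.argmin(), sse.shape)

print(V[i, j], K[i, j])   # close to the true values 12.0 and 0.15
```

    Once such parameters are extracted per site, they can be regressed against 30-year mean annual precipitation, which is the relationship the study quantifies.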

  11. Isoseismal map of the 2015 Nepal earthquake and its relationships with ground-motion parameters, distance and magnitude

    NASA Astrophysics Data System (ADS)

    Prajapati, Sanjay K.; Dadhich, Harendra K.; Chopra, Sumer

    2017-01-01

    A devastating earthquake of Mw 7.8 struck central Nepal on 25 April 2015 (6:11:25 UT), causing more than 9000 deaths and destroying millions of houses. Buildings, roads, and electrical installations worth 25-30 billion dollars were reduced to rubble. The earthquake was widely felt in the northern parts of India, and moderate damage was observed in the northern parts of Uttar Pradesh and the Bihar region of India. A maximum intensity of IX, according to the USGS report, was observed in the meizoseismal zone surrounding the Kathmandu region. In the present study, we have compiled available information from the print and electronic media and various reports of damage and other effects caused by the event, and interpreted them to obtain Modified Mercalli Intensities (MMI) at over 175 locations spread over Nepal and the surrounding Indian and Tibetan regions. We have also obtained a number of strong-motion recordings from the Indian and Nepalese seismic networks and developed empirical relationships between MMI and peak ground acceleration (PGA) and peak ground velocity (PGV). We used a least-squares regression technique to derive the empirical relations between MMI and the ground-motion parameters and compared them with empirical relationships available for other regions of the world. Further, seismic intensity information available for historical earthquakes in the Nepal Himalaya, along with the present intensity data, has been utilized to develop an attenuation relationship for the studied region using two-step regression analyses. The derived attenuation relationship is useful for assessing the damage of a potential future large earthquake in the region (for earthquake scenario-based planning purposes).
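
    The MMI-PGA regression step can be sketched with ordinary least squares on synthetic data. The linear-in-log10(PGA) form is a common choice for such relations, but the coefficients below are invented for illustration and are not those derived in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
pga = 10 ** rng.uniform(0.5, 3.0, 100)        # synthetic PGA values (cm/s^2)
mmi = 1.5 + 2.1 * np.log10(pga) + 0.3 * rng.standard_normal(100)

# Least-squares fit of MMI = c1 + c2 * log10(PGA).
A = np.column_stack([np.ones(pga.size), np.log10(pga)])
c1, c2 = np.linalg.lstsq(A, mmi, rcond=None)[0]

print(c1, c2)   # near the generating coefficients 1.5 and 2.1
```

    Inverting such a relation lets observed intensities stand in for instrumental ground motion where no recordings exist, which is what makes the historical intensity data usable for the attenuation analysis.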

  12. The Virtual Quake Earthquake Simulator: Earthquake Probability Statistics for the El Mayor-Cucapah Region and Evidence of Predictability in Simulated Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.

    2015-12-01

    We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric similar to those presented in Keilis-Borok (2002) and Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation- versus quiescence-type earthquake triggering. We show that VQ exhibits both behaviors separately for independent fault sections; some fault sections exhibit activation-type triggering, while others are better characterized by quiescence-type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

  13. Fluid-faulting evolution in high definition: Connecting fault structure and frequency-magnitude variations during the 2014 Long Valley Caldera, California, earthquake swarm

    NASA Astrophysics Data System (ADS)

    Shelly, David R.; Ellsworth, William L.; Hill, David P.

    2016-03-01

    An extended earthquake swarm occurred beneath southeastern Long Valley Caldera between May and November 2014, culminating in three magnitude 3.5 earthquakes and 1145 cataloged events on 26 September alone. The swarm produced the most prolific seismicity in the caldera since a major unrest episode in 1997-1998. To gain insight into the physics controlling swarm evolution, we used large-scale cross correlation between waveforms of cataloged earthquakes and continuous data, producing precise locations for 8494 events, more than 2.5 times the routine catalog. We also estimated magnitudes for 18,634 events (~5.5 times the routine catalog), using a principal component fit to measure waveform amplitudes relative to cataloged events. This expanded and relocated catalog reveals multiple episodes of pronounced hypocenter expansion and migration on a collection of neighboring faults. Given the rapid migration and alignment of hypocenters on narrow faults, we infer that activity was initiated and sustained by an evolving fluid pressure transient with a low-viscosity fluid, likely composed primarily of water and CO2 exsolved from underlying magma. Although both updip and downdip migration were observed within the swarm, downdip activity ceased shortly after activation, while updip activity persisted for weeks at moderate levels. Strongly migrating, single-fault episodes within the larger swarm exhibited a higher proportion of larger earthquakes (lower Gutenberg-Richter b value), which may have been facilitated by fluid pressure confined in two dimensions within the fault zone. In contrast, the later swarm activity occurred on an increasingly diffuse collection of smaller faults, with a much higher b value.
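
    The b-value comparison that distinguishes the strongly migrating single-fault episodes (lower b, relatively more large events) from the later diffuse activity (higher b) rests on estimating b from a set of magnitudes. A standard estimator is Aki's (1965) maximum-likelihood formula, sketched here on synthetic catalogs (the completeness magnitude and catalog sizes are invented; this is not the study's catalog):

```python
import numpy as np

# Aki (1965) maximum-likelihood b-value: b = log10(e) / (mean(M) - Mc).
def b_value(mags, mc):
    return np.log10(np.e) / (mags.mean() - mc)

mc = 0.5                      # assumed completeness magnitude
rng = np.random.default_rng(3)

# Gutenberg-Richter magnitudes above Mc are exponential with scale log10(e)/b;
# generate one catalog with b = 1.0 and one with b = 0.7 (more large events):
for b_true in (1.0, 0.7):
    mags = mc + rng.exponential(np.log10(np.e) / b_true, 5000)
    print(round(float(b_value(mags, mc)), 2))   # recovers b_true closely
```

    Applied per episode, such estimates quantify the contrast between confined fluid-pressurized faults and the diffuse late-swarm activity described above.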

  14. Fluid-faulting evolution in high definition: Connecting fault structure and frequency-magnitude variations during the 2014 Long Valley Caldera, California earthquake swarm

    USGS Publications Warehouse

    Shelly, David R.; Ellsworth, William L.; Hill, David P.

    2016-01-01

    An extended earthquake swarm occurred beneath southeastern Long Valley Caldera between May and November 2014, culminating in three magnitude 3.5 earthquakes and 1145 cataloged events on 26 September alone. The swarm produced the most prolific seismicity in the caldera since a major unrest episode in 1997-1998. To gain insight into the physics controlling swarm evolution, we used large-scale cross-correlation between waveforms of cataloged earthquakes and continuous data, producing precise locations for 8494 events, more than 2.5 times the routine catalog. We also estimated magnitudes for 18,634 events (~5.5 times the routine catalog), using a principal component fit to measure waveform amplitudes relative to cataloged events. This expanded and relocated catalog reveals multiple episodes of pronounced hypocenter expansion and migration on a collection of neighboring faults. Given the rapid migration and alignment of hypocenters on narrow faults, we infer that activity was initiated and sustained by an evolving fluid pressure transient with a low-viscosity fluid, likely composed primarily of water and CO2 exsolved from underlying magma. Although both updip and downdip migration were observed within the swarm, downdip activity ceased shortly after activation, while updip activity persisted for weeks at moderate levels. Strongly migrating, single-fault episodes within the larger swarm exhibited a higher proportion of larger earthquakes (lower Gutenberg-Richter b value), which may have been facilitated by fluid pressure confined in two dimensions within the fault zone. In contrast, the later swarm activity occurred on an increasingly diffuse collection of smaller faults, with a much higher b value.

  15. Ground motion prediction and earthquake scenarios in Italy: a methodological comparison and perspectives of applicability

    NASA Astrophysics Data System (ADS)

    GNDT Group,; Cocco, M.

    2001-12-01

    In this study we report the results of a research project aimed at developing and comparing different methodologies for seismic hazard evaluation in earthquake-prone areas of the central and southern Apennines (Italy). The project, supported by GNDT-INGV, concerns the design of ground-shaking scenarios based on the identification of the position, geometry, and rupture mechanism of active faults and of the crustal velocity structure. Different numerical approaches have been applied to simulate the ground velocity and acceleration observed at the earth's surface during moderate and strong earthquakes, including complex source and/or path effects. We compare the simulated records obtained using purely stochastic methods and hybrid methods, in which a stochastic component is added to the deterministic, low-frequency one. We also adopt purely deterministic methods (such as pseudo-spectral approaches) to evaluate the Green's function in complex media with simple sources. This approach is relevant for the Apenninic seismic belt, which has been struck by large-magnitude historical events but for which no strong-motion data are available. In these areas the prediction of ground shaking during large earthquakes by means of synthetic seismograms can be a useful tool for assessing seismic hazard. The proposed methodologies will be tested and calibrated in "training areas", where adequate knowledge of seismic sources and crustal structure, as well as instrumental strong- and weak-motion data, are available. The selected training area is the Colfiorito region (Umbria-Marche), where the 1997-98 seismic sequence (Mw <= 6) took place and an extended seismic database is available. A systematic and accurate comparison between the ground-motion time histories simulated by the different approaches, the fit to the observed waveforms (including weak motions), and the comparison between characteristic ground-motion values (peak values, durations, frequency bandwidth, spectral values

  16. Determination of focal mechanisms of intermediate-magnitude earthquakes in Mexico, based on Greens functions calculated for a 3D Earth model

    NASA Astrophysics Data System (ADS)

    Rodrigo Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala

    2015-04-01

    One important ingredient in the study of the complex active tectonics in Mexico is the analysis of earthquake focal mechanisms, or the seismic moment tensor. They can be determined through the calculation of Green's functions and subsequent inversion for moment-tensor parameters. However, this calculation becomes progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these earthquakes, using 1D velocity models to compute the Green's functions works well. The opposite occurs for smaller and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle, requiring more specific regional 3D models. In this study, we calculate Green's functions for earthquakes in Mexico using a laterally heterogeneous seismic wave-speed model, comprised of the mantle model S362ANI (Kustowski et al., 2008) and the crustal model CRUST 2.0 (Bassin et al., 1990). Subsequently, we invert the observed seismograms for the seismic moment tensor using a method developed by Liu et al. (2004) and implemented by Óscar de La Vega (2014) for earthquakes in Mexico. By following a brute-force approach, in which we include all observed Rayleigh and Love waves of the Mexican National Seismic Network (Servicio Sismológico Nacional, SSN), we obtain reliable focal mechanisms for events that excite a considerable amount of low-frequency waves (Mw > 4.8). However, we are not able to consistently estimate focal mechanisms for smaller events using this method, due to high noise levels in many of the records. Excluding the noisy records, or noisy parts of the records, requires interactive editing of the data with an efficient tool. Therefore, we developed a graphical user interface (GUI), based on Python and the Python library ObsPy, that allows the editing of observed and
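For a point source, the observed seismograms are linear in the six moment-tensor components, d = G m, so the inversion step described above reduces to least squares once the Green's functions are computed. A toy sketch with a random, purely illustrative G (the real matrix would hold 3D synthetic seismograms; the names and dimensions here are assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: 6 independent moment-tensor components (Mxx, Myy, Mzz,
# Mxy, Mxz, Myz) and 200 waveform samples stacked from all stations.
n_samples, n_mt = 200, 6

# Hypothetical Green's-function matrix: column j would be the synthetic
# seismogram for a unit moment in component j (here just random values).
G = rng.standard_normal((n_samples, n_mt))

# A "true" source and noisy observed data.
m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.1, -0.2])
d = G @ m_true + 0.01 * rng.standard_normal(n_samples)

# Least-squares inversion for the moment tensor.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))  # recovers m_true to within the noise
```

The noise-sensitivity the abstract reports for small events corresponds to the noise term dominating d when the low-frequency signal is weak.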

  17. Response facilitation: implications for perceptual theory, psychotherapy, neurophysiology, and earthquake prediction.

    PubMed

    Medici, R G; Frey, A H; Frey, D

    1985-04-01

    There have been numerous naturalistic observations and anecdotal reports of abnormal animal behavior prior to earthquakes. Basic physiological and behavioral data have been brought together with geophysical data to develop a specific explanation to account for how animals could perceive and respond to precursors of impending earthquakes. The behavior predicted provides a reasonable approximation to the reported abnormal behaviors; that is, the behavior appears to be partly reflexive and partly operant. It can best be described as agitated stereotypic behavior. The explanation formulated has substantial implications for perceptual theory, psychotherapy, and neurophysiology, as well as for earthquake prediction. Testable predictions for biology, psychology, and geophysics can be derived from the explanation.

  18. A Test of a Strong Ground Motion Prediction Methodology for the 7 September 1999, Mw=6.0 Athens Earthquake

    SciTech Connect

    Hutchings, L; Ioannidou, E; Voulgaris, N; Kalogeras, I; Savy, J; Foxall, W; Stavrakakis, G

    2004-08-06

    We test a methodology to predict the range of ground-motion hazard for a fixed-magnitude earthquake along a specific fault or within a specific source volume, and we demonstrate how to incorporate this into probabilistic seismic hazard analyses (PSHA). We modeled ground motion with empirical Green's functions. To test our methodology with the 7 September 1999, Mw=6.0 Athens earthquake, we: (1) developed constraints on rupture parameters based on prior knowledge of earthquake rupture processes and sources in the region; (2) generated impulsive point shear-source empirical Green's functions by deconvolving out the source contribution of M < 4.0 aftershocks; (3) used aftershocks that occurred throughout the area and not necessarily along the fault to be modeled; (4) ran a sufficient number of scenario earthquakes to span the full variability of ground motion possible; (5) found that our distribution of synthesized ground motions spans what actually occurred and that their distribution is realistically narrow; (6) determined that one of our source models generates records that match observed time histories well; (7) found that certain combinations of rupture parameters produced "extreme" ground motions at some stations; (8) identified that the "best fitting" rupture models occurred in the vicinity of 38.05° N 23.60° W with center of rupture near 12 km and near-unilateral rupture towards the areas of high damage, consistent with independent investigations; and (9) synthesized strong-motion records in high-damage areas for which records from the earthquake were not recorded. We then developed a demonstration PSHA for a source region near Athens utilizing synthesized ground motion rather than traditional attenuation relations. We synthesized 500 earthquakes distributed throughout the source zone likely to have Mw=6.0 earthquakes near Athens. We assumed an average return period of 1000 years for this magnitude of earthquake in the particular source zone
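Turning a suite of scenario simulations with an assumed return period into a hazard estimate can be sketched as follows: treat each scenario as an equally likely realization of the characteristic event, so the annual rate of exceeding a ground-motion level is the exceedance fraction divided by the return period, and a Poisson model converts rate to probability. The PGA values below are simulated placeholders, not the Athens results:

```python
import math
import random

random.seed(1)

# Illustrative: 500 scenario peak ground accelerations (g), lognormal.
pgas = [math.exp(random.gauss(math.log(0.2), 0.5)) for _ in range(500)]

RETURN_PERIOD = 1000.0  # assumed recurrence of the Mw 6.0 event (years)

def annual_exceedance_rate(pgas, threshold, return_period=RETURN_PERIOD):
    """Rate of exceeding `threshold`, treating each scenario as an
    equally likely realization of the characteristic earthquake."""
    frac = sum(1 for a in pgas if a > threshold) / len(pgas)
    return frac / return_period

def prob_exceedance(rate, t_years):
    """Poisson probability of at least one exceedance in t_years."""
    return 1.0 - math.exp(-rate * t_years)

rate = annual_exceedance_rate(pgas, threshold=0.3)
p50 = prob_exceedance(rate, 50.0)
print(round(p50, 3))  # 50-year exceedance probability at 0.3 g
```

Repeating this over a grid of thresholds yields the hazard curve that replaces the traditional attenuation-relation integral.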

  19. Repeated large-magnitude earthquakes in a tectonically active, low-strain continental interior: The northern Tien Shan, Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Landgraf, A.; Dzhumabaeva, A.; Abdrakhmatov, K. E.; Strecker, M. R.; Macaulay, E. A.; Arrowsmith, Jr.; Sudhaus, H.; Preusser, F.; Rugel, G.; Merchel, S.

    2016-05-01

    The northern Tien Shan of Kyrgyzstan and Kazakhstan was affected by a series of major earthquakes in the late 19th and early 20th centuries. To assess the significance of such a pulse of strain release in a continental interior, it is important to analyze and quantify strain release over multiple time scales. We have undertaken paleoseismological investigations at two geomorphically distinct sites (Panfilovkoe and Rot Front) near the Kyrgyz capital Bishkek. Although located near the historic epicenters, neither site was affected by these earthquakes. Trenching was accompanied by dating of stratigraphy and offset surfaces using luminescence, radiocarbon, and 10Be terrestrial cosmogenic nuclide methods. At Rot Front, trenching of a small scarp did not reveal evidence for surface rupture during the last 5000 years; the scarp rather resembles an extensive debris-flow lobe. At Panfilovkoe, we estimate a Late Pleistocene minimum slip rate of 0.2 ± 0.1 mm/a, averaged over at least two, probably three, earthquake cycles. Dip-slip reverse motion along segmented, moderately steep faults resulted in hanging-wall collapse scarps during different events. The most recent earthquake occurred around 3.6 ± 1.3 kyr ago (1σ), with dip-slip offsets between 1.2 and 1.4 m. We calculate a probabilistic paleomagnitude between 6.7 and 7.2, in agreement with regional data from the Kyrgyz range. The morphotectonic signals in the northern Tien Shan are a prime example of deformation in a tectonically active intracontinental mountain belt and as such can help in understanding the longer-term coevolution of topography and seismogenic processes in similar structural settings worldwide.
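Paleomagnitudes like the 6.7-7.2 range quoted above are typically derived from measured offsets via empirical scaling relations; relations of the Wells and Coppersmith (1994) type link magnitude to average displacement roughly as M = a + b·log10(AD). A sketch with coefficients of that form (values quoted from memory for illustration, not necessarily those used in the study):

```python
import math

def magnitude_from_avg_displacement(ad_m, a=6.93, b=0.82):
    """Approximate moment magnitude from average surface displacement
    (metres), using a Wells-and-Coppersmith-style regression
    M = a + b*log10(AD). Coefficients are illustrative assumptions."""
    return a + b * math.log10(ad_m)

# The trench offsets reported above, 1.2-1.4 m:
for ad in (1.2, 1.4):
    print(ad, round(magnitude_from_avg_displacement(ad), 2))
```

With these coefficients the 1.2-1.4 m offsets map to magnitudes near 7, inside the probabilistic range the authors report.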

  20. Earthquake mechanism and predictability shown by a laboratory fault

    USGS Publications Warehouse

    King, C.-Y.

    1994-01-01

    Slip events generated in a laboratory fault model, consisting of a circular chain of eight spring-connected blocks of approximately equal weight elastically driven to slide on a frictional surface, are studied. It is found that most of the input strain energy is released by relatively few large events, which are approximately time-predictable. A large event tends to roughen the stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.
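The stick-slip mechanics of such a spring-block chain can be caricatured with a quasi-static cellular model in the Burridge-Knopoff spirit: load every block uniformly, and when a block's force exceeds the static threshold it drops its stress and passes a fraction to its neighbors, possibly triggering a cascade (one "event"). All parameters below are illustrative, and the chain is linear rather than circular:

```python
import random

random.seed(7)

N = 8            # blocks in the (here linear) chain
F_STATIC = 1.0   # failure threshold
ALPHA = 0.25     # fraction of a block's stress drop passed to each neighbor
LOAD_STEP = 0.001

force = [random.uniform(0.0, F_STATIC) for _ in range(N)]
event_sizes = []

for _ in range(200000):
    # Uniform tectonic loading.
    force = [f + LOAD_STEP for f in force]
    # Cascade of slips: a failing block drops to zero and loads its
    # neighbors, possibly triggering them; the cascade is one "event".
    size = 0
    unstable = [i for i in range(N) if force[i] >= F_STATIC]
    while unstable:
        i = unstable.pop()
        if force[i] < F_STATIC:
            continue
        drop = force[i]
        force[i] = 0.0
        size += 1
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                force[j] += ALPHA * drop
                if force[j] >= F_STATIC:
                    unstable.append(j)
    if size:
        event_sizes.append(size)

# Distribution of cascade sizes (cf. the Gutenberg-Richter-like
# statistics reported for the laboratory model above).
print(max(event_sizes), len(event_sizes))
```

Because each slip dissipates half its stress drop (2 × ALPHA = 0.5 is transferred), cascades always terminate, mimicking the finite energy-storage capacity noted in the abstract.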

  1. Turning the rumor of May 11, 2011 earthquake prediction In Rome, Italy, into an information day on earthquake hazard

    NASA Astrophysics Data System (ADS)

    Amato, A.; Cultrera, G.; Margheriti, L.; Nostro, C.; Selvaggi, G.; INGVterremoti Team

    2011-12-01

    A devastating earthquake had been predicted for May 11, 2011 in Rome. This prediction was never released officially by anyone, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions. Indeed, around May 11, 2011, a planetary alignment was actually expected, and this lent credibility to the earthquake prediction among the public. During the previous months, INGV was overwhelmed with requests for information about this supposed prediction from Roman inhabitants and tourists. Given the considerable media impact of this expected earthquake, INGV decided to organize an Open Day at its headquarters in Rome for people who wanted to learn more about Italian seismicity and earthquakes as natural phenomena. The Open Day was preceded by a press conference two days before, in which we discussed the prediction, presented the Open Day, and held a scientific discussion with journalists about earthquake prediction and, more generally, the real problem of seismic risk in Italy. About 40 journalists from newspapers, local and national TV stations, press agencies, and web news outlets attended the press conference, and hundreds of articles appeared in the following days, advertising the 11 May Open Day. INGV opened to the public all day long (9 am - 9 pm) with the following program: i) meetings with INGV researchers to discuss scientific issues; ii) visits to the seismic monitoring room, open 24/7 all year; iii) guided tours through interactive exhibitions on earthquakes and the Earth's deep structure; iv) lectures on general topics, from the social impact of rumors to seismic risk reduction; v) 13 new videos on the channel YouTube.com/INGVterremoti to explain the earthquake process and give updates on various aspects of seismic monitoring in Italy; vi) distribution of books and brochures. Surprisingly, more than 3000 visitors came to visit INGV

  2. On the Possibility of Using Deep-Well Geo-observatories for Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Esipko, O. A.; Rosaev, A. E.

    The problem of earthquake prediction is of significant interest, and both internal and external factors need to be taken into account. Several publications attempt to correlate the timing of seismic events with tides and argue for the feasibility of earthquake prediction based on observations of geophysical fields. According to our study of the earthquake catalogue, the significant earthquake closest in time before the Spitak event (07.12.1988) occurred in the Caucasus on 23.09.1988, accompanied by an Afghanistan earthquake on 25.09.1988; after Spitak, an earthquake occurred in Tajikistan on 22.01.1989. All of these events took place at approximately the same phase of the monthly tide. On the other hand, measurements in geo-observatories based on deep wells show strong correlations between variations of some geophysical fields and cosmic factors. We studied thermal-field variations in the Tyrnyaus deep well (North Caucasus) before and after the Spitak earthquake. Changes in the thermal field that may be related to the catastrophic event were detected: a comparison of the corresponding isotherms shows that the mean thermal gradient decreased noticeably just before the earthquake. Developing the monitoring of geothermal field variations, understanding their nature, and devising methods to account for seasonal gravitational and electromagnetic variations when detecting seismic variations would bring us closer to solving the forecast problem. The main conclusions are: 1) tidal forces were an important factor in the generation of the catastrophic Spitak earthquake; 2) monitoring geophysical field variations in well-based geo-observatories in seismically active regions may allow us to understand how physical parameters change before an earthquake, providing a basis for developing a method of earthquake prediction.

  3. Predicted Liquefaction in the Greater Oakland and Northern Santa Clara Valley Areas for a Repeat of the 1868 Hayward Earthquake

    NASA Astrophysics Data System (ADS)

    Holzer, T. L.; Noce, T. E.; Bennett, M. J.

    2008-12-01

    Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by latest Holocene alluvial fan levee deposits where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906. The liquefaction scenario maps were created with ArcGIS ModelBuilder. Peak ground accelerations first were computed with the new Boore and Atkinson NGA attenuation relation (2008, Earthquake Spectra, 24:1, p. 99-138), using VS30 to account for local site response. Spatial liquefaction probabilities were then estimated using the predicted ground motions
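Probability-of-liquefaction curves of the kind underlying such scenario maps are commonly logistic in the predicted shaking level, with separate coefficients per susceptibility class. A hypothetical sketch (the function name and coefficients are invented for illustration and are not those used in the study):

```python
import math

def liquefaction_probability(pga_g, beta0=-3.5, beta1=10.0):
    """Hypothetical logistic model: probability of a surface
    manifestation of liquefaction as a function of peak ground
    acceleration (g) for one susceptibility class. The coefficients
    are illustrative assumptions, not values from the study."""
    z = beta0 + beta1 * pga_g
    return 1.0 / (1.0 + math.exp(-z))

# Probability rises with shaking level, which is why the M7.0
# scenario yields higher map probabilities than the M6.7 one.
for pga in (0.1, 0.2, 0.3, 0.4):
    print(pga, round(liquefaction_probability(pga), 2))
```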

  4. Estimation of the Demand for Hospital Care After a Possible High-Magnitude Earthquake in the City of Lima, Peru.

    PubMed

    Bambarén, Celso; Uyen, Angela; Rodriguez, Miguel

    2017-02-01

    Introduction: A model prepared by National Civil Defense (INDECI; Lima, Peru) estimated that an earthquake with a magnitude of 8.0 Mw off the central coast of Peru would result in 51,019 deaths and 686,105 injured in districts of Metropolitan Lima and Callao. Using this information as a base, a study was designed to determine the characteristics of the demand for treatment in public hospitals and to estimate gaps in care in the hours immediately after such an event.

  5. Stress Concentration Phenomenon Before the 2011 M9.0 Tohoku-Oki Earthquake: its Implication for Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Y. Q.; Xie, F. R.

    2014-12-01

    …seismicity pattern, may provide some valuable information on the stages of stress accumulation, and thus may be used for the estimation of earthquake risk. KEYWORDS: Source Region, Stress Change, 2011 Tohoku Earthquake, Earthquake Prediction.

  6. A Survey Study of Significant Achievements Accomplished by "Non-mainstream" Seismologists in Earthquake Monitoring and Prediction Science in China Since 1970

    NASA Astrophysics Data System (ADS)

    Chen, I. W.

    Since 1990, the author, a British and Chinese consultant, has studied and followed the significant achievements accomplished by "non-mainstream" seismologists in earthquake prediction in China since 1970. The scientific systems used include: (1) Astronomy-seismology: the relationship between special positions of certain planets (especially the moon and another planet) relative to seismically active areas on the earth and the occurrence times of major damaging earthquakes in these areas; the relationship between the dates of magnetic storms on the earth caused by solar flares and the occurrence dates of major damaging earthquakes; as well as certain cyclic relationships among the occurrence dates of major historical earthquakes in related areas. (2) Precursor analysis: with self-developed sensors and instruments, different from conventional seismological instruments, numerous precursors, abnormality signs, and imminent earthquake signals were recorded. In most cases, these precursors cannot be detected by conventional seismological sensors/instruments. Through exploratory practice and theoretical studies, various relationships between different characteristics of the precursors and the occurrence time, epicenter location, and magnitude of the developing earthquake were identified and can be calculated. Through approaches quite different from conventional methods, successful predictions of quite a large number of earthquakes have been achieved, including earthquakes that occurred in mainland China, Taiwan, and Japan. (3) Affirmative confirmation of earthquake imminence: with a special instrument, the background imminent state of earthquakes can be identified, and a universal imminent-earthquake signal is further identified. It can be used to confirm whether an earlier predicted earthquake is entering its imminent state, whether it will definitely occur, or whether an earlier prediction can be released.
(4) 5km, 7km and

  7. Relief Inversion in the Avrona Playa as Evidence of Large-Magnitude Historical Earthquakes, Southern Arava Valley, Dead Sea Rift

    NASA Astrophysics Data System (ADS)

    Amit, Rivka; Zilberman, Ezra; Porat, Naomi; Enzel, Yehouda

    1999-07-01

    The Arava Valley section of the Dead Sea Transform (DST) in southern Israel is characterized by the absence of seismic activity in recent times. However, paleoseismic analysis of sediments in the Avrona Playa, a pull-apart basin along the DST, reveals that at least six M > 6 tectonic events have affected the Avrona playa in the last 14,000 yr. The recurrence interval of the events is approximately 2000 yr. Trenched normal faults and push-up ridges in the playa show that the upper 2 m of the deformed sedimentary sequence consists of playa deposits with uniform soil development. The deformed sediments and the soil are typical of basins with an endoreic fluvial system. Based on the limiting age of the sequence and the extent of soil development, faulting in the playa, followed by compression and uplift, occurred in the last 1000 yr. This most recent tectonic event displaced the surface by at least 1 m, consistent with a M > 6.5 earthquake. This earthquake changed the morphology of the Avrona Playa from a closed system with internal drainage to an open basin, resulting in relief inversion. The seismic quiescence in the Arava may indicate a seismic gap in this segment of the DST.

  8. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  9. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  10. Selecting optimum groundwater monitoring stations for earthquake observation and prediction

    NASA Astrophysics Data System (ADS)

    Lee, H.; Woo, N. C.

    2011-12-01

    In Korea, the National Groundwater Monitoring Network (NGMN), consisting of a total of 327 stations around the country to date, has been established and operated since 1995 to monitor background groundwater levels and quality. In some of the monitoring wells, we identified abnormal changes in groundwater due to earthquakes. This project was then initiated with the following objectives: a) to identify and characterize groundwater changes due to earthquakes in the NGMN wells, and b) to suggest groundwater monitoring wells that could serve as supplementary monitoring stations for the present seismic network. To accomplish these objectives, we need to identify each well's history of response to previous earthquakes and its hydrogeological setting. Groundwater responses to earthquake events are characterized by the direction of water-level movement (rise/fall), the amount of absolute change, and the time needed to recover to the previous level. The distribution of responding wells is then analyzed by location with GIS tools. Finally, statistical analyses are performed to identify the optimum monitoring stations, considering the geological features and hydrogeological settings of the stations and the earthquake epicenters. In this presentation, we report the results to date of this program.

  11. Positive feedback, memory, and the predictability of earthquakes

    PubMed Central

    Sammis, C. G.; Sornette, D.

    2002-01-01

    We review the “critical point” concept for large earthquakes and enlarge it in the framework of so-called “finite-time singularities.” The singular behavior associated with accelerated seismic release is shown to result from a positive feedback of the seismic activity on its release rate. The most important mechanisms for such positive feedback are presented. We solve analytically a simple model of geometrical positive feedback in which the stress shadow cast by the last large earthquake is progressively fragmented by the increasing tectonic stress. PMID:11875202
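Accelerated seismic release approaching a finite-time singularity is commonly fit with a time-to-failure law of the form s(t) = A - B(tc - t)^m with 0 < m < 1 (Bufe-Varnes style). A sketch that recovers the parameters from noiseless synthetic data by grid-searching the nonlinear pair (tc, m) and solving the linear pair (A, B) by least squares:

```python
# Time-to-failure model for accelerating seismic release:
#   s(t) = A - B * (tc - t)**m,   0 < m < 1
A_TRUE, B_TRUE, TC_TRUE, M_TRUE = 10.0, 2.0, 100.0, 0.3

ts = [float(i) for i in range(90)]
obs = [A_TRUE - B_TRUE * (TC_TRUE - t) ** M_TRUE for t in ts]

def fit_linear(ts, obs, tc, m):
    """Given tc and m, A and B follow from linear least squares
    on y = A - B*x with x = (tc - t)**m."""
    xs = [(tc - t) ** m for t in ts]
    n = len(ts)
    sx, sy = sum(xs), sum(obs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, obs))
    B = -(n * sxy - sx * sy) / (n * sxx - sx * sx)
    A = (sy + B * sx) / n
    resid = sum((A - B * x - y) ** 2 for x, y in zip(xs, obs))
    return A, B, resid

# Grid search over the nonlinear parameters (tc, m); keep the best fit.
best = min(
    ((tc, m) + fit_linear(ts, obs, tc, m)
     for tc in range(91, 131) for m in [k / 20 for k in range(1, 20)]),
    key=lambda r: r[-1],
)
tc_hat, m_hat = best[0], best[1]
print(tc_hat, m_hat)  # recovers tc=100, m=0.3 on this noiseless grid
```

On real catalogs the estimate of tc (the predicted failure time) is notoriously sensitive to noise and catalog window, which is part of why the approach reviewed above remains a research topic rather than an operational tool.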

  12. Seismicity as a guide to global tectonics and earthquake prediction.

    NASA Technical Reports Server (NTRS)

    Sykes, L. R.

    1972-01-01

    From seismicity studies, evidence is presented for several aspects of plate-tectonic theory, including ideas of sea-floor spreading, transform faulting and underthrusting of the lithosphere in island arcs. Recent advances in seismic instrumentation, the use of computers in earthquake location, and the installation of local networks of instruments are shown to have vastly increased the data available for seismicity studies. It is pointed out that most of the world's earthquakes are located in very narrow zones along active plate margins and are intimately related to global processes in an extremely coherent manner. Important areas of uncertainty calling for further studies are also pointed out.

  13. Unusual Animal Behavior Preceding the 2011 Earthquake off the Pacific Coast of Tohoku, Japan: A Way to Predict the Approach of Large Earthquakes

    PubMed Central

    Yamauchi, Hiroyuki; Uchiyama, Hidehiko; Ohtani, Nobuyo; Ohta, Mitsuaki

    2014-01-01

    Simple Summary: Large earthquakes (EQs) cause severe damage to property and people. They occur abruptly, and it is difficult to predict their time, location, and magnitude. However, there are reports of abnormal changes occurring in various natural systems prior to EQs, among which unusual animal behaviors (UABs) are important phenomena. These UABs could be useful for predicting EQs, although their reliability remains uncertain. We report on changes in particular animal species preceding a large EQ to improve research on predicting EQs. Abstract: Unusual animal behaviors (UABs) have been observed before large earthquakes (EQs); however, their mechanisms are unclear. While information on UABs has been gathered after many EQs, few studies have focused on the proportion of UABs that emerged or on specific behaviors prior to EQs. On 11 March 2011, an EQ (Mw 9.0) occurred in Japan that left about twenty thousand people dead or missing. We surveyed UABs of pets preceding this EQ using a questionnaire. Additionally, we explored whether dairy cow milk yields varied before this EQ in particular locations. In the results, 236 of 1,259 dog owners and 115 of 703 cat owners observed UABs in their pets, with restless behavior being the most prominent change in both species. Most UABs occurred within one day of the EQ. The UABs showed a precursory relationship with epicentral distance. Interestingly, cow milk yields in a milking facility within 340 km of the epicenter decreased significantly about one week before the EQ, whereas cows in facilities farther away showed no significant decreases. Since both the pets' behavior and the dairy cows' milk yields were affected prior to the EQ, with careful observation they could contribute to EQ predictions. PMID:26480033

  14. The Parkfield earthquake prediction of October 1992; the emergency services response

    USGS Publications Warehouse

    Andrews, R.

    1992-01-01

    The science of earthquake prediction is interesting and worthy of support. In many respects the ultimate payoff of earthquake prediction or earthquake forecasting is how the information can be used to enhance public safety and public preparedness. This is a particularly important issue here in California where we have such a high level of seismic risk historically, and currently, as a consequence of activity in 1989 in the San Francisco Bay Area, in Humboldt County in April of this year (1992), and in southern California in the Landers-Big Bear area in late June of this year (1992). We are currently very concerned about the possibility of a major earthquake, one or more, happening close to one of our metropolitan areas. Within that context, the Parkfield experiment becomes very important. 

  15. Confirmed prediction of the 2 August 2007 MW 6.2 Nevelsk earthquake (Sakhalin Island, Russia)

    NASA Astrophysics Data System (ADS)

    Tikhonov, Ivan N.; Kim, Chun U.

    2010-04-01

    This paper presents the case history of an earthquake prediction that was prepared by seismologists at the Far East Branch of the Russian Academy of Sciences and later submitted to the Russian Ministry of Emergency Situations. The prediction, described here briefly, was confirmed by the occurrence of the 2 August 2007 MW 6.2 Nevelsk earthquake. The first symptoms of the large seismic event were recognized as early as 1997, within the so-called "seismic gap of the first kind" (Mogi, 1985). This is an area where large earthquakes are possible but have been absent for at least 100 years, as outlined from historical data along the western coasts of the Sakhalin, Hokkaido and Honshu Islands. The symptoms were related to the incipient "seismic gap of the second kind," where shallow earthquakes with M ≥ 3.0 had disappeared. In December 2005, a long-term prediction of an earthquake with MS = 6.6 ± 0.6 was made, after a "seismic gap of the second kind" (Mogi, 1985) had become evident since the middle of 2003 in an area of 90 by 60 km. This prediction was to a large extent made possible by the local autonomous digital seismic network set up in 2001 in the southern region of Sakhalin Island. The prediction was accompanied by various anomalous phenomena in advance of the actual predicted event of 2 August 2007 that shook the city of Nevelsk.

  16. From Earthquake Prediction Research to Time-Variable Seismic Hazard Assessment Applications

    NASA Astrophysics Data System (ADS)

    Bormann, Peter

    2011-01-01

    The first part of the paper defines the terms and classifications common in earthquake prediction research and applications. This is followed by short reviews of major earthquake prediction programs initiated since World War II in several countries, for example the former USSR, China, Japan, the United States, and several European countries. It outlines the underlying expectations, concepts, and hypotheses, introduces the technologies and methodologies applied and some of the results obtained, which include both partial successes and failures. Emphasis is laid on discussing the scientific reasons why earthquake prediction research is so difficult and demanding and why the prospects are still so vague, at least as far as short-term and imminent predictions are concerned. However, classical probabilistic seismic hazard assessments, widely applied during the last few decades, have also clearly revealed their limitations. In their simple form, they are time-independent earthquake rupture forecasts based on the assumption of stable long-term recurrence of earthquakes in the seismotectonic areas under consideration. Therefore, during the last decade, earthquake prediction research and pilot applications have focused mainly on the development and rigorous testing of long and medium-term rupture forecast models in which event probabilities are conditioned by the occurrence of previous earthquakes, and on their integration into neo-deterministic approaches for improved time-variable seismic hazard assessment. The latter uses stress-renewal models that are calibrated for variations in the earthquake cycle as assessed on the basis of historical, paleoseismic, and other data, often complemented by multi-scale seismicity models, the use of pattern-recognition algorithms, and site-dependent strong-motion scenario modeling. International partnerships and a global infrastructure for comparative testing have recently been developed, for example the Collaboratory for the Study of

  17. Tsunami hazard warning and risk prediction based on inaccurate earthquake source parameters

    NASA Astrophysics Data System (ADS)

    Goda, K.; Abilova, K.

    2015-12-01

    This study investigates the issues related to underestimation of the earthquake source parameters in the context of tsunami early warning and tsunami risk assessment. The magnitude of a very large event may be underestimated significantly during the early stage of the disaster, resulting in the issuance of incorrect tsunami warnings. Tsunamigenic events in the Tohoku region of Japan, where the 2011 tsunami occurred, are focused on as a case study to illustrate the significance of the problems. The effects of biases in the estimated earthquake magnitude on tsunami loss are investigated using a rigorous probabilistic tsunami loss calculation tool that can be applied to a range of earthquake magnitudes by accounting for uncertainties of earthquake source parameters (e.g. geometry, mean slip, and spatial slip distribution). The quantitative tsunami loss results provide valuable insights regarding the importance of deriving accurate seismic information as well as the potential biases of the anticipated tsunami consequences. Finally, the usefulness of rigorous tsunami risk assessment is discussed in defining critical hazard scenarios based on the potential consequences due to tsunami disasters.

  18. Tsunami hazard warning and risk prediction based on inaccurate earthquake source parameters

    NASA Astrophysics Data System (ADS)

    Goda, Katsuichiro; Abilova, Kamilla

    2016-02-01

    This study investigates the issues related to underestimation of the earthquake source parameters in the context of tsunami early warning and tsunami risk assessment. The magnitude of a very large event may be underestimated significantly during the early stage of the disaster, resulting in the issuance of incorrect tsunami warnings. Tsunamigenic events in the Tohoku region of Japan, where the 2011 tsunami occurred, are focused on as a case study to illustrate the significance of the problems. The effects of biases in the estimated earthquake magnitude on tsunami loss are investigated using a rigorous probabilistic tsunami loss calculation tool that can be applied to a range of earthquake magnitudes by accounting for uncertainties of earthquake source parameters (e.g., geometry, mean slip, and spatial slip distribution). The quantitative tsunami loss results provide valuable insights regarding the importance of deriving accurate seismic information as well as the potential biases of the anticipated tsunami consequences. Finally, the usefulness of rigorous tsunami risk assessment is discussed in defining critical hazard scenarios based on the potential consequences due to tsunami disasters.

  19. Earthquakes

    EPA Pesticide Factsheets

    Information on this page will help you understand environmental dangers related to earthquakes and what you can do to prepare and recover. It will also help you recognize possible environmental hazards and learn what you can do to protect yourself and your family.

  20. Decade Plan (2010-2020) for the Study on Earthquake Predictability: Challenges and Opportunities in China (Invited)

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Liu, G.; Ma, H.; Jiang, C.; Zhou, L.; Shao, Z.; Wu, Y.; Yan, R.; Yan, W.; Li, Y.; Peng, H.

    2009-12-01

    Since the beginning of 2009, the Department for Earthquake Monitoring and Prediction of the China Earthquake Administration (CEA) has been organizing the planning for the study on earthquake predictability for the period 2010-2020. Invited by the organizers of this session, this presentation briefly introduces the planning work, the strategies for the planning, and the research priorities proposed in the plan. Lessons and experiences from the 2008 Wenchuan earthquake play an important role in this planning. In China, ‘earthquake forecast/prediction’ is used in a broad sense, ranging from seismic hazard analysis over very long time scales, to long-term earthquake forecasts on decadal time scales, to intermediate-term forecasts mainly on an annual time scale, to short-term and imminent earthquake prediction (the ‘earthquake prediction’ traditionally understood), and finally to the estimation of the type of an earthquake sequence and the probability of strong aftershocks. ‘Study on earthquake predictability’ in China also has a broad sense, from seismo-tectonics to the physics of earthquakes. Making full use of the present knowledge of earthquake predictability to serve the reduction of earthquake disasters is one of the methodologies of the Chinese seismological agency. The concept of ‘monitoring and modeling for prediction’ plays an important role in defining the objectives of the planned R&D activities. In recent years there has been rapid development of observation facilities in China, and how to make full use of the observational data produced by these facilities is one of the key issues for the next decade. The Chinese continent comprises different seismo-tectonic units, with different characteristics of seismicity and different needs from society for the reduction of earthquake disasters. Deploying technologies to deal with this tectonic and seismic diversity is another key issue in the planning. Continental China, where the public has

  1. Development of a Standardized Methodology for the Use of COSI-Corr Sub-Pixel Image Correlation to Determine Surface Deformation Patterns in Large Magnitude Earthquakes.

    NASA Astrophysics Data System (ADS)

    Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.

    2014-12-01

    Coseismic surface deformation is typically measured in the field by geologists and with a range of geophysical methods such as InSAR, LiDAR and GPS. Current methods, however, either fail to capture the near-field coseismic surface deformation pattern, where vital information is needed, or lack pre-event data. We develop a standardized and reproducible methodology to fully constrain the near-field coseismic surface deformation pattern at high resolution using aerial photography. We apply our methodology using the program COSI-corr to successfully cross-correlate pairs of aerial, optical imagery acquired before and after the 1992, Mw 7.3 Landers and 1999, Mw 7.1 Hector Mine earthquakes. This technique allows measurement of the coseismic slip distribution and of the magnitude and width of off-fault deformation with sub-pixel precision, and it can be applied in a cost-effective manner to recent and historic earthquakes using archive aerial imagery. We also use synthetic tests to constrain and correct for the bias imposed on the result by the use of a sliding window during correlation. Correcting for artificial smearing of the tectonic signal allows us to robustly measure the fault zone width along a surface rupture. Furthermore, the synthetic tests have constrained for the first time the measurement precision and accuracy of estimated fault displacements and fault-zone width. Our methodology provides the unique ability to robustly understand the kinematics of surface faulting while at the same time accounting for both off-fault deformation and the measurement biases that typically complicate such data. For both earthquakes we find that our displacement measurements derived from cross-correlation are systematically larger than the field displacement measurements, indicating the presence of off-fault deformation. We show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of displacement, respectively, away from the primary rupture as off-fault deformation, over a mean
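The sliding-window bias corrected for in this study can be reproduced with a toy one-dimensional experiment. The sketch below (plain Python, not COSI-corr) smooths a synthetic displacement step with a boxcar window, mimicking how correlation over a finite window smears a sharp fault offset into an apparent "fault zone" roughly as wide as the window; all numbers are illustrative:

```python
# Synthetic test of sliding-window smearing: a sharp step convolved with a
# boxcar averaging window appears as a gradual transition.
def boxcar_smooth(signal, half_width):
    """Average each sample over a window of +/- half_width samples."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def transition_width(sig, low=0.1, high=1.9):
    """Count samples whose value lies strictly between the two plateaus."""
    xs = [i for i, v in enumerate(sig) if low < v < high]
    return (xs[-1] - xs[0] + 1) if xs else 0

# 1-D displacement profile across a fault: a sharp 2 m step at sample 50.
profile = [0.0] * 50 + [2.0] * 50
smeared = boxcar_smooth(profile, half_width=8)

print(transition_width(profile))  # 0: the true step is sharp
print(transition_width(smeared))  # apparent fault zone ~ the window width
```

Measuring the transition width on the smeared profile and subtracting the known window contribution is the spirit of the correction described above.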

  2. Simulation of broadband ground motion including nonlinear soil effects for a magnitude 6.5 earthquake on the Seattle fault, Seattle, Washington

    USGS Publications Warehouse

    Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.

    2002-01-01

    The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid low-frequency/high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. High frequencies (> 1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω^-2 displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles, together with other soil parameters, are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to those of the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the overconsolidation ratio.

  3. Comparison of ground motions estimated from prediction equations and from observed damage during the M = 4.6 1983 Liège earthquake (Belgium)

    NASA Astrophysics Data System (ADS)

    García Moreno, D.; Camelbeeck, T.

    2013-08-01

    On 8 November 1983 an earthquake of magnitude 4.6 damaged more than 16 000 buildings in the region of Liège (Belgium). The extraordinary damage produced by this earthquake, considering its moderate magnitude, is extremely well documented, giving the opportunity to compare the consequences of a recent moderate earthquake in a typical old city of Western Europe with scenarios obtained by combining strong ground motions and vulnerability modelling. The present study compares 0.3 s spectral accelerations estimated from ground motion prediction equations typically used in Western Europe with those obtained locally by applying the statistical distribution of damaged masonry buildings to two fragility curves, one derived from the HAZUS programme of FEMA (FEMA, 1999) and another developed for high-vulnerability buildings by Lang and Bachmann (2004), and to a method proposed by Faccioli et al. (1999) relating the seismic vulnerability of buildings to the damage and ground motions. The results of this comparison reveal good agreement between the maximum spectral accelerations calculated from these vulnerability and fragility curves and those predicted from the ground motion prediction equations, suggesting peak ground accelerations for the epicentral area of the 1983 earthquake of 0.13-0.20 g (g: gravitational acceleration).
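The fragility-curve step can be sketched generically. A common lognormal form gives the probability that a building reaches a damage state as a function of spectral acceleration; the median and dispersion below are illustrative placeholders, not the HAZUS or Lang-and-Bachmann parameters used in the paper:

```python
import math

def fragility(sa, median, beta):
    """Lognormal fragility curve: P(damage state reached | spectral accel. sa).
    `median` is the Sa at 50% exceedance probability, `beta` the log-std."""
    return 0.5 * (1.0 + math.erf(math.log(sa / median) / (beta * math.sqrt(2.0))))

# Illustrative parameters (assumptions, not values from the study):
median_sa, beta = 0.35, 0.6
print(round(fragility(0.35, median_sa, beta), 2))  # 0.5 at the median, by construction
print(fragility(0.50, median_sa, beta) > fragility(0.20, median_sa, beta))  # True: monotonic
```

Inverting such a curve at the observed fraction of damaged buildings yields a ground-motion estimate, which is the logic of the comparison described in the abstract.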

  4. Is Earthquake Prediction Possible from Short-Term Foreshocks?

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Gerassimos; Avlonitis, Markos; Di Fiore, Boris; Minadakis, George

    2015-04-01

    Foreshocks preceding mainshocks in the short term, from minutes to a few months before the mainshock, have been recognized for several decades. Understanding of the generation mechanisms of foreshocks has been supported by seismicity observations and statistics, laboratory experiments, theoretical considerations and simulation results. However, important issues remain open. For example: (1) How are foreshocks defined? (2) Why are only some mainshocks preceded by foreshocks? (3) Does the mainshock size depend on some attributes of the foreshock sequence? (4) Is it possible to discriminate foreshocks from other seismicity styles (e.g. swarms, aftershocks)? To approach possible replies to these issues we reviewed about 400 papers, reports, books and other documents referring to foreshocks as well as to relevant laboratory experiments. We found that different foreshock definitions are used by different authors. We also found that the ratio of mainshocks preceded by foreshocks increases with improved monitoring capabilities, and that foreshock activity depends on source mechanical properties and is favoured by material heterogeneity. Also, the mainshock size does not depend on the largest foreshock size but rather on the foreshock area. Seismicity statistics may allow an effective discrimination of foreshocks from other seismicity styles, since during foreshock activity the seismicity rate increases with the inverse of time and, at the same time, the b-value of the G-R relationship as a rule drops significantly. Our literature survey showed that only in recent years have the seismicity catalogs of some well-monitored areas become adequately complete to search for foreshock activity. Therefore, we investigated a set of "good foreshock examples" covering a wide range of mainshock magnitudes from 4.5 to 9 in Japan (Tohoku 2011), S. California, Italy (including L'Aquila 2009) and Greece. The good examples used indicate that foreshocks
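The b-value drop mentioned above is straightforward to quantify with the Aki (1965) maximum-likelihood estimator (with Utsu's bin-width correction). A minimal sketch applied to two hypothetical magnitude samples (the data are invented for illustration):

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's bin-width correction:
    b = log10(e) / (mean(M) - (Mc - dm/2)), for magnitudes M >= Mc binned to dm."""
    sel = [m for m in mags if m >= m_c]
    mean_m = sum(sel) / len(sel)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Hypothetical background seismicity vs. a hypothetical foreshock window:
background = [4.0, 4.2, 4.1, 4.5, 4.0, 4.3, 4.1, 4.6, 4.0, 4.2]
foreshocks = [4.0, 4.8, 4.3, 5.1, 4.6, 5.4]  # heavier tail -> lower b

b_bg = b_value_mle(background, m_c=4.0)
b_fs = b_value_mle(foreshocks, m_c=4.0)
print(b_bg > b_fs)  # True: the foreshock window shows the expected b-value drop
```

In practice the drop would be tested against the estimator's uncertainty (roughly b/sqrt(N)), since six events alone carry a large standard error.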

  5. Assessing the capability of numerical methods to predict earthquake ground motion: the Euroseistest verification and validation project

    NASA Astrophysics Data System (ADS)

    Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.

    2009-12-01

    During the last decades, considerable effort has been dedicated to developing accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and the increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project, an ongoing international collaborative work organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and the USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, with sediments reaching about 400 m depth and a surface S-wave velocity of 200 m/s). The prime target is to simulate 8 local earthquakes with magnitudes from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close to each other and differ from the other predictions

  6. An Update on the Activities of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Liukis, M.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Werner, M. J.; Jordan, T. H.

    2013-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecast experiments. There are now CSEP testing centers in California, New Zealand, Japan, and Europe, and 364 models are under evaluation. In this presentation, we describe how the testing center hosted by the Southern California Earthquake Center (SCEC) has evolved to meet CSEP objectives, and we share our experiences in operating the center. The SCEC testing center has been operational since September 1, 2007, and currently hosts 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the western Pacific, and a global testing region. We are currently working to reduce testing latency and to develop procedures to evaluate externally hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a Department of Homeland Security project to register and test external forecast procedures from experts outside seismology. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss how we apply CSEP infrastructure to geodetic transient detection and the evaluation of the ShakeAlert system for earthquake early warning (EEW), and how CSEP procedures are being adopted for intensity prediction and ground motion prediction experiments (cseptesting.org).
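One of the standard consistency checks run in CSEP-style testing centers is the number (N-) test, which asks whether the observed event count is plausible under a forecast's expected rate. A minimal sketch under a Poisson forecast assumption (the counts below are invented for illustration):

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam); returns 0 for k < 0."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def n_test(n_observed, n_forecast):
    """N-test quantile scores: the forecast rate is inconsistent with the
    observation if either tail probability is very small (two one-sided tests)."""
    delta1 = 1.0 - poisson_cdf(n_observed - 1, n_forecast)  # P(N >= n_obs)
    delta2 = poisson_cdf(n_observed, n_forecast)            # P(N <= n_obs)
    return delta1, delta2

# Hypothetical experiment: 12 target earthquakes observed, 10.0 forecast.
d1, d2 = n_test(n_observed=12, n_forecast=10.0)
print(d1 > 0.05 and d2 > 0.05)  # True: 12 observed vs. 10 forecast is consistent
```

The real CSEP software evaluates forecasts on space-magnitude grids with several additional tests (L-, S-, M-tests); this shows only the counting logic.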

  7. Geometrical Scaling of the Magnitude Frequency Statistics of Fluid Injection Induced Earthquakes and Implications for Assessment and Mitigation of Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Dinske, C.; Shapiro, S. A.

    2015-12-01

    To study the influence of the size and geometry of hydraulically perturbed rock volumes on the magnitude statistics of induced events, we compare b-value and seismogenic index estimates derived from different algorithms. First, we use standard Gutenberg-Richter approaches such as the least-squares fit and the maximum-likelihood technique. Second, we apply the lower-bound probability fit (Shapiro et al., 2013, JGR, doi:10.1002/jgrb.50264), which takes the finiteness of the perturbed volume into account. The different estimates systematically deviate from each other, and the deviations are larger for smaller perturbed rock volumes. This means that the frequency-magnitude distribution is most affected for small injection volumes and short injection times, resulting in a high apparent b value. In contrast, the specific magnitude, the quotient of the seismogenic index and the b value (Shapiro et al., 2013, JGR, doi:10.1002/jgrb.50264), appears to be a unique seismotectonic parameter of a reservoir location. Our results confirm that it is independent of the size of the perturbed rock volume. The specific magnitude is hence an indicator of the magnitudes one can expect for a given injection. Several performance tests of forecasting the magnitude frequencies of induced events show that the seismogenic index model provides reliable predictions, which confirms its applicability as a forecasting tool, particularly if applied in real-time monitoring. The specific magnitude model can be used to predict an asymptotic upper limit of the probable frequency-magnitude distributions of induced events. 
    We also conclude from our analysis that the triggering of events by pore pressure diffusion, together with the scaling of their frequency-magnitude distribution by the size of the perturbed rock volume, well explains the reported relation between the upper bound of maximum seismic moment and injected fluid volume (McGarr, 2014, JGR, doi:10.1002/2013JB010597), particularly if nonlinear effects in the diffusion process
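The systematic deviation between standard Gutenberg-Richter estimators can be illustrated on a synthetic catalog. The sketch below (not the authors' lower-bound probability fit) compares a maximum-likelihood b-value against a least-squares fit to the cumulative frequency-magnitude curve; the catalog is drawn from an ideal exponential distribution with b = 1:

```python
import math
import random

random.seed(1)

def sample_gr(n, b=1.0, m_c=2.0, dm=0.1):
    """Draw magnitudes from a Gutenberg-Richter (exponential) distribution,
    binned to dm as in a real catalog."""
    beta = b * math.log(10)
    return [round((m_c - math.log(1.0 - random.random()) / beta) / dm) * dm
            for _ in range(n)]

def b_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value with bin-width correction."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

def b_lsq(mags, m_c, dm=0.1):
    """Least-squares slope of log10(cumulative count) vs. magnitude,
    using only bins with at least 10 events."""
    ms, ys = [], []
    m = m_c
    while True:
        n = sum(1 for x in mags if x >= m - 1e-9)
        if n < 10:
            break
        ms.append(m)
        ys.append(math.log10(n))
        m += dm
    mbar, ybar = sum(ms) / len(ms), sum(ys) / len(ys)
    slope = (sum((a - mbar) * (c - ybar) for a, c in zip(ms, ys))
             / sum((a - mbar) ** 2 for a in ms))
    return -slope

catalog = sample_gr(2000)
print(round(b_mle(catalog, 2.0), 2), round(b_lsq(catalog, 2.0), 2))  # both near 1.0
```

For an ideal unbounded catalog both estimators recover b; the paper's point is that for a finite perturbed volume (a truncated distribution) such standard estimators become biased, which is what the lower-bound probability fit corrects.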

  8. Can an earthquake prediction and warning system be developed?

    USGS Publications Warehouse

    N.N, Ambraseys

    1990-01-01

    Over the last 20 years, natural disasters have killed nearly 3 million people and disrupted the lives of over 800 million others. In 2 years there were more than 50 serious natural disasters, including landslides in Italy, France, and Colombia; a typhoon in Korea; wildfires in China and the United States; a windstorm in England; grasshopper plagues in the Horn of Africa and the Sahel; tornadoes in Canada; devastating earthquakes in Soviet Armenia and Tadzhikistan; infestations in Africa; landslides in Brazil; and tornadoes in the United States.

  9. Recent Developments within the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2014-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecast experiments. There are now CSEP testing centers in California, New Zealand, Japan, and Europe, with 430 models under evaluation. In this presentation, we describe how the Southern California Earthquake Center (SCEC) testing center has evolved to meet CSEP objectives, and we share our experiences in operating the center. The SCEC testing center has been operational since September 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the western Pacific, and a global testing region. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing procedures to evaluate externally hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a Department of Homeland Security project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010 Darfield earthquake sequence formed an important addition to CSEP activities, in which the predictive skills of physics-based and statistical forecasting models are compared. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and the evaluation of the ShakeAlert system for earthquake early warning (EEW), and how CSEP procedures are being adopted for intensity prediction and ground motion prediction experiments.

  10. Estimating Earthquake Magnitude from the Kentucky Bend Scarp in the New Madrid Seismic Zone Using Field Geomorphic Mapping and High-Resolution LiDAR Topography

    NASA Astrophysics Data System (ADS)

    Kelson, K. I.; Kirkendall, W. G.

    2014-12-01

    Recent suggestions that the 1811-1812 earthquakes in the New Madrid Seismic Zone (NMSZ) were in the range M6.8-7.0 rather than M8.0 have implications for seismic hazard estimation in the central US. We more accurately identify the location of the NW-striking, NE-facing Kentucky Bend scarp along the northern Reelfoot fault, which is spatially associated with the Lake County uplift, contemporary seismicity, and changes in the Mississippi River from the February 1812 earthquake. We use 1 m-resolution LiDAR hillshades and slope surfaces, aerial photography, soil surveys, and field geomorphic mapping to estimate the location, pattern, and amount of late Holocene coseismic surface deformation. We define eight late Holocene to historic fluvial deposits, and delineate younger alluvia that are progressively inset into older deposits on the upthrown, western side of the fault. Some younger, clayey deposits indicate past ponding against the scarp, perhaps following surface deformation events. The Reelfoot fault is expressed as sinuous breaks-in-slope cutting across these fluvial deposits, locally coinciding with shallow faults identified via seismic reflection data (Woolery et al., 1999). The deformation pattern is consistent with NE-directed reverse faulting along single or multiple SW-dipping fault planes, and the complex pattern of fluvial deposition appears partially controlled by intermittent uplift. Six localities contain scarps across correlative deposits and allow evaluation of cumulative surface deformation from LiDAR-derived topographic profiles. Displacements range from 3.4±0.2 m to 2.2±0.2 m, 1.4±0.3 m, and 0.6±0.1 m across four progressively younger surfaces. The spatial distribution of the profiles argues against the differences being a result of along-strike uplift variability. We attribute the lesser displacements of progressively younger deposits to recurrent surface deformation, but do not yet interpret these initial data with respect to possible earthquake

  11. Bayesian prediction of earthquake network based on space-time influence domain

    NASA Astrophysics Data System (ADS)

    Zhang, Ya; Zhao, Hai; He, Xuan; Pei, Fan-Dong; Li, Guang-Guang

    2016-03-01

    Bayesian networks (BNs) are used to analyze conditional dependencies among different events, expressed as conditional probabilities. Scientists have already investigated seismic activity using BNs. Recently, the earthquake network has been used as a novel methodology to analyze the relationships among earthquake events. In this paper, we propose a way to predict earthquakes from a new perspective. A BN is constructed by processing the earthquake network based on the space-time influence domain. The BN parameters are then learned using cases derived from the seismic data for the period between 00:00:00 on January 1, 1992 and 00:00:00 on January 1, 2012. Finally, predictions are made for the data in the period between 00:00:00 on January 1, 2012 and 00:00:00 on January 1, 2015 by combining the BN with the learned parameters. The results show that the success rate of the prediction, including delayed predictions, is about 65%. We also find that the predictions for some of the nodes under investigation have a high rate of accuracy.
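The parameter-learning step in such a BN amounts to estimating conditional probability tables from counts over the training cases. A minimal sketch with hypothetical binary cases (invented for illustration, not the paper's data), where each case records whether a predecessor node and a target node were seismically active in a time window:

```python
from collections import Counter

# Hypothetical training "cases": (predecessor active?, target active?) per window,
# as would be extracted from an earthquake network's space-time influence domain.
cases = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1), (1, 1), (0, 0)]

# Learn the conditional probability table P(target | predecessor) by counting.
counts = Counter(cases)

def p_target_given(pred):
    """Estimated P(target active = 1 | predecessor active = pred)."""
    hit = counts[(pred, 1)]
    return hit / (hit + counts[(pred, 0)])

print(p_target_given(1))  # 0.75
print(p_target_given(0))  # 0.25
```

A prediction is then issued for a node whenever its conditional activity probability exceeds a chosen threshold; real BNs extend this counting to multiple parents per node.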

  12. The Earthquake Prediction Experiment on the Basis of the Jet Stream's Precursor

    NASA Astrophysics Data System (ADS)

    Wu, H. C.; Tikhonov, I. N.

    2014-12-01

    Simultaneous analysis of jet stream maps and data for EQs of M > 6.0 has been carried out. 58 cases of EQs that occurred in 2006-2010 were studied. It has been found that an interruption, or a crossing of velocity flow lines, above the epicenter of an EQ takes place 1-70 days prior to the event, with a duration of 6-12 hours. The assumption is that the jet stream goes up or down near an epicenter. In 45 cases the distance between the epicenter and the jet stream precursor did not exceed 90 km. The forecast rate during the 30 days before the EQ was 66.1% (Wu and Tikhonov, 2014). This technique has been used to predict strong EQs, with predictions pre-registered on a website (for example, the 23 October 2011, M 7.2 EQ (Turkey); the 20 May 2012, M 6.1 EQ (Italy); the 16 April 2013, M 7.8 EQ (Iran); the 12 November 2013, M 6.6 EQ (Russia); the 03 March 2014, M 6.7 Ryukyu EQ (Japan); the 21 July 2014, M 6.2 Kuril EQ). We obtain satisfactory accuracy of the epicenter location, as well as a short alarm period; these are the positive aspects of the forecast. However, estimates of magnitude carry a large uncertainty. Reference: Wu, H.C., Tikhonov, I.N., 2014. Jet stream anomalies as possible short-term precursors of earthquakes with M > 6.0. Research in Geophysics, Special Issue on Earthquake Precursors. Vol. 4. No 1. doi:10.4081/rg.2014.4939. The precursor of the M9.0 Japan EQ on 2011/03/11 (fig1). A. M6.1 Italy EQ (2012/05/20, 44.80 N, 11.19 E, H = 5.1 km) Prediction: 2012/03/20~2012/04/20 (45.6 N, 10.5 E), M > 5.5 (fig2) http://ireport.cnn.com/docs/DOC-764800 B. M7.8 Iran EQ (2013/04/16, 28.11 N, 62.05 E, H = 82.0 km) Prediction: 2013/01/14~2013/02/04 (28.0 N, 61.3 E) M > 6.0 (fig3) http://ireport.cnn.com/docs/DOC-910919 C. M6.6 Russia EQ (2013/11/12, 54.68 N, 162.29 E, H = 47.2 km). Prediction: 2013/10/27~2013/11/13 (56.0 N, 162.9 E) M > 5.5 http://ireport.cnn.com/docs/DOC-1053599 D. M6.7 Japan EQ (2014/03/03, 27.41 N, 127.34 E, H = 111.2 km). Prediction: 2013/12/02 ~2014/01/15 (26.7 N, 128.1 E) M > 6.5 (fig4) http
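Claims like a 66.1% hit rate can be screened against chance coincidence with a simple binomial model: if precursor windows cover some fraction of the study period at random, how likely is it that at least the observed number of the 58 mainshocks fall inside one? The 20% chance-coverage rate below is an assumption for illustration only, not a value from the study:

```python
import math

def binom_sf(k, n, p):
    """Survival function P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 58 mainshocks; observed hit rate 66.1% -> about 38 hits. Suppose anomaly
# windows would cover ~20% of the study period by chance (assumed rate).
p_chance = binom_sf(38, 58, 0.20)
print(p_chance < 1e-6)  # True: 38/58 hits is extremely unlikely to be coincidence
```

The conclusion is only as strong as the assumed chance-coverage rate, which is why estimating the anomaly base rate carefully matters in such precursor studies.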

  13. Predicted liquefaction of East Bay fills during a repeat of the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Blair, J.L.; Noce, T.E.; Bennett, M.J.

    2006-01-01

    Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings using median (50th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M ≥ 7.8 earthquake on the northern San Andreas Fault. © 2006, Earthquake Engineering Research Institute.
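
    The liquefaction potential index (LPI) on which such probabilities are based can be sketched with the standard Iwasaki-type depth-weighted integral. The layer profile below is a hypothetical illustration, not one of the study's 82 CPT soundings.

```python
# Liquefaction potential index (Iwasaki et al.): LPI = integral over the top
# 20 m of F(z) * w(z) dz, where F = 1 - FS when the factor of safety FS < 1
# (else 0), and w(z) = 10 - 0.5*z is a depth weight that vanishes at z = 20 m.

def lpi(layers):
    """layers: list of (depth_top_m, depth_bottom_m, factor_of_safety)."""
    total = 0.0
    for top, bottom, fs in layers:
        z_mid = 0.5 * (top + bottom)        # evaluate the weight at layer midpoint
        if z_mid >= 20.0 or fs >= 1.0:      # only liquefiable layers above 20 m count
            continue
        w = 10.0 - 0.5 * z_mid
        total += (1.0 - fs) * w * (bottom - top)
    return total

# Hypothetical sounding with two liquefiable layers:
profile = [(0.0, 2.0, 1.2), (2.0, 6.0, 0.6), (6.0, 10.0, 0.9)]
print(round(lpi(profile), 2))   # → 15.2
```

    Higher LPI values map to higher probabilities of surface manifestation; the mapping itself is an empirical calibration like the one described in the abstract.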

  14. How to predict Italy L'Aquila M6.3 earthquake

    NASA Astrophysics Data System (ADS)

    Guo, Guangmeng

    2016-04-01

    Following a satellite cloud anomaly that appeared over eastern Italy on 21-23 April 2012, we successfully predicted the M6.0 earthquake that occurred in northern Italy. Here we examined satellite images over Italy for 2011-2013 and found 21 cloud anomalies. Their possible correlation with earthquakes larger than M4.7 located on Italy's main fault systems was statistically examined by assuming various lead times. The results show that when the lead-time interval is set to 23≤ΔT≤45 days, 8 of the 10 earthquakes were preceded by cloud anomalies. A Poisson random test shows that the AAR (anomaly appearance rate) and EOR (EQ occurrence rate) are much higher than the values expected by chance. This study supports a relation between cloud anomalies and earthquakes in Italy. With this method, we find that the L'Aquila earthquake could also have been predicted from a cloud anomaly.
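
    The significance test described above can be sketched as a simple randomization test: how often would the same number of randomly placed anomaly days precede the earthquakes within the 23-45 day window purely by chance? All dates below are hypothetical, not the Italian catalog.

```python
import random

def hits(anomaly_days, quake_days, lo=23, hi=45):
    """Count earthquakes preceded by at least one anomaly within [lo, hi] days."""
    return sum(any(lo <= q - a <= hi for a in anomaly_days) for q in quake_days)

def p_value(anomaly_days, quake_days, n_days=1095, trials=5000, seed=1):
    """Fraction of random anomaly placements that match the observed hit count."""
    rng = random.Random(seed)
    observed = hits(anomaly_days, quake_days)
    k = len(anomaly_days)
    as_good = sum(hits(rng.sample(range(n_days), k), quake_days) >= observed
                  for _ in range(trials))
    return as_good / trials

anomalies = [5, 80, 200, 310, 420]   # hypothetical anomaly days
quakes = [30, 110, 230, 500]         # hypothetical earthquake days
print(p_value(anomalies, quakes))
```

    A small p-value corresponds to the abstract's claim that the anomaly-earthquake association exceeds chance expectation.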

  15. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
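
    As an illustration of the kind of fault-dynamics model that reproduces a Gutenberg-Richter-like frequency-size statistic, here is a minimal spring-block cellular automaton in the spirit of Olami-Feder-Christensen. It is a toy sketch, not the authors' block model; the grid size and parameters are arbitrary.

```python
import random

# Blocks on a grid are loaded uniformly until one reaches a stress threshold and
# topples, passing a fraction alpha of its stress to each of its four neighbours;
# the avalanche size (number of topplings) plays the role of earthquake size.

def ofc_event(stress, alpha=0.2, threshold=1.0):
    n = len(stress)
    # load uniformly until the most-stressed block just reaches the threshold
    mi, mj = max(((i, j) for i in range(n) for j in range(n)),
                 key=lambda ij: stress[ij[0]][ij[1]])
    gap = threshold - stress[mi][mj]
    for i in range(n):
        for j in range(n):
            stress[i][j] += gap
    stress[mi][mj] = threshold            # guard against floating-point round-off
    queue, topplings = [(mi, mj)], 0
    while queue:
        i, j = queue.pop()
        if stress[i][j] < threshold:
            continue
        s, stress[i][j] = stress[i][j], 0.0
        topplings += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:   # open, dissipative boundaries
                stress[ni][nj] += alpha * s
                if stress[ni][nj] >= threshold:
                    queue.append((ni, nj))
    return topplings

random.seed(0)
N = 16
grid = [[random.random() for _ in range(N)] for _ in range(N)]
sizes = [ofc_event(grid) for _ in range(2000)]
# The resulting size distribution is heavy-tailed, qualitatively like
# the Gutenberg-Richter frequency-magnitude relationship.
```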

  16. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  17. Resting EEG in Alpha and Beta Bands Predicts Individual Differences in Attentional Blink Magnitude

    ERIC Educational Resources Information Center

    MacLean, Mary H.; Arnell, Karen M.; Cote, Kimberly A.

    2012-01-01

    Accuracy for a second target (T2) is reduced when it is presented within 500 ms of a first target (T1) in a rapid serial visual presentation (RSVP)--an attentional blink (AB). There are reliable individual differences in the magnitude of the AB. Recent evidence has shown that the attentional approach that an individual typically adopts during a…

  18. Ground Motion Prediction of Subduction Earthquakes using the Onshore-Offshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2014-12-01

    Seismic waves produced by earthquakes have already caused great damage around the world and remain a real threat to human populations. To reduce the seismic risk associated with future earthquakes, accurate ground motion predictions are required, especially for cities located atop sedimentary basins that can trap and amplify seismic waves. This study focuses on long-period ground motions produced by subduction earthquakes in Japan, which have the potential to damage large-scale structures such as high-rise buildings, bridges, and oil storage tanks. We extracted impulse response functions from the ambient seismic field recorded by pairs of stations, using one as a virtual source, without any preprocessing. This method recovers reliable phases and relative, rather than absolute, amplitudes. To retrieve the corresponding Green's functions, the impulse response amplitudes need to be calibrated using observational records of an earthquake that occurred close to the virtual source. We show that Green's functions can be extracted between offshore submarine cable-based sea-bottom seismographic observation systems deployed by JMA atop subduction zones and on-land NIED/Hi-net stations. In contrast with physics-based simulations, this approach has the great advantage of predicting long-period ground motions of moderate earthquakes (Mw ~5) in highly populated sedimentary basins without requiring any external information about the velocity structure.
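
    The core of the technique, retrieving an inter-station response from the ambient field by cross-correlation, can be sketched on synthetic data: a common random wavefield reaching the second station with a known delay. These are synthetic traces, not the JMA/Hi-net records.

```python
import numpy as np

# A diffuse noise field recorded at two stations: station B records the same
# wavefield as station A, delayed by 50 samples. Cross-correlating the two
# records recovers that travel time as the lag of the correlation peak, which
# is the essence of ambient-field impulse-response retrieval.

rng = np.random.default_rng(0)
n, true_lag = 4000, 50
field = rng.standard_normal(n + true_lag)
sta_a = field[true_lag:]            # station A sees the field first
sta_b = field[:n]                   # station B sees it 50 samples later

xcorr = np.correlate(sta_b, sta_a, mode="full")
lags = np.arange(-n + 1, n)
print(lags[np.argmax(xcorr)])       # → 50
```

    In practice the correlation is stacked over long time windows, and, as the abstract notes, only relative amplitudes are recovered until the result is calibrated against a real earthquake record.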

  19. New vertical geodesy. [VLBI measurements for earthquake prediction

    NASA Technical Reports Server (NTRS)

    Whitcomb, J. H.

    1976-01-01

    The paper contains a review of the theoretical difference between orthometric heights and heights labeled geometric which are determined through use of an extraterrestrial frame of reference. The theory is supplemented with examples which portray very long baseline interferometry as a measuring system that will provide estimates of vertical crustal motion which are radically improved in comparison with those obtained from analysis of repeated geodetic levelings. The example of the San Fernando earthquake of 1971 is used to show how much estimates of orthometric and geometric height change might differ. A comment by another author is appended which takes issue with some of the conclusions of this paper. In particular, an attempt is made in the comment to rebut the conclusion that geodetic leveling is less reliable than VLBI measurements for determining relative elevation change of points separated by more than 56 km.

  20. Earthquake prediction in the Soviet Union; an interview with I. L. Nersesov

    USGS Publications Warehouse

    Spall, H.

    1980-01-01

    Dr. I. L. Nersesov is a seismologist with the Institute of Physics of the Earth, Academy of Sciences of the U.S.S.R., Moscow. He is one of the leaders of the Soviet national program of earthquake prediction.

  1. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated using field methods and a database of coseismic effects built as a GIS MapInfo application, with a convenient input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effects from the epicenter and from the causative fault (Lunina et al., 2014). For the largest event (Ms = 8.1), the estimated limiting distance to the fault is 130 km, about 3.5 times shorter than the limiting distance to the epicenter, which is 450 km. Moreover, the farther from the fault, the fewer liquefaction cases occur: 93% of them lie within 40 km of the causative fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows distances within 8 km, with 69% of all cases within 1 km. On this basis, predictive models have been created for the locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested relating the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height, and intensity index of clastic dikes) to Ms and to the local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The results form a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author is grateful to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing the laboratory in which this research was carried out, and to the Russian Scientific Foundation for financial support (Grant 14-17-00007).
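
    Magnitude-distance relationships of this kind are typically fit in the form log10(Rmax) = a + b·Ms. A least-squares sketch on hypothetical (Ms, distance) pairs shows the procedure; these numbers are illustrative, not the published Siberian data.

```python
import math

def fit_log_distance(events):
    """events: list of (Ms, max_distance_km); fit log10(R) = a + b*Ms."""
    n = len(events)
    xs = [m for m, _ in events]
    ys = [math.log10(r) for _, r in events]
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical maximum liquefaction distances for four magnitudes:
events = [(5.5, 20.0), (6.5, 70.0), (7.3, 180.0), (8.1, 450.0)]
a, b = fit_log_distance(events)
```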

  2. Magnitudes of selected stellar occultation candidates for Pluto and other planets, with new predictions for Mars and Jupiter

    NASA Technical Reports Server (NTRS)

    Sybert, C. B.; Bosh, A. S.; Sauter, L. M.; Elliot, J. L.; Wasserman, L. H.

    1992-01-01

    Occultation predictions for the planets Mars and Jupiter are presented along with BVRI magnitudes of 45 occultation candidates for Mars, Jupiter, Saturn, Uranus, and Pluto. Observers can use these magnitudes to plan observations of occultation events. The optical depth of the Jovian ring can be probed by a nearly central occultation on 1992 July 8. Mars occults an unusually red star in early 1993, and the occultations for Pluto involving the brightest candidates would possibly occur in the spring of 1992 and the fall of 1993.

  3. Strong Motion Prediction Method Using Statistical Green's Function Estimated From K-Net Records and its Application to the Hypothesized Fukuoka Earthquake

    NASA Astrophysics Data System (ADS)

    Kawase, H.; Shigeo Itoh, S.; Kuhara, H.; Matsuo, H.

    2001-12-01

    First, we extract statistical characteristics of seismic ground motions from K-Net records observed in the Kyushu region. We select ground motions from earthquakes with shallow depths (<60 km) and moderate magnitudes (>4.5), observed within 200 km of the hypocenters. For the envelope characteristics, we first express them by Boore's envelope function (Boore, 1983) and identify its model parameters; we then express these parameters as functions of magnitude M and hypocentral distance X using two-step regression analysis. For the spectral characteristics, we separate source, path, and site effects from the observed Fourier spectra and express them also as functions of M and X. Once we obtain these statistical parameters, we can synthesize the ground motions that would be observed at any K-Net site for an arbitrary source. We validate the synthetics by comparing them with observed data. Next, we use them to predict strong motions for future large earthquakes through the so-called statistical Green's function method. Before predicting ground motions for a hypothesized earthquake, we must test our method against ground motions observed in previous large earthquakes. We first apply the method to the Kagoshima-ken Hokuseibu earthquake (Mjma 6.3), for which strong directivity is observed at one K-Net station. We then simulate strong motion at the bedrock level during the Hyogo-ken Nanbu earthquake. In both cases the synthetic waveforms match the observations well, demonstrating that we can predict ground motions with our statistical Green's function method if the source is properly specified. Finally, we apply this method to a hypothesized Fukuoka earthquake: strong motions at the bedrock level are predicted first, and the strong motions at the ground surface are then obtained by 1-D wave propagation theory. We assume the same source scenario as in Kobe.
The estimated peak ground velocity (PGV) reaches 100 cm/s at most, which is much less than the PGV observed in Kobe, primarily
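
    The envelope function mentioned above can be sketched with the shaping commonly used in the stochastic method: e(t) ∝ t^b · exp(-c·t), with b and c chosen so the envelope peaks at a fraction ε of the duration and decays to a fraction η of its peak at the end of the record. The ε, η, and duration values below are illustrative, not the parameters identified from the K-Net records.

```python
import math
import random

# Boore (1983)-style envelope: with normalized time tau = t / duration,
# e = a * tau**b * exp(-c * tau), where
#   b = -eps * ln(eta) / (1 + eps * (ln(eps) - 1)),  c = b / eps,
#   a = (e / eps)**b,
# so the envelope peaks (value 1) at tau = eps and equals eta at tau = 1.

def envelope(t, duration, eps=0.2, eta=0.05):
    b = -eps * math.log(eta) / (1.0 + eps * (math.log(eps) - 1.0))
    c = b / eps
    a = (math.e / eps) ** b
    tau = t / duration
    return a * tau ** b * math.exp(-c * tau)

# Shape Gaussian noise into a synthetic acceleration trace (illustrative only):
random.seed(0)
dur, dt = 20.0, 0.01
trace = [envelope(i * dt, dur) * random.gauss(0.0, 1.0)
         for i in range(1, int(dur / dt) + 1)]
```

    In the full stochastic method the shaped noise is additionally filtered in the frequency domain to match the regressed source, path, and site spectra.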

  4. Research in Seismology: Earthquake Magnitudes

    DTIC Science & Technology

    1975-07-18

    for 1 January 1972 through 30 June 1972 are: no. 21 (Central Italy), 43 (Tibet), ^ (Yugoslavia), ^ (S. Sinkiang), 91 (N. Sinkiang), 129 (Greece-Bulgaria), ISO (E. Honshu) and 145 (Caspian Sea). This is about 6% of the total number of events

  5. Spectral models for ground motion prediction in the L'Aquila region (central Italy): evidence for stress-drop dependence on magnitude and depth

    NASA Astrophysics Data System (ADS)

    Pacor, F.; Spallarossa, D.; Oth, A.; Luzi, L.; Puglia, R.; Cantore, L.; Mercuri, A.; D'Amico, M.; Bindi, D.

    2016-02-01

    between seismic moment and local magnitude that improves on existing relationships and extends the validity range to 3.0-5.8. We find a significant stress-drop increase with seismic moment for events with Mw larger than 3.75, with the so-called scaling parameter ε close to 1.5. We also observe that the overall offset of the stress-drop scaling is controlled by earthquake depth. We evaluate the performance of the proposed parametric models through residual analysis of the Fourier spectra in the frequency range 0.5-25 Hz. The results show that the considered stress-drop scaling with magnitude and depth reduces the standard deviation, on average, by 18 per cent with respect to a constant stress-drop model. The overall quality of fit (standard deviation between 0.20 and 0.27 in the frequency range 1-20 Hz) indicates that the spectral model calibrated in this study can be used to predict ground motion in the L'Aquila region.
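
    For reference, the standard route from spectral parameters to a stress drop is the Brune source model: source radius r = k·β/fc (k = 0.372 for Brune S waves) and Δσ = 7·M0/(16·r³). The event below is an illustrative example, not a value from this study.

```python
import math

# Brune-model stress-drop estimate from seismic moment M0 (N·m) and corner
# frequency fc (Hz), with shear-wave speed beta (m/s).

def stress_drop(m0_nm, fc_hz, beta_ms=3500.0, k=0.372):
    r = k * beta_ms / fc_hz                 # source radius, m
    return 7.0 * m0_nm / (16.0 * r ** 3)    # stress drop, Pa

# An Mw 4 event (M0 from the Hanks-Kanamori relation) with a 2 Hz corner:
m0 = 10 ** (1.5 * 4.0 + 9.1)
print(stress_drop(m0, 2.0) / 1e6, "MPa")    # about 2 MPa
```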

  6. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
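
    The flavor of such a discrimination scheme can be sketched as a small rule-based classifier. The category names, depth threshold, and branching below are illustrative placeholders, not the actual ShakeMap logic.

```python
# Hypothetical sketch: map coarse event attributes (tectonic regime, hypocentral
# depth, interface proximity) to a GMPE class, in the spirit of the scheme above.

def gmpe_class(tectonic_regime, depth_km, near_interface=False):
    if tectonic_regime == "subduction":
        if depth_km >= 60.0:                 # deep events assigned to intraslab
            return "subduction-intraslab"
        return "subduction-interface" if near_interface else "crustal-upper-plate"
    if tectonic_regime == "stable":
        return "stable-continental"
    return "active-shallow-crustal"

print(gmpe_class("subduction", 100.0))       # → subduction-intraslab
```

    The real procedure adds the Flinn-Engdahl regionalization, focal-mechanism criteria, and detailed interface models on top of this kind of decision logic.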

  7. Immunologic changes occurring at kindergarten entry predict respiratory illnesses after the Loma Prieta earthquake.

    PubMed

    Boyce, W T; Chesterman, E A; Martin, N; Folkman, S; Cohen, F; Wara, D

    1993-10-01

    Previous studies in adult populations have demonstrated alterations in immune function after psychologically stressful events, and pediatric research has shown significant associations between stress and various childhood morbidities. However, no previous work has examined stress-related immune changes in children and subsequent illness experience. Twenty children were enrolled in a study on immunologic changes after kindergarten entry and their prospective relationship to respiratory illness (RI) experience. Midway through a 12-week RI data collection period, the October 17, 1989 Loma Prieta earthquake occurred. The timing of this event created a natural experiment enabling us to study possible associations between immunologic changes at kindergarten entry, the intensity of earthquake-related stress for children and parents, and changes in RI incidence over the 6 weeks after the earthquake. Immunologic changes were measured using helper (CD4+)-suppressor (CD8+) cell ratios, lymphocyte responses to pokeweed mitogen, and type-specific antibody responses to Pneumovax, in blood sampled 1 week before and 1 week after school entry. RI incidence was assessed using home health diaries and telephone interviews completed every 2 weeks. RIs per child varied from none to six. Six children showed an increase in RI incidence after the earthquake; five experienced a decline. Changes in helper-suppressor cell ratios and pokeweed mitogen response predicted changes in RI incidence in the postearthquake period (r = .43, .46; p < .05). Children showing upregulation of immune parameters at school entry sustained a significant increase in RI incidence after the earthquake.(ABSTRACT TRUNCATED AT 250 WORDS)

  8. Toward a Global Model for Predicting Earthquake-Induced Landslides in Near-Real Time

    NASA Astrophysics Data System (ADS)

    Nowicki, M. A.; Wald, D. J.; Hamburger, M. W.; Hearne, M.; Thompson, E.

    2013-12-01

    We present a newly developed statistical model for estimating the distribution of earthquake-triggered landslides in near-real time, which is designed for use in the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) and ShakeCast systems. We use standardized estimates of ground shaking from the USGS ShakeMap Atlas 2.0 to develop an empirical landslide probability model by combining shaking estimates with broadly available landslide susceptibility proxies, including topographic slope, surface geology, and climatic parameters. While the initial model was based on four earthquakes for which digitally mapped landslide inventories and well constrained ShakeMaps are available--the Guatemala (1976), Northridge, California (1994), Chi-Chi, Taiwan (1999), and Wenchuan, China (2008) earthquakes, our improved model includes observations from approximately ten other events from a variety of tectonic and geomorphic settings for which we have obtained landslide inventories. Using logistic regression, this database is used to build a predictive model of the probability of landslide occurrence. We assess the performance of the regression model using statistical goodness-of-fit metrics to determine which combination of the tested landslide proxies provides the optimum prediction of observed landslides while minimizing 'false alarms' in non-landslide zones. Our initial results indicate strong correlations with peak ground acceleration and maximum slope, and weaker correlations with surface geological and soil wetness proxies. In terms of the original four events included, the global model predicts landslides most accurately when applied to the Wenchuan and Chi-Chi events, and less accurately when applied to the Northridge and Guatemala datasets. 
Combined with near-real time ShakeMaps, the model can be used to make generalized predictions of whether or not landslides are likely to occur (and if so, where) for future earthquakes around the globe, and these estimates
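
    The logistic-regression form underlying such a model can be sketched as follows; the coefficients are made-up placeholders, not the values fit to the landslide inventories.

```python
import math

# Logistic model of landslide occurrence: the probability is a logistic function
# of a linear combination of shaking (PGA) and susceptibility proxies
# (slope, soil wetness). Coefficients here are hypothetical.

def landslide_probability(pga_g, slope_deg, wetness, coef=(-6.0, 4.0, 0.08, 1.5)):
    b0, b_pga, b_slope, b_wet = coef
    x = b0 + b_pga * pga_g + b_slope * slope_deg + b_wet * wetness
    return 1.0 / (1.0 + math.exp(-x))

# Strong shaking on a steep, wet slope vs. weak shaking on a gentle, dry one:
p_high = landslide_probability(0.6, 35.0, 0.8)
p_low = landslide_probability(0.05, 5.0, 0.1)
```

    Fitting amounts to estimating `coef` by maximum likelihood on gridded cells labeled landslide / no-landslide, exactly the logistic regression step the abstract describes.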

  9. Change in failure stress on the southern san andreas fault system caused by the 1992 magnitude = 7.4 landers earthquake.

    PubMed

    Stein, R S; King, G C; Lin, J

    1992-11-20

    The 28 June Landers earthquake brought the San Andreas fault significantly closer to failure near San Bernardino, a site that has not sustained a large shock since 1812. Stress also increased on the San Jacinto fault near San Bernardino and on the San Andreas fault southeast of Palm Springs. Unless creep or moderate earthquakes relieve these stress changes, the next great earthquake on the southern San Andreas fault is likely to be advanced by one to two decades. In contrast, stress on the San Andreas north of Los Angeles dropped, potentially delaying the next great earthquake there by 2 to 10 years.

  10. Change in failure stress on the southern San Andreas fault system caused by the 1992 magnitude = 7.4 Landers earthquake

    USGS Publications Warehouse

    Stein, R.S.; King, G.C.P.; Lin, J.

    1992-01-01

    The 28 June Landers earthquake brought the San Andreas fault significantly closer to failure near San Bernardino, a site that has not sustained a large shock since 1812. Stress also increased on the San Jacinto fault near San Bernardino and on the San Andreas fault southeast of Palm Springs. Unless creep or moderate earthquakes relieve these stress changes, the next great earthquake on the southern San Andreas fault is likely to be advanced by one to two decades. In contrast, stress on the San Andreas north of Los Angeles dropped, potentially delaying the next great earthquake there by 2 to 10 years.
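
    The quantity behind these statements, the change in Coulomb failure stress, reduces to a one-line formula once the stress changes are resolved on the receiver fault; the numbers below are illustrative, not the Landers calculations.

```python
# Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n,
# where d_tau is the shear-stress change resolved in the slip direction,
# d_sigma_n is positive for unclamping (reduced normal compression), and
# mu_eff is the effective friction coefficient. dCFS > 0 means closer to failure.

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# A fault loaded by 0.2 MPa in shear and unclamped by 0.1 MPa:
print(round(coulomb_stress_change(0.2, 0.1), 3))   # → 0.24 (MPa, closer to failure)
```

    The hard part in practice is computing d_tau and d_sigma_n from the mainshock slip model (e.g., with elastic dislocation theory); the combination step above is the simple part.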

  11. On the short-term earthquake prediction: renormalization algorithm and observational evidence in S. California, E. Mediterranean, and Japan

    NASA Astrophysics Data System (ADS)

    Keilis-Borok, V.; Shebalin, P.; Zaliapin, I.; Novikova, O.; Gabrielov, A.

    2002-12-01

    Our point of departure is provided by premonitory seismicity patterns found in models and observations. They reflect an increase in earthquake correlation range and seismic activity within an "intermediate" lead time of years before a strong earthquake. A combination of these patterns, in renormalized definition, preceded, within months, eight of nine strong earthquakes in S. California, the E. Mediterranean, and Japan. On that basis we suggest a hypothetical short-term prediction algorithm, to be tested by advance prediction. The algorithm is self-adapting and can be transferred without readaptation from earthquake to earthquake and from area to area. If confirmed, it will have a simple, albeit non-unique, qualitative interpretation. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently.

  12. Test-sites for earthquake prediction experiments within the Colli Albani region

    NASA Astrophysics Data System (ADS)

    Quattrocchi, F.; Calcara, M.

    In this paper we discuss geochemical data gathered by discrete and continuous monitoring during 1995-1996, carried out as earthquake prediction test experiments throughout the Colli Albani quiescent volcano, a seat of seismicity, at selected gas-discharge sites with peri-volcanic composition. In particular, we emphasize the results obtained at the continuous geochemical monitoring station (GMS I, BAR site) designed by ING for geochemical surveillance of seismic events. The 12/6/1995 (M=3.6-3.8) Roma earthquake and the 3/11/1995 (M=3.1) Tivoli earthquake were the most energetic events within the Colli Albani - Roma area from the beginning of continuous monitoring (1991) to the present: a close correlation has been found between these seismic events and fluid geochemical anomalies in groundwater (temperature, Eh, 222Rn, CO2, NH3). Separation at depth of a vapour phase rich in reducing-acidic gases (CO2, H2S, etc.) from a hypersaline brine within the deep geothermal reservoir is hypothesized to explain the geochemical anomalies: the transtensional episodes accompanying the seismic sequences probably increased and/or triggered the phase-separation process and fluid migration, on the regional scale of the western sector of the Colli Albani, from below the seismogenic depth (2-4 km) up to the surface. We describe the state of the art of the GMS II monitoring prototype and the criteria for selecting test sites for earthquake prediction experiments in the Colli Albani region.

  13. Predicted Attenuation Relation and Observed Ground Motion of Gorkha Nepal Earthquake of 25 April 2015

    NASA Astrophysics Data System (ADS)

    Singh, R. P.; Ahmad, R.

    2015-12-01

    A comparison of observed ground motion parameters from the recent Gorkha, Nepal earthquake of 25 April 2015 (Mw 7.8) with ground motion parameters predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8,000 lives and destroyed thousands of poorly built buildings, and it was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground motion parameters is very important for developing seismic codes in earthquake-prone regions like the Himalaya and for better building design. The ground motion parameters recorded during the mainshock and aftershocks are compared with attenuation relations for the Himalayan region; the predicted parameters show good correlation with those observed. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions and for the evaluation of seismic hazards. The results clearly show that only attenuation relations developed for the Himalayan region should be used; attenuation relations based on other regions fail to provide good estimates of the observed ground motion parameters.
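
    Such a comparison boils down to computing residuals between observations and an attenuation relation. Here is a sketch with a generic form ln(PGA) = a + b·M - c·ln(R); all coefficients and records are hypothetical placeholders, not a published Himalayan relation.

```python
import math

# Residual = ln(observed PGA) - ln(predicted PGA). Systematic nonzero residuals
# indicate the attenuation relation does not fit the region, which is the kind
# of check described above.

def predicted_ln_pga(mag, dist_km, a=-3.5, b=1.0, c=1.1):
    return a + b * mag - c * math.log(dist_km)

def residuals(observations, **coeffs):
    """observations: list of (magnitude, distance_km, observed_pga_g)."""
    return [math.log(pga) - predicted_ln_pga(m, r, **coeffs)
            for m, r, pga in observations]

# Hypothetical records from a Mw 7.8 event:
obs = [(7.8, 80.0, 0.16), (7.8, 150.0, 0.07)]
print([round(x, 2) for x in residuals(obs)])
```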

  14. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2015-12-01

    Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being
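
    As an example of the kind of consistency test CSEP runs on forecasts, here is the simple Poisson "N-test" (number test), which asks whether the observed earthquake count is compatible with a forecast's expected count. The counts and rate below are illustrative.

```python
import math

# N-test: under a Poisson assumption with the forecast's expected rate,
# delta1 = P(N >= n_obs) and delta2 = P(N <= n_obs). Very small values of
# either quantile indicate the forecast under- or over-predicts the count.

def poisson_cdf(n_obs, rate):
    return sum(math.exp(-rate) * rate ** k / math.factorial(k)
               for k in range(n_obs + 1))

def n_test(n_obs, forecast_rate):
    delta1 = 1.0 - poisson_cdf(n_obs - 1, forecast_rate) if n_obs > 0 else 1.0
    delta2 = poisson_cdf(n_obs, forecast_rate)
    return delta1, delta2

# 12 earthquakes observed against a forecast expecting 8.5:
d1, d2 = n_test(12, forecast_rate=8.5)
```

    CSEP's full suite adds spatial, magnitude, and likelihood tests on gridded rate forecasts, but they share this structure of comparing observations against the forecast's stated distribution.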

  15. Physically-based modelling of the competition between surface uplift and erosion caused by earthquakes and earthquake sequences.

    NASA Astrophysics Data System (ADS)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2016-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that, above a critical magnitude, an earthquake would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, have not yet been considered. A new seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. We have compared these eroded-volume predictions with co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the most important parameters, compared to fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes are always constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. Moreover, for landscapes that are insufficiently steep, or earthquake sources that are sufficiently deep, earthquakes are predicted to be always constructive, whatever their magnitude. We have explored the long-term topographic contribution of earthquake sequences, with either a Gutenberg-Richter distribution or a repeating, characteristic earthquake magnitude. In these models, the seismogenic layer thickness, which sets the depth range over which the series of earthquakes is distributed, replaces the individual earthquake source depth. We found that in the case of Gutenberg-Richter behavior, relevant for the Himalayan collision for example, the mass balance could remain negative up to Mw~8 for earthquakes with a sub-optimal uplift contribution (e.g., transpressive or gently dipping earthquakes). Our results indicate that earthquakes probably play a more ambivalent role in topographic building than previously anticipated, and suggest that some fault systems may not induce average topographic growth over their locked zone during a

  16. Radon measurements for earthquake prediction along the North Anatolian Fault Zone: a progress report

    USGS Publications Warehouse

    Friedmann, H.; Aric, K.; Gutdeutsch, R.; King, C.-Y.; Altay, C.; Sav, H.

    1988-01-01

    Radon (222Rn) concentration has been continuously measured since 1983 in groundwater at a spring and in subsurface soil gas at five sites along a 200 km segment of the North Anatolian Fault Zone near Bolu, Turkey. The groundwater radon concentration showed a significant increase before the Biga earthquake of magnitude 5.7 on 5 July 1983 at an epicentral distance of 350 km, and a long-term increase between March 1983 and April 1985. The soil-gas radon concentration showed large changes in 1985, apparently not meteorologically induced. The soil-gas and groundwater data at Bolu did not show any obvious correlation. © 1988.

  17. Earthquake and tsunami forecasts: Relation of slow slip events to subsequent earthquake rupture

    PubMed Central

    Dixon, Timothy H.; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-01-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr–Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327

  18. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-02

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.
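The link between rupture area and magnitude invoked above follows from the seismic moment relation M0 = μAD and the moment-magnitude scale. A minimal sketch of that conversion (the function name and the example numbers are illustrative, not taken from the paper):

```python
import math

def moment_magnitude(rupture_area_m2, avg_slip_m, rigidity_pa=3.0e10):
    """Moment magnitude from seismic moment M0 = mu * A * D (N·m)."""
    m0 = rigidity_pa * rupture_area_m2 * avg_slip_m  # seismic moment in N·m
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)     # Kanamori moment-magnitude scale

# Illustrative: a rupture of roughly 100 km x 50 km with ~2 m average slip
# yields Mw ≈ 7.6, comparable to the Costa Rica event discussed above.
mw = moment_magnitude(100e3 * 50e3, 2.0)
```

Shrinking the available rupture area, for example because slow slip has already released strain up-dip and down-dip, directly lowers the attainable Mw, which is the basis of the forecasting argument in the abstract.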

  19. Effects of Predictability of Load Magnitude on the Response of the Flexor Digitorum Superficialis to a Sudden Fingers Extension

    PubMed Central

    Aimola, Ettore; Valle, Maria Stella; Casabona, Antonino

    2014-01-01

    Muscle reflexes, evoked by opposing a sudden joint displacement, may be modulated by several factors associated with the features of the mechanical perturbation. We investigated the variation of the muscle reflex response in relation to the predictability of load magnitude during a reactive grasping task. Subjects were instructed to flex fingers 2–5 very quickly after a stretch was exerted by a handle pulled by loads of 750 or 1250 g. Two blocks of trials, one for each load (predictable condition), and one block of trials with a randomized distribution of the loads (unpredictable condition) were performed. Kinematic data were collected by an electrogoniometer attached to the middle phalanx of digit III, while the electromyography of the Flexor Digitorum Superficialis muscle was recorded by surface electrodes. For each trial we measured the kinematics of the finger angular rotation, the latency of the muscle response, and the level of muscle activation recorded below 50 ms (short-latency reflex), between 50 and 100 ms (long-latency reflex), and between 100 and 140 ms (initial portion of the voluntary response) from the movement onset. We found that the latency of the muscle response lengthened from the predictable (35.5±1.3 ms for 750 g and 35.5±2.5 ms for 1250 g) to the unpredictable condition (43.6±1.3 ms for 750 g and 40.9±2.1 ms for 1250 g), and that the level of muscle activation increased with load magnitude. The parallel increase of muscle activation with load magnitude occurred within the window of the long-latency reflex during the predictable condition, and later, in the earliest portion of the voluntary response, in the unpredictable condition. Therefore, these results indicate that when the amount of an upcoming perturbation is known in advance, the muscle response improves, shortening its latency and modulating the muscle activity in relation to the mechanical demand. PMID:25271638

  20. Holocene behavior of the Brigham City segment: implications for forecasting the next large-magnitude earthquake on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    Personius, Stephen F.; DuRoss, Christopher B.; Crone, Anthony J.

    2012-01-01

    The Brigham City segment (BCS), the northernmost Holocene‐active segment of the Wasatch fault zone (WFZ), is considered a likely location for the next big earthquake in northern Utah. We refine the timing of the last four surface‐rupturing (~Mw 7) earthquakes at several sites near Brigham City (BE1, 2430±250; BE2, 3490±180; BE3, 4510±530; and BE4, 5610±650 cal yr B.P.) and calculate mean recurrence intervals (1060–1500 yr) that are greatly exceeded by the elapsed time (~2500 yr) since the most recent surface‐rupturing earthquake (MRE). An additional rupture observed at the Pearsons Canyon site (PC1, 1240±50 cal yr B.P.) near the southern segment boundary is probably spillover rupture from a large earthquake on the adjacent Weber segment. Our seismic moment calculations show that the PC1 rupture reduced accumulated moment on the BCS by about 22%, a value that may have been enough to postpone the next large earthquake. However, our calculations suggest that the segment currently has accumulated more than twice the moment accumulated in the three previous earthquake cycles, so we suspect that additional interactions with the adjacent Weber segment contributed to the long elapsed time since the MRE on the BCS. Our moment calculations indicate that the next earthquake is not only overdue, but could be larger than the previous four earthquakes. Displacement data show higher rates of latest Quaternary slip (~1.3 mm/yr) along the southern two‐thirds of the segment. The northern third likely has experienced fewer or smaller ruptures, which suggests to us that most earthquakes initiate at the southern segment boundary.
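The moment-budget reasoning in this abstract can be sketched in a few lines: the moment deficit accumulated on a locked fault scales as M0 = μAst for rigidity μ, locked area A, slip rate s, and elapsed time t. The patch dimensions and recurrence interval below are illustrative placeholders, not the paper's values:

```python
def accumulated_moment(fault_area_m2, slip_rate_m_per_yr, years, rigidity_pa=3.0e10):
    """Seismic moment deficit accumulated on a locked fault patch (N·m)."""
    return rigidity_pa * fault_area_m2 * slip_rate_m_per_yr * years

# Hypothetical 35 km x 15 km locked patch loading at 1.3 mm/yr.
area = 35e3 * 15e3
per_cycle = accumulated_moment(area, 1.3e-3, 1200)  # one mean recurrence interval
since_mre = accumulated_moment(area, 1.3e-3, 2500)  # elapsed time since the MRE
ratio = since_mre / per_cycle  # > 2: deficit exceeds two typical cycles
```

Because the rigidity, area, and slip rate cancel, the ratio reduces to the ratio of elapsed time to recurrence interval, which is how an elapsed time of ~2500 yr against recurrence intervals of 1060–1500 yr yields "more than twice" the per-cycle moment.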

  1. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
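The exact CS described here combines conditional means and standard deviations across many causal earthquakes and GMPMs by total probability. The generic mixture combination at its core can be sketched as follows; the deaggregation weights and per-scenario moments in the example are made up for illustration:

```python
def mixture_mean_std(weights, means, stds):
    """Mean and standard deviation of a probability mixture of components.

    weights: deaggregation probabilities of the causal scenarios (sum to 1)
    means, stds: conditional mean and std (e.g., of ln SA) per scenario/GMPM
    """
    mu = sum(w * m for w, m in zip(weights, means))
    # E[X^2] - E[X]^2 taken across the mixture
    var = sum(w * (s**2 + m**2) for w, m, s in zip(weights, means, stds)) - mu**2
    return mu, var**0.5

# Two hypothetical causal scenarios with 70/30 deaggregation weights:
mu, sigma = mixture_mean_std([0.7, 0.3], [-1.0, -0.5], [0.6, 0.6])
```

Note that the mixture standard deviation exceeds each component's when the component means differ, which is why a single-scenario approximation can understate the conditional variability.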

  2. The Virtual Quake earthquake simulator: a simulation-based forecast of the El Mayor-Cucapah region and evidence of predictability in simulated earthquake sequences

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea

    2015-12-01

    In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric, and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

  3. [Classification of substances to predict the order of magnitude of their safe water levels in terms of carcinogenic effect].

    PubMed

    Zholdakova, Z I; Kharchevnikova, N V

    2011-01-01

    A classification has been developed to predict the safe water levels of chemical compounds in terms of their carcinogenic effect. It is based on the LTD10 value, the lower 95% confidence limit on the lowest dose that causes a statistically significant 10% increase in the incidence of cancer in laboratory animals continuously receiving a daily dose of the compound throughout their life, as given in the CPDB internet resource, and on the carcinogenicity classification adopted by the International Agency for Research on Cancer. Based on an analysis of the maximum allowable concentrations (MACs) of the standardized water substances in terms of their carcinogenic effect, the authors determined MAC ranges corresponding to the different classes of the proposed classification. They predicted the orders of magnitude of the MACs of the standardized water substances without taking into account their carcinogenic effect, and those of four substances unstandardized in Russia.

  4. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    NASA Astrophysics Data System (ADS)

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-01

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, which is used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to put into a design format, owing to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution, a pilot study is presented for the evaluation of seismic ground motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE, based on the spectral element method, is adopted. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  5. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    SciTech Connect

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-08

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, which is used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to put into a design format, owing to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution, a pilot study is presented for the evaluation of seismic ground motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE, based on the spectral element method, is adopted. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  6. The nonlinear predictability of the electrotelluric field variations data analyzed with support vector machines as an earthquake precursor.

    PubMed

    Ifantis, A; Papadimitriou, S

    2003-10-01

    This work investigates the nonlinear predictability of electrotelluric field (ETF) variations data in order to develop new intelligent tools for the difficult task of earthquake prediction. Support vector machines trained on a signal window have been used to predict the next sample. We observe a significant increase in the short-term unpredictability of the ETF signal about two weeks before major earthquakes that took place in regions near the recording devices. The increase in unpredictability can be attributed to a rapid variation of the dynamics that produce the ETF signal, due to the earthquake generation process. Thus, this increase can be exploited to signal an increased possibility of a large earthquake within the next few days in the region neighboring the recording station.

  7. Earthquake prediction using extinct monogenetic volcanoes: A possible new research strategy

    NASA Astrophysics Data System (ADS)

    Szakács, Alexandru

    2011-04-01

    Volcanoes are extremely effective transmitters of matter, energy and information from the deep Earth towards its surface. Their capacities as information carriers are far from fully exploited so far. Volcanic conduits can be viewed in general as rod-like or sheet-like vertical features with relatively homogeneous composition and structure, crosscutting geological structures of far greater complexity and compositional heterogeneity. Information-carrying signals, such as earthquake precursor signals originating deep below the Earth's surface, are transmitted with much less loss of information through homogeneous vertically extended structures than through the horizontally segmented heterogeneous lithosphere or crust. Volcanic conduits can thus be viewed as upside-down "antennas" or waveguides which can be used as privileged pathways for any possible earthquake precursor signal. In particular, conduits of monogenetic volcanoes are promising transmitters of deep Earth information to be received and decoded at surface monitoring stations, because of the expectedly more homogeneous nature of their rock fill as compared to polygenetic volcanoes. Among monogenetic volcanoes, those with dominantly effusive activity appear to be the best candidates for privileged earthquake monitoring sites. In more detail, effusive monogenetic volcanic conduits filled with rocks of primitive parental magma composition, indicating direct ascent from sub-lithospheric magma-generating areas, are the most suitable. Further selection criteria may include the age of the volcanism considered and the presence of mantle xenoliths in surface volcanic products, indicating a direct and straightforward link between the deep lithospheric mantle and the surface through the conduit.
Innovative earthquake prediction research strategies can be based and developed on these grounds, by considering conduits of selected extinct monogenetic volcanoes and deep trans-crustal fractures as privileged emplacement sites for seismic monitoring stations.

  8. Nowcasting earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.; Donnellan, A.; Grant Ludwig, L.; Luginbuhl, M.; Gong, G.

    2016-11-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, relying on "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large earthquake cycles in the region. We then compute the earthquake potential score (EPS), defined as the cumulative probability distribution P(n < n(t)) for the current count n(t) of small earthquakes in the region. From the count of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)). EPS is therefore the current level of hazard and assigns a number between 0% and 100% to every region so defined, thus providing a unique measure. Physically, the EPS corresponds to an estimate of the level of progress through the earthquake cycle in the defined region at the current time.
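The EPS defined above is an empirical cumulative probability over past cycles. A minimal sketch, assuming the count of small earthquakes in each completed large-earthquake cycle is already tabulated (all numbers illustrative):

```python
def earthquake_potential_score(past_cycle_counts, current_count):
    """EPS = P(n < n(t)): fraction of past large-earthquake cycles whose
    small-earthquake count stayed below the count in the current cycle."""
    below = sum(1 for n in past_cycle_counts if n < current_count)
    return below / len(past_cycle_counts)

# The method calls for 20+ cycles; five are shown here for brevity.
history = [120, 80, 200, 150, 95]
eps = earthquake_potential_score(history, 160)  # 4 of 5 past cycles were below 160
```

A high EPS means the current small-earthquake count already exceeds most historical cycles, i.e., the region is far along its earthquake cycle.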

  9. Risk Communication on Earthquake Prediction Studies -"No L'Aquila quake risk" experts probed in Italy in June 2010

    NASA Astrophysics Data System (ADS)

    Oki, S.; Koketsu, K.; Kuwabara, E.; Tomari, J.

    2010-12-01

    For the 6 months preceding the L'Aquila earthquake, which occurred on 6 April 2009, the seismicity in the region had been active. After it became even more active and reached a magnitude 4 earthquake on 30 March, the government convened the Major Risks Committee, which is part of the Civil Protection Department and is tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. At the press conference immediately after the committee meeting, they reported that "The scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favorable." Six days later, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3 June of the following year, prosecutors opened an investigation after complaints from the victims that far more people would have fled their homes that night had there been no reassurances from the Major Risks Committee the previous week. The issue became widely known to the seismological community especially after an email titled "Letter of Support for Italian Earthquake Scientists", from seismologists at the National Geophysics and Volcanology Institute (INGV), was sent worldwide. It says that the L'Aquila Prosecutor's office indicted the members of the Major Risks Committee for manslaughter, and that the charges are for failing to provide a short-term alarm to the population before the earthquake struck. It is true that there is no generalized method to predict earthquakes, but failing to issue a short-term alarm is not the reason for the investigation of the scientists. The chief prosecutor stated that "the committee could have provided the people with better advice", and "it wasn't the case that they did not receive any warnings, because there had been tremors".
The email also requests sign-on support for an open letter to the president of Italy from Earth sciences colleagues from all over the world, and it collected more than 5000 signatures.

  10. Correlations and Non-predictability in the Time Evolution of Earthquake Ruptures

    NASA Astrophysics Data System (ADS)

    Elkhoury, J. E.; Knopoff, L.

    2007-12-01

    The characterization of the time evolution of ruptures is one of the important aspects of the earthquake process. What makes a rupture that starts small become a big one, or end very quickly and result in a small earthquake, is central to understanding the physics of the time evolution of ruptures. Establishing whether there are any correlations in time between the initiation of the rupture and its ultimate size is a step in the right direction. Here, we analyze three source-time function data sets. The first is produced by the generation of repeated rupture events on a 2D heterogeneous, in-plane, dynamical model; the second is produced by an age-dependent critical branching model; the third is the source-time function database of Ruff [1]. We formulate the problem in terms of two questions. 1) Are there any correlations between the moment release at the beginning of the rupture and the total moment release during the entire rupture? 2) Can we predict the final size of an earthquake, once it has started and without any a posteriori information, by just knowing the moment release up to a certain time τ? Using the three databases, the answer to the first question is yes, and no to the second. The longer τ is, the stronger the correlations are between what goes on at the initiation and the final size. But for a fixed τ that is not a major fraction of the rupture time, there is no predictability of the rupture size. In particular, if a rupture starts with a very large moment release during time τ, it becomes a large earthquake. On the other hand, large earthquakes might start with very small moment release during τ; the non-predictability is due to the heterogeneities. The randomness in the critical branching model mimics the effect of the heterogeneities in the crust and in the 2D model. [1] Ruff, L. J., http://www.geo.lsa.umich.edu/SeismoObs/STF.html

  11. Southern San Andreas Fault seismicity is consistent with the Gutenberg-Richter magnitude-frequency distribution

    USGS Publications Warehouse

    Page, Morgan T.; Felzer, Karen

    2015-01-01

    The magnitudes of any collection of earthquakes nucleating in a region are generally observed to follow the Gutenberg-Richter (G-R) distribution. On some major faults, however, paleoseismic rates are higher than a G-R extrapolation from the modern rate of small earthquakes would predict. This, along with other observations, led to formulation of the characteristic earthquake hypothesis, which holds that the rate of small to moderate earthquakes is permanently low on large faults relative to the large-earthquake rate (Wesnousky et al., 1983; Schwartz and Coppersmith, 1984). We examine the rate difference between recent small to moderate earthquakes on the southern San Andreas fault (SSAF) and the paleoseismic record, hypothesizing that the discrepancy can be explained as a rate change in time rather than a deviation from G-R statistics. We find that with reasonable assumptions, the rate changes necessary to bring the small and large earthquake rates into alignment agree with the size of rate changes seen in epidemic-type aftershock sequence (ETAS) modeling, where aftershock triggering of large earthquakes drives strong fluctuations in the seismicity rates for earthquakes of all magnitudes. The necessary rate changes are also comparable to rate changes observed for other faults worldwide. These results are consistent with paleoseismic observations of temporally clustered bursts of large earthquakes on the SSAF and the absence of M ≥ 7 earthquakes on the SSAF since 1857.
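The G-R extrapolation at issue here is log10 N(≥M) = a − bM, where N is the annual rate of events at or above magnitude M. A small sketch of the extrapolation (the a and b values are illustrative, not the SSAF's):

```python
def gr_rate(a, b, magnitude):
    """Annual rate of events with magnitude >= M under Gutenberg-Richter:
    log10 N = a - b*M."""
    return 10.0 ** (a - b * magnitude)

# Illustrative: a = 4.0, b = 1.0 predicts one M >= 7 event per ~1000 yr.
rate_m7 = gr_rate(4.0, 1.0, 7.0)
recurrence_yr = 1.0 / rate_m7
```

When a paleoseismic large-earthquake rate exceeds this extrapolated rate, one can either abandon G-R for that fault (the characteristic-earthquake hypothesis) or, as this paper argues, allow the whole curve to shift up and down in time.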

  12. Development of a New Approach to Earthquake Prediction: Load/Unload Response Ratio (LURR) Theory

    NASA Astrophysics Data System (ADS)

    Yin, X. C.; Wang, Y. C.; Peng, K. Y.; Bai, Y. L.; Wang, H. T.; Yin, X. F.

    The seismogenic process is nonlinear and irreversible, so the response to loading differs from that to unloading. This difference reflects the damage of a loaded material. Based on this insight, a new parameter, the load/unload response ratio (LURR), was proposed more than ten years ago to quantitatively measure the proximity to rock failure and earthquakes. In the present paper, we review the fundamental concept of LURR, the validation of LURR with experiments and numerical simulation, the retrospective examination of LURR with new cases in different tectonic settings (California, USA, and the Kanto region, Japan), the statistics of earthquake prediction in terms of LURR theory, and the random distribution of LURR under Poisson's model. Finally, we discuss LURR as a parameter for judging the degree of closeness of the system to a self-organized critical (SOC) state, and the measurement of tidal triggering of earthquakes. The LURR theory was first proposed in 1984 (Yin, 1987). Subsequently, a series of advances were made (Yin and Yin, 1991; Yin, 1993; Yin et al., 1994a, b, 1995; Maruyama, 1995). In this paper, the new results after 1995 are summarized (Yin et al., 1996; Wang et al., 1998a; Zhuang and Yin, 1999).

  13. Thermal infrared anomalies of several strong earthquakes.

    PubMed

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes up to Ms 7.0, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall behavior of the anomalies includes two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of "time-frequency relative power spectrum." (2) There are evident and distinct characteristic periods and magnitudes of anomalous thermal radiation for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has obvious characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful earthquake precursor in earthquake prediction and forecasting.

  14. Predicting variations of the least principal stress magnitudes in shale gas reservoirs utilizing variations of viscoplastic properties

    NASA Astrophysics Data System (ADS)

    Sone, H.; Zoback, M. D.

    2013-12-01

    Predicting variations in the magnitude of the least principal stress within unconventional reservoirs has significant practical value, as these reservoirs require stimulation by hydraulic fracturing. It is common to approach this problem by calculating the horizontal stresses caused by uniaxial gravitational loading using log-derived linear elastic properties of the formation and adding an arbitrary tectonic strain (or stress). We propose a new method for estimating stress magnitudes in shale gas reservoirs based on the principles of viscous relaxation and steady-state tectonic loading. Laboratory experiments show that shale gas reservoir rocks exhibit a wide range of viscoplastic behavior, controlled most strongly by composition, and that their stress relaxation is described by a simple power-law (in time) rheology. We demonstrate that a reasonable profile of the principal stress magnitudes can be obtained from geophysical logs by utilizing (1) the laboratory power-law constitutive law, (2) a reasonable estimate of the tectonic loading history, and (3) the assumption that the stress ratio ([S2-S3]/[S1-S3]) remains constant during stress relaxation between all principal stresses. Profiles of horizontal stress differences (SHmax-Shmin) generated with our method for a vertical well in the Barnett shale (Ft. Worth basin, Texas) generally agree with the occurrence of drilling-induced tensile fractures in the same well. Also, the decrease in the least principal stress (frac gradient) upon entering the limestone formation underlying the Barnett shale appears to explain the downward propagation of the hydraulic fractures observed in the region. Our approach better acknowledges the time-dependent geomechanical effects that can occur over the course of the geological history. The proposed method may prove particularly useful for understanding hydraulic fracture containment within targeted reservoirs.

  15. Predicted reversal and recovery of surface creep on the Hayward fault following the 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Schmidt, D. A.; Bürgmann, R.

    2008-10-01

    Offset cultural features suggest that creep rates along the Hayward fault remained constant from 1920 until the 1989 Loma Prieta earthquake, despite evidence in the earthquake record of an enduring stress shadow after 1906. We reconstruct the stressing history on the Hayward fault in order to predict when creep, assumed to have slowed, likely resumed at historical rates. The resumption of creep depends on the stressing history imposed by postseismic processes. Basic viscoelastic models produce stress histories that allow creep to resume within a couple of decades. A detachment-zone model for the Bay Area predicts that creep would not resume for 70+ years after the 1906 earthquake, in disagreement with historical creep observations. The recovery of creep is also advanced by potential left-lateral slip that could have been induced by the 1906 earthquake. Calculations for a frictionless fault suggest that 30-210 mm of left-lateral slip could have occurred.

  16. The potential of continuous, local atomic clock measurements for earthquake prediction and volcanology

    NASA Astrophysics Data System (ADS)

    Bondarescu, Mihai; Bondarescu, Ruxandra; Jetzer, Philippe; Lundgren, Andrew

    2015-05-01

    Modern optical atomic clocks, along with the optical fiber technology currently being developed, can measure the geoid, the equipotential surface that extends mean sea level onto the continents, to a precision that competes with existing technology. In this proceeding, we point out that atomic clocks have the potential not only to map the sea-level surface on continents, but also to track variations of the geoid as a function of time with unprecedented temporal resolution. A local time series of the geoid has a plethora of applications, including potential improvements in the prediction of earthquakes and volcanic eruptions, and closer monitoring of ground uplift in areas where hydraulic fracturing is performed.

  17. Statistical Evaluation of Efficiency and Possibility of Earthquake Predictions with Gravity Field Variation and its Analytic Signal in Western China

    NASA Astrophysics Data System (ADS)

    Chen, Shi; Jiang, Changsheng; Zhuang, Jiancang

    2016-01-01

    This paper assesses gravity variations as precursors for earthquake prediction in the Tibet (Xizang)-Qinghai-Xinjiang-Sichuan region of western China, taking a statistical approach to evaluate the efficiency and feasibility of such predictions. We used the most recent spatiotemporal gravity field variation datasets (2002-2008) for the region, provided by the Crustal Movement Observation Network of China; the datasets are sparse in space and discrete in time. In 2007-2010, 13 earthquakes (Ms > 6.0) occurred in the region. Molchan error diagram tests, which lead to alarms over a good fraction of space-time, show that the observed gravity variations are statistically correlated with the occurrence of these earthquakes. The results show that the prediction efficiency of the amplitude of the analytic signal of the gravity variations is better than that of a seismicity-rate model, the THD, and the absolute value of the gravity variation, implying that gravity variations before an earthquake may carry precursory information about future large earthquakes.
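    A Molchan error diagram trades the fraction of space-time occupied by alarms against the fraction of target earthquakes missed. The sketch below is a schematic illustration of that bookkeeping with toy data; the scoring function, cell discretization, and event placement are hypothetical, not the study's.

```python
import numpy as np

def molchan_curve(alarm_scores, event_cells):
    """Molchan error diagram: for each alarm threshold, record the
    fraction of space-time cells under alarm (tau) against the
    fraction of target earthquakes missed (nu)."""
    order = np.argsort(alarm_scores)[::-1]      # rank cells by precursor strength
    n_cells = len(alarm_scores)
    n_events = event_cells.sum()
    tau, nu = [0.0], [1.0]                      # no alarms -> all events missed
    hits = 0
    for k, idx in enumerate(order, start=1):
        hits += event_cells[idx]
        tau.append(k / n_cells)
        nu.append(1.0 - hits / n_events)
    return np.array(tau), np.array(nu)

# toy example: 100 space-time cells, 5 of them containing target events
rng = np.random.default_rng(0)
scores = rng.random(100)
events = np.zeros(100, dtype=int)
events[np.argsort(scores)[-5:]] = 1             # perfectly informative precursor
tau, nu = molchan_curve(scores, events)
```

    A skill-free precursor tracks the diagonal nu = 1 - tau; curves below the diagonal indicate predictive value.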

  18. Shaken, not stirred: a serendipitous study of ants and earthquakes.

    PubMed

    Lighton, John R B; Duncan, Frances D

    2005-08-01

    There is anecdotal evidence for profound behavioral changes prior to and during earthquakes in many organisms, including arthropods such as ants. Behavioral or physiological analysis has often, in light of these reports, been proposed as a means of earthquake prediction. We report here a serendipitous study of the effect of the powerful Landers earthquake in the Mojave Desert, USA (Richter magnitude 7.4) on ant trail dynamics and aerobic catabolism in the desert harvester ant Messor pergandei. We monitored trail traffic rates to and from the colony, trail speed, worker mass distributions, rates of aerobic catabolism and temperature at ant height before and during the earthquake, and for 3 days after the earthquake. Contrary to anecdotal reports of earthquake effects on ant behavior, the Landers earthquake had no effect on any measured aspect of the physiology or behavior of M. pergandei. We conclude that anecdotal accounts of the effects of earthquakes or their precursors on insect behavior should be interpreted with caution.

  19. Earthquake Scaling, Simulation and Forecasting

    NASA Astrophysics Data System (ADS)

    Sachs, Michael Karl

    Earthquakes are among the most devastating natural events faced by society. In 2011, just two events, the magnitude 6.3 earthquake in Christchurch, New Zealand, on February 22 and the magnitude 9.0 Tohoku earthquake off the coast of Japan on March 11, caused a combined total of $226 billion in economic losses. Over the last decade, 791,721 deaths were caused by earthquakes. Yet, despite their impact, our ability to accurately predict when earthquakes will occur is limited. This is due, in large part, to the fact that the fault systems that produce earthquakes are non-linear: very small differences in their present state lead to very large differences in their future behavior, making forecasting difficult. In spite of this, there are patterns in earthquake data, often in the form of frequency-magnitude scaling relations that relate the number of smaller events observed to the number of larger events observed. In many cases these scaling relations show consistent behavior over a wide range of scales, and this consistency forms the basis of most forecasting techniques. However, the utility of these scaling relations is limited by the size of earthquake catalogs, which, especially in the case of large events, are fairly small and limited to a few hundred years of events. In this dissertation I discuss three areas of earthquake science. The first is an overview of scaling behavior in a variety of complex systems, both models and natural systems, with a focus on understanding how this scaling behavior breaks down. The second is a description of the development and testing of an earthquake simulator called Virtual California, designed to extend the observed catalog of earthquakes in California; this simulator uses novel techniques borrowed from statistical physics to enable the modeling of large fault systems over long periods of time. The third is an evaluation of existing earthquake forecasts, which focuses on the Regional
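    The frequency-magnitude scaling relation referred to here is usually the Gutenberg-Richter law, log10 N(>=M) = a - b*M. A quick sketch of recovering the b-value from a synthetic catalog with the maximum-likelihood (Aki) estimator; all numbers are synthetic:

```python
import numpy as np

# Gutenberg-Richter: log10 N(>=M) = a - b*M, i.e. magnitudes above the
# completeness threshold m_min follow an exponential law with rate b*ln(10).
rng = np.random.default_rng(1)
b_true, m_min = 1.0, 3.0
mags = m_min + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=50_000)

# Aki (1965) maximum-likelihood estimator of the b-value
b_hat = np.log10(np.e) / (mags.mean() - m_min)
```

    With tens of thousands of events the estimator recovers b to within a few percent; real catalogs, being far smaller above large magnitudes, give much noisier estimates, which is the limitation the paragraph above describes.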

  20. Accounts of damage from historical earthquakes in the northeastern Caribbean to aid in the determination of their location and intensity magnitudes

    USGS Publications Warehouse

    Flores, Claudia H.; ten Brink, Uri S.; Bakun, William H.

    2012-01-01

    Documentation of a past event depended on the population and political trends of the island, and the availability of historical documents is limited by each archive's digitization schedule and copyright rules. Examples of documents accessed are governors' letters, newspapers, and other circulars published within the Caribbean, North America, and Western Europe. Key words were used to search for publications that contain eyewitness accounts of various large earthquakes. Finally, this catalog provides descriptions of damage to buildings, used in previous studies for the estimation of intensity magnitude (MI) and location of significantly damaging or felt earthquakes in Hispaniola and in the northeastern Caribbean, all of which have been described in other studies.

  1. NGA-West2 equations for predicting vertical-component PGA, PGV, and 5%-damped PSA from shallow crustal earthquakes

    USGS Publications Warehouse

    Stewart, Jonathan P.; Boore, David M.; Seyhan, Emel; Atkinson, Gail M.

    2016-01-01

    We present ground motion prediction equations (GMPEs) for computing natural log means and standard deviations of vertical-component intensity measures (IMs) for shallow crustal earthquakes in active tectonic regions. The equations were derived from a global database with M 3.0–7.9 events. The functions are similar to those for our horizontal GMPEs. We derive equations for the primary M- and distance-dependence of peak acceleration, peak velocity, and 5%-damped pseudo-spectral accelerations at oscillator periods between 0.01–10 s. We observe pronounced M-dependent geometric spreading and region-dependent anelastic attenuation for high-frequency IMs. We do not observe significant region-dependence in site amplification. Aleatory uncertainty is found to decrease with increasing magnitude; within-event variability is independent of distance. Compared to our horizontal-component GMPEs, attenuation rates are broadly comparable (somewhat slower geometric spreading, faster apparent anelastic attenuation), VS30-scaling is reduced, nonlinear site response is much weaker, within-event variability is comparable, and between-event variability is greater.
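    A GMPE of this family predicts the natural-log mean of an intensity measure as a function of magnitude and distance. The functional form below is a toy illustration of magnitude scaling, M-dependent geometric spreading, and anelastic attenuation; the coefficients are invented for illustration and are not the NGA-West2 values.

```python
import numpy as np

def ln_pga(M, R_rup, c0=-1.5, c1=0.9, c2=1.2, c3=0.004):
    """Toy GMPE: ln median PGA (g) with magnitude scaling, M-dependent
    geometric spreading, and anelastic attenuation.  Coefficients are
    illustrative placeholders, not the published model's."""
    h = 5.0                                       # pseudo-depth term (km)
    R = np.sqrt(R_rup**2 + h**2)
    geom = -(c2 - 0.1 * (M - 6.0)) * np.log(R)    # M-dependent spreading
    return c0 + c1 * (M - 6.0) + geom - c3 * R    # linear-R anelastic term
```

    In a full GMPE the standard deviation is modeled alongside the mean and split into between-event and within-event parts, which is the aleatory structure the abstract describes.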

  2. Focal mechanisms and moment magnitudes of micro-earthquakes in central Brazil by waveform inversion with quality assessment and inference of the local stress field

    NASA Astrophysics Data System (ADS)

    Carvalho, Juraci; Barros, Lucas Vieira; Zahradník, Jiří

    2016-11-01

    This paper investigates the use of full-waveform inversion to retrieve focal mechanisms of 11 micro-earthquakes (Mw 0.8 to 1.4). The events are aftershocks of an mb 5.0 earthquake that occurred on October 8, 2010 close to the city of Mara Rosa in the state of Goiás, Brazil. The main contribution of the work lies in demonstrating the feasibility of waveform inversion for such weak events. The inversion was made possible by recordings from 8 temporary seismic stations at epicentral distances of less than 8 km, whose waveforms can be successfully modeled at relatively high frequencies (1.5-2.0 Hz). On average, the fault-plane solutions obtained are in agreement with a composite focal mechanism previously calculated from first-motion polarities. They also agree with the fault geometry inferred from precise relocation of the Mara Rosa aftershock sequence. The focal mechanisms provide an estimate of the local stress field. This paper serves as a pilot study for similar investigations in intraplate regions, where stress-field investigations are difficult due to rare earthquake occurrence and weak events must be studied with a detailed quality assessment.

  3. [Comment on "Exaggerated claims about earthquake predictions: Analysis of NASA's method"] Pattern informatics and cellular seismology: A comparison of methods

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Tiampo, Kristy F.; Klein, William

    2007-06-01

    The recent article in Eos by Kafka and Ebel [2007] is a criticism of a NASA press release issued on 4 October 2004 describing an earthquake forecast (http://quakesim.jpl.nasa.gov/scorecard.html) based on a pattern informatics (PI) method [Rundle et al., 2002]. This 2002 forecast was a map indicating the probable locations of earthquakes having magnitude m>5.0 that would occur over the period of 1 January 2000 to 31 December 2009. Kafka and Ebel [2007] compare the Rundle et al. [2002] forecast to a retrospective analysis using a cellular seismology (CS) method. Here we analyze the performance of the Rundle et al. [2002] forecast using the first 15 of the m>5.0 earthquakes that occurred in the area covered by the forecasts.

  4. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes

    PubMed Central

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and of the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability, and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with a higher proportion of at-risk populations were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors in earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could decrease the expected number of casualties. PMID:26959647
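    Folding effect measures from a meta-analysis into a baseline loss-model probability amounts to an adjustment on the logit scale, which is what a logistic-regression integration does. A schematic sketch; the baseline probability and odds ratios are hypothetical, not the study's values:

```python
import math

def adjusted_casualty_prob(p_base, odds_ratios):
    """Adjust a baseline (HAZUS-type) casualty probability on the logit
    scale by multiplying in odds ratios for human-related risk factors.
    Assumes the factors act multiplicatively on the odds."""
    logit = math.log(p_base / (1.0 - p_base))
    logit += sum(math.log(r) for r in odds_ratios)
    return 1.0 / (1.0 + math.exp(-logit))

# e.g. a hypothetical cell with elderly (OR 1.8) low-SES (OR 1.4) residents
p_adj = adjusted_casualty_prob(0.01, [1.8, 1.4])
```

    Summing over building-damage cells weighted by exposed population then yields the total casualty estimate, the quantity the sensitivity analysis compares against the traditional model.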

  5. Comment on "The directionality of acoustic T-phase signals from small magnitude submarine earthquakes" [J. Acoust. Soc. Am. 119, 3669-3675 (2006)].

    PubMed

    Bohnenstiehl, Delwayne R

    2007-03-01

    In a recent paper, Chapman and Marrett [J. Acoust. Soc. Am. 119, 3669-3675 (2006)] examined the tertiary (T-) waves associated with three subduction-related earthquakes within the South Fiji Basin. In that paper it is argued that acoustic energy is radiated into the sound channel by downslope propagation along abyssal seamounts and ridges that lie distant to the epicenter. A reexamination of the travel-time constraints indicates that this interpretation is not well supported. Rather, the propagation model that is described would require the high-amplitude T-wave components to be sourced well to the east of the region identified, along a relatively flat-lying seafloor.

  6. Paleoseismic investigations in the Santa Cruz mountains, California: Implications for recurrence of large-magnitude earthquakes on the San Andreas fault

    USGS Publications Warehouse

    Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.

    1998-01-01

    Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that

  7. Generalized Free-Surface Effect and Random Vibration Theory: a new tool for computing moment magnitudes of small earthquakes using borehole data

    NASA Astrophysics Data System (ADS)

    Malagnini, Luca; Dreger, Douglas S.

    2016-07-01

    Although optimal, computing the moment tensor solution is not always a viable option for calculating the size of an earthquake, especially for small events (say, below Mw 2.0). Here we show an alternative approach to the calculation of the moment-rate spectra of small earthquakes, and thus of their scalar moments, that uses a network-based calibration of crustal wave propagation. The method works best when applied to a relatively small crustal volume containing both the seismic sources and the recording sites. In this study we present the calibration of the crustal volume monitored by the High-Resolution Seismic Network (HRSN), along the San Andreas Fault (SAF) at Parkfield. After quantifying the attenuation parameters within the crustal volume under investigation, we proceed to the spectral correction of the observed Fourier amplitude spectra for the 100 largest events in our data set. Multiple estimates of seismic moment for all events (1811 events in total) are obtained by calculating the ratio of rms-averaged spectral quantities based on the peak values of the ground velocity in the time domain, as observed in narrowband-filtered time series. The mathematical operations underlying these spectral ratios are obtained from Random Vibration Theory (RVT). Due to the optimal signal-to-noise conditions of the HRSN, our network-based calibration allows the accurate calculation of seismic moments down to Mw < 0. However, because the HRSN is equipped only with borehole instruments, we define a frequency-dependent Generalized Free-Surface Effect (GFSE), to be used instead of the usual free-surface constant F = 2. Our spectral corrections at Parkfield require a different GFSE for each side of the SAF, which can be quantified by means of the analysis of synthetic seismograms. The importance of the GFSE of borehole instruments increases with decreasing earthquake size because, for smaller earthquakes, the bandwidth available

  8. Prediction of maximum earthquake intensities for the San Francisco Bay region

    USGS Publications Warehouse

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

    The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and on the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units, intensity increments derived with respect to this empirical relation correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA). Average intensity increments for the various geologic units are -0.29 for granite, 0.19 for the Franciscan Formation, 0.64 for the Great Valley sequence, 0.82 for the Santa Clara Formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
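    The two empirical relations quoted above combine directly into a predicted intensity from distance and site amplification. A direct transcription, taking "log" as log10 (the usual convention for such relations):

```python
import math

# Empirical 1906-intensity relations quoted in the abstract.
def base_intensity(distance_km):
    """Intensity on Franciscan Formation rock vs. fault-perpendicular distance."""
    return 2.69 - 1.90 * math.log10(distance_km)

def intensity_increment(ahsa):
    """Site-dependent increment from Average Horizontal Spectral Amplification."""
    return 0.27 + 2.70 * math.log10(ahsa)

# the quoted mean increments span -0.29 (granite) to 2.43 (bay mud),
# so site geology alone shifts predicted intensity by ~2.7 units
granite_vs_mud = 2.43 - (-0.29)
```

    The 2.7-unit spread between granite and bay mud dwarfs the 1.9-unit change produced by a tenfold increase in distance, which is why the predicted maximum-intensity map is dominated by surficial geology.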

  9. Holocene record of slip-predictable earthquakes on the Kenchreai Fault, Gulf of Corinth, Greece

    NASA Astrophysics Data System (ADS)

    Koukouvelas, Ioannis K.; Zygouri, Vasiliki; Papadopoulos, Gerasimos A.; Verroios, Sotiris

    2017-01-01

    We present the Quaternary slip history of the Kenchreai Fault, Gulf of Corinth, based on geomorphological, palaeoseismological, and geo-archaeological data together with events documented in historical sources. We also applied a series of geomorphic indices, such as the hypsometric curve, the asymmetry factor, the stream length-gradient index (SL), the valley floor width to valley height ratio (Vf), the drainage basin shape (Bs), and the mountain-front sinuosity (Smf), to drainage basins flowing perpendicular to the fault. These indices are representative of longer time periods and behave as follows: values of SL are relatively high close to the fault trace; Smf values range from 1.01 to 1.85; Vf mean values range between 0.29 and 1.07; and Bs values range from 1.16 to 4.78. Lateral fault growth was likely achieved by propagation primarily towards the east, while the fault's western end appears to act as a persistent barrier. The Holocene palaeoseismic history of the fault, investigated by a palaeoseismological trench and 14C dating of ten samples, suggests four linear morphogenic earthquakes in the last 10 ka. The Kenchreai Fault displays a Holocene slip rate on the order of 0.15 mm a-1 and a recurrence interval ranging between 1300 and 4700 years. The fault is thus classified as one of moderate activity, with moderate to well-developed geomorphic evidence of activity, and its behavior fits an overall slip-predictable earthquake model.
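    Two of the indices used here have simple standard definitions (after Bull and McFadden): mountain-front sinuosity Smf = Lmf/Ls and the valley floor width-to-height ratio Vf. A sketch with made-up survey numbers:

```python
def mountain_front_sinuosity(l_front_km, l_straight_km):
    """Smf: length measured along the mountain front divided by the
    straight-line length of the front; values near 1 indicate an
    active, fault-controlled front."""
    return l_front_km / l_straight_km

def valley_floor_ratio(vfw_m, eld_m, erd_m, esc_m):
    """Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc)), from valley-floor width
    and the elevations of the left divide, right divide, and valley floor.
    Low values indicate V-shaped, actively incising valleys."""
    return 2.0 * vfw_m / ((eld_m - esc_m) + (erd_m - esc_m))
```

    For example, a 100 m wide valley floor at 200 m elevation between divides at 500 m and 400 m gives Vf = 0.4, within the 0.29-1.07 range reported above.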

  10. Earthquake Hazards.

    ERIC Educational Resources Information Center

    Donovan, Neville

    1979-01-01

    Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)

  11. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: the Poisson, geometric, logarithmic, and negative binomial (NBD) distributions. The theoretical model is the `birth and immigration' population process. The first three distributions can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of the earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD; most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss the advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of applicability of the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters; the second parameter can be used to characterize the clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and for their subdivisions into various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict the future earthquake number distribution in regions where very large earthquakes have not yet occurred.
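    The Poisson-gamma representation mentioned above can be demonstrated numerically: drawing a gamma-distributed rate per window and then Poisson counts given that rate yields negative-binomial counts, which are overdispersed (variance exceeding the mean). The (r, p) values here are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(2)

# NBD as a Poisson-gamma mixture: rate lambda ~ Gamma(shape=r, scale=(1-p)/p),
# counts ~ Poisson(lambda)  =>  counts ~ NegativeBinomial(r, p)
r, p = 2.0, 0.3
rates = rng.gamma(shape=r, scale=(1 - p) / p, size=200_000)
counts = rng.poisson(rates)

# theoretical NBD moments: mean r(1-p)/p, variance r(1-p)/p**2
mean_th = r * (1 - p) / p
var_th = r * (1 - p) / p**2
```

    The variance/mean ratio 1/p > 1 is the clustering (overdispersion) signature that the one-parameter Poisson model cannot capture.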

  12. Heart rate and heart rate variability assessment identifies individual differences in fear response magnitudes to earthquake, free fall, and air puff in mice.

    PubMed

    Liu, Jun; Wei, Wei; Kuang, Hui; Tsien, Joe Z; Zhao, Fang

    2014-01-01

    Fear behaviors and fear memories in rodents have traditionally been assessed by the amount of freezing upon the presentation of conditioned cues or unconditioned stimuli. However, many experiences, such as encountering earthquakes or accidentally falling from tree branches, may produce long-lasting fear memories but are behaviorally difficult to measure using freezing parameters. Here, we have examined changes in heartbeat interval dynamics as a physiological readout for assessing fearful reactions as mice were subjected to sudden air puff, free-fall drop inside a small elevator, and a laboratory-version earthquake. We showed that these fearful events rapidly increased heart rate (HR) with a simultaneous reduction of heart rate variability (HRV). Cardiac changes can be analyzed in further detail by measuring three distinct phases: the rapid rising phase in HR, the maximum plateau phase during which HRV is greatly decreased, and the recovery phase during which HR gradually returns to baseline values. We showed that the duration of the maximum plateau phase and the HR recovery speed were quite sensitive to habituation over repeated trials. Moreover, we have developed a fear resistance index based on specific cardiac response features. We demonstrated that the fear resistance index remained largely consistent across distinct fearful events in a given animal, thereby enabling us to compare and rank individual mice's fear responsiveness within the group. Therefore, the fear resistance index described here can represent a useful parameter for measuring personality traits or individual differences in stress susceptibility in both wild-type mice and post-traumatic stress disorder (PTSD) models.
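    HR and HRV are both derived from the series of beat-to-beat (RR) intervals. A minimal sketch using RMSSD, a standard time-domain HRV measure (the interval series below is invented for illustration, and the authors' index is built from additional phase features):

```python
import numpy as np

def hr_and_rmssd(rr_ms):
    """Heart rate (bpm) and RMSSD (ms) from beat-to-beat intervals in ms.
    RMSSD is the root mean square of successive interval differences."""
    rr = np.asarray(rr_ms, dtype=float)
    hr = 60_000.0 / rr.mean()
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return hr, rmssd

# the fear-response pattern described above: HR up, HRV down
baseline = [600, 610, 590, 605, 595]   # ~100 bpm, variable beats
startled = [480, 482, 479, 481, 480]   # faster, nearly uniform beats
hr_b, hrv_b = hr_and_rmssd(baseline)
hr_s, hrv_s = hr_and_rmssd(startled)
```

    In the study's terms, the plateau phase corresponds to the window where the second quantity stays suppressed, and recovery speed is how quickly the first returns toward baseline.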

  13. The 26 January 2001 M 7.6 Bhuj, India, earthquake: Observed and predicted ground motions

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.; Bilham, R.; Atkinson, G.M.

    2002-01-01

    Although local and regional instrumental recordings of the devastating 26 January 2001 Bhuj earthquake are sparse, the distribution of macroseismic effects can provide important constraints on the mainshock ground motions. We compiled available news accounts describing damage and other effects and interpreted them to obtain modified Mercalli intensities (MMIs) at >200 locations throughout the Indian subcontinent. These values were then used to map the intensity distribution throughout the subcontinent using a simple mathematical interpolation method. Although preliminary, the maps reveal several interesting features. Within the Kachchh region, the most heavily damaged villages are concentrated toward the western edge of the inferred fault, consistent with westward directivity. Significant sediment-induced amplification is also suggested at a number of locations around the Gulf of Kachchh to the south of the epicenter. Away from the Kachchh region, intensities were clearly amplified in areas along rivers, within deltas, or on coastal alluvium such as mudflats and salt pans. In addition, we use fault-rupture parameters inferred from teleseismic data to predict shaking intensity at distances of 0-1000 km. We then convert the predicted hard-rock ground-motion parameters to MMI using a relationship (derived from Internet-based intensity surveys) that assigns MMI based on the average effects in a region. The predicted MMIs are typically lower by 1-3 units than those estimated from news accounts, although they do predict near-field ground motions of approximately 80% g and potentially damaging ground motions on hard-rock sites to distances of approximately 300 km.
For the most part, this discrepancy is consistent with the expected effect of sediment response, but it could also reflect other factors, such as unusually high building vulnerability in the Bhuj region and a tendency for media accounts to focus on the most dramatic damage, rather than

  14. Detection of hydrothermal precursors to large northern california earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-04

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes.

  15. A comparison of observed and predicted ground motions from the 2015 MW7.8 Gorkha, Nepal, earthquake

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey S.; Gahalaut, V.; Joshi, A.; Landes, M.; Bossu, R.

    2016-01-01

    We use 21 strong motion recordings from Nepal and India for the 25 April 2015 moment magnitude (MW) 7.8 Gorkha, Nepal, earthquake together with the extensive macroseismic intensity data set presented by Martin et al. (Seism Res Lett 87:957–962, 2015) to analyse the distribution of ground motions at near-field and regional distances. We show that the data are consistent with the instrumental peak ground acceleration (PGA) versus macroseismic intensity relationship developed by Worden et al. (Bull Seism Soc Am 102:204–221, 2012), and use this relationship to estimate peak ground acceleration from intensities (PGAEMS). For nearest-fault distances (RRUP < 200 km), PGAEMS is consistent with the Atkinson and Boore (Bull Seism Soc Am 93:1703–1729, 2003) subduction zone ground motion prediction equation (GMPE). At greater distances (RRUP > 200 km), instrumental PGA values are consistent with this GMPE, while PGAEMS is systematically higher. We suggest the latter reflects a duration effect whereby the effects of weak shaking are enhanced by long-duration and/or long-period ground motions from a large event at regional distances. We use PGAEMS values within 200 km to investigate the variability of high-frequency ground motions, using the Atkinson and Boore (Bull Seism Soc Am 93:1703–1729, 2003) GMPE as a baseline. Across the near-field region, PGAEMS is higher by a factor of 2.0–2.5 towards the northern, down-dip edge of the rupture compared to the near-field region nearer the southern, up-dip edge of the rupture. Inferred deamplification in the deepest part of the Kathmandu valley supports the conclusion that former lake-bed sediments experienced a pervasive nonlinear response during the mainshock (Dixit et al. in Seismol Res Lett 86(6):1533–1539, 2015; Rajaure et al. in Tectonophysics, 2016). Ground motions were significantly amplified in the southern Gangetic basin, but were relatively low in the northern basin. The overall distribution of ground motions
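    Ground-motion-to-intensity conversion equations of the Worden et al. (2012) type are bilinear in log10(PGA) and can be inverted to obtain a "PGAEMS" from macroseismic intensity. The coefficients and hinge below merely illustrate the bilinear form and are not quoted from that paper:

```python
import math

def mmi_from_pga(pga_cms2, c=(1.78, 1.55, -1.60, 3.70), t=1.57):
    """Bilinear intensity-PGA conversion (PGA in cm/s^2).  The coefficient
    and hinge values are illustrative placeholders."""
    x = math.log10(pga_cms2)
    return c[0] + c[1] * x if x <= t else c[2] + c[3] * x

def pga_from_mmi(mmi, c=(1.78, 1.55, -1.60, 3.70), t=1.57):
    """Invert the bilinear relation to estimate PGA from intensity,
    choosing the branch consistent with the hinge."""
    x = (mmi - c[0]) / c[1]
    if x > t:
        x = (mmi - c[2]) / c[3]
    return 10.0 ** x
```

    Because the relation is monotone, the round trip intensity -> PGA -> intensity returns the starting value, which is the property that lets intensity observations stand in for sparse instrumental PGA.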

  16. Predictability of catastrophic events: Material rupture, earthquakes, turbulence, financial crashes, and human birth

    PubMed Central

    Sornette, Didier

    2002-01-01

    We propose that catastrophic events are “outliers” with statistically different properties than the rest of the population and result from mechanisms involving amplifying critical cascades. We describe a unifying approach for modeling and predicting these catastrophic events or “ruptures,” that is, sudden transitions from a quiescent state to a crisis. Such ruptures involve interactions between structures at many different scales. Applications and the potential for prediction are discussed in relation to the rupture of composite materials, great earthquakes, turbulence, and abrupt changes of weather regimes, financial crashes, and human parturition (birth). Future improvements will involve combining ideas and tools from statistical physics and artificial/computational intelligence, to identify and classify possible universal structures that occur at different scales, and to develop application-specific methodologies to use these structures for prediction of the “crises” known to arise in each application of interest. We live on a planet and in a society with intermittent dynamics rather than a state of equilibrium, and so there is a growing and urgent need to sensitize students and citizens to the importance and impacts of ruptures in their multiple forms. PMID:11875205

  18. Ionospheric anomaly due to seismic activities-III: correlation between night time VLF amplitude fluctuations and effective magnitudes of earthquakes in Indian sub-continent

    NASA Astrophysics Data System (ADS)

    Ray, S.; Chakrabarti, S. K.; Mondal, S. K.; Sasmal, S.

    2011-10-01

    We present the results of an analysis of yearlong (2007) monitoring of night time data of the VLF signal amplitude from the Indian Navy station VTX at 18.2 kHz, received by the Indian Centre for Space Physics, Kolkata. We analysed these data to find out whether any correlation exists between night time amplitude fluctuation and seismic events. From individual cases (with magnitudes >5) as well as from a statistical analysis (of all the events with effective magnitudes greater than 3.5), we found that the night time fluctuation of the signal amplitude has the highest probability of exceeding the 2σ level about three days prior to seismic events. Thus, the night time fluctuation could be considered a precursor to enhanced seismic activity.
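
    The 2σ exceedance test described above can be sketched in a few lines: compute the mean and standard deviation of the nightly fluctuation series and flag nights that exceed the mean by more than two standard deviations. The data here are a toy series, not the VTX record.

    ```python
    import statistics

    def flag_anomalous_nights(fluctuations, threshold_sigma=2.0):
        """Return indices of nights whose amplitude fluctuation exceeds the
        series mean by more than `threshold_sigma` standard deviations."""
        mu = statistics.fmean(fluctuations)
        sigma = statistics.pstdev(fluctuations)
        return [i for i, x in enumerate(fluctuations)
                if x > mu + threshold_sigma * sigma]

    # Toy series: quiet nights near 1.0, one strong excursion at index 5
    series = [1.0, 1.1, 0.9, 1.0, 1.2, 5.0, 1.1, 1.0, 0.9]
    print(flag_anomalous_nights(series))  # -> [5]
    ```

    A precursor study would then check whether flagged nights cluster a few days before catalogued earthquakes, e.g. via a superposed-epoch analysis.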

  19. High-Magnitude (>Mw8.0) Megathrust Earthquakes and the Subduction of Thick Sediment, Tectonic Debris, and Smooth Sea Floor

    NASA Astrophysics Data System (ADS)

    Scholl, D. W.; Kirby, S. H.; von Huene, R.; Ryan, H. F.; Wells, R. E.

    2014-12-01

    INTRODUCTION: Ruff (1989, Pure and Applied Geophysics, v. 129) proposed that thick or excess sediment entering the subduction zone (SZ) smooths and strengthens the trench-parallel distribution of interplate coupling strength. This circumstance was conjectured to favor rupture continuation and the generation of interplate thrusts (IPTs) of magnitude >Mw8.2. But, statistically, the correlation of excess sediment and high magnitude IPTs was deemed "less than compelling". NEW OBSERVATIONS: Using a larger and better vetted catalog of instrumental era (1899 through Jan. 2013) IPTs of magnitude Mw7.5 to 9.5 (n=176), and a far more accurate compilation of trench sediment thickness, we tested whether, in fact, a compelling correlation exists between the occurrence of great IPTs and where thick (>1.0-1.5 km) vs thin (<1.0-0.5 km) sedimentary sections enter the SZ. Based on the new compilations, a statistically supported statement can be made that great megathrusts are most prone to nucleate at well-sedimented SZs. Despite the shorter (by 7500 km) global length of thick- (vs thin) sediment trenches, ~53% of all instrumental events of magnitude >Mw8.0, ~75% of events >Mw8.5, and 100% of IPTs >Mw9.0 occurred at thick-sediment trenches. No event >Mw9.0 ruptured at thin-sediment trenches; the three super giant IPTs (1960 Chile Mw9.5, 1964 Alaska Mw9.2, and 2004 Sumatra Mw9.2) all occurred at thick-sediment trenches. Significantly, however, large Mw8.0-9.0 events also commonly (n=23) nucleated at thin-sediment trenches. These IPTs are associated with the subduction of low-relief oceanic crust and where the debris of subduction erosion thickens the subduction channel separating the two plates. INFERENCES: Our new, larger, and corrected data compilations support the conjecture by Ruff (1989) that subduction of a thick section of sediment favors rupture continuation and nucleation of high magnitude Mw8.0 to 9.5 IPTs. This observation can be linked to a causative mechanism of sediment…
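
    The magnitude-threshold percentages quoted above come from a simple conditional tally over the event catalog. A minimal sketch of that tally is below; the catalog entries are illustrative stand-ins (only the three super giant events are real), not the authors' n=176 compilation.

    ```python
    # Toy catalog of (Mw, sediment class at the trench) pairs. The thick/thin
    # split follows the ~1 km division used in the abstract; magnitudes other
    # than the three giants are invented for illustration.
    catalog = [
        (9.5, "thick"), (9.2, "thick"), (9.2, "thick"),  # 1960 Chile, 1964 Alaska, 2004 Sumatra
        (8.6, "thick"), (8.5, "thin"), (8.3, "thick"),
        (8.1, "thin"), (8.0, "thick"), (8.0, "thin"),
    ]

    def fraction_at_thick(catalog, mw_min):
        """Fraction of events with Mw >= mw_min at thick-sediment trenches."""
        selected = [cls for mw, cls in catalog if mw >= mw_min]
        return selected.count("thick") / len(selected)

    for mw_min in (8.0, 8.5, 9.0):
        pct = fraction_at_thick(catalog, mw_min)
        print(f">=Mw{mw_min}: {pct:.0%} at thick-sediment trenches")
    ```

    With the real catalog, the same tally yields the ~53%, ~75%, and 100% figures reported for the three magnitude thresholds.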

  20. In-situ fluid-pressure measurements for earthquake prediction: An example from a deep well at Hi Vista, California

    USGS Publications Warehouse

    Healy, J.H.; Urban, T.C.

    1985-01-01

    Short-term earthquake prediction requires sensitive instruments for measuring the small anomalous changes in stress and strain that precede earthquakes. Instruments installed at or near the surface have proven too noisy for measuring anomalies of the size expected to occur, and it is now recognized that even to have the possibility of a reliable earthquake-prediction system will require instruments installed in drill holes at depths sufficient to reduce the background noise to a level below that of the expected premonitory signals. We are conducting experiments to determine the maximum signal-to-noise improvement that can be obtained in drill holes. In a 592 m well in the Mojave Desert near Hi Vista, California, we measured water-level changes with amplitudes greater than 10 cm, induced by earth tides. By removing the effects of barometric pressure and the stress related to earth tides, we have achieved a sensitivity to volumetric strain rates of 10⁻⁹ to 10⁻¹⁰ per day. Further improvement may be possible, and it appears that a successful earthquake-prediction capability may be achieved with an array of instruments installed in drill holes at depths of about 1 km, assuming that the premonitory strain signals are, in fact, present. © 1985 Birkhäuser Verlag.
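
    The barometric correction mentioned above is, at its simplest, a linear regression of water level against atmospheric pressure: the slope estimates the well's barometric efficiency, and subtracting the fitted response leaves a residual in which small strain signals are easier to see. The sketch below uses synthetic data and plain least squares; a full treatment would also remove the tidal components.

    ```python
    def ols_slope(x, y):
        """Ordinary least-squares slope of y against x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        den = sum((xi - mx) ** 2 for xi in x)
        return num / den

    # Synthetic hourly series: water level (cm) responding to barometric
    # pressure (hPa) with an assumed barometric efficiency of 0.5.
    pressure = [1010, 1012, 1008, 1015, 1011, 1009, 1013, 1010]
    level = [50 - 0.5 * (p - 1010) for p in pressure]

    slope = ols_slope(pressure, level)                       # ~ -0.5
    residual = [l - slope * p for l, p in zip(level, pressure)]
    print(f"barometric efficiency estimate: {-slope:.2f}")   # prints 0.50
    ```

    On real data the residual would retain the tidal signal, which is removed the same way by regressing against a theoretical tide before looking for premonitory strain.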

  1. Long Period Ground Motion Prediction Of Linked Tonankai And Nankai Subduction Earthquakes Using 3D Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Kawabe, H.; Kamae, K.

    2005-12-01

    There is a high possibility of the occurrence of the Tonankai and Nankai earthquakes, which are capable of causing immense damage. During these huge earthquakes, long period ground motions may strike the mega-cities Osaka and Nagoya, located inside the Osaka and Nobi basins, in which there are many long period, low damping structures (such as tall buildings and oil tanks). It is very important for earthquake disaster m